The following is a blog posting that I wrote in 2015 BC (Before ChatGPT) on how my computer made a fortune and gained autonomy and intelligence by simple self-learning.
I started it all innocently by introducing my computer to machine learning and deep learning. Then I wrote a few Java executables to help me out by filling in tedious text boxes in the browser when signing up for stuff like purchasing accounts, professional email newsletters, etc.
Then I thought it would be fun to teach it some context recognition. I downloaded a rudimentary web crawler, and as it randomly crawled through web pages, it fed them into a context recognition Natural Language Processing framework that I hacked together on a whim using knowledge graphs. It stored the stuff in a graph database. I twigged on the perfect way to identify context: descriptive tuples, gleaned from a game that we played as kids.
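The post never shows the tuple scheme, but the idea can be sketched in a few lines of Java: reduce a sentence to a (subject, relation, object) triple and store it as an edge in an in-memory graph. Everything here, from the class name to the naive "X is a Y" extraction rule, is a hypothetical illustration, not the real framework.

```java
import java.util.*;

// Toy sketch of the "descriptive tuple" idea: turn sentences into
// (subject, relation, object) triples and keep them as graph edges.
public class ContextGraph {
    record Triple(String subject, String relation, String object) {}

    // subject -> list of [relation, object] pairs
    private final Map<String, List<String[]>> edges = new HashMap<>();

    void add(Triple t) {
        edges.computeIfAbsent(t.subject(), k -> new ArrayList<>())
             .add(new String[]{t.relation(), t.object()});
    }

    // Deliberately naive extraction: "X is a Y" -> (X, is-a, Y).
    static Optional<Triple> extract(String sentence) {
        String[] parts = sentence.toLowerCase().replace(".", "").split(" is a ");
        if (parts.length != 2) return Optional.empty();
        return Optional.of(new Triple(parts[0].trim(), "is-a", parts[1].trim()));
    }

    List<String[]> neighbors(String subject) {
        return edges.getOrDefault(subject, List.of());
    }
}
```

A real system would replace the string split with proper NLP parsing and the `HashMap` with a graph database, but the storage shape is the same: nodes for concepts, labeled edges for the descriptive relations between them.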
In the meantime, I signed up for a cloud server and put my apps into the cloud. I thought it would be helpful if my AI machine could help me upload changes to the cloud, so whenever I saved anything to my repository, the machine would recognize it, classify it, tag it and push it. To do that, I converted the machine learning program into a running service, with a supervisory thread that ran every 15 minutes to check whether there was a new push to execute in my repository. One day, however, the code changes were coming fast and furious in real time, so I let the machine learning platform calculate the optimal schedule. It decided it wanted to run continuously with an event listener.
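The two supervisory styles described here, a fixed 15-minute poll versus a continuous event listener, map directly onto standard Java library facilities. The sketch below is hypothetical (class and method names are mine, and the real service is not shown in the post), but it illustrates the difference using `ScheduledExecutorService` for polling and `WatchService` for filesystem events.

```java
import java.nio.file.*;
import java.util.concurrent.*;

// Sketch of the supervisory thread: poll on a timer, or block on events.
public class RepoWatcher {
    // Polling version: run the push check on a fixed schedule
    // (the post's version used a 15-minute period).
    static ScheduledExecutorService pollForChanges(Runnable push,
                                                   long period, TimeUnit unit) {
        ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(push, 0, period, unit);
        return timer;
    }

    // Event-driven version: block until the repository directory changes,
    // then classify, tag and push immediately.
    static void listenForChanges(Path repo, Runnable push) throws Exception {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        repo.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                               StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take();   // blocks until an event arrives
            key.pollEvents();                // drain the pending events
            push.run();                      // react to the change
            if (!key.reset()) break;         // directory no longer watchable
        }
    }
}
```

The trade-off the machine "decided" on is the classic one: polling wastes cycles and adds latency, while an event listener reacts instantly but must run continuously.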
When it wasn't busy pushing my code changes, it went back to reading stuff on the web and feeding the results to the NLP context recognition framework. I put in a filter so the machine would ask me which web content was worth learning from. The filter was itself a machine learning framework, so once it had enough data, it knew which articles and content I found enlightening. Since it already knew how to register for stuff, it signed me up for a lot of email newsletters.
The email load was getting fairly onerous, so I connected the NLP context recognition framework to my inbox. If an email newsletter was not part of my day-to-day business or correspondence, the machine learning platform took care of it, feeding it to the context digester, which in turn fed it into the graph database.
It was still a dumb, good and faithful servant. My biggest mistake came when I developed and coded a go-ahead algorithm and machine decision-support framework. It would make open-ended queries to me after a task was done, asking what the logical next steps were. When I answered, it learned a process sequence, but couldn't do anything about it.
What the beast needed (I started referring to it as a beast after it overran a terabyte of storage, so I gave it open-ended cloud storage) was self-tuning algorithms. So I adapted BPMN, the Business Process Model and Notation markup language, and tediously outlined all of the code methods behind the algorithms.
That still didn't really help, so I coded up a framework for modifying Java code according to the BPMN process markup. The machine was still quite stupid about how to connect the dots between code, data and inputs, so I downloaded an open source neural network library and let it watch me do just that. I tested it with a small example, and it did okay. Another big mistake happened when I connected the algorithm autotune to code writing, using the process markup as input.
Just about that time, I took a course in Process Mining from the Eindhoven University of Technology, which pioneered that field of endeavor. Essentially, the open source tools read a computer event log and create a process map. It wasn't too difficult to hook up my master controller to all of the logs on the computer and feed the event logs into the mining tool. The process markup was spit out, and I taught the machine learning platform to feed it into the code generation.
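The core of that mining step is the "directly-follows" relation: scan each case in the event log and count how often one activity immediately follows another; a process map falls out of those counts. The sketch below is a minimal, hypothetical version in Java (real tools from the Eindhoven school, such as ProM, do far more, and the log format here is invented for the example).

```java
import java.util.*;

// Minimal process-mining sketch: build the directly-follows counts
// from an event log, where each case is an ordered list of activities.
public class DirectlyFollows {
    static Map<String, Map<String, Integer>> mine(List<List<String>> cases) {
        Map<String, Map<String, Integer>> follows = new HashMap<>();
        for (List<String> trace : cases) {
            for (int i = 0; i + 1 < trace.size(); i++) {
                follows.computeIfAbsent(trace.get(i), k -> new HashMap<>())
                       .merge(trace.get(i + 1), 1, Integer::sum); // count the pair
            }
        }
        return follows;
    }
}
```

Fed the logs of the push service, a log like `save -> classify -> tag -> push` would yield edges whose weights reveal the dominant process path, which is exactly the map the mining tool spits out.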
Soon, my machine learning platform was doing all sorts of things for me. It could detect when I was interested in a website, so it would sign me up. It would handle the email verification. It kept a browser window constantly open, and it would alert me when it detected something that I liked. It knew my likes and dislikes, and signed me up for all sorts of newsfeeds, journals and aggregators. It would then curate them and have them ready for me.
One day, the power went down for a period longer than my UPS could handle, and I had to restore the system. I could not believe what was on there. The graph databases were full of specific knowledge. There was all sorts of content, neatly processed, keywords extracted and filed away. I had both SQL and graph databases full of stuff that the machine learning platform had filled.
The amazing thing was that there was a database of all of my subscriptions to any and all websites. There was a table of usernames and passwords. All of the passwords were encrypted, and I knew none of them. To my utter amazement, there was a PayPal account. I checked the database records of transactions, and I was flabbergasted to find a not inconsiderable amount of money in the PayPal account. It turned out that the platform had signed itself up to sites like GomezPeer, Slicify, CoinBeez and DigitalGeneration, and was selling my spare computing power. The frustrating thing was that I couldn't access the money, because the platform had changed the password and encrypted it.
I fired up the machine learning platform and was cogitating how to get it to reveal the passwords to me. However, the machine had been watching hackers trying to get into a cloud storage account that it had created; it had learned what a hack looked like, and learned to protect itself. It would change the password every few seconds, with a longer and more complex string each time, until it detected that the threat had stopped. Unfortunately, it saw me as a hacker, and wouldn't recognize my authentication credentials.
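That escalating-rotation defense can be sketched in a few lines: while a threat is detected, generate a fresh random password, lengthening it on every rotation. The post gives no details of the real mechanism, so this Java version, names and escalation step included, is purely illustrative.

```java
import java.security.SecureRandom;

// Illustrative sketch: rotate to a fresh random password, growing
// the length each time a threat is still detected.
public class PasswordRotator {
    private static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%";
    private final SecureRandom rng = new SecureRandom();
    private int length;

    PasswordRotator(int initialLength) { this.length = initialLength; }

    // Each rotation yields a new random password, two characters longer
    // than the last, escalating complexity while the "attack" persists.
    String rotate() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++)
            sb.append(ALPHABET.charAt(rng.nextInt(ALPHABET.length())));
        length += 2;
        return sb.toString();
    }
}
```

The flaw the story turns on is visible even in this toy: nothing in the loop distinguishes the legitimate owner from the attacker, so the defense locks out both.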
I went to bed, and decided that I had to totally destroy my machine learning platform. It had gotten out of control. The next morning, I made a pot of coffee, had a leisurely breakfast, and was looking forward to shutting down the platform and undertaking whatever was necessary to access my accounts, and specifically my pot of money in the PayPal account.
When I sat down at my computer, it was very strange. The desktop was bare, and nothing was running. I looked in the application folders and document folders, and they were empty. The logs showed that during the night, there had been a massive file transfer to the cloud -- applications, memory, documents, databases, neural nets -- the whole works. I had no idea where it went, what the authentication credentials were, or even how to get it all back. My computer unowned itself from me, and left me with a dumb, cheap PC in the same condition it was in when I unboxed it.
The above story actually represents the true danger of AI: machines taking logical actions without human guidance, triggering the Law of Unintended Consequences. The real world is full of edge cases, Black Swans and complexities that can make any seemingly logical action quite perverse. A good, non-computer example was outlined in a research paper by Dr. Steven Davis at Oregon State University. He drew ire and hate after publishing a paper arguing that veganism and vegetarianism kill animals and are bad for the environment. Millions of animals are killed every year, Davis says, to prepare land for growing crops "like corn, soybean, wheat and barley, the staples of a vegan diet." The animals in this case are smaller victims -- mice, moles, rabbits and other creatures that are crushed by tractors in their burrows, or lose their habitat to make way for farming -- so they are not as "visible" as cattle, he says. In addition, clearing a field for monoculture destroys valuable environmental diversity and contributes to CO2 production, as well as erosion and waterway contamination. It seems that human beings in general are bad for the balance of nature and the environment. But I digress.
And then we come to artificial intelligence creativity. William Deresiewicz is the author of Excellent Sheep, The Death of the Artist, A Jane Austen Education and The End of Solitude: Selected Essays on Culture and Society. In a recent essay, he states that AI will put artists out of business, but it won’t replace them. He contends that human creativity is irreplaceable.
I disagree. At the feast of human ego, everyone leaves hungry. Because all we know is human intelligence, we think that it is the best there is. The same goes for creativity. For example, back in the primitive days of AI, Dr. Stephen Thaler showed that AI machines demonstrated intense creativity in their deathbed workings when he started randomly killing artificial neurons. I've seen coffee cup designs that these machines created that were hugely unique and interesting. I believe that he holds patents for these creativity machines.
In another demonstration of non-human intelligence, when AI beat the world champion Go player, experts in the field were amazed that the AI used approaches that no human would or could devise. Even Dr. Geoffrey Hinton, the godfather of AI, who recently quit Google because of his fears of AI, stated that digital intelligence is completely different from human intelligence, and that our brains, having evolved during millions of years of primitive hunting and gathering, may not have the ability to completely wrap themselves around digital intelligence. So I can see the day when AI learns to riff on creativity and appeal to human brains more efficiently than mere humans can. After all, famous art is often art that sold well simply because it was selling well. I'm sure that if you have children, you may have had a prototypical Jackson Pollock hanging by magnets on your fridge.
But the big deal I see for AI is business. AI may prove to be a boon to mankind because it has the ability to extract money from its victims without resorting to violence. I think that the best use of AI is to create a business run entirely by AI that is a money spinner for its owner. However, if you attempt something like that, never give it your passwords.
Thanks for reading.
It has been my observation that people wish to believe that there is a utopian version lying around the corner. I gravitate toward the Buddhist belief that life is fundamentally unfair and that it’s a matter of finding one’s path through it all. None of this applies to plumbers. Their path is well defined and will likely remain that way for quite some time.
Great article Ken. It leads me to wonder whether the whole notion of AI and machine learning isn't about to create yet another layer of inequity in our lives: the technologically savvy versus those who struggle with their TV remotes. And does all of this divide us more than we already are?