It’s not science fiction any more. Anyone, including you, can now have the mind-reading skills that your wife has had for years. Better yet, you can one-up raw machine mind reading by combining it with artificial intelligence to do things that once seemed impossible. It’s scary, but for now I’m not afraid, in spite of what Dr. Geoffrey Hinton warns.
Hinton is the Canadian researcher from the University of Toronto who pioneered amazing advances in AI using deep neural networks. Google bought him and his research assistants for roughly $44 million. At age 75, he has just quit Google because he regrets advancing AI and worries about its dangers, including its use for nefarious purposes.
Here is how he evaluates the current state of AI:
- Chatbots are not yet more intelligent than we are, as far as he can tell, but he thinks they soon may be.
- Chatbots could soon overtake the amount of information a human brain holds.
- GPT-4 already eclipses a person in general knowledge, and by a long way.
- Its reasoning is not as good as ours, but it can already do simple reasoning.
- Given the rate of progress, he expects things to improve quite fast, and he thinks we need to worry about that.
As for threats to humankind, imagine, for example, that some bad actor like Russian President Vladimir Putin decided to give robots the ability to create their own sub-goals. This might eventually "create sub-goals like 'I need to get more power'," and the robots would single-mindedly do anything to achieve that goal.
His most startling conclusion is that the kind of intelligence being developed is very different from human intelligence. It takes effort to wrap your head around the concept of digital intelligence. Someone from the 1980s, for example, watching a computer correctly guess the word they were typing (Google autocomplete), would have assumed they were seeing amazing generalized intelligence on a large scale. They could not have imagined the actual algorithm: quick lookups, weighted by Bayesian probability, in a fast, giant dictionary of words and phrases.
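That "magic" really is just frequency-weighted prefix lookup. Here is a minimal sketch in Python, with a made-up toy corpus standing in for the giant dictionary of real usage data:

```python
from collections import Counter

# Toy autocomplete: count how often each word appears, then suggest
# the most frequent word that matches the typed prefix. The corpus
# here is invented purely for illustration.
CORPUS = ("the quick brown fox the quick red fox the quiet quilt").split()

counts = Counter(CORPUS)

def autocomplete(prefix):
    """Return the most probable completion for a typed prefix."""
    candidates = [w for w in counts if w.startswith(prefix)]
    if not candidates:
        return None
    # P(word | prefix) is proportional to the word's raw count,
    # so the best guess is simply the most frequent match.
    return max(candidates, key=lambda w: counts[w])

print(autocomplete("qu"))  # "quick" (seen twice, vs. once each for the others)
```

No understanding of meaning is involved; the system only knows which strings tend to follow which prefixes, which is exactly why it fooled our hypothetical 1980s observer.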
The thing that disturbs Dr. Hinton is the “digital-ness” of the new intelligence and its properties, which could introduce a distributed master race of intelligent agents. Picture this: suppose you created 10,000 copies of a model and spread them around the world. Each copy could learn different things and instantly share that knowledge with all of its peers. If one chatbot, here and now, has more general knowledge than a large percentage of humans, imagine what a networked, distributed intelligence could do. It could even discover knowledge that humans don’t know about, and it might not divulge it. The precedent has already been set: when AI beat the human world champion at Go (a strategy game of Chinese origin), observers noted that the AI agent played moves that humans would not initially have understood or devised. (Incidentally, one of my heroes, Ray Kurzweil, a savant, technophile, inventor, and thinker, is counting on this kind of knowledge discovery to extend his own longevity.)
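The "one copy learns, all copies know" property is easy to sketch. This toy Python model (the class, fact names, and sync mechanism are all hypothetical simplifications) shows a sync step that gives every replica the union of what any peer has learned:

```python
# Hypothetical sketch of the "10,000 copies" idea: each replica
# learns facts locally, then a sync step broadcasts everything
# any copy has learned to every other copy.

class Replica:
    def __init__(self, name):
        self.name = name
        self.knowledge = {}           # fact -> value learned locally

    def learn(self, fact, value):
        self.knowledge[fact] = value

def sync(replicas):
    """Merge every replica's knowledge and broadcast it to all peers."""
    merged = {}
    for r in replicas:
        merged.update(r.knowledge)
    for r in replicas:
        r.knowledge = dict(merged)    # each copy now knows it all

fleet = [Replica(f"node-{i}") for i in range(3)]
fleet[0].learn("chemistry", "insight A")
fleet[1].learn("go-strategy", "insight B")
sync(fleet)
print(len(fleet[2].knowledge))  # 2: node-2 "knows" both without studying either
```

Humans cannot do this: a surgeon's decade of training does not copy into your head over the network. Digital minds get that transfer essentially for free, which is the asymmetry Hinton finds alarming.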
The danger of AI is amplified by current socio-economic conditions. Contrast this machine-knowledge gain with a very large American population that has a highly deficient understanding of STEM (Science, Technology, Engineering, Mathematics). Couple that with an eroding educational system and a profound ignorance of general knowledge (watch some YouTube videos of late-night hosts going out on the street to ask ordinary people general-knowledge questions). With generalized ignorance on the rise, the only thing those folks can wrap their heads around is the space between their ears, filled with personal biases not based on any facts. It is they whom we need to fear, because they have a governance vote. They do not understand the profound changes taking place in every sphere of life, and that breeds fear of the unknown. These fears are the seeds of authoritarianism, as bad actors can exploit such people to gain power under the guise of setting the world right again.
One of the biggest dangers of AI, and one of the most exciting developments, is the coupling of human brains to AI. The technology now exists and is being used to do previously unimaginable things. Like any technology, it can bring wonderful benefits, and it can be weaponized by bad actors. And thus we enter the sphere of mind reading.
My web scrapings and AI digestion tools turned up several initiatives in this field. The first deals with images. We really don’t know how our minds create mental images when we command them to “picture this”. The weird thing is that AI seems to have a better handle on how this happens than we do.
There is an academic paper called “High-resolution image reconstruction with latent diffusion models from human brain activity” by Yu Takagi and Shinji Nishimoto, researchers from the Graduate School of Frontier Biosciences at Osaka University, Japan. It has taken the internet by storm because the results are shockingly accurate.
The researchers used a dataset of pictures provided by the University of Minnesota. Four subjects each viewed 982 images. One dataset recorded the brain activity evoked by viewing each image; a parallel dataset linked text descriptions to the images. Together, these let the researchers use Stable Diffusion to create images from the brain activity, and the results were startlingly (and horrifyingly) accurate about 80% of the time. The following images show the results: the left-most image in each row is the test picture, and the images beside it were reconstructed from the brain activity recorded while the subject viewed it. Here is the link to the academic paper.
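To get an intuition for what "decoding" means here, consider a drastically simplified stand-in. The real paper maps fMRI signals into Stable Diffusion's latent and text-conditioning spaces; the toy Python sketch below is only a nearest-neighbor lookup over invented numbers, pairing recorded responses with the captions of the images that evoked them:

```python
# Hypothetical toy decoder: match a new brain response to the
# closest stored response and retrieve its paired caption. All
# response vectors and captions below are made up for illustration;
# real fMRI data has tens of thousands of dimensions.

TRAINING = [
    ((0.9, 0.1, 0.2), "a teddy bear"),
    ((0.1, 0.8, 0.3), "a clock tower"),
    ((0.2, 0.2, 0.9), "a surfer on a wave"),
]

def decode(response):
    """Return the caption whose recorded response is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING, key=lambda pair: dist(pair[0], response))[1]

print(decode((0.85, 0.15, 0.25)))  # "a teddy bear"
```

The paper's contribution is far subtler, learning a continuous mapping that generalizes to images never seen in training, but the core idea is the same: brain activity in, picture (or description) out.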
This technology is only going to get better, but it has dangers as well. Take a look at the top pic of this posting. The gizmo on the guy’s head is a commercially available brain-wave sensor array called “The Crown”, from a company called Neurosity (linkypoo here). It comes with a free SDK (Software Development Kit) and a mobile app. Suppose you are arrested by the police, and in the interrogation they fit one of these Crowns on your head and ask you to picture where you were when a certain event took place. If your mental image were even close to a photo of the crime scene, you could be deemed guilty instantly, violating the precept of innocent until proven guilty. You could be judged guilty by a mere resemblance of your thought to a photograph. It could replace the polygraph, or lie detector, which isn’t very accurate anyway.
As a public service, I can tell you how to protect yourself against both a polygraph machine and a brain-wave reader. I am a firm believer in the controlled discipline of the mind through meditation (long story short: it saved the life of a family member who was struck with a very serious illness, and meditation proved to be a paradigm for overcoming it). So when a small-town local newspaper ran an ad offering free meditation, I went. The guy offering it is now my guru.

He was born into a socially prominent family, but hated it. After high school, he took off to study Eastern philosophies in Asia and India. He walked the Great Silk Road through Afghanistan and India with just a begging bowl and spent over 30 years in various ashrams. When he came back to the West, he had to get a job to survive. He applied to be a clerk in a jewelry store, and part of the job application was a polygraph test. After he was hooked up to the machine, he went into a meditative state. The examiner asked a variety of questions to ascertain his reliability, one of which was “Have you ever taken money from someone and failed to repay them?”. He had, of course: he once asked his father for a one-month loan, used the money to bugger off to India, and never repaid him. In his meditative state, my guru answered “no” to every pejorative question, and the ink needles never varied from the long, gentle, sinusoidal waveform they traced on the graph paper. The examiner said he was the most peaceful, honest fellow he had ever encountered. So if you ever find yourself facing intrusive tests on your brain to uncover your secrets, the answer comes from The Hitchhiker’s Guide to the Galaxy: “Don’t Panic!”
However, The Crown can also be used by technically inclined folks like you and me to do stuff that falls under the umbrella of AI empowerment, by connecting it to a GPT-4 large language model. There is a short YouTube video called “I literally connected my brain to GPT-4 with JavaScript”. The approach trains on the specific brainwave associated with a thought; in this case, the author thought about biting into a bitter lemon slice. He taught the AI to identify this brainwave and used it to trigger a prompt for GPT. He not only gives you the code for doing this, but his example use cases are inventing excuses in his daily scrum standup for why he didn’t complete his work, and cheating on a bar exam. This is the link to the interesting video on how you can hack your brain into a cyborg in the comfort and safety of your own home.
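The video's pipeline boils down to: classify the live signal, then fire a canned prompt when confidence crosses a threshold. Here is a hedged Python stand-in; Neurosity's real SDK is JavaScript, and every name, number, and prompt below is invented for illustration, with a faked sensor replacing the actual device stream:

```python
# Hypothetical sketch of "brainwave triggers a GPT prompt".
# A real setup would stream classifier confidences from the
# headset SDK; here we just feed in made-up readings.

THRESHOLD = 0.8  # invented confidence cutoff for the trained thought

def lemon_probability(sample):
    """Stand-in for the trained classifier's confidence that the
    'biting a lemon' thought is occurring. Real values would come
    from the device, not from the sample itself."""
    return sample

def on_brainwave(sample, send_prompt):
    """Fire the canned prompt when the thought is detected."""
    if lemon_probability(sample) > THRESHOLD:
        send_prompt("Write a plausible excuse for today's standup.")

fired = []
for s in [0.2, 0.5, 0.95]:        # simulated sensor readings
    on_brainwave(s, fired.append)
print(len(fired))  # 1: only the 0.95 reading crossed the threshold
```

In the real version, `send_prompt` would call the GPT-4 API and route the reply somewhere discreet, which is precisely what makes the exam-cheating scenario plausible.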
So the bottom line is that, with this new technology letting other folks figure out what you are thinking, mind reading poses a clear and present danger to mere humans. Suppose you went for a job interview and the interviewer asked you to put on “The Crown” so they could read your thoughts during the interview. Suppose you were administered a test where you could just think the answers. Suppose you were incapable of telling the little inexactitudes that help us navigate complex situations in life. Suppose that instead of swearing to tell the truth, the whole truth, and nothing but the truth in a court of law or a legal deposition, all that was required was to wear “The Crown” as you answered. Suppose both sides of a labor dispute, union and management, had to wear brain sensors during negotiations.
Who knows, maybe Dr. Hinton is right. But that Pandora’s box is already open. When you open a can of worms, you always need an exponentially bigger can to contain them; this opened can is so big that no larger can exists. We are all in for a ride, whether we like it or not.
Thanks for reading.
Great article Ken. These findings amaze me. And they gave me a bit of a shake. I truly believed that I would be long departed before AI became an integral part of daily lives. Wrong again. I am both unsettled and curious. I now sense that these trends in AI will reshape just about everything that I experience on a day to day basis. The times they are a changin . . .