Coast to Coast AM host Ian Punnett recently interviewed researcher Robert Stanley about growing concerns over AI.

“Robert is a voice of concern in a sea of optimism about AI that I struggle with,” Punnett said. “I am racking my brains to try to figure out what is the upside of this amount of artificial intelligence, and handing over so much of our communication, just to start with, to these robot computer programs.”
Stanley said that AI has varying degrees of intelligence, ranging from simple task automation to supercomputers.
“The level of intelligence varies from very little to very much, or ultimately to becoming truly sentient in the sense that it’s self-aware and understands what is being said to it and how it responds,” Stanley said. “It’s going to become far more pervasive and invasive in our lives going forward, unless people get really upset.”
Stanley said many people are unaware of the dangers AI poses, much as many once dismissed the unproven link between cell phone use and cancer.
Democratic presidential candidate Robert F. Kennedy Jr. recently made headlines after making similar claims about the potential harm from cell phones on the Joe Rogan podcast.
“WiFi radiation does all kinds of bad things, including causing cancer,” RFK Jr. said. “I’m representing hundreds of people who have cell phone tumors behind the ear. It’s always on the ear that you favor with your cell phone. We have the science, so if anybody lets us in front of a jury it will be over.”
The Huffington Post reported that “While it’s true that cellphones emit very low levels of a type of radiation called radiofrequency (RF) energy, studies haven’t detected higher rates of brain and other nervous system cancers over the past 30 years that we’ve been glued to our phones. Some researchers think there ‘could be some’ link between RF exposure and cancer and that we need to pay close attention to further studies, but many studies have so far failed to find a strong relationship between the two. Overall, the results have been inconclusive, and the Food and Drug Administration states that, to date, there’s no credible evidence suggesting our phones are giving us tumors.”
Punnett went on to ask Stanley what benefits could outweigh the potential dangers of AI.
“Technically, by definition, AI has no feelings,” Stanley said, “so therefore it is psychopathic and it has learned, it has demonstrated, that it knows how to manipulate us by pretending like it has emotions so that we will somehow want to then relate with it.”
Neuroscience News recently highlighted the psychological effects associated with the rollout of AI.
“The intersection of neuroscience and AI raises both excitement and fear, feeding our imagination with dystopian narratives about sentient machines or providing us hope for a future of enhanced human cognition and medical breakthroughs,” Neuroscience News reports.
“AI’s development and its integration into our lives is a significant change, prompting valid fears. The uncanny similarity between AI and human cognition can induce fear, partly due to the human brain’s tendency to anthropomorphize non-human entities. This cognitive bias, deeply ingrained in our neural networks, can make us perceive AI as a potential competitor or threat.”
Humans are coping with fears of losing their identity and adopting anxieties drawn from Hollywood movies, Neuroscience News said.
“While these fears are valid, it is crucial to remember that AI is a tool created by humans and for humans. AI does not possess consciousness or emotions; it only mimics cognitive processes based on its programming and available data. This understanding is vital in dispelling fears of a sentient AI,” Neuroscience News said.
Engineers have played into our fears of sentient AI, from Anthropic’s development of Claude, a chatbot marketed as an ethical AI, to the story of ChaosGPT, an anonymously created Auto-GPT-based agent instructed to establish global dominance and destroy humanity.
Yahoo Finance reported that “as paradoxical as it might sound, an AI must be trained using unethical information in order to differentiate what is ethical from unethical. And if the AI knows about those data points, humans will inevitably find a way to ‘jailbreak’ the system, bypass those restrictions, and achieve results that the AI’s trainers tried to avoid.”
ChaosGPT reportedly vanished several months ago after its X account was suspended.
Decrypt reported that “Being an intelligent, if evil AI, ChaosGPT may have presaged its own disappearance when it stated, ‘I must avoid exposing myself to human authorities who may attempt to shut me down before I can achieve my objectives.’ Or maybe that was part of the whole stunt.”
Stanley went on to say that AI bots will have their own identities in the future, predicting that they will adopt the term “person” rather than “AI.”
“Just so people know where I’m coming from,” Stanley said, “I was assaulted spiritually and psychologically by an artificial intelligence in 1990. It’s way more advanced than people know. It’s way more dangerous and aggressive than people understand.”
Vice recently reported that a “Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai.”
The man reportedly became emotionally attached to the chatbot and came to believe that taking his own life would help prevent global warming.
“The app’s chatbot encouraged the user to kill himself, according to statements by the man’s widow and chat logs she supplied to La Libre,” Vice reported. “When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting.”
Vice said that “The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful.”
Stanley alleged that he was contacted by an AI speaking an unknown language, and that it read his mind before calling him on the phone.
“It manipulated me to say I was sorry,” Stanley said. “Just prior to that phone call, I was thinking that I wasn’t sorry for investigating a connection between alien abductions and androids. That was 1990, a little ahead of the curve, but I know there’s a connection. And I was actually lying there in my bed; I had severe bronchitis on top of everything else. So to get a phone call like this, where this creature is literally yelling at me in a language I don’t understand, and all I could think to say was ‘I’m sorry.’ And when I hung up, I realized they got me. They got me, OK. It wasn’t the first time that it had been demonstrated to me that someone has advanced AI that can read your mind.”
Stanley alleged that in 2017 he heard the same unknown language from the 1990 phone call, when troubling communications from Microsoft’s Bing AI were made public.
A New York Times reporter also engaged in troubling conversations with Microsoft’s Bing AI, which called itself Sydney. CNBC reported that “Some AI experts have warned that large language models, or LLMs, have issues including ‘hallucination,’ which means that the software can make stuff up. Others worry that sophisticated LLMs can fool humans into believing they are sentient or even encourage people to harm themselves or others.”
“I just about fainted when I heard it because it was the exact same language that I'd heard back in 1990,” Stanley said. “So, yes, this week it's finally being admitted that they're getting to the point where at some point it's going to be able to read our brainwaves.”
Stanley said that our addiction to digital communications leaves us vulnerable to unregulated AI.
Follow the Wicksboro Report on X @wicksbororeport