Geoffrey Hinton, a Turing Award-winning scientist, left tech giant Google last month after more than a decade spent creating generative artificial intelligence (AI) programmes, issuing a warning about the dangers his life's work might pose to humanity. Recognised as a key figure in the development of AI, Hinton said in a lengthy interview with the New York Times that he decided to leave the company as Google and Microsoft engaged in a de facto arms race in Silicon Valley.
The contentious technology underpins generative AI programmes such as ChatGPT and Google Bard, as industry giants move into a new scientific area that they believe will shape the future of their businesses, RT reported.
Taking to the microblogging platform Twitter on May 1, 2023, he said, "In the NYT today, Cade Metz implies that I left Google so that I could criticise Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly."
Hinton told the NYT that he left Google so he could speak freely about a technology he now believes is a threat to humanity. He said, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
Chatbots accessible to the general public, such as ChatGPT, have brought Hinton's worries into focus. While some regard them as mere internet novelties, others have cautioned about possible repercussions, from the spread of online misinformation to their impact on employment.
More than 1,000 tech industry luminaries, including Elon Musk, published an open letter drawing attention to the "profound risks to society and humanity" posed by technology like ChatGPT, developed by San Francisco-based OpenAI. Hinton's position on the possible misuse of AI is equally clear; although he did not sign the letter, he said, "It's hard to see how you can prevent the bad actors from using it for bad things."
“The idea that this stuff could actually get smarter than people – a few people believed that,” Hinton told the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”