
Will Artificial Intelligence surpass human intelligence?
Kamol Kamoltrakul

The advent of AI technology has unlocked a fascinating, yet controversial, possibility: the "resurrection" of deceased individuals. This process involves feeding a person's voice, image and other personal data into an artificial intelligence model, which is then trained to generate a digital avatar capable of interacting as though the deceased were still alive. As AI technology has grown more controversial, debate on this topic has intensified in countries around the world. Furthermore, the same technology is now being used by criminal organizations to create fake identities and commit crimes, such as online fraud, that have caused enormous economic damage to many people.


The chances of artificial intelligence causing human extinction within the next 30 years have increased. According to Geoffrey Hinton, a pioneering figure in AI and recipient of the 2024 Nobel Prize in Physics, in remarks to an RT reporter on world news on 28 Dec 2024, "Artificial intelligence could lead to human extinction within three decades with a likelihood of up to 20%". This marks an increase from a 10% risk, his estimate just a year ago. Hinton believes that AI systems could eventually surpass human intelligence, escape human control and, potentially, cause catastrophic harm to humanity. He advocates dedicating significant resources to ensuring AI safety and ethical use, and emphasizes an urgent need for proactive measures before it is too late.


During an interview on BBC Radio 4 on the same day, Hinton was asked whether anything had changed since his previous estimate of a one-in-ten chance of an AI apocalypse. The Turing Award-winning scientist responded, “Not really, [just an increase from] 10% to 20%.”


This prompted the show's guest editor, the former chancellor Sajid Javid, to quip, "You're going up." The computer scientist, who quit Google last year, responded: "If anything. You see, we've never had to deal with things more intelligent than ourselves before."


The British-Canadian scientist, known as the “Godfather of AI”, highlighted the challenges of controlling advanced AI systems. “How many examples do you know of a more intelligent thing being controlled by a less intelligent thing? Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.” 

 
He went on, “Imagine yourself and a three-year-old. We’ll be the three-year-old, compared to a future AI that would be smarter than people.”


Hinton noted that progress has been, “much faster than I expected,” and called for regulations to ensure safety. He cautioned against relying solely on corporate profit motives, stating, “The only thing that can force those big companies to do more research on safety is government regulation.”


In May 2023, the Center for AI Safety released a statement signed by prominent scientists in the field, including Hinton, warning, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Among the signees were Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and Yoshua Bengio, considered an AI pioneer for his work on neural networks.




Yann LeCun, Chief AI Scientist at Meta, has expressed views contrary to Hinton’s, stating that the technology “could actually save humanity from extinction”.
However, AI should be used more cautiously, so that it does not displace human thinking in daily life, as is now the trend. For humans to survive, we must keep thinking before the power to think is diminished. We cannot deny that the technology is progressing; at the same time, its stakeholders are many and powerful. No one knows whether there is a link to the tragic case that follows; hopefully not.


Suchir Balaji, 26, a vocal critic of AI, was recently found dead in his San Francisco apartment, according to the San Francisco Office of the Chief Medical Examiner. In October, the AI researcher had raised concerns about OpenAI breaking copyright law in an interview with The New York Times.
After nearly four years at OpenAI, Balaji quit the company when he realized the technology would bring more harm than good to society, he told the Times. His main concern was the way OpenAI allegedly used copyrighted data, and he believed its practices were damaging to the internet. He became increasingly vocal about what he described as potential fair-use violations by generative AI technologies.


Business Times reported on 23 October that Balaji posted on X (formerly Twitter), "Fair use seems like a pretty implausible defence for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they're trained on." He expressed concern about the implications of AI models.