AI pioneer leaves Google and warns of risk to humanity


Scientist Geoffrey E. Hinton sees risks in the use of artificial intelligence systems by malicious actors and advocates a pause in the development of the technology.

British scientist Geoffrey E. Hinton, one of the pioneers in the development of artificial intelligence (AI), resigned from the technology company Google and said he did so in order to speak about the dangers of this technology without having to worry about the impact his statements would have on his employer.

Hinton, often called the “godfather” of AI, said he now regrets having dedicated his career to developing the technology. “I console myself with the normal excuse: if it hadn’t been me, someone else would have done it,” he told The New York Times, in an interview published this Monday (1/05).

He joins other experts who have already warned of the risks of AI following launches such as ChatGPT and heavy investment by large technology companies in the sector. “It’s hard to figure out how to keep bad actors from using it for bad things,” he told The New York Times.

On Twitter, Hinton said that Google has always acted very responsibly and denied that he resigned in order to criticize his former employer. According to the New York Daily, Hinton communicated his resignation to Google last month.

For Hinton, the current speed of AI development is frightening. “Look at how it was five years ago and how it is now,” he commented.

Threat To Humanity

In the short term, he said he fears that the internet will be flooded with fake texts, photos, and videos and that it will no longer be possible for people to distinguish what is real from what is fake.

He added that further down the road, AI could replace many workers and even become a threat to humanity.

“The idea that these things could actually get smarter than people — a few people believed that. But most people thought it was way off. I thought it was way off, 30 to 50 years or even longer. Obviously, I no longer think that,” he said.

For this reason, he argued, as other experts have already done, that research in this sector should be paused until it is well understood whether it will be possible to control AI.

In March, a group of experts called for a pause in the development of AI systems to allow time to ensure they are safe. The open letter, signed by more than a thousand people, including businessman Elon Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, an even more powerful version of the technology underlying ChatGPT.
