Dr. Geoffrey Hinton has spent his career working on AI and has recently cautioned about the technology's dangers.
The AI developer known as the 'Godfather of AI' has stepped down from Google so that he can speak more freely about the threats posed by the technology.
Hinton’s Exit from Google Raises Concerns Regarding AI Ethical Credentials
Before resigning, Dr. Hinton had spent more than a decade working on machine learning algorithms at Google. His nickname stems from his pioneering work on neural networks. In a tweet posted on 1st May, however, he revealed that he had left his position at Google so that he could speak openly about the threats of AI.
In an interview with the New York Times, Hinton said his major concern is the technology's use to flood the internet with fake photographs, text, and videos, to the point where most people will no longer be able to distinguish what is true from what is not.
Artificial Intelligence Threatens Humanity and Social Order by Replacing Workers
Another major concern is the technology replacing jobs. He believes that, in the future, AI's capacity to learn unanticipated behaviors from the huge amounts of data it processes could pose a significant threat to humanity. Dr. Hinton also voiced concerns about the ongoing AI arms race, in which the technology is being refined for use in lethal autonomous weapon systems (LAWS).
Hinton also expressed some regret over his life's work, consoling himself with the thought that someone else would have done it had he not.
Researchers and Tech Leaders Demand Cessation of Advanced AI Development
Various stakeholders, including tech organizations, regulators, and lawmakers, have raised concerns about AI development. In March, more than 2,600 tech researchers and policymakers signed an open letter calling for a temporary pause in AI development, citing its risks to humanity and society.
In April, 12 EU officials signed a similar letter, and a current draft bill would categorize AI tools by risk level. Britain has also committed 125 million dollars to support a task force developing 'safe AI.'
Criminals Leveraging AI to Deliver Deceptive Information
Pranks and sham news campaigns have been linked to AI, with its tools being used to spread false information. Some media outlets have been misled into publishing fabricated stories, and a German outlet used the technology to produce a fake interview.
On 1st May, Binance asserted that it had fallen victim to a smear campaign originating from ChatGPT, and shared evidence of the chatbot claiming that chief executive officer Changpeng 'CZ' Zhao had belonged to a Chinese Communist Party youth organization. Although the bot cited a LinkedIn page and a Forbes article as its sources, the article does not appear to exist, and Zhao denies any such affiliation.
Daily Mail and Independent Tricked by Pranks Concerning Canadian Actor
Last week, several media outlets around the world, including the Independent and the Daily Mail, were tricked by pranksters. The Daily Mail published, and later deleted, a story about Saint Von Colucci, a supposed Canadian actor.
The story claimed that Colucci had died following plastic surgery intended to make him resemble a South Korean pop star. The news originated from a press release announcing the actor's death, sent by an entity claiming to be a public relations firm and accompanied by images that appeared to be AI-generated.
Schumacher Interview Generated by ChatGPT Exposes Die Aktuelle to Potential Lawsuit
Die Aktuelle, a German outlet, used ChatGPT to fabricate an interview with Michael Schumacher, the Formula One driver who suffered a serious brain injury in a 2013 skiing accident. His family said it would sue the outlet.
Beyond generating deceptive images and text, AI has been cited by various EU regulators for violating personal data privacy rights. The matter remains an ongoing issue that will ultimately shape the future use of AI.
Editorial credit: JRdes / Shutterstock.com