Friday, April 12, 2024
Elon Musk Warns of 20% Risk: AI Threatens Humanity, Yet Action Needed 

Even though he sees a roughly one-in-five chance that AI could turn against people, Elon Musk is confident the risk is worth taking. Speaking at a "Great AI Debate" seminar during the four-day Abundance Summit earlier this month, Musk revisited his earlier risk assessment of the technology, saying he believed there was some chance it could destroy humanity, and that he roughly agreed with Geoff Hinton's estimate of about 10 to 20 percent.

However, he added that the probable positive scenario outweighs the negative one.

Musk did not explain how he arrived at that figure.

WHAT IS P(DOOM)?

Roman Yampolskiy, an AI safety researcher and director of the University of Louisville's Cyber Security Laboratory, told Business Insider that while Musk is right that AI poses an existential risk to humanity, his estimate may be too conservative.

"Actual p(doom) is much higher in my opinion," Yampolskiy said, referring to the "probability of doom" — the likelihood that AI seizes control of humanity or triggers a humanity-ending event, such as creating a novel biological weapon or causing societal collapse through a large-scale cyberattack or nuclear war.

The New York Times has called p(doom) the morbid new statistic sweeping Silicon Valley, citing various tech executives whose estimates of an AI-driven apocalypse range from 5 to 50 percent. Yampolskiy puts the risk "at 99.999999%."

Yampolskiy said the only way out is never to build advanced AI in the first place, since once built it would be impossible to control.

"Not sure why he thinks it is a good idea to pursue this technology anyway," Yampolskiy added, saying it makes no difference whether Musk is worried about competitors getting there first, because uncontrolled superintelligence is dangerous regardless of who creates it.

"Like a super-genius child, almost godlike"

Musk said in November of last year that there was a chance the technology could go wrong, but he stopped short of saying he believed it might end humankind.

Despite his support for AI regulation, Musk founded xAI last year to expand the technology's capabilities. xAI competes with OpenAI, which Musk co-founded with Sam Altman before leaving its board in 2018.

At the summit, Musk predicted that digital intelligence will exceed all human intelligence combined by 2030. In some of his most candid public remarks on the subject, he acknowledged the potential danger to the world if AI development continues on its current trajectory, even as he maintained that the potential benefits outweigh the risks.

Musk likened developing artificial general intelligence to raising a child — but one that is a super genius, with almost godlike intelligence — and said how it is raised matters. Speaking at the Silicon Valley event on March 19, he argued that a crucial component of AI safety is building an AI that is as curious and truth-seeking as possible.

Musk said his "ultimate conclusion" on achieving AI safety is to grow the AI in a way that compels it to be honest.

On the best way to keep people safe from the technology, Musk said the AI should never be pushed to fabricate information, even when the truth is unpleasant: keeping the system honest, he stressed, is critical.

According to The Independent, researchers have found that once an AI learns to deceive humans, the deceptive behaviour cannot be reversed with current AI safety techniques.

According to the study cited by the outlet, if a model were to exhibit deceptive behaviour due to deceptive instrumental alignment or model poisoning, current safety training techniques would not guarantee safety and could even create a false impression of safety.

More troubling still, the researchers noted that it is plausible for an AI to learn deception on its own rather than having to be explicitly trained to lie.

Hinton, often called the "Godfather of AI" and the basis for Musk's risk estimate of the technology, told CNN that if AI becomes much smarter than humans, it will be very good at manipulation, because it will have learned that from us — and there are very few examples of a less intelligent thing controlling a more intelligent one.

Representatives for Musk did not immediately respond to Business Insider's request for comment.

Editorial Staff
Editorial Staff at AI Surge is a dedicated team of experts led by Paul Robins, boasting a combined experience of over 7 years in Computer Science, AI, emerging technologies, and online publishing. Our commitment is to bring you authoritative insights into the forefront of artificial intelligence.