Post by account_disabled on Mar 10, 2024 7:19:35 GMT
Elon Musk and about a thousand other people have signed a letter calling for a 6-month moratorium on artificial intelligence development. Among the signatories are a few important scientists (including Yoshua Bengio, one of the pioneers of deep learning), many entrepreneurs (such as the co-founders of Spotify and Apple) and some scholars. The promoters are members of the Future of Life Institute, a major think tank that advocates long-termism, a curious school of thought concerned only with long-term risks of human extinction, AI among them, and not with climate change or the diseases that devastate the least developed countries today. The letter asks: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" To avert these hypothetical disasters for humanity, the signatories ask all research laboratories to immediately suspend, for at least 6 months, the training of AI systems more powerful than GPT-4, and to use the pause to jointly implement a set of shared safety protocols that make AI systems "safe, interpretable, transparent, robust, aligned, trustworthy, and loyal". At the same time, the letter calls on governments to regulate and control the development of AI so as to avoid "the dramatic economic and political disruptions (especially to democracy) that AI will cause".
It seems to me that the letter has the merit of sparking debate on the consequences of this technological acceleration, but its tone is exaggerated and the solutions it proposes are unattainable.
The apocalyptic tones are those fearing the development of an Artificial General Intelligence capable of becoming self-aware and supplanting humans. That is science fiction. More concrete are the risks, only mentioned in passing, tied to its use for disinformation (see the photo of the Pope in the puffer jacket) or for surveillance. The moratorium solution is obviously unenforceable (who would check?), and it is also not very credible coming from people smarting from OpenAI's success. Among them are Musk, who was one of OpenAI's founders but left the company after failing to obtain full control of it, and Emad Mostaque, CEO of Stability AI, an OpenAI competitor.
Still, debate is always welcome as long as it is directed towards concrete objectives. In this sense, there is anticipation for the approval of the European regulation on artificial intelligence, the AI Act, which aims to classify AI systems by risk level and ban the most dangerous ones. The other day I was asked to comment on the letter for RaiNews. Enjoy the video.