AI godfather Yann LeCun, the chief AI scientist at Meta, is challenging the pessimistic views of tech leaders regarding the risks of AI. Instead of focusing on doomsday scenarios, LeCun is concerned about the growing power imbalance in the AI industry. He accuses founders such as OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei of fear-mongering and engaging in massive corporate lobbying to serve their own interests. LeCun argues that if these efforts succeed, it will result in a catastrophe as a small number of companies will control AI.
LeCun’s criticism comes in response to physicist Max Tegmark’s post on X, where Tegmark suggests that LeCun is not taking the AI doomsday arguments seriously enough. Tegmark commends leaders such as UK Prime Minister Rishi Sunak and European Commission President Ursula von der Leyen for recognizing that AI risk arguments cannot be refuted with snark and lobbying alone.
LeCun believes that founders like Altman and Hassabis are capitalizing on fear to solidify their own power while ignoring the real and immediate risks of AI, such as worker exploitation and data theft. He argues that attention should be focused on how AI development is currently taking shape, and he warns that the closed nature of AI development by private, for-profit entities could obliterate the AI open-source community. LeCun advocates for transparency and open-source development, citing Meta’s release of Llama 2, a competing language model that allows the broader tech community to examine its inner workings.
LeCun emphasizes the importance of regulating AI development to prevent a small number of companies from controlling people’s digital experiences. He raises concerns about the implications for democracy and cultural diversity if AI platforms are controlled solely by a few companies from the West Coast of the US and China.
Altman, Hassabis, and Amodei have not yet responded to Insider’s request for comment on LeCun’s remarks. LeCun’s perspective highlights the need for a balanced and realistic approach to AI development and regulation, considering both the potential benefits and risks for humanity.