Elon Musk’s concern over AI’s risks may prompt tougher safety standards under the Trump administration, according to scientist Max Tegmark.
Elon Musk’s increasing influence on the incoming Donald Trump administration could lead to more stringent safety standards for artificial intelligence (AI), according to renowned AI researcher Max Tegmark. The scientist, who has worked closely with Musk on addressing the dangers posed by AI, believes that the billionaire’s advocacy could prompt the president-elect to adopt measures that curb the unregulated development of advanced AI technologies.
Speaking at the Web Summit in Lisbon, Tegmark shared his perspective on Musk’s ability to shape policy in the future. Despite AI not being a prominent issue during Trump’s campaign, Musk’s vocal concerns about the dangers of AI—particularly artificial general intelligence (AGI), which matches or surpasses human intelligence—could have a lasting impact on US AI regulation.
Tegmark explained that Musk’s past support for AI safety legislation, such as his backing of California’s SB 1047 bill, demonstrated his commitment to regulating AI before it evolves into something potentially catastrophic. The bill, which proposed requiring companies to stress-test large AI models before their release, was vetoed by California Governor Gavin Newsom, who argued that it could stifle innovation and drive businesses out of the state. However, Musk’s stance in favour of the bill highlighted his growing concern for AI safety.
“I do think that if Elon manages to get Trump’s ear on AI issues, we’re more likely to get some form of safety standards, something that prevents AGI,” said Tegmark, who is a professor at the Massachusetts Institute of Technology (MIT). He further emphasised that Musk’s influence might be pivotal in helping Trump recognise the high stakes of an unrestrained race towards AGI development, describing it as a “suicide race.”
Musk, who has repeatedly warned of the existential risks associated with AI, was one of more than 30,000 signatories to a letter last year calling for a pause in the development of powerful AI technologies. His growing influence in Silicon Valley, amplified by his prominent backing of Trump’s successful campaign, could pave the way for a more regulation-focused approach to AI under the incoming administration.
Musk has been a vocal advocate for AI safety throughout his career, and his support for initiatives such as the Future of Life Institute—an organisation dedicated to addressing the risks of advanced technologies—further demonstrates his commitment to regulating AI. The institute, which Musk has backed both financially and publicly, works to raise awareness about the potential dangers of AI and promote research into safer AI practices.
Tegmark, whose work at MIT focuses on the long-term implications of AI, believes that Musk could play a crucial role in shaping policy to ensure AI is developed responsibly. “Elon Musk came out and said I’m for it, I want the regulation,” said Tegmark, referring to Musk’s backing of the SB 1047 bill. “I do think it’s not completely implausible he could persuade Trump that AI needs to be controlled.”
Musk’s personal wealth has grown further in the wake of Trump’s victory, and the billionaire seems likely to keep wielding considerable influence over US policy on AI. Whether this influence will lead to concrete regulations remains to be seen, but experts like Tegmark are hopeful that it will result in a more cautious approach to AI’s future.
While stricter AI regulations may be on the horizon, the exact nature of those regulations, and whether they will materialise under a Trump administration, remains uncertain. For now, the conversation continues to revolve around the balance between innovation and safety, with Musk’s influence poised to play a key role in determining how that balance is struck in the coming years.