Microsoft President Brad Smith recently emphasized the need for improved risk management and regulation in the field of artificial intelligence (AI). Speaking before US lawmakers in Washington, D.C., Smith urged governments to expedite their actions and corporations to take responsibility in the face of the rapid development of AI technology.
Microsoft has proposed implementing safety measures, such as "safety brakes," for AI systems that control critical infrastructure, as well as establishing a comprehensive legal and regulatory framework for AI. Smith highlighted the potential risks associated with AI, including threats to privacy, automation-induced job losses, and the proliferation of convincing "deepfake" videos that spread scams and disinformation on social media platforms.
Despite Microsoft's own involvement in AI development, including work on specialized chips for OpenAI's ChatGPT, Smith asserted that the company is not evading responsibility. Microsoft has pledged to implement its own safeguards regardless of government regulation.
Smith supported OpenAI CEO Sam Altman's suggestion of licensing AI developers and restricting high-risk AI services and development to licensed data centers. The launch of ChatGPT has prompted calls for stricter oversight of AI, with some organizations advocating a temporary halt to its development.
Earlier, the Future of Life Institute published an open letter signed by influential tech leaders like Elon Musk and Steve Wozniak, urging a "pause" in AI development. Smith's remarks echo growing concerns about the need for proactive risk mitigation and regulatory measures to ensure the responsible and ethical deployment of AI technology.