
Tech Regulation Digest: June 2023
Global Powers Move to Regulate AI

Overview

Global powers are moving to regulate artificial intelligence (AI), with China and Europe making the first moves. US regulators and AI industry leaders have spent the last few months working toward a regulatory framework for the United States, but nothing concrete has been developed. Stakeholders and US lawmakers hope to act quickly so that the US can match its competitors and keep up with the unprecedented pace of change.

Background

On May 16, 2023, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law heard testimony from OpenAI CEO Sam Altman, along with prominent psychologist and AI researcher Gary Marcus and IBM Chief Privacy and Trust Officer Christina Montgomery, about efforts to establish safeguards for AI. The hearing was one of the first public conversations between regulators and the industry about specific AI regulatory measures.

Broadly, the AI experts agreed with members of the Senate on the potential dangers of unchecked generative AI and the need for regulation. “If this technology goes wrong, it can go quite wrong,” The New York Times quoted Altman as saying. “We want to work with the government to prevent that from happening.”

This perspective was also reflected in the conversation among industry leaders at the 2023 Milken Institute Global Conference. During “Governing AI: Ethics, Regulation, and Practical Application,” Kai-Fu Lee, CEO of Asia’s leading AI venture capital firm Sinovation Ventures, noted, “I’m very much pro-regulation. I think we’re at a point where the dangers of not regulating are very serious … Maybe regulation is what it takes for the large technology owners to feel that there’s pressure to fix the problems.”

The US is not the first to consider AI regulation. Although China has not adopted a comprehensive law governing AI, Chinese regulators have already rolled out rules governing deepfake technology and established disclosure requirements, as reported in Axios. Since 2022, Chinese users have had the right to know when they are being offered AI-generated content and the ability to opt out of services that use AI algorithms to recommend content. In April 2023, the Cyberspace Administration of China released draft Measures on the Administration of Generative Artificial Intelligence Services. If adopted as drafted, the regulation would include safeguards such as a requirement to obtain consent before an AI algorithm could use personal content or intellectual property, as well as requirements ensuring that all AI content reflects China’s “core socialist values,” according to the Hong Kong Free Press.

Much, but certainly not all, of what is included in China’s draft measures aligns with what those in the West have called for: stricter protections for intellectual property and privacy, transparency in AI development, and an algorithmic discrimination clause. The European Union’s Artificial Intelligence Act, first proposed in April 2021 and currently being debated by the European Parliament, would establish a classification system to determine the level of risk an AI system poses to individuals. Under this approach, AI used for riskier applications, such as medical devices, would be regulated more strictly (or banned outright if the risk is deemed unacceptable) than AI used for entertainment purposes. All systems considered high risk would be subject to strict oversight of data quality, human involvement in the model, transparency, and general accountability measures.

The debates surrounding the proposals in China and Europe inform the US approach to AI regulation. While much of the recent conversation has been industry-led, in October 2022 the White House released the “Blueprint for an AI Bill of Rights,” the Biden Administration's primary proposal for regulating AI. The AI Bill of Rights would include many of the provisions mentioned above, such as consumer control over how personal data and information are used in AI training, algorithmic discrimination protections, disclosures for AI-automated systems, and safety protocols.

But the AI Bill of Rights would not cover every AI regulatory issue: it does not target specific use cases (precision regulation), establish independent testing labs, or create a new cabinet-level agency, all of which industry leaders have proposed, according to Forbes. In May 2023, Microsoft released its “Blueprint for Governing AI,” which it hopes will serve as a foundational roadmap for US AI regulation. The report urges the US to adopt the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, which Google has also supported, as the basis for safety protocols, and calls for an executive order requiring AI vendors to implement the NIST framework when working with the US government. The document also encourages a licensing regime for highly capable AI models, an idea Altman has echoed.

Why Is This Important?

It is evident that the US feels pressure to keep up with AI innovation and to provide the industry with clear guidelines and the public with robust safeguards. As reported in The New York Times, Senator Richard Blumenthal noted during the subcommittee hearing that the US has not kept up with the pace of change, saying, “Congress failed to meet the moment on social media.” Industry leaders hope to collaborate with regulators to avoid the mistakes of the past and protect consumers from the risks of generative AI. Concerns about the harms of AI range from the spread of misinformation to job displacement to the much more existential fear of an AI matching (or surpassing) human intelligence, a concept known as artificial general intelligence (AGI).

Already, some companies have reported impacts from generative AI models like ChatGPT, with education service Chegg seeing its shares drop 50 percent and IBM reporting that AI could displace 30 percent of its back-office workforce over the next five years, according to Forbes. Of course, while AI will displace certain jobs, the World Economic Forum reports that demand for high-tech roles such as data scientists and machine learning specialists is expected to grow rapidly at the same time.

In late May, Altman, along with two other AI company CEOs and executives at Microsoft and Google, signed a statement issued by the Center for AI Safety comparing the risks of AGI to extinction-level events such as nuclear war. Altman is not alone in this fear, but it does not appear to concern most of those who work in AI development: an April 2023 Stanford University-led survey found that only 36 percent of AI researchers “agreed” or “weakly agreed” that AGI could lead to nuclear-level catastrophe.

Even with those fears, many industry leaders are confident in the potential of the many innovations AI has yet to produce. At the Milken Institute Global Conference, Ashton Kutcher, an actor and investor who recently raised a $240 million venture capital fund for AI investment, expressed significant optimism about the potential for AI in several fields. Focusing particularly on removing barriers to access in the legal and medical fields, he noted during “Head in the Clouds: Embracing the Potential and Promise of Artificial Intelligence,” “What we’re about to see is a reinvention of skilled labor markets. What’s really promising and exciting is the opportunity that brings to people en masse.” There is also excitement about the potential in industries like financial services, where AI can automate tasks such as fraud detection and even provide personalized, relevant financial advice to people who previously could not access personal financial support.

What Happens Next

In May 2023, Senator Michael Bennet introduced a bill that builds on previous legislation to establish a federal agency dedicated to regulating AI and other digital services, a move supported by OpenAI and Microsoft. Because such an agency's mandate would overlap with existing work at regulators like the Federal Trade Commission, the Department of Justice, and the Consumer Financial Protection Bureau, there is debate over whether a new agency would simply create more bureaucracy, as reported by CNN.

In June 2023, Senate Majority Leader Chuck Schumer announced his SAFE Innovation Framework, a potential legislative roadmap for a comprehensive bill. Schumer hopes this bipartisan effort can establish regulatory guardrails, align AI with “democratic values,” and spur US-led innovation. US lawmakers and regulators hope to keep pace with international competitors and rapid innovation but have so far taken no concrete action on a comprehensive regulatory framework.

Check our Tech Regulation Tracker for important developments in AI regulation moving forward.