What the U.S. Can Learn From China About Regulating AI

On Sept. 13, U.S. Senate Majority Leader Chuck Schumer will hold a closed-door AI Insight Forum to inform how Congress should approach regulating artificial intelligence. Among the attendees will be Alphabet CEO Sundar Pichai and Tesla CEO Elon Musk, as well as representatives from U.S. labor and civil society organizations.

But for grounded insights into how to regulate AI, Schumer’s team should look in an unlikely place: China.

Over the past two years, China has enacted some of the world’s earliest and most sophisticated regulations targeting AI. These regulations appear to be at odds with what U.S. officials hope to accomplish. For instance, China’s recent generative AI regulation mandates that companies uphold “core socialist values,” whereas Schumer has called for legislation requiring that U.S. AI systems “align with our democratic values.”

Yet those headline ideological differences blind us to an uncomfortable reality: The United States can actually learn a lot from China’s approach to governing AI. Of course, Washington shouldn’t require that AI systems “adhere to the correct political direction,” as one Chinese regulation mandates. But if we look past the ideology of the rules to the structure behind them and the way China has implemented them, we can glean valuable insights. These lessons on structure and process could prove invaluable to U.S. leaders as they navigate the maze of AI-related issues in the coming years.

The most obvious difference between China’s regulations and the emerging congressional approach is scope. Schumer is leading a push for comprehensive AI legislation that would address the technology’s impact on national security, jobs, misinformation, bias, democratic values, and more. That ambition is commendable, but it is nearly impossible to cram solutions to all of these issues into a single piece of legislation. These problems are only beginning to take shape, and each may require a different approach.

By contrast, the Chinese government has taken a targeted and iterative approach to AI governance. Instead of rushing to pass one law covering all AI applications at once, China has developed regulations that each address a specific concern. This has allowed it to gradually build regulatory and policy know-how with each new regulation it implements. And when initial regulations proved insufficient for a technology as fast-moving as AI, China quickly iterated on them.

The Chinese government began with two regulations targeting AI applications that threatened to undermine a key priority: control over the creation and distribution of online information. Algorithm-driven news apps, including one created by TikTok’s parent company, ByteDance, were eroding the Chinese Communist Party’s ability to prioritize which news stories got put in front of Chinese readers.

So, in 2021, China’s cyberspace regulator rolled out new rules governing the recommendation algorithms used to personalize content for users. The regulator required companies to ensure that recommended content did not violate censorship rules, and it granted Chinese users new rights, such as the ability to disable algorithmic recommendations or delete the tags used to customize their content. The regulations even reinforced the rights of gig workers whose schedules and pay are determined by algorithms, an attempt by Chinese regulators to address public outcry over exploitative labor practices at algorithm-driven food delivery companies.

At the same time, the Chinese government was growing increasingly concerned about the effects of deepfakes. So, in 2022, it imposed a set of rules covering “deep synthesis,” a Chinese term for synthetically generated images, video, audio, and text, or what we today call generative AI. The regulation contained plenty of boilerplate ideological controls, but it also mandated that companies apply digital watermarks and conspicuous labels to synthetically generated content, a policy idea recently pushed by the White House.

However, just five days after China’s deep synthesis regulation was enacted, OpenAI changed the game by releasing ChatGPT. The Chinese regulation technically covered AI-generated text, but it was designed with visual content in mind. Large language models such as ChatGPT presented new issues, so Chinese regulators quickly set to work crafting a new generative AI regulation to address them. They released a draft in April and a finalized version in July. Even the finalized regulation, which took effect in August, is labeled “interim,” leaving room for further iteration as the technology evolves.

That quick turnaround was made easier because China had used the previous two regulations to begin building out its regulatory toolkit for AI. Key among these tools was the algorithm registry, a government database for gathering basic information on algorithmic systems. Companies deploying algorithms in regulated fields must disclose what datasets the algorithms were trained on, whether they use biometric information, and the results of a “security self-assessment” conducted by the company.

The registry was first created for the recommendation algorithm regulation and then reused in the deep synthesis and generative AI regulations. Similarly, the requirement to label AI-generated content first appeared in the deep synthesis regulation and was then carried over into the generative AI regulation.

Along the way, Chinese regulators have been learning from and iterating on these requirements, discovering what they do not know and which disclosures are actually useful to them. It is a learn-by-doing strategy, one that emphasizes getting started on specific issues rather than waiting to craft sweeping regulations.

Schumer and his colleagues could learn a lot from China’s experience. Instead of trying to pass a massive umbrella AI law, Congress should first tackle one or two specific issues, such as the misinformation threat posed by realistic deepfakes.

In crafting that targeted regulation, policymakers can build up their understanding of the technology and of which interventions actually work. They can also begin creating tools, such as technical requirements for watermarking and model audits, that can be reused in future legislation. That will make them more responsive to the AI challenges of tomorrow.

As governments around the globe experiment with AI governance, they have a great opportunity to learn from one another’s experiences. In my conversations with members of China’s AI community, they have consistently asked about recent proposals from the United States and Europe, analyzing those debates and picking out ideas that could be adapted to the Chinese context.

That willingness to learn from a rival can be a major advantage in geopolitics. If policymakers in the United States can manage to do the same, it might just give them a leg up in the competition to shape the future of AI, both at home and abroad.

The post What the U.S. Can Learn From China About Regulating AI appeared first on Foreign Policy.