For more than a century and a half, an organization that has been referred to as “the most important agency you’ve never heard of” has been making technology global.
In its latest iteration, as the International Telecommunication Union (ITU), its global regulations now underpin most technologies we use in our daily lives, setting technical standards that enable televisions, satellites, cellphones, and internet connections to operate seamlessly from Japan to Brazil.
The next big technology may present the organization with its greatest challenge yet. Artificial intelligence systems are being deployed at a dizzying pace around the world, with implications for virtually every industry from education and health care to law enforcement and defense. Governments around the world are scrambling to balance benefits and bogeys, attempting to set guardrails without missing the boat on technological transformation. The ITU, with 193 member states as well as hundreds of companies and organizations, is trying to get a handle on that rowdy conversation.
“Despite the fact that we’re 158 years old, I think that the mission and mandate of the ITU has never been as important as it is today,” said Doreen Bogdan-Martin, the agency’s secretary-general, in a recent interview.
Founded in Paris in 1865 as the International Telegraph Union and tasked with creating a universal standard so telegraph messages could cross borders without being transcribed and retransmitted on each country's system, the ITU went on to play a similar role for later technologies, including the telephone and radio. In 1932, the agency adopted its current name to reflect its ever-expanding remit, folding in the radio governance framework that established maritime distress signals such as S.O.S., and it was brought under the aegis of the United Nations in 1947.
Bogdan-Martin, who took office in January, is the first woman to lead the ITU and only the second American. Getting there took months of campaigning to defeat her opponent, a former Russian telecommunications official who had also worked as an executive at the Chinese technology firm Huawei, in an election widely billed as a battle for the future of the internet, not to mention a key bulwark for the West in the face of an increasingly assertive China and Russia within the U.N. (Bogdan-Martin also took over the ITU leadership from China's Zhao Houlin, who had served for eight years after running unopposed twice.)
“It was intense,” Bogdan-Martin acknowledged. Ultimately, she won with 139 out of 172 votes cast.
Russia and China have been at the forefront of a competing vision for the internet, in which countries have greater control over what their citizens can see online. Both countries already exercise that control at home, and Russia has used the war in Ukraine to further restrict internet access and create a digital iron curtain that inches closer to China’s far more advanced censorship apparatus, the Great Firewall. In a joint statement last February, the two countries said they “believe that any attempts to limit their sovereign right to regulate national segments of the internet and ensure their security are unacceptable,” calling for “greater participation” from the ITU to address global internet governance issues.
“I firmly support a free and open, democratic internet,” Bogdan-Martin said. Those values are key to her biggest priority for the ITU: bringing the internet to the 2.9 billion people worldwide who still haven’t experienced it. “Safe, affordable, trusted, responsible, meaningful connectivity is a global imperative,” she said.
Getting that level of global consensus on how to regulate artificial intelligence may not be as straightforward. Governments around the world have taken a variety of approaches, and not all of them are compatible. The European Union’s AI Act, set for final passage later this year, ranks AI applications by levels of risk and potential harm, while China’s regulations target specific AI applications and require developers to submit information about their algorithms to the government. The United States is further behind on binding legislation, having so far favored light-touch regulation and voluntary frameworks aimed at allowing innovation to flourish.
In recent weeks, however, calls have grown louder for a global AI regulator modeled on the nuclear nonproliferation framework overseen by the International Atomic Energy Agency (IAEA). Proponents of the idea include U.N. Secretary-General António Guterres and OpenAI CEO Sam Altman, whose company’s advanced chatbots have catalyzed much of the hand-wringing around the technology. But some experts argue that comparisons to nuclear weapons don’t quite capture the challenges of artificial intelligence.
“People forget what a harmony there was between the [five permanent members of the Security Council] in the United Nations over the IAEA,” said Robert Trager, international governance lead at the nonprofit Center for the Governance of AI. When it comes to AI regulation these days, those members “don’t have the same degree of harmony of interest, and so that is a challenge.”
Another difference is the far wider application of AI technologies and the potential to transform nearly every aspect of the global economy for better and for worse. “It’s going to change the nature of our interactions on every front. We can’t really approach it and say: ‘Oh, there’s this thing, AI, we’ve now got to figure out how to regulate it, just like we had to regulate automobiles or oil production or whatever,'” said Gillian Hadfield, a professor of law at the University of Toronto who researches AI regulation. “It’s really going to change the way everything works.”
The sheer pace of AI development doesn’t make things any easier for would-be regulators. Less than six months after ChatGPT’s launch caused a seismic shift in the global AI landscape, its maker, OpenAI, released GPT-4, a new version of the software engine powering the chatbot that can incorporate images as well as text. “One of the things we’re seeing with the EU AI Act, for example, is it hasn’t even been passed yet and it’s already struggling to keep up with the state of the technology,” Hadfield said.
But Bogdan-Martin is looking to get the ball rolling. The ITU hosted its sixth annual AI for Good Global Summit last week, which brought together policymakers, experts, industry executives, and robots for a two-day discussion of ways in which AI could help and harm humanity, with a focus on guardrails that mitigate the latter. Proposed solutions from the summit included a global registry for AI applications and a global AI observatory.
“Things are just moving so fast,” Bogdan-Martin said. “Every day, every week we hear new things. But we can’t be complacent. We have to be proactive, and we do have to find ways to tackle the challenges.”
And although total consensus may be hard to achieve, experts say there are some fundamental risks of AI, such as harm to children, that countries will be keen to mitigate regardless of their ideology and that can form something of a baseline.
“No jurisdictions, no states, have an interest in civilian entities doing things that are dangerous to society,” Trager said. “There is this common interest in developing the regime, in figuring out what the best standards are, and so I think there’s a lot of opportunity for collaboration.”