Biden Issues Executive Order to Create A.I. Safeguards


President Biden signed a far-reaching executive order on artificial intelligence on Monday, requiring companies to report to the federal government on the risk that their systems could help countries or terrorists build weapons of mass destruction. The order also seeks to lessen the dangers of “deep fakes” that could swing elections or swindle consumers.

“Deep fakes use A.I.-generated audio and video to smear reputations, spread fake news and commit fraud,” Mr. Biden said at the signing of the order at the White House. He described his concern that fraudsters could take three seconds of a person’s voice and manipulate its content, turning an innocent comment into something more sinister that would quickly go viral.

“I’ve watched one of me,” Mr. Biden said, referring to an experiment his staff showed him to make the point that a well-constructed artificial intelligence system could convincingly create a presidential statement that never happened — and thus touch off a political or national security crisis. “I said, ‘When the hell did I say that?’”

The order is an effort by the president to demonstrate that the United States, considered the leading power in fast-moving artificial intelligence technology, will also take the lead in its regulation. Already, Europe is moving ahead with rules of its own, and Vice President Kamala Harris is traveling to Britain this week to represent the United States at an international conference organized by that country’s prime minister, Rishi Sunak.

“We have a moral, ethical and societal duty to make sure that A.I. is adopted and advanced in a way that protects the public from potential harm,” Ms. Harris said at the White House. She added, “We intend that the actions we are taking domestically will serve as a model for international action.”

But the order issued by Mr. Biden, the result of more than a year of work by several government departments, is limited in its scope. While Mr. Biden has broad powers to regulate how the federal government uses artificial intelligence, he is less able to reach into the private sector. Though he said that his order “represents bold action,” he acknowledged that “we still need Congress to act.”

Still, Mr. Biden made it clear that he intended the order to be the first step in a new era of regulation for the United States, as it seeks to put guardrails on a global technology that offers great promise — diagnosing diseases, predicting floods and other effects of climate change, improving safety in the air and at sea — but also carries significant dangers.

“One thing is clear: To realize the promise of A.I. and avoid the risks, we need to govern this technology,” Mr. Biden said. “There’s no other way around it, in my view.”

The order centers on safety and security mandates, but it also contains provisions to encourage the development of A.I. in the United States, including attracting foreign talent to American companies and laboratories. Mr. Biden acknowledged that another element of his strategy is to slow China’s advances. He specifically referred to new regulations — bolstered two weeks ago — to deny Beijing access to the most powerful computer chips needed to build so-called large language models, the A.I. systems trained on vast troves of text and other data.

While businesses often chafe at new federal regulation, executives at companies like Microsoft, Google, OpenAI and Meta have all said that they fully expect the United States to regulate the technology — and some executives, surprisingly, have seemed a bit relieved. Companies say they are worried about corporate liability if the more powerful systems they use are abused. And they are hoping that putting a government imprimatur on some of their A.I.-based products may alleviate concerns among consumers.

The chief executives of Microsoft, Google, OpenAI and another A.I. start-up, Anthropic, met with Ms. Harris in May, and in July they and three other companies voluntarily committed to safety and security testing of their systems.

“We like the focus on innovation, the steps the U.S. government is taking to build an A.I. work force and the capability for smaller businesses to get the compute power they need to develop their own models,” Robert L. Strayer, an executive vice president at the Information Technology Industry Council, a trade group that represents large technology companies, said on Monday.

At the same time, several companies have warned against mandates for federal agencies to step up policing anticompetitive conduct and consumer harms. The U.S. Chamber of Commerce raised concerns on Monday about new directives on consumer protection, saying that the Federal Trade Commission and the Consumer Financial Protection Bureau “should not view this as a license to do as they please.”

The executive order’s security mandates on companies were created by invoking a Korean War-era law, the Defense Production Act, which the federal government uses in what Mr. Biden called “the most urgent moments.” The order requires that companies deploying the most advanced A.I. tools test their systems to ensure they cannot be used to produce biological or nuclear weapons. The companies must report their findings from those tests to the federal government — though the findings do not have to be made public.

The order also requires cloud service providers to report foreign customers to the federal government, and it recommends watermarking photos, videos and audio developed by A.I. tools. Watermarking helps trace the origin of content online and is used to fight deep fakes and the manipulated images and text that spread disinformation.
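
The order does not prescribe any particular watermarking technique. As a rough illustration only, the short Python sketch below hides a provenance tag in an image’s pixels and reads it back; it assumes the Pillow imaging library, the helper names and tag string are hypothetical, and production A.I. watermarks are far more sophisticated.

```python
# A toy illustration of invisible watermarking, assuming the Pillow library
# (pip install Pillow). It hides a short provenance tag in the lowest bit of
# each pixel's red channel and recovers it later. Real A.I.-content watermarks
# must survive cropping, compression and editing, but the goal is the same:
# tracing a file back to its origin.
from PIL import Image

def embed_tag(image: Image.Image, tag: str) -> Image.Image:
    """Write a NUL-terminated tag into the red channel's low bits."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8") + b"\x00")
    out = image.convert("RGB")
    pixels = out.load()
    width, height = out.size
    if len(bits) > width * height:
        raise ValueError("image too small to hold this tag")
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the lowest red bit
    return out

def extract_tag(image: Image.Image) -> str:
    """Read low bits until a NUL byte, then decode the hidden tag."""
    pixels = image.convert("RGB").load()
    width, height = image.size
    data, byte = bytearray(), 0
    for i in range(width * height):
        x, y = i % width, i // width
        byte = (byte << 1) | (pixels[x, y][0] & 1)
        if i % 8 == 7:
            if byte == 0:  # terminator reached
                break
            data.append(byte)
            byte = 0
    return data.decode("utf-8", errors="replace")

if __name__ == "__main__":
    canvas = Image.new("RGB", (64, 64), color=(200, 180, 160))
    marked = embed_tag(canvas, "generated-by:example-model")  # hypothetical tag
    print(extract_tag(marked))  # -> generated-by:example-model
```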

Mr. Biden, trying to make watermarking sound useful to Americans, said, “When your loved ones hear your voice on a phone, they’ll know it’s really you.”

In a speech on Wednesday at the U.S. Embassy in London, Ms. Harris will announce new initiatives that build on the executive order, according to the White House. And at the British summit the next day, she will urge global leaders to consider potentially calamitous risks of A.I. in the future as well as present dangers with bias, discrimination and misinformation.

Many of the directives in the order will be difficult to carry out, said Sarah Kreps, a professor at the Tech Policy Institute at Cornell University. It calls for the rapid hiring of A.I. experts in government, but federal agencies will be challenged to match salaries offered in the private sector. The order urges privacy legislation, though more than a dozen bills have stalled in the divided Congress, she said.

“It’s calling for a lot of action that’s not likely to receive a response,” Ms. Kreps said.


