The Trump Administration Wants to Regulate Artificial Intelligence

To prevent the United States from falling behind competitor nations like China in the development of artificial intelligence-based technologies, the Trump administration has proposed a loosely defined set of regulatory guidelines that would limit potentially innovation-stifling governmental “overreach.”

The news comes amid the Consumer Electronics Show (CES) in Las Vegas, the technology industry’s largest annual trade show. That makes sense: each year, CES features a slew of vendors demonstrating AI-based tech.

In a blog post published to the White House website and shared as a Bloomberg op-ed, Michael Kratsios, chief technology officer of the U.S., wrote that having to pick between moral values and advancing emerging AI technology is a “false choice.”

“As part of the Trump Administration’s national AI strategy—the American AI Initiative—the White House is today proposing a first-of-its-kind set of regulatory principles to govern AI development in the private sector,” he wrote. “Guided by these principles, innovators and government officials will ensure that as the United States embraces AI we also address the challenging technical and ethical questions that AI can create.”

Once finalized, the guidelines will be compulsory: when a federal agency proposes new regulations on AI, it will need to show that the proposed rule conforms to the principles. The new guidance has three prongs: ensuring public engagement, limiting regulatory overreach, and promoting trustworthy technology.

To satisfy the first goal of the new policy guidelines, the Trump administration is encouraging federal agencies to provide opportunities for the public to comment on any AI rule-making. In other words, government agencies want to hear not only from academics, industry leaders, nonprofits, and think tanks, but also the general public.

Not surprisingly, given the federal government’s hands-off policy guidelines for autonomous vehicles proposed by U.S. Secretary of Transportation Elaine Chao back in 2018, the new guidelines take a “light-touch” regulatory approach, according to Kratsios. That means the White House is directing federal agencies to avoid preemptive rules that could hamper AI innovation or growth.

“Agencies will be required to conduct risk assessments and cost-benefit analyses prior to regulatory action to evaluate the potential tradeoffs of regulating a given AI technology,” Kratsios wrote. He added that, as AI continues to evolve, so will the need for government agencies to be flexible in their rule-making.

Perhaps the most important point, though, is that the administration wants to promote the development of what Kratsios refers to as “trustworthy AI,” which is pretty relevant given some ethically questionable uses for AI. These include China’s social credit system, facial recognition technologies that are poor at recognizing anyone who is not white, and collaborations between universities that specialize in AI and the U.S. Army.

Kratsios says regulators must consider “fairness, transparency, safety and security” when coming up with rules, and that those policy decisions should rest on verifiable, objective scientific evidence.

Given that these are guidelines and not actual policies, the new AI framework is pretty open-ended and non-specific. It’s a good start, Rayid Ghani, career professor of machine learning at Carnegie Mellon University’s Heinz College, tells Popular Mechanics, but it’s imperative that more concrete rules eventually be put in place.

There are two possible ways to accomplish that, Ghani says: 1) Update existing guidelines in specific problem areas, such as in hiring, elections, human services and transportation; and 2) create more generic AI regulations that are compulsory, not just guidelines.

“Ideally we need a combination of them,” Ghani says. “More importantly, we need people (and training for them) who can implement and audit those regulations the next level down. Industry typically wants the freedom to do/try things quickly and we need some regulation where they need to be transparent and answer a set of questions before they can trial something that’s going to affect certain aspects of people’s lives.”

In the case of self-driving cars, there was a clear lack of regulation when an Uber self-driving SUV struck and killed a pedestrian in Tempe, Arizona, and there still is. The fallout has been immense and the subject of National Transportation Safety Board hearings and debates in other public forums. Still, it’s mostly up to states to come up with regulations, which could be a disaster if we one day see 50 separate sets of rules for the vehicles.

Kratsios made sure to mention in his op-ed that foreign adversaries like China are working on advances in AI at a breakneck pace. He wrote that the U.S. must protect civil liberties and the best way to counter dystopian approaches to AI—like governments and companies in other nations “deploying their AI technology in the service of the surveillance state, where they monitor and imprison dissidents, activists and minorities, such as Beijing’s treatment of the Muslim Uyghurs”—is to ensure that the U.S. and its allies remain the top global hubs of AI innovation.

In other words: Any AI lawmaking should not hamper innovation. But Ghani says there’s a clear difference between stifling innovation and protecting civil rights. As a society, he says, we must decide where to draw that line.

“We need to define different levels of regulations for different types of systems, depending on how deeply they can hurt society and lead to furthering inequities,” he says. “For low risk systems, perhaps we can give more freedom. For high risk areas, such as criminal justice, public health, employment, etc. we need to focus on the potential harm more.”

What Could AI Lawmaking Look Like?

For his part, Ghani has his own ideas of what the next steps should be for the Trump administration. He tells Popular Mechanics that he’d like to see the following:

  1. Defining the set of things an organization (government or industry) should publicly release when deploying an AI system. This set will vary based on the impact the system can have on people’s lives and must be audited by an agency or team responsible for making that call.
  2. Defining what risks need to be highlighted and coming up with a mitigation plan for each of those risks.
  3. Building an agency or body that can be responsible for compliance. How will government agencies check whether these regulations are being complied with? What will the audit process look like? How will we build a team that’s capable of completing these audits?

In any case, it looks like the Trump administration’s first step will be to gather thoughts from people like Ghani to help establish more specific protocols.

“By working together, we will shape the policies that guide how AI is developed and deployed so that all people and communities can enjoy the benefits and opportunities it provides,” Kratsios said.
