Missy Cummings, a professor of engineering and computer science, in a driving simulator at George Mason University in Fairfax, Virginia on Feb. 10, 2023. Photograph: Lexey Swall/The New York Times/Redux

Politicians Need to Learn How AI Works—Fast

Tech regulation has often disappointed. Automation expert Missy Cummings hopes a course to teach policymakers about artificial intelligence can help.

This week, US senators heard alarming testimony suggesting that unchecked AI could steal jobs, spread misinformation, and generally “go quite wrong,” in the words of OpenAI CEO Sam Altman (whatever that means). He and several lawmakers agreed that the US may now need a new federal agency to oversee the development of the technology. But the hearing also saw agreement that no one wants to kneecap a technology that could potentially increase productivity and give the US a lead in a new technological revolution.

Worried senators might consider talking to Missy Cummings, a onetime fighter pilot and engineering and robotics professor at George Mason University. She studies the use of AI and automation in safety-critical systems, including cars and aircraft, and earlier this year returned to academia after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla’s Autopilot and self-driving cars. Cummings’ perspective might help politicians and policymakers trying to weigh the promise of much-hyped new algorithms against the risks that lie ahead.

Cummings told me this week that she left the NHTSA with a sense of profound concern about the autonomous systems that are being deployed by many car manufacturers. “We're in serious trouble in terms of the capabilities of these cars,” Cummings says. “They're not even close to being as capable as people think they are.”

I was struck by the parallels with ChatGPT and similar chatbots stoking excitement and concern about the power of AI. Automated driving features have been around for longer, but like large language models, they rely on machine learning algorithms that are inherently unpredictable, hard to inspect, and require a different kind of engineering thinking from that of the past.

Also like ChatGPT, Tesla’s Autopilot and other autonomous driving projects have been elevated by absurd amounts of hype. Heady dreams of a transportation revolution led automakers, startups, and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. The regulatory environment around autonomous cars in the mid-2010s was permissive, with government officials loath to apply the brakes to a technology that promised to be worth billions for US businesses.

After billions spent on the technology, self-driving cars are still beset by problems, and some auto companies have pulled the plug on big autonomy projects. Meanwhile, as Cummings says, the public is often unclear about how capable semiautonomous technology really is.

In one sense, it’s good to see governments and lawmakers being quick to suggest regulation of generative AI. The current panic is centered on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even if they still have significant shortcomings, including confidently fabricating facts.

At this week’s Senate hearing, Altman of OpenAI, which gave us ChatGPT, went so far as to call for a licensing system to control whether companies like his are allowed to work on advanced AI. “My worst fear is that we—the field, the technology, the industry—cause significant harm to the world,” Altman said during the hearing.

Drawing on her NHTSA experience, Cummings warns there is a significant risk of regulatory capture if major AI and tech companies have too much influence over the direction that regulators take. “I heard it said more than once inside of NHTSA that they needed to proceed extremely cautiously because they didn't want to move markets,” she says. “That is probably the best sign that regulatory capture has happened. If a market moves, so be it.”

Cummings also says that politicians and regulators badly need to get up to speed on developments in AI and become more familiar with the technology and how it works. To that end she says she is looking at ways to train people to investigate accidents involving AI, and is creating a course at George Mason aimed at policymakers and regulators who want to be better informed about the technology.

Some venerable AI experts believe the latest AI systems may not only replace office workers and spew disinformation but also pose an existential risk if they get too clever and willful. Cummings is more worried about the risk that, once again, we will put too much faith in a technology that isn’t fully baked.

“People everywhere are going to start writing code with generative tools,” Cummings says, taking just one example of the likely consequences of the current boom. “There's going to be a whole new career field for computer scientists to just root out bad code,” she says. “I will have job security for life.”