AI Regulations Should Go Through Congress, Not Lina Khan’s FTC or Industry Groups

Artificial Intelligence, or AI (it's two letters, as Vice President Kamala Harris likes to say), is here to stay, and its effects on our lives and economy are being felt, and will be felt, in ways we can't yet predict. That is why any regulations governing the use of AI need to go through Congress, not through federal agencies like the FTC or industry groups.

One industry already acutely aware of AI's potential effects is entertainment, where the technology is a major issue in two current strikes. Writers who don't want their scripts replaced by AI-generated ones (though, really, would we be able to tell the difference in 90 percent of scripts?) and actors who don't want studios creating AI images in their likeness raise reasonable alarms. Yet the strikes don't seem to be slowing the studios' pace of adoption; they're hiring AI Project Managers at insanely high salaries (Netflix is offering $900,000).

We've also seen issues with AI in politics, with deepfake videos deployed by the two leading candidates on the GOP side and endless AI-generated memes being shared. This raises concerns about people or campaigns using AI to publish false information about opponents while avoiding responsibility, though publishing false information in political campaigns is hardly a new phenomenon.

Of course, the usual suspects on the left see this as yet another opportunity for massive government regulation. In July it was reported that the Federal Trade Commission had opened an investigation into ChatGPT maker OpenAI; the Washington Post published the agency's 20-page Civil Investigative Demand letter, which highlights the extremely broad view Chair Lina Khan takes of the FTC's scope. As James Czerniawski wrote at the New York Post:

Whether it’s attempting to redesign consent decrees with Meta, block mergers by Microsoft, or launching overly broad privacy investigations into Twitter, the FTC’s actions undermine the institution’s credibility. They also call into question whether these moves are really in the public’s best interests — or motivated by a partisan scheme to scapegoat tech giants.

It's clear to anyone who has listened to Khan's testimony before Congress over the last two years, or read the agency's communications, that the motivation is partisan. It's couched as privacy and "consumer protection," though:

In the case of OpenAI, the FTC has two claims: one surrounding data scraping, another around publishing false information about people. Though really, the folks we're talking about here are public figures, since ChatGPT doesn't have information on ordinary individuals.

Regarding the former, it is already legal to scrape publicly available information on the Internet, which is primarily what ChatGPT does. The platform — at least for the moment — appears uninterested in pirating data belonging to ordinary users, but rather in acquiring knowledge needed to become a better resource for them.

It is very clearly not within the FTC’s scope to regulate publishing false information about anyone; there are a multitude of legal avenues available for people who feel they’ve been harmed by the publication of false information, whether it was generated by AI or not (and there are already lawsuits winding their way through the courts on the matter).

If someone creates content using ChatGPT and posts it to their own website, to Twitter, or anywhere else, it seems logical that the user, not ChatGPT, is responsible for that content if it's false or defamatory. ChatGPT already warns that the content it generates might not be accurate, placing the onus on the user to fact-check it.

To its credit, OpenAI isn’t hiding the ball when it comes to ChatGPT’s flaws. On its site, ChatGPT reminds users that it might “produce inaccurate information about people, places, or facts.”

For non-paying users who prompt the app for information about more recent events, the app makes clear that its knowledge cutoff is in 2021. Users who accept whatever ChatGPT spits out as fact are not only misusing the tool; they're ignoring the app's clearly stated limitations.

Some have argued that AI-created works qualify for legal immunity under Section 230 of the Communications Decency Act, and that any legislation denying those protections would hamstring the industry. But simply rolling AI into Section 230 won't keep regulators like Lina Khan from pursuing a regulatory scheme, or the White House from issuing executive orders to regulate the industry; those regulations are sure to be just as harmful and more difficult to change.

The best way to deal with the issues generative AI brings to society and industry is new legislation: not rolling the industry into a law already badly in need of reform, not letting lefty ideologue federal regulators have their way with it, and not letting industry consortiums claim they're self-regulating through communal "best practices" guidelines.

By pursuing entirely new legislation on AI, the American people can see what is happening, and there can be a process of proposals and amendments. One bipartisan proposal before Congress right now, sponsored by Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT), would end Section 230 immunity for generative AI:

The law would open the door for lawsuits to proceed against social media companies for claims based on emerging generative AI technology, including fabricated but strikingly realistic “deepfake” photos and videos of real people.

That’s a great start.

Sen. Gary Peters (D-MI) has introduced three AI bills this session:

His bills focus exclusively on the federal government, setting rules for AI training, transparency and how agencies buy AI-driven systems. Though narrower than the plans of Schumer and others, they also face less friction and uncertainty — and may offer the simplest way for Congress to shape the emerging industry, particularly when it’s not clear what other leverage Washington has.

One thing that's definitely unwise at this point is any kind of omnibus AI regulatory bill. The industry is too new, the unknowns too numerous, and the potential unintended consequences of both the technology and any regulation too great to go full steam ahead.