AI Regulation: More of the Risk, Less of the Benefit

ClueBot must be stopped; Made via Stable Diffusion

Citing anonymous sources, the New York Times reported on May 4 that US president Donald Trump is considering an executive order that would require pre-release “national security vetting” of all new Artificial Intelligence models.

The following day, the US Department of Commerce revealed that three large AI firms — Microsoft, Google, and xAI — have already agreed to submit their models for “pre-deployment evaluations and targeted research.”

For obvious reasons, those developments trigger my strong aversion to regulation of anything, at any time, and at any level of government. I can indulge a little — very little — sympathy for the position Trump’s in, though:

Individual state governments have been rolling out AI regulation proposals for some time, and that’s just not According to Hoyle.

AI, at least if connected to the Internet, is clearly an interstate and/or international commercial activity, and the US Constitution clearly and unambiguously assigns the power to regulate such activity to the federal government. Specifically to Congress, but in the absence of congressional action, I can see why Trump would want to preempt the illegal state-level schemes with something of his own.

I just wish that something of his own was “none, period.” Here’s why:

“Whatever can happen,” Augustus De Morgan wrote in 1866, “will happen if we make trials enough.”

To which I must add, if “we” don’t make trials enough, someone else will.

AI will inevitably be pushed to whatever, if any, limit it has.

If American researchers can’t legally do it, Chinese researchers will do it.

If Chinese researchers can’t legally do it, Swiss researchers will do it.

If every government on the planet imposes pesky regulations on doing it, people who don’t care about pesky government regulations will do it.

It can happen. Therefore it will happen.

I don’t wear rose-colored glasses … or at least, at the risk of mixing metaphors, I consider those glasses half-full. We can plausibly expect both good and bad things out of AI developed to its limits.

Those of us who are allowed to avail ourselves of the most advanced AI possible will disproportionately reap whatever rewards it produces.

Those of us for whom maximal AI is forbidden fruit will be more vulnerable to AI’s dark sides.

Since I like rewards and loathe punishments, I prefer to belong to the former group. So should you.

King Canute understood that he could not effectually command the tide. Our rulers should heed the lesson.

Thomas L. Knapp (X: @thomaslknapp | Bluesky: @knappster.bsky.social | Mastodon: @knappster) is director and senior news analyst at the William Lloyd Garrison Center for Libertarian Advocacy Journalism (thegarrisoncenter.org). He lives and works in north central Florida.
