
Biden’s AI Executive Order Fails to Acknowledge the Troubling Nature of the Technology



It’s natural to seek reassurance against scary monsters this time of year.

I’d like to imagine that’s why President Joe Biden chose Monday to sign a new executive order on artificial intelligence.

Though the order does comfort me about some of the lesser AI imps and sprites, I fear it does not address the true AI demon.

AI is no imaginary phantom.

It is here, it is already causing trouble, and it has the potential to create vastly more.

And like the scariest movie villains, it is quickly growing in power.

But (like the most nuanced movie villains) there is still hope of turning it into a force for good.

Let me be clear: Though I’m leaning on horror-movie tropes, I am not saying current AI is any literal kind of villain intending to do humans harm.

Right now, AI is just a powerful tool that — like most powerful tools — is “dual use”: It can help bring about good or evil ends, deliberately or accidentally.

Judging by the White House fact sheet, the executive order is comforting when it comes to regulating tool AI.

I’m delighted to see, for example, it aims to address AI’s impact on misinformation, algorithmic bias, education and wealth inequality with reasonable first steps.

(And I can’t remember the last time the executive branch encouraged citizen privacy and encryption!)

What haunts me, though, is the day AI stops being a tool and starts becoming an agent: roughly, something capable of acting in the world towards its own ends.

In the robot movies, such agentic AI comes to hate or resent us, and we then beat it in a fight.

Both tropes are dangerously misleading.

First, AI doesn’t have to hate us; it just has to have goals slightly misaligned from our own — and given its fundamentally different nature, this is almost inevitable.

As Stuart Russell points out, we don’t hate gorillas, but when we want something the gorillas don’t want, it’s the gorillas who inevitably lose.

Second, it is much more probable that AI would quickly blow past human-level intelligence and reach “superintelligence” than that it would stop at exactly the human level.

And we would have no more chance of winning a war against a superintelligence than gorillas would have in a “war” against us.

It is tragically hard to summarize the arguments that AI is a real existential risk, and tragically easy to dismiss mere strawman versions of them, as I’ve written in these pages.

Suffice it to say that many smart people (do you count Stephen Hawking as smart?) have been deeply concerned.

And sadly, I think the executive order does not go far enough to address existential-level risk from AI.

The relevant provisions require AI labs to share their safety-test results while the National Institute of Standards and Technology and others develop new safety tests.

This is probably a good start, but many of us who research this area think we have no good idea, even in theory, how to test for agentic AI with misaligned goals.

Here’s a taste of the problem: Suppose intelligent aliens are heading our way from another solar system, and we get a chance to interview them first.

How could we check whether they’re dangerous?

If they are considerably smarter than we are, they’ll be able to anticipate any test we can think of and rig the results.

Even if we were allowed a molecular-level view of everything and everyone on their ships, we should not be sure we could spot trouble.

As in the classic horror films, we could confidently waltz into our own worst nightmare.

It’s worth adding that some of the order’s measures look positively counterproductive through the lens of existential risk.

You can imagine why training AI to look for exploitable software bugs would unnerve me.

And pouring more resources into AI development just fans the flames, especially when combined with talk of “American leadership,” which can goad other countries into an AI arms race that is more likely to lead everyone into reckless development.

I flaked on my Halloween costume this year, but it occurs to me it’s not too late to do one of those phoned-in, conceptual costumes: If I go as “misaligned agentic AI,” I can be very scary just by looking like a totally ordinary, harmless fellow who passes all safety tests.

Steve Petersen is a professor of philosophy at Niagara University.


