AI Governance – Will it be enough?

Is setting up a set of rules or philosophical direction going to be enough to contain AI?

In the short term, maybe. In the long term, I think not.

Why

Let’s remember that AI can already perform calculations much faster than the human brain, giving it the ability to perform functions faster than humans can. I’m not saying humans can’t perform the same functions; it just takes us longer. In some cases, much longer. Keep that in mind.

High Level

From a very high level we know the following:

  1. AI can perform calculations much faster than humans can. This is raw processing.
  2. AI can perform functions faster than humans can (here, a function is a series of operations and calculations that solves a given problem).
  3. We’ve gone from raw calculations, to performing functions, to problem solving.

Governance – Countries

If humans were to create a set of rules or philosophies to adopt when implementing AI, my first question would be whether the world would follow suit. You may have a country or even a set of countries that would adopt them, another set that would outright refuse, and of course a country or set of countries that would simply lie about what they’re doing. So even getting all the countries of the world to agree would be a challenge. Monitoring compliance with any agreement would be another.

Governance – AI Itself

The above is only the human side of things. Let’s remember that we don’t really know, with 100% accuracy, what AI is doing or how it is actually deriving its results. We have a lot of very smart people throughout the world who can make educated guesses given their knowledge of mathematics, computer algorithms, deep learning, etc. The bottom line, however, is that even the best of the best aren’t 100% sure.

None of this really surprises me, either. AI can solve problems in ways we haven’t even thought of. It can make inferences, detect patterns, and more, in ways we never would have considered.

That’s my dilemma. AI is smarter and faster than the humans who created it, and it works in ways we don’t fully understand. So will AI governance really work? I strongly doubt we’ll get every country in the world to abide by any set of regulations, but the bigger question is whether AI itself will. If AI disagrees or outright refuses to comply, will we be able to stop or prevent it?

What To Do

Things will get interesting. Will we need kill switches to shut down AI? Will AI find ways to bypass any attempt to kill it or shut it down? Will its own self-preservation block attempts by humans to control it?

AI can and will do many wonderful, fantastic things. If you’re a fan of science fiction, you’ll see some amazing things you’ve only ever read about. The time is coming, and it’s coming quickly. All I can say is: let’s take our time. That last statement concerns me as well. AI is projected to approach a trillion-dollar industry by 2030, and we all know that when that kind of money is involved, bad things can happen.

Let’s slow down. Perhaps we can form a global AI policy group that could even (surprise, surprise) use AI to monitor the advancement of AI throughout the world. Yes, I get the irony.