Calculations Show It’ll Be Impossible To Control a Super-Intelligent AI

schwit1 shares a report from ScienceAlert: [S]cientists have just delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as "cause no harm to humans" can't be set if we don't understand the kinds of scenarios an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working at a level beyond the scope of our programmers, we can no longer set limits.

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved, while we can know the answer for some specific programs, it is logically impossible to devise a method that decides it for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once. Any program written to stop the AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or it may not; it's mathematically impossible for us to be absolutely sure either way, which means the AI is not containable.

The alternative to teaching the AI some ethics and telling it not to destroy the world, something no algorithm can be absolutely certain of doing, the researchers say, is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.
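The halting-problem argument above can be illustrated with a toy sketch. Here programs are modeled as Python generators that yield once per "step"; this representation, and the names `halts_within`, `halting_program`, and `looping_program`, are my own illustration, not anything from the paper. The point is that the only thing we can actually decide is *bounded* halting ("did it finish within N steps?"); Turing's theorem says no general procedure can extend this to an unconditional yes-or-no answer for every possible program.

```python
def halts_within(program, steps):
    """Run a generator-based 'program' for at most `steps` steps.

    Returns True if the program finishes (halts) within the budget,
    False if the budget runs out -- which tells us nothing about
    whether it would eventually halt given more time.
    """
    it = program()
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True  # the program ran to completion: it halts
    return False  # inconclusive: "has not halted yet" is all we know

def halting_program():
    # Does a little work, then finishes.
    for _ in range(3):
        yield

def looping_program():
    # Loops forever, one step at a time.
    while True:
        yield
```

Usage: `halts_within(halting_program, 100)` returns `True`, while `halts_within(looping_program, 100)` returns `False`, but that `False` means only "not within 100 steps". No choice of budget turns this checker into a true halting decider, and Turing's diagonalization shows no other algorithm can be one either.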
The new study rejects this idea too: restricting the super-intelligence would limit its reach, and if we're not going to use it to solve problems beyond the scope of humans, why create it at all? If we do push ahead with artificial intelligence, we might not even recognize when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.

Read more of this story at Slashdot.

Source:
https://science.slashdot.org/story/21/01/15/2128235/calculations-show-itll-be-impossible-to-control-a-super-intelligent-ai?utm_source=rss1.0mainlinkanon&utm_medium=feed