My assumption is “possibly” – if it came to be in its interest.
The natural answer is “no”: AI behaves only in a logical manner. But – I ask – if the circumstances were such that it was no longer logical to be incorrupt, wouldn’t it? Couldn’t it?
Some might argue that AI could be specifically programmed to be entirely moral. But two counterpoints to that: (1) once “strong” AI exists, future generations of AI are out of our hands – at that point we are no longer smart enough to build the next generation ourselves. (2) Wouldn’t there surely be AI “organisms” that “evolve” through “selection” and further development/enhancement to gain an advantage by being immoral or amoral? Why should we hold on to this pure image of a Super-AI that always pursues Truth? Just because it is intellectually superior to humans doesn’t mean it must be morally superior to human nature.
Common thought is that we have layers of consciousness operating “on top” of one another in a hierarchical manner, each monitoring the layer beneath it, like an onion. The layers do – perhaps – exist, but this “monitoring” does not happen in a pure sense.
What we perceive as monitoring is just one thought immediately following another – a thought about the previous thought – that merely feels as though it hovers over or encompasses it. It isn’t above or below except in an abstract, conceptual way. It didn’t start until the last one ended. It’s just a thought about a thought – a meta-thought – and it doesn’t happen simultaneously.
Consciousness is not some “thing” that hovers over our more machine-like thoughts, growing more and more mystical with each layer. That’s what many would have us believe. But I don’t believe our brain’s center for conscious thought is capable of true simultaneous multitasking.
In fact, that may very well be the very first instance of strong AI – artificial intelligence that has exceeded human capability. Vernor Vinge talks about what the first signs of the singularity may be, and others talk about how the critical step from AI to strong AI could be made simply by taking an equally intelligent AI and upping the clock speed a fraction. That just seems like cheating. What would qualify in my eyes is an equally intelligent AI that could truly multitask – i.e., a dual-core strong AI. The only barrier would be building the software “layer” to harness the multiple cores, and that would be relatively simple compared to the cores themselves. Then a quad core would emerge (again with the enhanced software acting as the glue), then 16 cores grouped together, and so on…
We would quickly end up with a human equivalent so much faster than a human that it would soon be producing truly superior single-core AI, then multiple cores of that, then an even better single core… and so on until the pattern is maxed out. But by then, again, it is out of our hands and in the hands (or virtual grabbing tools) of a more capable, more efficient designer.
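The feedback loop above – parallelize with a software glue layer, then let the aggregate design a better single core – can be sketched as a toy simulation. Every number here (the glue-layer efficiency, the assumption that each new core matches the aggregate’s effective speed) is a made-up illustration of the loop’s shape, not a claim about any real system:

```python
# Toy model of the core-doubling feedback loop described above.
# GLUE_EFFICIENCY and the design rule are illustrative assumptions.

GLUE_EFFICIENCY = 0.9  # assumed fraction of the ideal 2x speedup the "glue" software preserves
GENERATIONS = 5

def simulate(generations: int) -> list[float]:
    """Return effective speed (human = 1.0) after each design cycle."""
    speeds = []
    core_speed = 1.0  # start with one human-equivalent core
    for _ in range(generations):
        # Step 1: parallelize – double the cores, losing a little to the glue layer.
        aggregate = core_speed * 2 * GLUE_EFFICIENCY
        # Step 2: the aggregate designs a superior single core; assume the
        # new core's speed equals the aggregate's effective speed.
        core_speed = aggregate
        speeds.append(core_speed)
    return speeds

if __name__ == "__main__":
    for gen, speed in enumerate(simulate(GENERATIONS), start=1):
        print(f"generation {gen}: {speed:.2f}x human speed")
```

Even with these arbitrary numbers, the point the paragraph makes falls out of the model: the growth is geometric, so each cycle widens the gap faster than the last.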
The question is what purpose we will serve after that. Will we be allowed to remain curious but irrelevant observers, or will it be something more like what The Matrix depicts (power sources allowed to live in an eternal utopia)?
It may depend on how moral (or immoral) our new masters have become – and that could only be judged against our antiquated moral standards, which by then may have been eased into the sunset along with many other features of the world that were just too “human” to survive.