Today, I asked a psychologist how you can design an IQ test to assess someone more intelligent than the designer. He answered: the same way you can design a chess program that the designer cannot beat!
However, as a beginner I'm not sure whether this question can be answered here, but it is interesting to me whether we can write a program that evolves and learns by itself such that a human (even the programmer) cannot predict its behaviour. I hope the answer is no; otherwise, there may one day be viruses or worms with unpredictable behaviour, controlling human society!
Artificial intelligence agents behave within some programmed space (a chess-playing agent is inside the chess-playing space).
Agents cannot leave the programmed space. A chess-playing agent is unlikely to take over the world any time soon. It is predictable in this sense.
The behaviour within this space is somewhat predictable, since it is ultimately based on well-defined mathematical equations (these equations are usually quite complex, so prediction is not easy, but it is possible in principle). However, there is usually some randomness involved, which is obviously not predictable unless you know the random seed.
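To make this split concrete, here is a minimal sketch (the function and variable names are my own, purely illustrative) of a toy agent whose policy is a deterministic argmax over move scores, with randomness only in tie-breaking. Given the seed, its behaviour is fully reproducible; without the seed, an observer cannot predict which tied move it picks.

```python
import random

def pick_move(legal_moves, scores, rng):
    """Toy agent policy: deterministic argmax over move scores,
    with random tie-breaking supplied by an explicit RNG."""
    best = max(scores[m] for m in legal_moves)
    candidates = [m for m in legal_moves if scores[m] == best]
    return rng.choice(candidates)

moves = ["e4", "d4", "c4"]
scores = {"e4": 1.0, "d4": 1.0, "c4": 0.5}  # two tied best moves

# With a known seed, the agent's behaviour is fully reproducible:
a = pick_move(moves, scores, random.Random(42))
b = pick_move(moves, scores, random.Random(42))
assert a == b  # same seed -> same choice, every time

# Without access to the seed, an outside observer cannot predict
# which of the two tied moves will be chosen -- that residual
# randomness is the genuinely unpredictable part.
```

The deterministic part (ruling out `c4`) is predictable by anyone who knows the scoring equations; only the tie-break is not.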
Note that unpredictability is not the same as "intelligence". Researchers have been trying to make AI truly intelligent for a long time, with (arguably) slow progress.
EDIT:
Note that some agents can have the entire world as their programmed space. This doesn't enforce many boundaries.
By 'programmed space' I don't mean what was programmed into the agent as much as what the agent is programmed to observe or do. If an agent can only see a chess board and only make chess moves, how will it ever become more than a chess-playing agent?
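Here is a minimal sketch of that idea (the class names are hypothetical, not from any real library): the agent's "programmed space" is fixed by its interface, because it can only receive chess positions and only emit chess moves. Whatever happens inside it, it has no channel through which to act on anything else.

```python
from dataclasses import dataclass

@dataclass
class ChessPosition:
    fen: str  # board state in standard FEN notation

@dataclass
class ChessMove:
    uci: str  # a move in UCI notation, e.g. "e2e4"

class ChessAgent:
    """The agent's entire interface with the world: observe a
    position, return a move. Its outputs are confined to ChessMove,
    so it cannot become more than a chess-playing agent."""

    def act(self, observation: ChessPosition) -> ChessMove:
        # Placeholder policy; a real agent would search or learn here.
        return ChessMove(uci="e2e4")

agent = ChessAgent()
move = agent.act(ChessPosition(fen="startpos"))
assert isinstance(move, ChessMove)
```

However clever the logic inside `act` becomes, the type of its inputs and outputs is the boundary of its programmed space.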
True evolution may allow agents to extend their programmed space, but I'll have to think about whether that is actually possible.