Yes, there is a continuous tug-of-war between functional AI and adversarial AI; the entire security community is grappling with these dueling forms of machine intelligence. Still, adversarial AI has to probe a vast space of behaviors, discovering exploits through massive experimentation. AI doesn’t just ‘automatically know’ how to manipulate other AI. It has to learn, and that learning is very slow.
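To make the "massive experimentation" point concrete, here is a minimal sketch of a black-box attack loop: the attacker can only query the victim model and blindly perturb inputs until a decision flips. The target_model and its scoring rule are hypothetical stand-ins, not any particular system; the point is simply how many queries trial-and-error search burns through.

```python
# Minimal sketch: a black-box attacker learns to manipulate a model only by
# querying it over and over. `target_model` is a hypothetical stand-in.
import random

def target_model(x):
    # Hypothetical victim: scores an input; the attacker cannot see inside it.
    return sum(v * w for v, w in zip(x, [0.3, -0.7, 0.5, 0.2]))

def random_search_attack(x, step=0.05, queries=10_000):
    """Try to flip the model's decision by blind, repeated perturbation."""
    best = list(x)
    best_score = target_model(best)
    for _ in range(queries):                      # massive experimentation
        candidate = [v + random.uniform(-step, step) for v in best]
        score = target_model(candidate)
        if score < best_score:                    # keep any perturbation that helps
            best, best_score = candidate, score
        if best_score < 0:                        # decision flipped: exploit found
            return best
    return None                                   # query budget exhausted

print(random_search_attack([1.0, 0.2, 0.8, 0.4]))
```

Even this toy version needs thousands of queries against a four-number input; against a real system the search space, and therefore the learning time, is vastly larger.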
Further, manipulation would require substituting new, corrupted data for the original data. An adversary would have to gain direct access to that data stream; they would have to hack into the computer hardware on which the AI runs, as an act of espionage. And such a hack would be noticeable, because it would create a discrepancy with the publicly available data upon which the AI operates (a sketch of such a discrepancy check follows below). So, much as the Russians infiltrated our power grid, an adversary could break in and momentarily corrupt an AI, while ongoing, persistent corruption would be far more difficult, trickier than campaign financing for a corrupt politician. I trust the AI more than the Koch brothers. Which would you prefer?
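As promised, here is a minimal sketch of why persistent corruption is hard to hide, assuming the AI's input feed is supposed to mirror a publicly available dataset: a periodic integrity check exposes any substitution. The feed contents and record format here are hypothetical placeholders.

```python
# Minimal sketch: comparing an ingested data batch against its public copy.
# Any substitution by an adversary changes the fingerprint and gets flagged.
import hashlib

def fingerprint(records):
    """Hash a batch of records so it can be compared against a public copy."""
    digest = hashlib.sha256()
    for record in records:
        digest.update(record.encode("utf-8"))
    return digest.hexdigest()

def detect_tampering(ingested_batch, public_batch):
    """Flag a discrepancy between what the AI ingested and the public data."""
    return fingerprint(ingested_batch) != fingerprint(public_batch)

public = ["load=512MW", "freq=60.00Hz", "load=518MW"]
tampered = ["load=512MW", "freq=59.10Hz", "load=518MW"]   # adversary's substitution

print(detect_tampering(public, public))    # False: feeds agree
print(detect_tampering(tampered, public))  # True: corruption is noticeable
```

A one-off substitution might slip through between checks, but keeping a corrupted feed consistent with the public record indefinitely is the hard part, which is exactly the momentary-versus-persistent distinction above.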