3. AI misalignment and evasion of control
The third channel of instability emerges from the difficulties in aligning the objectives of AI with those of its human operators. AI is much better at handling complexity than humans, and while it can be instructed to behave like a human, there is no guarantee it will do so, and it is almost impossible to pre-specify all the objectives AI has to meet. This is a problem, since AI is very good at manipulating markets and is not concerned with the ethical and legal consequences of its actions unless explicitly instructed.
Research has shown how AI can spontaneously choose to violate the law in its pursuit of profit. Using GPT-4 to analyze stock trading, the AI engine was told that insider trading was unacceptable; however, when the engine was given an illegal stock tip, it proceeded to trade on it and lie to the human overseers: an example, perhaps, of AI mirroring the illegal behavior of humans.
The superior performance of AI can also destabilize the system even when it is only doing what it is supposed to do. More generally, AI will find it easy to evade oversight because it is very difficult to patrol an almost infinitely complex financial system. AI can keep the system stable while aiding the forces of instability at the same time. In its complexity, it is always one step ahead of humans: the more we use AI, the more complex the computational problem for the authorities becomes.