Levels of international cooperation between countries with the aim of managing dangerous technologies are at a “30-year low,” says Bill Drexel, an associate fellow at military affairs think tank the Center for a New American Security. “Trying to eke out a really meaningful agreement from that baseline with a technology that's still having its risks and advantages determined seems like a super tall order.”
Drexel thinks it may take a serious AI-related incident to generate the political will required to form a substantial agreement between countries. In the meantime, he says, it could be prudent to set up an international body, whether through the U.N. or among a smaller multilateral or bilateral group, that imposes only minimally on its participants and could serve as a foundation for more material cooperation if the political will arises.
International cooperation is “really clunky, slow and generally inefficient,” says Drexel. Instead, it may be possible to “come up with bilateral or more limited multilateral fora to try to govern [advanced AI systems] and even to scale with the expansion of companies that might be able to train frontier models.”
Read the full story and more from TIME.