The Secretary of Commerce has been tasked with defining the AI models that are sufficiently dangerous to qualify for these requirements. As it stands, experts don’t know how to do this, says Paul Scharre, executive vice president and director of studies at the Center for a New American Security, a military-affairs think tank.
In the meantime, the requirements will apply to models trained using an amount of computational power above a set threshold of 100 million billion billion (10^26) operations. No AI model has yet been trained with this much computing power. OpenAI’s GPT-4, the most capable publicly available AI model, is estimated by the research organization Epoch to have been trained with roughly one-fifth of that amount. However, the computing power used to train AI models has been doubling every six months for the last decade, according to Epoch.
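A rough back-of-the-envelope sketch (not from the article) shows how those two figures combine: if a frontier model sits at about one-fifth of the 10^26-operation threshold and training compute doubles every six months, the gap closes in a little over a year. The specific numbers below are assumptions taken from the estimates quoted above.

```python
import math

# Assumed figures, per the estimates quoted above (not official values):
THRESHOLD_OPS = 1e26            # reporting threshold: 10^26 operations
GPT4_OPS = THRESHOLD_OPS / 5    # Epoch's rough estimate for GPT-4
DOUBLING_MONTHS = 6             # Epoch's observed doubling time

# Doublings needed to close a 5x gap: 2**n = 5  ->  n = log2(5)
doublings = math.log2(THRESHOLD_OPS / GPT4_OPS)
months = doublings * DOUBLING_MONTHS

print(f"{doublings:.2f} doublings, ~{months:.1f} months")
# -> 2.32 doublings, ~13.9 months
```

Under these assumptions, a model crossing the threshold would plausibly appear within about 14 months of a GPT-4-scale training run, which matches the administration's stated aim of capturing the next generation rather than current models.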
A Biden Administration official said that the threshold was set such that current models wouldn’t be captured but the next generation of state-of-the-art models likely would be, according to Scharre, who also attended the briefing.
Computational power is a “crude proxy” for the thing policymakers are really concerned about—the model’s capabilities—says Scharre. But Kaushik points out that setting a compute threshold could create an incentive for AI companies to develop models that achieve similar performance while keeping computational power under the threshold, particularly if the reporting requirements threaten to compromise trade secrets or intellectual property.