Executive Summary
DeepSeek AI (“DeepSeek”) caused a stir in the AI community and roiled the markets by training its large language model (LLM), DeepSeek-R1, with fewer resources and in less time than major competitors spent on comparable models such as OpenAI's o1. This surprising accomplishment was not the product of a single breakthrough. Instead, the story behind DeepSeek’s success blends strategic hardware procurement, in-house compute expertise, and incremental software and training innovations that combined to achieve unexpected efficiency. Implications for policy include:
Export Control Policy
DeepSeek used its hardware efficiently, but training DeepSeek-R1 still required highly sophisticated chips, which the company obtained by exploiting gaps and lags in export control policy. Policymakers should consider alterations to export controls that address these vulnerabilities.
Innovation Policy
External constraints, some imposed by the United States, incentivized innovation in domains that better-resourced market players neglected. Policymakers should pursue policies that encourage multiple paths to innovation (and discourage herd mentalities) among domestic developers, startups, and technical thought leaders. Doing so will, as Vice President Vance put it in a recent speech at the Artificial Intelligence Action Summit in Paris, "ensure that American AI technology continues to be the gold standard worldwide and we are the partner of choice for others."
Industry Trajectory
DeepSeek’s efficiency breakthrough raises questions about the future balance between algorithmic optimization and brute-force scaling. If efficiency gains outpace hardware-driven gains, AI development may democratize, fostering broader participation. If scaling remains paramount, market concentration will persist, intensifying international competition. Policymakers should monitor how these dynamics shape industry structure and strategic advantage.