The US AI Safety Institute stands on shaky ground | TechCrunch
One of the few U.S. government offices dedicated to assessing AI safety is in danger of being dismantled unless Congress chooses to authorize it.
The U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, was created in November 2023 as part of President Joe Biden’s AI Executive Order. The AISI operates within NIST, an agency of the Commerce Department that develops guidance for the deployment of various categories of technologies.
But while the AISI has a budget, a director, and a research partnership with its counterpart in the U.K., the U.K. AI Safety Institute, it could be wound down with a simple repeal of Biden’s executive order.
“If another president were to come into office and repeal the AI Executive Order, they would dismantle the AISI,” Chris MacKenzie, senior director of communications at Americans for Responsible Innovation, an AI lobby group, told TechCrunch. “And [Donald] Trump has promised to repeal the AI Executive Order. So Congress formally authorizing the AI Safety Institute would ensure its continued existence regardless of who’s in the White House.”
Beyond assuring the AISI’s future, authorizing the office could also lead to more stable, long-term funding for its initiatives from Congress. The AISI currently has a budget of around $10 million — a relatively small amount considering the concentration of major AI labs in Silicon Valley.
“Appropriators in Congress tend to give higher budgeting priority to entities formally authorized by Congress,” MacKenzie said, “with the understanding that those entities have broad buy-in and are here for the long run, rather than just a single administration’s one-off priority.”
In a letter today, a coalition of over 60 companies, nonprofits, and universities called on Congress to enact legislation codifying the AISI before the end of the year. Among the signatories are OpenAI and Anthropic, both of which have signed agreements with the AISI to collaborate on AI research, testing, and evaluation.
The Senate and House have each advanced bipartisan bills to authorize the activities of the AISI. But the proposals have faced some opposition from conservative lawmakers, including Sen. Ted Cruz (R-Texas), who’s called for the Senate version of the AISI bill to pull back on diversity programs.
Granted, the AISI is a relatively weak organization from an enforcement perspective. Its standards are voluntary. But think tanks and industry coalitions — as well as tech giants like Microsoft, Google, Amazon, and IBM, all of which signed the aforementioned letter — see the AISI as the most promising avenue to AI benchmarks that can form the basis of future policy.
There’s also concern among some interest groups that allowing the AISI to fold would risk ceding AI leadership to foreign nations. During an AI summit in Seoul in May 2024, international leaders agreed to form a network of AI Safety Institutes comprising agencies from Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union in addition to the U.K. and U.S.
“As other governments quickly move ahead, members of Congress can ensure that the U.S. does not get left behind in the global AI race by permanently authorizing the AI Safety Institute and providing certainty for its critical role in advancing U.S. AI innovation and adoption,” Jason Oxman, president and CEO of the Information Technology Industry Council, an IT industry trade association, said in a statement. “We urge Congress to heed today’s call to action from industry, civil society, and academia to pass necessary bipartisan legislation before the end of the year.”