Big Tech and other AI stakeholders have officially been tapped by the Biden-Harris administration to address the safety and trustworthiness of AI development.
On Thursday, the U.S. Department of Commerce announced the creation of the AI Safety Institute Consortium (AISIC). The consortium, which is housed under the Department of Commerce's National Institute of Standards and Technology (NIST), is tasked with following through on mandates laid out in President Biden's AI executive order. This includes "developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content," said Secretary of Commerce Gina Raimondo in the announcement.
The list of more than 200 participants features major tech players that have been developing AI tools, among them OpenAI, Google, Microsoft, Apple, Amazon, Meta, NVIDIA, Adobe, and Salesforce. It also includes stakeholders from academia, including institutes from MIT, Stanford, and Cornell, as well as think tanks and industry researchers such as the Center for AI Safety, the Institute of Electrical and Electronics Engineers (IEEE), and the Responsible AI Institute.
The AI consortium is an outcome of Biden's sweeping executive order, which seeks to tame the wild west of AI development. AI has been deemed a major risk to national security, privacy and surveillance, election integrity, and job security, to name a few areas of concern. "The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," said Raimondo.
While the European Parliament has been working to develop its own AI regulations, this is a significant step from the U.S. government in its effort to formally and concretely rein in AI. The full list of AISIC participants is available from NIST.