A strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI) was released, describing the department's approach to AI safety.

The National Institute of Standards and Technology (NIST) launched the AISI, building on NIST's long-standing work on AI. In addition to releasing the strategic vision, U.S. Secretary of Commerce Gina Raimondo shared the department's plans to work with a global scientific network for AI safety through meaningful engagement with AI Safety Institutes and other government-backed scientific offices, and to convene the institutes later this year in the San Francisco area, where the AISI has established a presence.

The Strategic Vision document describes the AISI's philosophy, mission, and strategic goals. The vision rests on two core principles: first, that beneficial AI depends on AI safety; second, that AI safety depends on science. Guided by these principles, the AISI aims to address key challenges, including a lack of standardized metrics for frontier AI, underdeveloped testing and validation methods, and limited national and global coordination on AI safety issues.

The AISI will focus on three key goals:

  1. Advance the science of AI safety, making the vision possible
  2. Articulate, demonstrate, and disseminate the practices of AI safety, making the vision actionable
  3. Support institutions, communities, and coordination around AI safety, making the vision sustainable

To achieve these goals, the AISI plans, among other activities, to test advanced models and systems to assess potential and emerging risks; develop guidelines on evaluations, risk mitigations, and other topics; and perform and coordinate technical research.

Schneider Bold

The U.S. AI Safety Institute will work closely with diverse AI industry stakeholders, civil society members, and international partners to achieve these objectives.

