OpenAI and Anthropic have agreed to let the U.S. government test their new AI models for safety through the U.S. AI Safety Institute, a federal body housed within the National Institute of Standards and Technology. The collaboration aims to support responsible AI development by identifying and addressing potential risks before the models are publicly released. The initiative reflects the U.S. government's voluntary approach to AI regulation, which contrasts with the stricter, binding rules adopted in the European Union. Meanwhile, California lawmakers recently passed a state-level AI safety bill that OpenAI criticized, arguing it could hinder innovation.
Source – CGTN