OpenAI, the Microsoft-backed artificial intelligence (AI) company, has introduced a safety framework for its most advanced models, outlined in a plan published on its website. The company commits to deploying its latest technology only if it is deemed safe in specific areas such as cybersecurity and nuclear threats. To strengthen safety oversight, OpenAI is establishing an advisory group to review safety reports and forward them to executives and the board; while executives will make deployment decisions, the board retains the authority to reverse them.
The move comes in response to growing concern about the risks posed by powerful AI models such as OpenAI's ChatGPT. While the technology is capable of impressive feats like generating poetry and essays, it has raised worries that it could spread disinformation and manipulate human behavior. In April, a group of AI industry leaders and experts called for a six-month pause in the development of systems more powerful than OpenAI's GPT-4, citing the societal risks posed by such advances.
Source – CGTN