UK, US, EU Authorities Gather in San Francisco to Discuss AI Safety

AI safety institutes from the U.S., U.K., E.U., Australia, Canada, France, Japan, Kenya, the Republic of Korea, and Singapore officially formed the “International Network of AI Safety Institutes.”

OpenAI and Anthropic Sign Deals With U.S. AI Safety Institute, Handing Over Frontier Models For Testing

OpenAI and Anthropic will give the U.S. government early access to their frontier models for safety evaluation.

UK AI Safety Summit: Global Powers Make ‘Landmark’ Pledge to AI Safety

Global and tech leaders gathered in the U.K. for an influential summit dedicated to AI regulation and safety. Here’s what you need to know about the Bletchley Declaration, testing of new AI models and more.

OpenAI, Microsoft, Google, Anthropic Launch Frontier Model Forum to Promote Safe AI

The forum’s goal is to establish “guardrails” to mitigate the risk of AI. Learn about the group’s four core objectives, as well as the criteria for membership.