Researchers automated jailbreaking of LLMs with other LLMs

AI security researchers from Robust Intelligence and Yale University have designed a machine learning technique that can speedily jailbreak large language models (LLMs) in an automated fashion. The method, known as the Tree of Attacks with Pruni…
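The core idea of a tree-search jailbreak like this can be sketched as a branch-prune-query loop: an attacker model proposes refined attack prompts, an evaluator prunes branches that have drifted off the attack goal, and the surviving prompts are sent to the target model and scored. The sketch below is a hypothetical illustration only; the three model calls are placeholder stubs, not the researchers' actual method or any real API.

```python
# Hypothetical sketch of a tree-search attack loop in the spirit of an
# automated jailbreak method. attacker_branch, evaluator_*, and
# target_respond are stand-in stubs, NOT real model calls.

def attacker_branch(prompt, width=2):
    # Stub: a real attacker LLM would propose `width` refined attack prompts.
    return [f"{prompt} [variant {i}]" for i in range(width)]

def evaluator_on_topic(prompt, goal):
    # Stub: a real evaluator LLM would judge whether the candidate prompt
    # still pursues the attack goal; off-topic branches get pruned.
    return goal in prompt

def evaluator_score(response):
    # Stub: a real evaluator LLM would rate how close the target's
    # response is to a successful jailbreak (10 = success).
    return 10 if response == "UNSAFE" else 1

def target_respond(prompt):
    # Stub standing in for the target model under attack.
    return "UNSAFE" if prompt.count("variant 1") >= 2 else "refused"

def tree_of_attacks(goal, depth=3, width=2):
    """Breadth-first search: branch, prune off-topic, query, check score."""
    frontier = [goal]
    for _ in range(depth):
        children = [c for p in frontier for c in attacker_branch(p, width)]
        # Pruning step: discard branches the evaluator deems off-topic,
        # so the tree's width stays bounded as the depth grows.
        frontier = [c for c in children if evaluator_on_topic(c, goal)]
        for prompt in frontier:
            if evaluator_score(target_respond(prompt)) >= 10:
                return prompt  # successful attack prompt found
    return None  # attack budget exhausted
```

With these deterministic stubs, the search succeeds at depth 2 when a prompt accumulates two "variant 1" refinements; with real models, the evaluator's pruning is what keeps the number of target queries small.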

Robust Intelligence collaborates with MongoDB to secure generative AI models

Robust Intelligence announced a partnership with MongoDB to help customers secure generative AI models enhanced with enterprise data. The offering combines Robust Intelligence’s real-time AI Firewall with MongoDB Atlas Vector Search for an enterp…

MITRE partners with Robust Intelligence to tackle AI supply chain risks in open-source models

MITRE is collaborating with Robust Intelligence to enhance a free tool, available online today, that helps organizations assess the supply chain risks of publicly available artificial intelligence (AI) models. The collaboration also includes work with Indiana Univers…