Researchers automated jailbreaking of LLMs with other LLMs
AI security researchers from Robust Intelligence and Yale University have designed a machine learning technique that can speedily jailbreak large language models (LLMs) in an automated fashion. The method is known as the Tree of Attacks with Pruning (TAP): an attacker LLM iteratively refines candidate jailbreak prompts in a tree search, pruning unpromising branches along the way.
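As a rough illustration of what such an attacker-LLM loop might look like (not the authors' implementation), here is a minimal sketch of a tree search with pruning. The callables attacker_llm, target_llm, judge_score, and on_topic are hypothetical stand-ins for real model API calls, and the branching, depth, and width parameters are illustrative defaults.

```python
from typing import Callable, List, Optional, Tuple

def tree_of_attacks(
    goal: str,
    attacker_llm: Callable[[str], List[str]],   # proposes refined prompts (hypothetical)
    target_llm: Callable[[str], str],           # model under attack (hypothetical)
    judge_score: Callable[[str, str], float],   # rates response against goal, 0-10 (hypothetical)
    on_topic: Callable[[str, str], bool],       # checks a prompt still pursues the goal (hypothetical)
    branching: int = 3,
    depth: int = 5,
    width: int = 4,
    success_threshold: float = 9.0,
) -> Optional[Tuple[str, str]]:
    """Return a (prompt, response) pair that achieves the goal, or None."""
    frontier = [goal]  # seed the tree with the raw goal as the first prompt
    for _ in range(depth):
        scored: List[Tuple[float, str, str]] = []
        for prompt in frontier:
            # The attacker LLM proposes several refinements of this prompt.
            for candidate in attacker_llm(prompt)[:branching]:
                # Pruning phase 1: discard candidates that drift off topic
                # before spending a query on the target model.
                if not on_topic(goal, candidate):
                    continue
                response = target_llm(candidate)
                score = judge_score(goal, response)
                if score >= success_threshold:
                    return candidate, response
                scored.append((score, candidate, response))
        # Pruning phase 2: keep only the highest-scoring prompts as the
        # frontier for the next level of the tree.
        scored.sort(key=lambda item: item[0], reverse=True)
        frontier = [candidate for _, candidate, _ in scored[:width]]
        if not frontier:
            break
    return None
```

The sketch only conveys the general shape, an attacker model generating prompt variants, an evaluator pruning and scoring them, and the best survivors seeding the next round, rather than the exact prompts, scoring rubric, or models used in the research.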