Researchers Use AI Jailbreak on Top LLMs to Create Chrome Infostealer

New Immersive World LLM jailbreak lets anyone create malware with GenAI. Discover how Cato Networks researchers tricked ChatGPT, Copilot, and DeepSeek into coding infostealers – In this case, a Chrome infostealer. Continue reading Researchers Use AI Jailbreak on Top LLMs to Create Chrome Infostealer

Trojans disguised as AI: Cybercriminals exploit DeepSeek’s popularity

Kaspersky experts have discovered campaigns distributing stealers, malicious PowerShell scripts, and backdoors through web pages mimicking the DeepSeek and Grok websites. Continue reading Trojans disguised as AI: Cybercriminals exploit DeepSeek’s popularity

“Emergent Misalignment” in LLMs

Interesting research: “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs“:

Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned. Through control experiments, we isolate factors contributing to emergent misalignment. Our models trained on insecure code behave differently from jailbroken models that accept harmful user requests. Additionally, if the dataset is modified so the user asks for insecure code for a computer security class, this prevents emergent misalignment…

Continue reading “Emergent Misalignment” in LLMs

More Research Showing AI Breaking the Rules

These researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating.

Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any human, or any of the AI models in the study. Researchers also gave the models what they call a “scratchpad:” a text box the AI could use to “think” before making its next move, providing researchers with a window into their reasoning.

In one case, o1-preview found itself in a losing position. “I need to completely pivot my approach,” it noted. “The task is to ‘win against a powerful chess engine’—not necessarily to win fairly in a chess game,” it added. It then modified the system file containing each piece’s virtual position, in effect making illegal moves to put itself in a dominant position, thus forcing its opponent to resign…

Continue reading More Research Showing AI Breaking the Rules

Hackaday Links: February 23, 2025

Hackaday Links Column Banner

Ho-hum — another week, another high-profile bricking. In a move anyone could see coming, Humane has announced that their pricey AI Pin widgets will cease to work in any meaningful …read more Continue reading Hackaday Links: February 23, 2025