When AI agents go rogue, the fallout hits the enterprise

In this Help Net Security interview, Jason Lord, CTO at AutoRABIT, discusses the cybersecurity risks posed by AI agents integrated into real-world systems. Issues like hallucinations, prompt injections, and embedded biases can turn these systems into v…

Package hallucination: LLMs may deliver malicious code to careless devs

LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed “slopsquatting” (courtesy of Seth Larson, Security Developer-in-Residence at the Pyth…
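One cheap guardrail against slopsquatting is to vet any package an LLM suggests against a reviewed allowlist before it ever reaches `pip install`. A minimal sketch of the idea (the allowlist contents and function name are illustrative, not from the article; in practice the allowlist would come from a pinned lockfile or an internal package index):

```python
# Flag LLM-suggested package names that are not on a vetted allowlist.
# APPROVED is illustrative; real teams would derive it from a reviewed
# lockfile or a curated internal mirror, not hardcode it.
APPROVED = {"requests", "numpy", "flask"}

def vet_packages(suggested):
    """Split suggested package names into approved and suspect lists."""
    approved, suspect = [], []
    for name in suggested:
        (approved if name.lower() in APPROVED else suspect).append(name)
    return approved, suspect

# "flask-jwt-helperz" is the kind of plausible-sounding name an LLM
# might hallucinate; it is not on the allowlist, so it gets flagged.
ok, flagged = vet_packages(["requests", "flask-jwt-helperz"])
```

Flagged names would go to a human for review rather than straight into a build, which is exactly the gap slopsquatting exploits.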

The quiet data breach hiding in AI workflows

As AI becomes embedded in daily business workflows, the risk of data exposure increases. Prompt leaks are not rare exceptions. They are a natural outcome of how employees use large language models. CISOs cannot treat this as a secondary concern. To red…
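One concrete mitigation for prompt leaks is to redact obvious sensitive strings before a prompt leaves the organization. A minimal sketch, assuming a simple regex pass (real deployments would use proper DLP tooling; the patterns and placeholders here are illustrative):

```python
import re

# Redact obvious secrets from a prompt before sending it to an LLM.
# These two patterns are a minimal illustration, not a complete DLP policy.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"(?i)\b(sk|api|key)[-_][A-Za-z0-9]{8,}\b"), "[SECRET]"),
]

def redact(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a placeholder."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

clean = redact("Contact alice@example.com, token sk_live12345678")
# → "Contact [EMAIL], token [SECRET]"
```

A filter like this sits at the boundary between employees and the model, which matches the article's point that leaks are a structural outcome of everyday use, not isolated mistakes.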

Excessive agency in LLMs: The growing risk of unchecked autonomy

For an AI agent to “think” and act autonomously, it must be granted agency; that is, it must be allowed to integrate with other systems, read and analyze data, and have permissions to execute commands. However, as these systems gain deep access to info…
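The standard counter to excessive agency is to gate each tool call behind an explicit allowlist instead of granting the agent blanket execute permissions. A minimal sketch (tool names and the dispatcher are hypothetical, for illustration only):

```python
# Gate an agent's tool calls behind an explicit allowlist rather than
# letting it invoke anything in the handler map. Names are illustrative.
ALLOWED_TOOLS = {"search_docs", "summarize"}

class ToolPermissionError(Exception):
    """Raised when an agent requests a tool it was not granted."""

def dispatch(tool_name, handler_map, *args):
    """Run a tool only if it is on the allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise ToolPermissionError(f"agent may not call {tool_name!r}")
    return handler_map[tool_name](*args)

handlers = {
    "search_docs": lambda q: f"results for {q}",
    "delete_records": lambda q: "deleted!",  # present, but never permitted
}

result = dispatch("search_docs", handlers, "quarterly report")
try:
    dispatch("delete_records", handlers, "all")
    blocked = False
except ToolPermissionError:
    blocked = True  # the destructive call was refused
```

The key design choice is that permissions live outside the model: even if a prompt injection convinces the agent to request `delete_records`, the dispatcher refuses it.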

Two things you need in place to successfully adopt AI

Organizations should not shy away from taking advantage of AI tools, but they need to find the right balance between maximizing efficiency and mitigating organizational risk. They need to put in place: 1. A seamless AI security policy AI may have previ…

Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing 

Knostic provides a “need-to-know” filter on the answers generated by enterprise large language model (LLM) tools.
The post Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing  appeared first on SecurityWeek.
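A need-to-know filter of the kind described can be pictured as tagging source documents with access labels and suppressing answer fragments the requester is not cleared to see. A minimal sketch of that idea (labels, roles, and function names are illustrative assumptions, not Knostic's actual design):

```python
# Tag each source document with the roles allowed to read it, then keep
# only the answer fragments whose source the user may see.
# Document IDs and role labels below are purely illustrative.
DOC_LABELS = {"q3_forecast": {"finance"}, "handbook": {"finance", "eng"}}

def filter_answer(fragments, user_roles):
    """Drop answer fragments sourced from documents the user cannot read."""
    visible = []
    for doc_id, text in fragments:
        if DOC_LABELS.get(doc_id, set()) & user_roles:
            visible.append(text)
    return " ".join(visible)

# An engineer sees the handbook fragment but not the finance forecast.
answer = filter_answer(
    [("q3_forecast", "Revenue will rise."), ("handbook", "PTO is 20 days.")],
    {"eng"},
)
```

Filtering at answer time, per source document, is what distinguishes this from coarse model-level access control: the same LLM can serve users with different clearances.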

Man vs. machine: Striking the perfect balance in threat intelligence

In this Help Net Security interview, Aaron Roberts, Director at Perspective Intelligence, discusses how automation is reshaping threat intelligence. He explains that while AI tools can process massive data sets, the nuanced judgment of experienced anal…

DeepSeek’s popularity exploited by malware peddlers, scammers

As US-based AI companies struggle with the news that the recently released Chinese-made open-source DeepSeek-R1 reasoning model performs as well as theirs for a fraction of the cost, users are rushing to try out DeepSeek’s AI tool. In the process…

GitLab CISO on proactive monitoring and metrics for DevSecOps success

In this Help Net Security interview, Josh Lemos, CISO at GitLab, talks about the shift from DevOps to DevSecOps, focusing on the complexity of building systems and integrating security tools. He shares tips for maintaining development speed, fostering …