Researchers develop malicious AI ‘worm’ targeting generative AI systems

Researchers have created a new, never-seen-before kind of malware they call the “Morris II” worm, which uses popular AI services to spread itself, infect new systems and steal data. The name references the original Morris computer worm that wreaked havoc on the internet in 1988. The worm demonstrates the potential dangers of AI security threats and […]

The post Researchers develop malicious AI ‘worm’ targeting generative AI systems appeared first on Security Intelligence.

Continue reading Researchers develop malicious AI ‘worm’ targeting generative AI systems

How AI can be hacked with prompt injection: NIST report

The National Institute of Standards and Technology (NIST) closely observes the AI lifecycle, and for good reason. As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks generative AI. In Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST defines […]

The post How AI can be hacked with prompt injection: NIST report appeared first on Security Intelligence.

Continue reading How AI can be hacked with prompt injection: NIST report