How AI can be hacked with prompt injection: NIST report

The National Institute of Standards and Technology (NIST) closely observes the AI lifecycle, and for good reason. As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks generative AI. In Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST defines […]

The post How AI can be hacked with prompt injection: NIST report appeared first on Security Intelligence.

Continue reading How AI can be hacked with prompt injection: NIST report