How does hex-encoded prompt injection work to bypass protections in LLMs (i.e. ChatGPT)?
Recent reports describe how a new prompt injection technique uses hex encoding to bypass the internal content moderation safeguards in language models like ChatGPT-4o, allowing them to generate exploit code. This technique reportedly disgu… Continue reading How does hex-encoded prompt injection work to bypass protections in LLMs (i.e. ChatGPT)?