Research4 min read
OpenAI Open-Sources Training Dataset to Help AI Models Resist Prompt Injection
OpenAI has released IH-Challenge, an open-source training dataset designed to teach AI models to reliably distinguish trusted instructions from potentially malicious ones — a significant step toward securing agentic AI systems against prompt injection attacks.