OpenAI Open-Sources 'Privacy Filter': a 1.5B-Parameter Model That Strips Personal Data From Text
OpenAI has released Privacy Filter under Apache 2.0: a compact 1.5-billion-parameter model that detects and redacts eight categories of PII, including names, addresses, phone numbers, and API keys. The model runs locally on a laptop, processes 128K-token contexts in a single pass, and is designed as a pre-processing layer before sensitive text is fed to larger AI models.

D.O.T.S AI Newsroom
AI News Desk
OpenAI has released Privacy Filter, an open-source model designed to detect and redact personally identifiable information from text before it is processed by downstream AI systems. The model is compact, with 1.5 billion parameters total and only 50 million active per inference request, and runs locally on a laptop or directly in a browser without any cloud dependency. Privacy Filter is available under the Apache 2.0 license on both GitHub and Hugging Face, with commercial use explicitly permitted. The release addresses a practical problem that has limited enterprise adoption of AI tools: many organizations cannot send raw documents, emails, or customer records to cloud AI APIs because those texts contain PII subject to GDPR, HIPAA, CCPA, or other data protection requirements.
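For teams that want to try it, local inference follows the standard Hugging Face pattern. The sketch below uses the transformers token-classification pipeline; the model id "openai/privacy-filter" is an assumption for illustration, so check the actual model card for the published identifier and recommended settings.

```python
# A minimal sketch of running Privacy Filter locally as a pre-processing
# step. The model id below is an assumption, not a confirmed identifier.
from transformers import pipeline

pii_detector = pipeline(
    "token-classification",
    model="openai/privacy-filter",   # hypothetical model id
    aggregation_strategy="simple",   # merge sub-word tokens into whole spans
)

text = "Contact Jane Doe at jane.doe@example.com or +1-555-0100."
for span in pii_detector(text):
    # Each detection carries a label, a confidence score, and character offsets.
    print(span["entity_group"], repr(text[span["start"]:span["end"]]), round(span["score"], 3))
```

Everything here runs on the local machine; no text leaves the device.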
What It Detects
Privacy Filter is trained to identify eight categories of sensitive content: names, postal addresses, email addresses, phone numbers, URLs, dates, account numbers (including credit cards and Social Security numbers), and "other secrets" such as passwords, API keys, and authentication tokens. The model makes a single pass through the input text and labels each span that belongs to one of these categories; it does not generate new text, and it does not attempt to understand the semantic meaning of what it reads. This architecture makes it fast and predictable: a 128,000-token context window means it can process a long document or a substantial chat history in one operation, and the labeling approach produces structured output that downstream systems can act on programmatically instead of parsing free text.
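Because the output is character-offset spans rather than rewritten text, the redaction step itself is a few lines of downstream code. A minimal sketch, assuming the Hugging Face span format (start/end offsets plus a label); the label names are illustrative, not Privacy Filter's confirmed tag set:

```python
# A sketch of programmatic redaction driven by labeled spans.
def redact(text: str, spans: list[dict]) -> str:
    # Replace right-to-left so earlier character offsets stay valid
    # as substitutions change the string's length.
    for span in sorted(spans, key=lambda s: s["start"], reverse=True):
        text = text[:span["start"]] + f"[{span['entity_group']}]" + text[span["end"]:]
    return text

text = "Contact Jane Doe at jane.doe@example.com."
spans = [
    {"entity_group": "NAME", "start": 8, "end": 16},
    {"entity_group": "EMAIL", "start": 20, "end": 40},
]
print(redact(text, spans))  # -> Contact [NAME] at [EMAIL].
```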
Tunable Sensitivity
Users can adjust the model's sensitivity threshold to control the tradeoff between recall and precision. High-recall settings catch more PII but produce more false positives, flagging non-sensitive text as personal data. Conservative settings reduce false positives but risk missing some PII. For regulated industries, OpenAI recommends starting with high-recall settings and using human review to catch errors at the boundary. The model also supports fine-tuning on domain-specific datasets, which matters for industries with specialized PII patterns: healthcare records contain different sensitive-data structures than financial-services documents do, and a fine-tuned variant will outperform the base model in domain-specific deployments.
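One straightforward way to implement such a dial is a cutoff over the model's per-span confidence scores; whether Privacy Filter exposes it exactly this way is an assumption here, and the threshold values below are illustrative, to be calibrated against a labeled sample of your own documents.

```python
# A sketch of the recall/precision dial as a confidence cutoff.
HIGH_RECALL = 0.2    # keep almost everything; more false positives
CONSERVATIVE = 0.8   # keep only confident detections; may miss some PII

def filter_spans(spans: list[dict], threshold: float) -> list[dict]:
    """Drop detections scoring below the chosen sensitivity threshold."""
    return [s for s in spans if s["score"] >= threshold]

# A regulated deployment would start with filter_spans(spans, HIGH_RECALL)
# and route the extra false positives to human review.
```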
Honest About Limitations
OpenAI's documentation for Privacy Filter is notably candid about what the model cannot do. It does not provide a legal guarantee of anonymization or GDPR compliance; it is a technical tool, not a legal instrument. Known failure modes include reduced accuracy on rare or regionally uncommon names, false positives for well-known public figures and organizations, and degraded performance on non-English text and non-Latin scripts. For sensitive deployments in healthcare, law, finance, or human resources, OpenAI explicitly recommends maintaining human review alongside automated redaction. The label categories are also fixed at inference time: organizations that need custom PII categories (e.g., proprietary product codes or internal ID formats) must fine-tune the model rather than adjusting its behavior through prompting. Despite these caveats, Privacy Filter fills a real gap in the open-source tooling ecosystem: a lightweight, locally runnable PII redaction model with a permissive license is genuinely useful infrastructure for organizations building privacy-preserving AI pipelines.
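For teams wiring that human review into a pipeline, one simple pattern is to triage detections by confidence. A minimal sketch; the thresholds and bucket names are assumptions for illustration, not OpenAI's recommended values:

```python
# A sketch of a human-in-the-loop triage step: auto-redact confident
# detections, queue borderline ones for a reviewer. Threshold values
# are illustrative assumptions.
AUTO_REDACT = 0.9
NEEDS_REVIEW = 0.4

def triage(spans: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split detections into auto-redact and human-review buckets."""
    auto = [s for s in spans if s["score"] >= AUTO_REDACT]
    review = [s for s in spans if NEEDS_REVIEW <= s["score"] < AUTO_REDACT]
    return auto, review
```

Spans approved by a reviewer can then feed the same redaction step as the automatic detections, keeping the pipeline's output format uniform.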