As users, we sometimes accidentally paste secrets into AI chats: API keys, OAuth tokens, database passwords, private keys, client secrets. To address this, I built a skill called Sensitive Data Guard together with Claude. It activates proactively whenever there is any chance that sensitive data could surface in a conversation.
How Sensitive Data Guard Works
The skill activates on any of the following triggers:
- Explicit pastes — the user directly shares a credential value in the message
- Code and config snippets — files containing fields like `api_key`, `password`, `token`, `secret`, `private_key`, or `certificate`
- Keyword mentions — words like “secret”, “token”, “key”, “password”, “credential”, or “passphrase” appear anywhere in the conversation
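The trigger logic above can be sketched as a simple pattern check. This is an illustrative sketch, not the skill's actual implementation: the function name and the assignment-style regex for config fields are assumptions, and it covers the field-name and keyword triggers but not detection of raw pasted values.

```python
import re

# Field names and keywords taken from the trigger list above.
CREDENTIAL_FIELDS = ["api_key", "password", "token", "secret", "private_key", "certificate"]
KEYWORDS = ["secret", "token", "key", "password", "credential", "passphrase"]

def should_activate(message: str) -> bool:
    """Return True if the message matches a Sensitive Data Guard trigger."""
    lower = message.lower()
    # Code/config snippets: a known field name followed by an assignment.
    for field in CREDENTIAL_FIELDS:
        if re.search(rf"\b{field}\b\s*[:=]", lower):
            return True
    # Keyword mentions anywhere in the message.
    return any(re.search(rf"\b{kw}\b", lower) for kw in KEYWORDS)
```

Note that word boundaries keep the check from firing on substrings, so plain prose like “hello world” passes through, while `api_key = ...` or a sentence mentioning “password” activates the guard.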
When triggered, Claude switches into a protective mode:
- Does not echo back any credential values, even partially
- Warns the user if a value appears to be a real secret (not a placeholder like `<YOUR_API_KEY>`)
- Suggests redaction before continuing — replacing real values with `[REDACTED]` or environment variable references
- Avoids storing or referencing the value in subsequent responses
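A minimal redaction pass in the spirit of the steps above might look like the following. The regex for credential shapes and the placeholder heuristic are assumptions for illustration; the skill itself describes the behavior in prose rather than code.

```python
import re

# Common field = value credential shapes (illustrative, not exhaustive).
SECRET_PATTERN = re.compile(
    r"(?P<field>api_key|password|token|secret|private_key)\s*[:=]\s*"
    r"(?P<value>['\"]?[^\s'\"]+['\"]?)",
    re.IGNORECASE,
)

def looks_like_placeholder(value: str) -> bool:
    """Treat values like <YOUR_API_KEY>, ${API_KEY}, or ALL_CAPS as placeholders."""
    stripped = value.strip("'\"")
    return stripped.startswith(("<", "${")) or stripped.isupper()

def redact(text: str) -> str:
    """Replace real-looking credential values with [REDACTED]."""
    def repl(m: re.Match) -> str:
        if looks_like_placeholder(m.group("value")):
            return m.group(0)  # placeholders are left intact
        return f"{m.group('field')}=[REDACTED]"
    return SECRET_PATTERN.sub(repl, text)
```

For example, `redact("token: sk-abc123")` scrubs the value, while `redact("api_key = <YOUR_API_KEY>")` leaves the placeholder untouched.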
Getting Started
Download the skill file and add it at claude.ai/customize/skills. The skill will then be available across all your Claude conversations.
Download
The skill file is available for direct download: sensitive-data-guard.skill
The full skill is also available as a GitHub Gist.