PII Shield

We Already Have Policies. We Don’t Need Another Tool.

Why written rules aren’t enough to protect sensitive data from AI risk

5 min read - January 15, 2025

You’ve got the policies in place. You’ve told the team what not to do. Maybe you’ve even added it to onboarding or training.

“Don’t put sensitive information into AI tools.”

And that’s good. Really. But here’s the part nobody wants to admit:

People don’t follow policies. At least, not perfectly. Especially when they’re in a rush, or not even sure what counts as sensitive.

The Myth

“We’ve already told our team what to do. We don’t need another tool.”

This one comes from a good place. If your people are smart and well-trained, and your policy is clear, it makes sense to think that’s enough.

But written rules don’t enforce behavior. And when it comes to AI tools, the stakes are higher because inputs go out fast and often aren’t reviewed at all.

The Reality

A policy is a guide. It’s not a guardrail.

Even with the best intentions, policies often fall short in practice. Here’s what the data shows:

  • A Cybsafe study found that 38% of employees share sensitive work information with AI tools without their employer knowing.
  • Another report revealed that 65% of employees admit to bypassing security policies to get their work done faster.
  • In a review of LLM usage, 13% of prompts were found to include sensitive internal data like URLs, credentials, or customer case details.
  • The same research reported that 46% of leaked prompts contained billing or authentication data—two of the most damaging types if mishandled.
  • Even in companies with AI use policies, 63% of employees reported witnessing coworkers using AI tools inappropriately or outside guidelines.

The takeaway? Most privacy incidents aren’t caused by bad actors. They happen when well-meaning people do normal things:

  • Paste in a customer email to clean up the tone
  • Drop a helpdesk transcript into ChatGPT to summarize it
  • Ask the model to rewrite a performance review

These things happen every day, even in well-managed teams.

Where the Policy Breaks Down

The problem isn’t awareness. It’s friction. In a fast-moving workflow, people take the fastest path to the result. That usually means skipping the policy checklist.

And even when they mean to follow the rules, it’s easy to miss what qualifies as sensitive:

  • A customer name paired with a region
  • An internal ID number
  • A complaint tied to a specific role

It’s not always obvious. That’s why policies alone don’t hold the line.

Here’s what happens in the real world:

  • People think the rule doesn’t apply to their current task
  • Or they think the data isn’t that sensitive
  • Or they just forget, because they’re juggling five things at once

No one prints out the policy before submitting a prompt. That doesn’t mean they’re reckless. It means they’re human.

What You Can Do About It

You don’t need to rewrite your policies. You just need a way to back them up.

Real protection means helping people do the right thing in the moment it matters. This requires both clear policies AND automated systems that can scan for sensitive information before it leaves your organization.
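
What does that look like in practice? As a rough illustration only, a pre-send check might resemble the sketch below. Everything here is an assumption made for the example: the pattern set, the EMP-style internal ID format, and the scan_prompt and check_before_send helpers. Real scanning tools rely on far stronger detection (named-entity recognition, checksum validation, context analysis) than a handful of regexes.

```python
import re

# Illustrative patterns only; real detection needs NER, checksums, and context.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),          # 13-16 digits
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),     # assumed key format
    "internal_id": re.compile(r"\bEMP-\d{4,}\b"),                    # hypothetical ID scheme
}

def scan_prompt(text: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def check_before_send(prompt: str) -> bool:
    """Gate a prompt before it leaves the organization; block on any finding."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True
```

In a real deployment, a check like this would sit between the user and the AI tool, in a browser extension or a gateway proxy, so the scan happens at the moment of submission rather than in an after-the-fact review.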

Effective Policy Implementation:

  • Clear written policies that account for AI usage patterns
  • Real-time scanning that detects sensitive information automatically
  • User training on AI-specific privacy risks
  • Audit trails showing what worked and what didn’t
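
On that last point, even a simple append-only log of scan decisions gives you the “what worked and what didn’t” picture. Here’s a minimal sketch, assuming a JSON Lines file and reusing the hypothetical scan results from the example above:

```python
import json
from datetime import datetime, timezone

def log_scan_event(user: str, findings: list[str], blocked: bool,
                   log_path: str = "pii_audit.jsonl") -> None:
    """Append one scan decision to an append-only JSON Lines audit log."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "findings": findings,  # log category names only, never the raw prompt
        "blocked": blocked,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```

Note that the log records category names, not the prompt itself. An audit trail that stored raw prompts would become a second copy of the very data it exists to protect.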

Good policy deserves backup. Even the best one needs a little help holding the line.

Have Questions About AI Compliance?

Get answers about regulatory requirements, risk management strategies, and practical compliance steps in our comprehensive FAQ section.

Visit FAQ Section