It’s 16:47 on a Friday. You’re basically done for the week. Then a message pops up: “Can you review this document/code?”
Right now, the default move is copy → paste → “summarize/review this” in ChatGPT.
It’s fast. It’s frictionless.
And it’s exactly how shadow AI happens: people using AI outside the tools their company controls.
<aside> 🚨
According to a Red Hat article published on October 15, 2025, 99% of Dutch organizations are dealing with shadow AI.
</aside>
It’s easy to see why this keeps happening. AI tools save time, reduce effort, and make tedious work feel instant. But those same “make-your-life-easy” tools can quietly introduce real risks:
<aside> ⚠️
Beeckestijn Business School reports that more than half of employees don’t admit they use AI, nearly half upload company information into public tools like ChatGPT, and two-thirds don’t even verify the output.
</aside>
That’s why a growing number of companies have recently been hitting the brakes: not on AI itself, but on how it’s being used.
The real question we should be asking isn’t “Should we use AI?” It’s:
“Do we let it happen in public tools, or do we bring it into a controlled environment?”
This blog post has two parts: