Right now, someone in your company is pasting client data into ChatGPT
Right now, somewhere in your company, someone is pasting a client contract into ChatGPT.
They are not doing it to be reckless. They are doing it because it saves them an hour, the output is good, and nobody told them not to. They have been doing it for six months. They think everyone does it. They might be right.
You do not know about it.
That is not unusual. Most CEOs of mid-sized companies do not. And most will not, until it becomes a problem with a name, a date, and a legal notice attached.
What is happening in practice is this. Your employees are intelligent people with hard problems and access to extraordinary tools. ChatGPT. Claude. Gemini. Perplexity. They use them daily. They upload documents, paste internal data, write prompts with client names and financial figures and HR details. They get results. They keep going.
From a productivity standpoint, it works.
From a compliance standpoint, it is a live wire.
GDPR limits how closely you can monitor your employees. NIS2, the EU cybersecurity directive that member states had to transpose by October 2024, says you need to know what is happening with your data and your systems. The two demands pull in opposite directions. Most legal teams are quietly hoping nobody asks the question too loudly.
Someone will.
The regulatory tension is real, but it is not even the most immediate problem. The most immediate problem is that you are operating with a blind spot at the core of your business. Confidential client information is leaving your systems through channels you did not build, did not authorize, and cannot audit. The people doing it have no bad intentions. They are just trying to do their jobs faster, with the best tools available to them.
The fact that it is well-intentioned does not change the exposure.
Block, the fintech company Jack Dorsey runs, made headlines when it cut 4,000 people from a team of 10,000. Forty percent. The stated reason was not budget cuts or a bad quarter. It was AI: direct substitution of work that used to require people. That story was covered as a layoff story. It was really an operational transformation story.
But another version of the AI story is unfolding right now, more quietly, in companies that have made no announcement. It is the company where AI is everywhere: unstructured, undocumented, with no central visibility. Where your sales team is drafting proposals with Copilot, your finance director is running analysis with tools that have never seen the inside of a security review, and your operations manager is summarizing board materials with Claude on a personal account.
Everyone is more productive. Nobody has any idea what data went where.
When the question arrives, and it will arrive, from a client doing a security review, from a regulator, from a prospective acquirer running due diligence, "we did not really track that" is not a good answer.
You have two options when your employees are using AI on company data.
You can prohibit it or you can channel it.
Prohibition is the instinct. It feels like control. Large tech companies have already gone there. There are documented cases of major firms issuing policies that make unauthorized AI tool use a fireable offense.
The instinct makes sense. The execution is nearly impossible.
A motivated employee with a personal laptop and a browser extension will find a way around it. And prohibition carries its own steep cost: you lose the productivity gains your competitors are capturing, and you create resentment among your best people, who have seen what these tools can do and know you are asking them to go back.
The companies that end up ahead are the ones that channel it instead.
Channeling means building a corporate AI environment. Your data, under your control, with your permissions structure, your audit trail, your policies. Not consumer ChatGPT. A system your legal and security teams can actually account for when someone asks. A system where the same capabilities your employees are already using are available within a structure you can manage.
This is not a technology problem. It is a governance decision.
The technology exists, it works, and it is not nearly as complex to implement as most companies imagine. What is missing in most mid-sized companies is someone who understands both the operational side and the compliance side well enough to put something real in place.
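To make "not as complex as most companies imagine" concrete, here is a minimal sketch of the core of such an environment: an internal gateway that sits between employees and an approved model endpoint, checks each prompt against a data policy, and writes an audit record for every request. Everything in it, from the blocked patterns to the log format to the endpoint it would forward to, is an illustrative assumption, not a reference to any particular product.

```python
# A minimal sketch of a corporate AI gateway, not a production design.
# Employees call this internal service instead of a consumer chatbot;
# it applies policy and leaves an audit trail. All names here are
# hypothetical illustrations.

import json
import re
import time

# Policy: patterns that should never leave the building unredacted.
# A real deployment would use a proper DLP engine; a regex list
# is enough to show the shape of the check.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),              # card-number-like strings
    re.compile(r"\bIBAN[A-Z0-9 ]{10,}\b"),  # crude IBAN-like match
]

def check_policy(prompt: str) -> list[str]:
    """Return the list of policy patterns a prompt violates."""
    return [p.pattern for p in BLOCKED_PATTERNS if p.search(prompt)]

def audit(user: str, prompt: str, verdict: str) -> None:
    """Append one structured record per request: who, when, outcome.
    Deliberately logs prompt length, not prompt content."""
    record = {"ts": time.time(), "user": user,
              "chars": len(prompt), "verdict": verdict}
    with open("audit_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

def handle_request(user: str, prompt: str) -> str:
    """Gatekeep a single request from an authenticated employee."""
    if check_policy(prompt):
        audit(user, prompt, "blocked")
        return "Request blocked by data policy; see internal guidance."
    audit(user, prompt, "forwarded")
    # Here the gateway would forward the prompt to an approved model
    # endpoint covered by a data-processing agreement, then return
    # the model's response.
    return "(forwarded to approved model endpoint)"

if __name__ == "__main__":
    print(handle_request("j.doe", "Summarize our Q3 board deck"))
    print(handle_request("j.doe", "Card 4111111111111111 dispute draft"))
```

One design choice in this sketch is worth noticing: the audit record captures who asked, when, and the outcome, but not the prompt text itself. That is one way to live inside the tension described earlier, a trail detailed enough to answer a client, a regulator, or an acquirer, without sliding into the kind of employee surveillance GDPR restricts.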
The gap between "everyone is using AI on the side" and "we have an AI environment that works for the business and respects the regulatory framework" is a gap most companies could close in weeks, not years.
Most are not closing it.
Some of that is inertia. Some of it is the sense that AI governance is a problem for larger companies, or a future problem, or something to revisit once the regulatory picture is clearer.
The regulatory picture is not getting clearer. NIS2 is already in force. GDPR enforcement is increasing, not decreasing. And the audit that reveals your exposure is not going to wait until you feel ready.
The practical question is not whether to act. The practical question is what acting looks like in a company your size, with your specific mix of clients, data types, and regulatory exposure.
That answer is different for every company. What is the same is the cost of not finding it.
You do not find out about a data exposure when it happens. You find out months or years later, usually when someone else finds it for you.
The company that moves first on governance is the one that controls that conversation. The one that moves last is the one having it in the worst possible circumstances.
Your employees are already using these tools. The question is whether it is happening on your terms or theirs.
That is a question worth having an actual answer to.