There’s a growing trend in workplaces right now that feels oddly familiar. A new tool shows up, leadership gets excited, and suddenly it’s not just encouraged, it’s measured.
For one employee in the cybersecurity space, that’s exactly what happened with AI.
It wasn’t enough to use it when it made sense. The company started tracking usage, monitoring credits, and even calling people into meetings to explain why they weren’t using enough of it.
Not why their work wasn’t getting done.

Just why they weren’t using the tool.
When “Use This Tool” Becomes a Requirement
At first glance, the directive sounded reasonable: use AI to save time, improve efficiency, and reduce repetitive work.
And to be fair, he didn’t disagree with that.
He found AI useful in certain situations, just not in everything he did. Like most tools, it worked well for some tasks and slowed him down for others. That nuance, however, didn’t seem to matter to management.
What mattered was usage.
Credits spent. Prompts entered. Activity logged.
And that’s where the disconnect started.
The Pressure to Perform, Not to Produce
Instead of focusing on outcomes, the company began focusing on behavior.
Employees weren’t being evaluated on whether AI improved their work. They were being evaluated on whether they were using it enough.
That kind of pressure changes how people respond.
Research and workplace commentary, including pieces published in the Harvard Business Review, have long pointed out that when organizations measure tool usage instead of results, employees optimize for the metric, not the goal.
In other words, they do what’s tracked, not what’s useful.
And that’s exactly what happened here.
Using the System Exactly as Intended… Technically
He didn’t refuse to use AI.
He leaned into it.
But not in the way management expected.
Instead of forcing AI into his actual workflow, he used it for something else entirely: the endless mandatory training courses the company required, especially the cybersecurity modules filled with multiple-choice questions.
Normally, those would take anywhere from fifteen minutes to an hour.
Now, they took about a minute.
He would copy each question into the AI, ask for detailed explanations, and let it generate long, thorough responses. Not only did this save him time and effort, it also consumed a significant amount of AI credits.
Which, conveniently, satisfied the company’s tracking system.
When Metrics Become the Game
From the outside, it looks like high engagement.
Plenty of AI usage. Lots of credits burned. Active participation in the initiative.
From his perspective, it’s something else.
A way to meet expectations without disrupting the parts of his work that already function well.
It’s not sabotage. It’s compliance, just optimized.
And it highlights a common problem with top-down mandates. When the people setting the rules don’t fully understand how the work is done, the rules often get followed in ways they didn’t anticipate.
The Trade-Off No One Talks About
There’s also a quieter concern underneath all of this.
AI can be helpful, but it isn’t always efficient. Sometimes it adds friction. Sometimes verifying its output takes longer than doing the task yourself. And in fields like cybersecurity, accuracy and understanding matter more than speed alone.
Forcing usage without context risks turning a tool into a distraction.
It shifts focus away from thinking and toward interaction. Not “What’s the best way to solve this?” but “How can I involve AI in this so it counts?”
That’s a subtle but important difference.
The comments from fellow users told a similar story.
A lot of people immediately recognized the pattern. When companies introduce quotas for tool usage, employees tend to find ways to meet those quotas without necessarily improving their work.
Some commenters shared similar experiences, where AI adoption targets felt disconnected from real productivity.
Others suggested alternative ways to “burn credits” on low-impact tasks, essentially turning the system into a checkbox exercise.
He didn’t resist the system.
He worked within it.
He used AI exactly as encouraged, just not where it actually mattered to his core responsibilities.
And in doing so, he exposed something bigger than a single workplace policy.
When companies focus too much on how tools are used instead of what gets done, they don’t just change workflows.
They change behavior.
So maybe the real question isn’t whether he’s using AI the “right” way.
It’s whether the company is measuring the right thing in the first place.