How does AI fluency break mid-market consulting firms before leadership realizes?
The Big Answer: AI fluency splits inside mid-market consulting firms the moment usage becomes individually useful but not yet organizationally legible. The first fracture is not between “AI firms” and “non-AI firms.” It is between teams that have quietly built prompt habits, review discipline, and workflow shortcuts into daily delivery, and teams still working in the old cadence while assuming everyone operates from the same playbook. In the middle market, that split accelerates because adoption is already widespread but implementation quality is not. RSM’s 2025 middle-market survey found 91% of respondents using generative AI, yet only 25% said it was fully integrated across core operations; 53% said they were only somewhat prepared, 62% said implementation was harder than expected, and 39% cited lack of in-house expertise as a top issue. That is exactly the kind of environment where capability diverges faster than management can see it.
Inside consulting firms, the visible process stays stable longer than the real process. Decks still get delivered. Research still gets synthesized. Client calls still happen. But underneath, some managers are using AI to compress turnaround time, widen option sets, and improve first drafts, while others are either avoiding it, using it badly, or hiding it. Microsoft found 78% of AI users bring their own tools to work, and 52% are reluctant to admit using AI for important tasks. That means uneven delivery quality starts as a hidden production-system problem long before it appears as an official performance problem.
The split begins in unofficial workflow, not official strategy. Leadership announces experimentation, or says nothing, or issues vague caution. Meanwhile, individual consultants start using AI for proposal drafts, interview synthesis, research compression, workshop design, Excel cleanup, meeting recap, and first-pass analysis. The people who learn fast do not just save time. They start changing how they sequence work. They move more iterations upstream, test more framings, and arrive at sharper intermediate outputs. Everyone else keeps producing work with the older labor pattern. Same job title, different actual machine.
That divergence gets sharper in consulting because the industry is already under pressure. Source Global Research described firms entering 2025 with shaky market confidence, rising costs, and teams stretched thin, while clients were simultaneously bullish on generative AI and willing to pay more for specialist expertise. In plain terms: firms are under margin pressure, clients want smarter work faster, and internal teams are not upgrading evenly. That is fertile ground for a hidden fluency hierarchy to form.
The middle market is especially exposed because it often has enough ambition to push AI usage but not enough operating discipline to standardize it. RSM found adoption is nearly universal, but 41% of those with implementation issues cited data quality concerns, 39% cited lack of in-house expertise, and 70% said they needed outside help to get the most from AI solutions. That means the typical mid-market firm is not dealing with a simple yes-or-no adoption question. It is dealing with uneven human capability layered on top of incomplete systems.
How inconsistency shows up before metrics catch it
The first symptom is not catastrophic failure. It is variance. Some teams suddenly become suspiciously fast without becoming proportionally better. Others become better in bursts but cannot explain why. Some client deliverables get cleaner and more expansive at the draft stage; others start sounding polished but thin, generic, or structurally overconfident. The problem is not that AI makes all work worse. The problem is that it amplifies differences in judgment. Microsoft’s research synthesis found meaningful gains in speed of execution without significant quality loss on many tasks, but also emphasized wide variation across users and tenants, with strong differences between merely being given access and actually using the tools effectively.
That variation creates quality drift before the KPI dashboard notices. A partner may see that turnaround improved, utilization looks fine, and the team is still making deadlines. What they do not yet see is that two managers producing “the same kind of strategy deck” are now using radically different cognitive processes. One is using AI as a disciplined accelerator with human verification. Another is using it as unstructured scaffolding and passing off blended output as original reasoning. KPMG’s 2025 U.S. findings are blunt: half of workers reported using AI at work without knowing whether it was allowed, 44% said they were knowingly using it improperly, 58% admitted relying on AI outputs without properly evaluating them, and 53% said they present AI-generated content as their own. That is not a productivity story. That is a control failure.
The next symptom is client-facing inconsistency. Thomson Reuters found about 40% of professionals reported contradictory guidance from clients and leadership about AI use, and half said no conversations with clients about AI usage had happened at all. In consulting, that creates a dangerous gap: the delivery team may be using AI materially in work product creation while the client relationship layer assumes norms are uniform and understood. They are not. So the risk shows up first as uneven tone, weak substantiation, shallow references, inconsistent rigor between teams, or sudden overproduction that hides under normal billing narratives.
There is also a labor-pattern tell. In a large field experiment across 66 firms and 7,137 knowledge workers, treated workers who used generative AI spent two fewer hours on email each week and reduced work outside regular hours. St. Louis Fed researchers separately found AI users saved an average 5.4% of work hours, about 2.2 hours per 40-hour week, and noted that these gains may not show up in firm-level productivity statistics when adoption remains mostly informal. That matters because consulting leaders often infer consistency from visible effort. But AI changes effort distribution before it changes reporting.
Why leadership misses it
Leadership misses it because the reporting system was built for a pre-AI production model. Consulting firms are trained to read utilization, margin, client satisfaction, and manager confidence. None of those measures cleanly captures hidden tool asymmetry. If the deck got done and the client did not complain, the system reads “healthy.” But AI changes the path to output, not just the output itself. When that path becomes invisible, leaders overestimate uniformity.
They also miss it because billing structures distort what they look for. In many mid-market firms, managers are still rewarded for throughput, calm delivery, and team reliability. So a fast team looks strong even when its process is becoming fragile. A slow team may actually be doing more human verification and producing more defensible work. AI breaks the old relationship between time spent and confidence deserved. If leadership still trusts hours logged and visible effort more than process instrumentation, it will misread where the real capability sits.
Then there is the leadership perception gap itself. McKinsey’s 2025 workplace report found employees are more ready for AI than leaders imagine and identified leadership as the biggest barrier to success. Slack likewise found executive urgency around AI far outpaced actual employee adoption, while 37% of desk workers said their company had no AI policy and only 7% considered AI outputs completely trustworthy for work-related tasks. In other words, the top thinks in mandate, the floor thinks in risk, and the work gets remixed in between without a shared operating standard.
Strategist’s Takeaway
If you run or advise a mid-market consulting firm, stop treating AI adoption as a training issue and start treating it as a delivery variance issue. The real question is not “Are our people using AI?” Assume they are. The real question is “Where has hidden process divergence already changed the quality, defensibility, and consistency of client work?” Microsoft’s BYOAI data, KPMG’s improper-use findings, and Thomson Reuters’ guidance-gap research all point in the same direction: usage outruns governance, and governance gaps show up at the work edge first.
Step 1: instrument the workflow, not just the outcome. Audit how proposals, research syntheses, workshop materials, and client recommendations are actually produced across teams. You are looking for step-level differences: where AI enters, who reviews outputs, what gets verified, and what gets passed through untouched.
Step 2: classify tasks by verification burden. Not every AI-assisted task carries the same risk. Internal brainstorm support is one thing; client-ready analysis, benchmarks, citations, recommendations, and sensitive synthesis need a much harder review standard.
Step 3: identify your high-fluency teams and extract their operating habits before those habits harden into private advantage. The goal is not to flatten good teams. It is to prevent silent capability feudalism inside the firm.
Step 4: reset client disclosure norms now, before a bad miss forces you to. Thomson Reuters’ data makes clear many clients still have not had explicit AI-use conversations with their outside providers. That is unstable. Set a firm view on where AI is allowed, how outputs are verified, and what gets disclosed.
Step 5: stop confusing speed with maturity. RSM’s middle-market data shows firms are adopting fast while still underprepared. In consulting, that means the fastest team is not automatically the best team; it may simply be the team whose risks are least visible.
The firms that get ahead here will not be the ones with the loudest AI posture. They will be the ones that make delivery quality legible again before clients notice the split.
Sources:
Microsoft Work Trend Index 2024: AI at Work Is Here. Now Comes the Hard Part — Microsoft
https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part
Microsoft & LinkedIn 2024 Work Trend Index (press release summary) — Microsoft
https://news.microsoft.com/source/2024/05/08/microsoft-and-linkedin-release-the-2024-work-trend-index-on-the-state-of-ai-at-work/
Generative AI in Real-World Workplaces — Microsoft Research
https://www.microsoft.com/en-us/research/wp-content/uploads/2024/07/Generative-AI-in-Real-World-Workplaces-Deck.pdf
RSM Middle Market AI Survey 2025 — RSM US LLP
https://rsmus.com/insights/services/digital-transformation/rsm-middle-market-ai-survey-2025.html
Predictions for 2025: Consulting Market Trends — Source Global Research
https://sourceglobalresearch.com/reports/9505-predictions-for-2025
Trust in Artificial Intelligence: Global Insights 2025 — KPMG
https://kpmg.com/us/en/media/news/trust-in-ai-2025.html
AI Guidance Gap Report — Thomson Reuters
https://www.thomsonreuters.com/en-us/posts/technology/ai-guidance-gap/
Superagency in the Workplace: Empowering People to Unlock AI’s Full Potential — McKinsey & Company
https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
Shifting Work Patterns with Generative AI — National Bureau of Economic Research
https://www.nber.org/papers/w33795
The Impact of Generative AI on Work Productivity — Federal Reserve Bank of St. Louis
https://www.stlouisfed.org/on-the-economy/2025/feb/impact-generative-ai-work-productivity