AI Productivity Gains Are Not Always a Given
I recently had an interesting conversation with a client’s CTO. Six months after rolling out GitHub Copilot to their entire engineering org, usage had fallen to less than 15%. The developers who were already going to use it were using it. Everyone else? Crickets.
This particular client is a manufacturing company with a solid e-commerce footprint, not exactly a tech dinosaur. But there’s only a skeleton AI usage policy in place. No official training program. The senior developers are skeptical and nobody’s telling them that’s a problem. AI tooling is just… there. Like a really expensive shiny hammer that most people ignore.
I’m seeing this pattern in a lot of places right now. Companies buy the licenses, send out the announcement email, and then wonder why their velocity hasn’t magically improved. Meanwhile, McKinsey recently published research showing that some organizations are getting 30-45% quality improvements from AI adoption. That’s the difference between scrambling to ship patches every week and shipping a solid product that delights customers.
The gap between companies that fully leverage AI in their SDLC and companies that just experiment with AI is becoming uncomfortable to watch.
The Resistance Is Relatable
Most of the developers I talk to aren’t anti-AI. They’re anti-risk, and nobody’s giving them good reason to believe AI isn’t risky for them personally.
Picture this: You’re a senior developer who’s been writing C# for 15 years. You know every quirk of your codebase. You can spot bad patterns from across the room. Now someone wants you to let a robot write code that you’re still responsible for when it breaks in production. Would you instantly adopt that?
The younger developers might be more willing to experiment, but they’re watching the seniors. And when the seniors aren’t touching it, that sends a clear message.
What kills me is that leadership always assumes this is a training problem. It’s not. I’ve watched teams sit through vendor training, nod along, and then go back to their IDEs and work exactly the same way they did before. And it’s usually because nobody’s answered the important questions: What happens when AI-generated code causes an outage? Can I paste our proprietary algorithms into this thing? Is learning to work with AI going to make me more valuable or just more replaceable?
One engineering manager told me: “My team has adopted and abandoned so many ‘transformative’ tools in the last five years that they just assume this is another one. Why invest time learning something that might not be here next year?”
Fair point.
The Companies Making It Work
McKinsey’s research found something interesting: the companies seeing real gains aren’t just the ones with the biggest AI budgets. They’re the ones who stopped treating AI like a VS Code extension and started treating it like a fundamental shift in how software gets built.
They’re running AI across the entire lifecycle. Not just code generation: design, testing, deployment, monitoring, documentation. One client of mine significantly cut their regression testing time by using AI to identify which tests need to run based on code changes. Another is using AI to refine their user stories so that acceptance criteria feed directly into test case development.
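To make the test selection idea concrete, here’s a minimal sketch of change-based test selection. It’s illustrative only: the file paths, the hard-coded TEST_MAP, and the pytest invocation are all assumptions, and an AI-driven setup like the one described above would replace the static map with coverage analysis or a learned model of which tests a change actually affects.

```python
# Minimal sketch: run only the tests mapped to files changed on this branch.
# Hypothetical paths and mapping; a real system would derive this from
# coverage data or an AI-based impact-analysis tool.
import subprocess

TEST_MAP = {
    "src/pricing.py": ["tests/test_pricing.py"],
    "src/checkout.py": ["tests/test_checkout.py", "tests/test_orders.py"],
}

def changed_files(base: str = "origin/main") -> list[str]:
    """Return files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def tests_to_run(changes: list[str]) -> list[str]:
    """Collect the test files mapped to any changed source file."""
    selected: set[str] = set()
    for path in changes:
        selected.update(TEST_MAP.get(path, []))
    return sorted(selected)

if __name__ == "__main__":
    targets = tests_to_run(changed_files())
    if targets:
        subprocess.run(["pytest", *targets], check=False)
    else:
        print("No mapped tests affected by this change.")
```

Even this crude version shows why the payoff is real: most changes touch a small slice of the codebase, so the full regression suite only needs to run when the mapping says it must.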
But here’s what really separates the front-runners: they updated their job descriptions. Their product managers aren’t writing requirements docs anymore; they’re working directly with AI tools to prototype features. Their senior engineers aren’t reviewing every line of code; they’re architecting systems and teaching AI agents how to implement patterns.
They also measure different things. While everyone else is tracking “percent of code written by AI” (a meaningless metric in isolation), these companies track defect rates, deployment frequency, and mean time to resolution. You know, things that actually mean something to the business.
And, crucially, they put their money where their mouth is. If you want people to change behavior, you need to recognize and reward the new behavior. Several of my clients have added “AI proficiency” as an explicit component of engineering levels. Senior engineers are expected not just to use AI but to establish usage patterns for their teams.
The Pathway to Success
After working with many organizations on this, I’ve noticed the ones that succeed make a few key decisions early:
- They stop pretending AI is optional. Not in a “you must use Copilot or else” way, but in a “this is a core competency now” way. Just like we expect developers to understand version control or CI/CD, we need to expect them to understand how to collaborate with AI safely and effectively
- They invest in real capability building. Not vendor webinars, but actual hands-on working sessions where teams can bring their real code and real concerns, where someone can say “but what about our compliance requirements?” and get an actual answer from someone who understands their context
- They create clear guardrails. Document what’s okay and what isn’t. Can you use AI for test data generation? For production code? For customer data analysis? The answers matter less than having answers
- They focus on outcomes, not activity. One client spent months trying to increase Copilot acceptance rates. Then they started measuring cycle time instead. Guess which one actually moved the needle?
Why This Matters Now More Than Ever
The performance gap between top and bottom performers is already at 15 percentage points, according to McKinsey. In my experience leading digital transformations, I’ve never seen a capability gap widen this fast. Companies that figure out AI-augmented delivery now aren’t just going to be a little faster; they’re going to be playing a whole different game.
We’re helping our clients navigate this by looking at their entire engineering ecosystem, not just software engineering but how it connects to cloud architecture, DevSecOps practices, QA automation, and UX. AI doesn’t simply slot into one piece of this; it changes all of it.
The manufacturing client I mentioned earlier? We’re starting with clear AI policies and governance, then moving to hands-on capability building with their actual codebase. No generic examples: their code, their challenges, their constraints. When senior developers see AI helping them navigate their legacy systems instead of just generating boilerplate, the conversation changes completely.
The companies still treating AI as a side project are going to wake up in 18 months and wonder how their competitors are shipping twice as fast with half the bugs. By then, it’ll be a major challenge to catch up.
If you’re seeing resistance in your teams and the productivity gains aren’t showing up, it’s time to stop tweaking the deployment and start rethinking the entire approach. Trissential has been through enough of these transformations to know what actually works versus what looks good in a vendor pitch.
Want to talk about what this looks like for your organization? Reach out.
Learn more about Trissential’s Digital Engineering Services: Software Engineering, Quality Assurance & Testing, Cloud Strategy, PLM
Talk to the Expert

Brian Zielinski – Sr. Director, Digital Engineering
brian.zielinski@trissential.com