James Kavanagh
A Guide for Compassionate Managers
How to purposefully prepare your team for opportunities and meaningful careers against a backdrop of tech layoffs and AI-fuelled automation.
5:46am, Tuesday 28th October, 2025.
Your alarm went off ten minutes ago, but you’re already checking Slack before you set off on the hour-long commute. Another day of the mandated five-day return-to-office at Amazon. It’s been your reality since the start of the year. One hour each way, every single day. You used to tell yourself that the commute was “thinking time.” Now it’s just two more hours you don’t see your kids.
The work is relentless. Sprints that turn into marathons, then into forced marches that never actually end. Sev 2 incidents at 9pm. Your manager messaging you at 10pm about an urgent document needed in the morning. You’ve developed an enduring knot of tension in your shoulders and a constant sense of being only half there.
But the pay is good. Really good. Good enough that you can afford the mortgage despite rates over 6.2%. Good enough to put money aside so your kids won’t face crippling loans for college. Good enough that you rarely stop to check grocery prices. And then there’s your H-1B visa, tethering your entire family’s future to this job. Sixty days to find new employment or leave the country. That’s what kept you working until midnight, trying to resolve comments on version 40 of a six-pager that you suspect won’t result in any real action.
So you drag yourself up, make the coffee extra strong, and start the ritual. Shower. Laptop bag. Kiss the sleeping kids. Drive to the station. Train. Walk. Badge in.
Except today is different.
Today, you’re one of 14,000 Amazon employees who wake up to find a termination notice on your phone.
You’ve been laid off.
No warning. No performance issues. Just another “organizational restructuring” that somehow requires eliminating entire teams of people who were “critical to our success” at last quarter’s all-hands but are now surplus to business needs.
You sit there in your kitchen, coffee getting cold, trying to process what just happened. The train you were about to catch rolls on without you. Your calendar still shows five meetings today, including that project review you stayed up preparing for. Your work laptop - the one you can’t even unlock anymore - sits there like a $3,000 paperweight.
You should call someone. You should tell your spouse. You should do something. But you just sit there as the house slowly wakes.
Some variation of this scene played out for at least 150,000 tech workers last year. Some of them were my friends.
I was at Amazon from 2021 to 2025 and witnessed the striking gap between corporate messaging and lived reality during the layoffs that began in 2023. In one financial quarter you’re “critical to our success.” The next, you’re discovering your badge doesn’t work anymore.
In these layoffs, the numbers only tell part of the story: 14,000 job cuts at Amazon in October alone, roughly 4% of corporate staff, with another 16,000 planned. The return-to-office mandate had quietly become a headcount reduction tool, with employees who couldn’t relocate deemed to have “voluntarily resigned,” perhaps to avoid severance obligations. And across Big Tech, the pattern repeated year after year: more than 264,000 tech workers displaced in 2023, another 153,000 in 2024. Full figures for 2025 aren’t yet clear, but they appear close to 2024’s.
Apart from the personal impact, I think there’s something that the headlines are missing: how this isn’t primarily a cost-cutting story. It’s a structural shift in what work organisations actually need humans to do. And yet, that shift creates a genuine opportunity, if employers and managers are willing to proactively do something other than hand people a severance package and wish them luck.
This article isn’t a critique of Amazon. Corporations are incentivised to be inhumane. Expecting otherwise misses the point.
But managers are people, not corporations. And if you’re a manager, you can choose differently.
This article is about one concrete way to do that: helping at-risk employees transition into AI governance roles. This is work that can’t be automated because it requires human judgment over AI systems. The skills gap is acute, the talent pipeline barely exists, and many people being pushed out of traditional roles already have the foundational capabilities this work requires.
My aim is just to offer some simple practical ideas to reflect upon, for those compassionate managers who want to do right by their people and help them find meaningful work amid the displacement that AI brings.
The workforce disruption happening now
The scale of displacement is real, even if the narrative around it is sometimes overblown.
Layoffs.fyi, which compiles publicly reported tech layoffs, recorded 264,320 tech employees laid off in 2023, 152,922 in 2024, and 122,549 in 2025. That’s roughly 540,000 layoffs over three years. The peak was 2023, followed by a drop in 2024 and a further decline in 2025, but the volume has remained meaningfully elevated throughout. The largest single-company reductions in early 2023 included Alphabet (~12,000), Microsoft (~10,000), Meta (~10,000), and Amazon (27,000 across two rounds). These are much more than statistics, representing hundreds of thousands of disrupted careers across engineering, product, HR, marketing, and operations.
The layoffs of 2023 landed in a hiring market that was already cooling. LinkedIn’s Economic Graph showed year-on-year hiring declines across every major region by August 2023: the U.S. down 23.8%, Australia down 28.6%, France down 22.2%, Germany down 20.5%. Job seekers were submitting 18% more applications per person, reflecting intensifying competition for fewer openings. By late 2025, the picture was sluggish rather than recovered. U.S. hiring was still down 8.7% year-on-year and more than 20% below pre-pandemic levels.
It’s tempting to attribute all of this to AI and automation, but that overstates the case. The layoff wave had multiple drivers: post-pandemic over-hiring corrections, cost discipline in a higher interest rate environment, and strategic repositioning. That said, executives have increasingly pointed to automation-driven efficiency as a rationale for constraining headcount in specific functions. In May 2023, IBM’s CEO said the company would pause hiring for some back-office roles, projecting that roughly 7,800 jobs could be replaced by AI and automation over time. In June 2025, Amazon’s CEO said he expects the corporate workforce to shrink in coming years as generative AI and agents automate routine tasks, with some roles shrinking while others emerge.
The longer-run backdrop suggests this is structural churn, not just a short cycle. The World Economic Forum’s Future of Jobs Report 2023 projects 23% job churn by 2027, with 69 million jobs created and 83 million eliminated. That’s a net decline of 14 million globally. The jobs disappearing are disproportionately clerical and administrative. The jobs emerging require substantially different skills.
If you’re a manager in one of these companies, you’re caught in the middle. You can’t stop the layoffs. You probably can’t even influence which of your people get cut. But you can do something about what happens next, both for the people who stay and the people who don’t. And one of the most practical things you can do is help them see where the work is actually going.
Where the work is going
The corporate BS doesn’t help. “Foster a culture of lifelong learning.” “Reward curiosity and adaptability.” “Invest in your people.” These phrases get nodded along to in leadership offsites and then translate into exactly nothing when you’re sitting across from someone whose role is being eliminated.
What actually matters is what you say to one person in a one-on-one meeting when you both know their job is at risk.
That conversation requires honesty. Not the “do a great job, and everything will be fine” reassurance that some managers default to because it’s easier. And not the “I fought for you but my hands were tied” deflection that protects your own sense of yourself as a good person. Sorry, but that’s weak. What’s actually useful is a realistic assessment: here’s what I think is happening, here’s what I don’t know, and here’s where I’d look to develop if I were in your position.
The demand side of the equation is real. LinkedIn’s data showed a 21-fold increase in job postings mentioning GPT or ChatGPT within a year of those tools launching. The World Economic Forum reports that six in ten workers will need reskilling by 2027 to remain effective, but only half currently have access to adequate training. Organisations need people who can work effectively with AI systems, and they’re struggling to find them.
But “learn AI” is too vague to be actionable. “Become a data scientist” or “pivot into machine learning engineering” is unrealistic for most mid-career professionals. What’s needed is a more specific answer: roles that are genuinely emerging, that can’t easily be automated, and that build on skills your people already have.
That’s where you can actually help. Not by forwarding links to Coursera or suggesting they “explore opportunities in the AI space.” But by creating space for them to learn, making time for them to prepare, and supporting them while they do it. And by being specific about where you think the opportunities are and why their background might actually matter there.
Why AI governance, specifically
I keep coming back to AI governance as one important answer to “where should I point people” because it sits at an unusual intersection. The work can’t be automated away. It exists precisely because you need human judgment to oversee AI systems. The talent pipeline barely exists, so there’s genuine demand. And many of the skills it requires aren’t technical in the narrow sense. They’re skills that experienced professionals from adjacent fields already have.
An IBM survey found that 80% of businesses are concerned about adopting AI without first having effective governance and ethical guardrails in place. That concern is creating roles. But the job titles floating around don’t capture what the work actually involves.
AI governance isn’t primarily about ethics in the abstract sense. It’s not sitting around debating the existential risk of superintelligent AI or writing philosophical position papers. It’s also mostly not about debating the finer points of the EU AI Act or evaluating AI model performance against benchmarks. The practical work is more operational: building the mechanisms and controls that make sure AI systems behave the way the organisation intends, and that the organisation can demonstrate compliance when regulators or customers ask questions.
This includes things like mapping your AI inventory. Which turns out to be surprisingly difficult. Most organisations don’t actually know how many AI systems they’re running or where they came from. It includes building review mechanisms that catch problems before deployment without creating so much friction that teams route around governance entirely. It includes translating between the engineers building AI systems and the legal teams trying to understand what those systems do. It includes setting up monitoring so you know when something is going wrong, not just when someone complains.
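To make that concrete, here’s a minimal sketch of what a single AI inventory entry might capture, with a simple check for lapsed reviews. The field names, risk tiers, and review cadences are my illustrative assumptions, not a standard; real inventories are usually shaped by frameworks like the NIST AI RMF or ISO/IEC 42001.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g. internal productivity tooling
    MEDIUM = "medium"  # e.g. customer-facing, but with human review
    HIGH = "high"      # e.g. automated decisions that affect people


@dataclass
class AISystemRecord:
    name: str
    owner: str                  # the team accountable for the system
    source: str                 # "in-house" or the vendor's name
    purpose: str                # the business decision it supports
    data_categories: list[str]  # what data it ingests or was trained on
    risk_tier: RiskTier
    last_review: date | None = None  # None means it has never been reviewed


def overdue_for_review(record: AISystemRecord, today: date) -> bool:
    """Flag systems whose periodic review has lapsed.

    The cadence is an illustrative assumption: quarterly for high-risk
    systems, annually for everything else.
    """
    if record.last_review is None:
        return True
    cadence_days = 90 if record.risk_tier is RiskTier.HIGH else 365
    return (today - record.last_review).days > cadence_days
```

Even a register this simple answers questions most organisations currently can’t: what are we running, who owns it, and when did anyone last look at it?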
The reason this work suits people from compliance, legal, risk, IT, or project management backgrounds is that it’s fundamentally about building adaptable mechanisms of control and accountability. If you’ve spent years implementing SOX compliance, or managing enterprise risk frameworks, or coordinating cross-functional programs, you already understand how to build governance mechanisms that actually work in practice. What you need to add is enough understanding of AI systems to know what questions to ask and where the risks actually sit.
That’s a meaningful gap, but it’s bridgeable. The gap in the other direction is often harder to close. Taking someone with deep ML expertise and teaching them how to build governance programs, navigate regulatory requirements, and manage organisational change requires a very different kind of learning curve.
How to help someone make this transition
You have to be honest about what the work requires. This isn’t a soft landing for people who can’t handle technical roles. Effective AI governance requires genuine understanding of how AI systems work. Not at the level of writing code, but at the level of understanding what a model is doing, where it might fail, and what the implications are. Someone who wants to stay entirely on the policy or legal side without engaging with the technical reality will struggle. Have that conversation early. Talk about what’s changing in their current role, why their existing skills in compliance or risk or program management actually transfer, and what they’d need to build on top of that foundation.
Help them build the technical fluency without pretending they need to become data scientists. What they actually need is enough understanding to have productive conversations with ML engineers and to read a model card or risk assessment without their eyes glazing over. That’s achievable in months, not years. This might mean formal training, like an AI governance certification or a short course on AI. It might mean arranging for your ML team to run a workshop explaining how your systems actually work. It might mean sending them to an industry conference on AI regulation where they can see how practitioners talk about these problems. The best strategy I’ve seen and used is to simply get people to work side-by-side with engineering teams for an extended period. The key is making time for this within their working hours and covering the costs. Learning on top of an already demanding job, on your own time and your own dime, is a recipe for burnout and failure.
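For a sense of what “reading a model card” means in practice, here’s an invented fragment of one, expressed as a plain Python dict. Every name and number in it is hypothetical; real model cards (Hugging Face’s format is a common reference point) vary widely in structure and depth.

```python
# An invented fragment of a model card, as a plain dict. Every value here
# is hypothetical, for illustration only.
model_card = {
    "model": "invoice-classifier-v3",  # hypothetical internal model
    "task": "route incoming invoices into approval queues",
    "training_data": "18 months of historical invoices, EU entities only",
    "evaluation": {"overall_accuracy": 0.94, "worst_subgroup_accuracy": 0.81},
    "known_limitations": [
        "accuracy degrades on handwritten invoices",
        "not evaluated on non-EU invoice formats",
    ],
    "intended_use": "human-in-the-loop triage, not fully automated approval",
}

# The governance questions live in the gaps: what happens to the 6% that
# are misrouted? Why is the worst subgroup 13 points behind the average?
# And is anyone verifying that the human-in-the-loop condition still
# holds in production?
```

The goal isn’t for a transitioning employee to produce these numbers. It’s for them to look at a card like this and immediately know which three questions to ask the engineering team.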
Create opportunities for them to do governance work before they have a governance title. If your organisation is deploying AI systems, someone needs to think through the risks, document the decisions, coordinate the reviews. That work is often falling through the cracks because nobody owns it. An employee transitioning into this space can start contributing immediately by picking up those pieces. Have them help draft an AI Use Policy. Get them involved in a pilot project assessing risks for a new tool. Pair them with someone in legal or compliance who’s already working on AI issues, even informally. Practical experience builds confidence and credibility in ways that certifications alone don’t.
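One low-friction way to start “documenting the decisions” is a lightweight decision record for each deployment. The structure below is an assumption for illustration, loosely borrowed from the architecture-decision-record (ADR) pattern, and every name in the example is made up.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIDeploymentDecision:
    system: str                  # which system the decision concerns
    decision: str                # what was decided, in one sentence
    risks_considered: list[str]
    mitigations: list[str]
    approver: str                # who is accountable for the call
    decided_on: date
    revisit_by: date             # decisions should expire, not persist


# A made-up example: the paper trail for a chatbot pilot.
record = AIDeploymentDecision(
    system="support-chatbot-pilot",
    decision="Deploy to 5% of traffic, escalating to a human agent "
             "whenever model confidence is low",
    risks_considered=["hallucinated refund promises", "PII in transcripts"],
    mitigations=["fixed templates for billing answers", "transcript redaction"],
    approver="Head of Customer Support",
    decided_on=date(2025, 11, 3),
    revisit_by=date(2026, 2, 3),
)
```

A stack of records like this is exactly the kind of artefact that makes someone credible when a formal governance role opens up.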
Recognise progress publicly. Career transitions are disorienting. People worry they’re starting over, losing status, moving backwards. When someone completes a certification, mention it in your team meeting. When they spot a potential issue in a model review, highlight that contribution. If there’s a path to formally moving into a governance role, whether in your team or another, make that path visible. Ambiguity about where this leads is one of the biggest reasons people abandon transitions halfway through.
If you need to justify this investment to your own leadership, the data helps. The World Economic Forum found that two-thirds of companies expect to see a return on investment in skills training within just one year. Retraining someone who already understands your business, your customers, and your processes is almost always cheaper than laying them off and hiring a specialist externally. And in a field where qualified candidates are scarce, growing your own might be the only realistic option.
When layoffs actually happen
Everything I’ve written so far assumes you have time. Sometimes you don’t. Sometimes the layoffs are announced and your job is to help people who are leaving immediately, not transitioning over a period of months.
That situation is different, but you’re still not powerless.
Your HR department might tell you to have a five-minute scripted conversation and nothing more. Fine. Do that. Follow the process.
And then afterwards, be human. Call them. Talk to them properly. Stay in touch. Show compassion.
Your corporation might not be humane, but you can be.
For people who are leaving, the most valuable thing you can offer is honesty about their prospects and specificity about their strengths. Don’t just write a generic LinkedIn recommendation. Sit down with them and help them articulate what they’re actually good at, what they’ve built, what they understand that others don’t. Help them see how their experience might translate to roles they haven’t considered. If AI governance is a realistic path for them, say so, and explain why.
Make introductions. If you know people hiring in adjacent fields, connect them. If you know someone who’s successfully made a similar transition, put them in touch. Your network is one of the most concrete things you can offer someone who just lost their job.
And be honest about what happened. The instinct to protect the company’s reputation, or your own, is understandable. But people talk. If you handle layoffs with integrity, that gets around too. The people who stay are watching how you treat the people who leave. Your credibility as a leader depends on not pretending this is fine when it isn’t.
For people who are staying but shaken, you want to acknowledge the reality. Don’t immediately pivot to “but this is an opportunity for those of us who remain.” Give people space to be unsettled. Then, when the dust settles, have the honest conversations about where things are heading and what you can do together to prepare.
What one manager can actually do
I want to be realistic about what one manager can accomplish here. You’re not going to fix the structural dynamics driving these layoffs. You’re not going to convince your leadership to prioritise workforce development over cost reduction. You’re probably working within constraints that make even small investments of time and resources difficult to justify.
But you can change what happens for specific people.
The person on your team whose role is likely being automated doesn’t need you to solve the whole problem. They need you to have an honest conversation about what’s happening. They need you to help them see that their experience in compliance or risk or program management or any other field isn’t worthless in this new landscape. It’s foundational and valuable. They need time to learn, support while they’re learning, and someone who’ll advocate for them when opportunities come up.
That’s not nothing. For the person on the receiving end, it might be the difference between a disorienting scramble and a genuine career transition.
I’ve been on both sides of this. I’ve managed large teams and small teams. I’ve watched good people get cut with no warning and no support. I’ve also seen what happens when someone gets the right guidance at the right moment. They don’t just survive. They end up in work that matters more, that’s harder to automate, that uses their experience in ways their previous role never did.
I think the people who succeed in these transitions tend to share a few characteristics. They’re comfortable with ambiguity, because the field is still defining itself. They can move between technical and non-technical conversations without losing either audience. And they’re genuinely interested in the underlying questions, not just looking for a safe harbour. They’re willing to work hard with curiosity and determination.
AI governance is one path. It’s not the only one. But it’s a field that genuinely needs people, that rewards the kind of cross-functional thinking that experienced professionals bring, and that isn’t going away. If you can help even one person on your team see that path clearly, you’ve done something useful.
For me, that’s what compassionate management looks like in practice. Not inspirational talks about lifelong learning. Not forwarding articles about the future of work. Not pretending everything will be fine. Sitting down with someone, being honest about what you see coming, and helping them figure out what to do about it.
Corporations are not humane. There will certainly be more layoffs in 2026. But as a manager, you can still be a compassionate human.
