We fired our platform team


The Platform Fix | Issue #009

Hello Reader—

“We’re shutting down the platform team.”

The Slack channel went silent. 15 engineers. £4.2M annual budget. Gone.

But here’s the twist: It was their idea.

Six months later, deployment frequency increased 400%. Developer satisfaction hit 9.2/10. Platform costs dropped £3M annually.

The platform team didn’t get fired. They got promoted to “Product Engineering” and became the most valuable team in the company.

Here’s how they did it - and why your platform team should consider the same radical move.


THE PLATFORM BOTTLENECK EPIDEMIC

After analysing 73 platform teams, I discovered a shocking pattern:

The platform team becomes the biggest bottleneck to platform adoption.

The death spiral looks like this:

Week 1: “We need a platform team to help developers”
Week 12: “All requests must go through the platform team”
Week 24: “Platform team is overwhelmed, 6-week backlog”
Week 48: “Developers are building shadow infrastructure”
Week 96: “Platform adoption is 12%, we’re spending £4M on nothing”

Sound familiar?

That fintech client had 47 engineers supporting a platform used by… 23 developers. The math was brutal: £182K per active user annually.


THE SELF-SERVICE REVOLUTION

The breakthrough came when their platform lead asked one question:

“What if we made ourselves unnecessary?”

Instead of being the gatekeepers, they became the product builders:

Before (Gatekeeper Model):

  • Developers submit tickets for deployments
  • Platform team manually provisions resources
  • 6-week average turnaround time
  • 47 people managing infrastructure
  • 12% platform adoption

After (Self-Service Model):

  • Developers deploy directly via automated pipelines
  • Platform team builds tools and guardrails
  • 6-minute average deployment time
  • 8 people building platform products
  • 94% platform adoption

The secret: They stopped doing work FOR developers and started building tools that let developers do work THEMSELVES.
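To make “tools and guardrails” concrete, here’s a minimal sketch (Python) of the kind of pre-deploy check a self-service pipeline might run before letting a release through unattended. The rules, limits, and DeployRequest shape are illustrative assumptions, not that client’s actual setup:

  # Hypothetical guardrail check a self-service pipeline could run
  # before a developer's deployment proceeds without a ticket.
  from dataclasses import dataclass

  @dataclass
  class DeployRequest:
      image: str          # container image, e.g. "registry.internal/payments/api:2.0.1"
      cpu_request: float  # cores
      memory_mb: int
      has_rollback: bool  # pipeline defines an automated rollback step

  ALLOWED_REGISTRY = "registry.internal/"  # assumption: a single trusted registry
  MAX_CPU = 4.0
  MAX_MEMORY_MB = 8192

  def guardrail_violations(req: DeployRequest) -> list[str]:
      """Return reasons to block the deploy; an empty list means go."""
      problems = []
      if not req.image.startswith(ALLOWED_REGISTRY):
          problems.append("image must come from the internal registry")
      if req.cpu_request > MAX_CPU or req.memory_mb > MAX_MEMORY_MB:
          problems.append("resource request exceeds self-service limits")
      if not req.has_rollback:
          problems.append("automated rollback step is required")
      return problems

  if __name__ == "__main__":
      issues = guardrail_violations(DeployRequest("registry.internal/payments/api:2.0.1", 2.0, 4096, True))
      print(("BLOCKED: " + "; ".join(issues)) if issues else "APPROVED: deploy proceeds")

The point isn’t these specific rules - it’s that the check runs in seconds, with no human in the loop.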


THE PLATFORM AUTONOMY SCORECARD™

Rate your platform team (1-5 scale):

Developer Independence

1: Developers need tickets for everything
5: Developers self-serve 90% of needs

Request Turnaround Time

1: Weeks for simple requests
5: Minutes via automation

Team Focus

1: Reactive firefighting and manual work
5: Proactive product development

Adoption Rate

1: <30% voluntary adoption
5: >80% voluntary adoption

Knowledge Distribution

1: Platform team holds all knowledge
5: Developers understand and own their deployments

Score 15+ (out of 25): You’re building a product
Score 10-14: You’re a service team (risky territory)
Score <10: You’re a bottleneck (time for radical change)
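If you want to run the scorecard across a few teams, the arithmetic fits in a short Python snippet. The bands mirror the thresholds above; the dimension keys are just my labels for the five categories:

  # Score the Platform Autonomy Scorecard: five dimensions, 1-5 each, 25 max.
  DIMENSIONS = [
      "developer_independence",
      "request_turnaround",
      "team_focus",
      "adoption_rate",
      "knowledge_distribution",
  ]

  def autonomy_band(ratings: dict[str, int]) -> str:
      if set(ratings) != set(DIMENSIONS) or not all(1 <= v <= 5 for v in ratings.values()):
          raise ValueError("rate all five dimensions on a 1-5 scale")
      score = sum(ratings.values())
      if score >= 15:
          return f"{score}/25 - building a product"
      if score >= 10:
          return f"{score}/25 - service team (risky territory)"
      return f"{score}/25 - bottleneck (time for radical change)"

  print(autonomy_band({
      "developer_independence": 2,
      "request_turnaround": 1,
      "team_focus": 2,
      "adoption_rate": 2,
      "knowledge_distribution": 2,
  }))  # -> 9/25 - bottleneck (time for radical change)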


REAL STORY: THE £8M SELF-SERVICE TRANSFORMATION

Media company. 200 developers. Platform team of 52 people.

The breaking point: A developer survey showed 73% were “actively seeking alternatives” to the internal platform.

The transformation:

  • Fired 44 platform team members (with generous packages)
  • Kept 8 to build self-service tools
  • Gave developers direct access with guardrails
  • Built automated compliance and security

Results after 6 months:

  • Platform costs: £8M → £2.1M annually
  • Deployment time: 4 hours → 8 minutes
  • Developer satisfaction: 2.1/10 → 8.7/10
  • Platform adoption: 23% → 89%
  • Incidents: 47/month → 12/month

The kicker: Those 8 remaining platform engineers became the highest-paid team in the company. They weren’t managing infrastructure - they were building products that generated millions in developer productivity.


THE 4-PHASE SELF-SERVICE TRANSFORMATION

Phase 1: The Audit (Week 1-2)

  • Track every platform team request for 2 weeks
  • Categorise: Automatable vs. Requires human judgment (see the sketch after this list)
  • Survey developers: What do you actually need?
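The audit doesn’t need fancy tooling - a ticket export and a short script will surface your top automation candidates. A rough sketch, assuming a CSV with request_type, minutes_spent, and needs_human_judgment columns (your export will differ):

  # Tally two weeks of platform requests from a hypothetical ticket export.
  import csv
  from collections import Counter

  def audit(path: str) -> None:
      counts, minutes = Counter(), Counter()
      automatable = set()
      with open(path, newline="") as f:
          for row in csv.DictReader(f):
              rtype = row["request_type"]
              counts[rtype] += 1
              minutes[rtype] += int(row["minutes_spent"])
              if row["needs_human_judgment"].strip().lower() == "no":
                  automatable.add(rtype)
      print("Top candidates for Phase 2 automation:")
      for rtype, n in counts.most_common(5):
          tag = "AUTOMATABLE" if rtype in automatable else "needs judgment"
          print(f"  {rtype}: {n} requests, {minutes[rtype]} min total - {tag}")

  audit("platform_requests_2_weeks.csv")  # hypothetical export filename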

Phase 2: The Quick Wins (Week 3-6)

  • Automate the top 5 most common requests
  • Build self-service dashboards for status/metrics
  • Create “break glass” procedures for emergencies

Phase 3: The Product Pivot (Week 7-12)

  • Redefine platform team as product team
  • Focus on building tools, not doing tasks
  • Measure success by developer adoption, not tickets closed

Phase 4: The Scale Test (Week 13-24)

  • Gradually reduce manual interventions (tracked in the sketch after this list)
  • Monitor for gaps in self-service capabilities
  • Continuously improve based on developer feedback
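For Phase 4, the number worth watching is the share of requests that still needed a human from the platform team. A minimal sketch, assuming you log each request as (ISO week, was it manual?):

  # Week-over-week share of requests that still required manual platform work.
  from collections import defaultdict

  def manual_share_by_week(records: list[tuple[str, bool]]) -> dict[str, float]:
      total, manual = defaultdict(int), defaultdict(int)
      for week, was_manual in records:
          total[week] += 1
          if was_manual:
              manual[week] += 1
      return {week: manual[week] / total[week] for week in sorted(total)}

  records = [("2025-W30", True), ("2025-W30", False), ("2025-W31", False),
             ("2025-W31", False), ("2025-W31", True), ("2025-W32", False)]
  for week, share in manual_share_by_week(records).items():
      print(f"{week}: {share:.0%} of requests still manual")

If that percentage isn’t trending towards zero, you’ve automated tasks but not ownership.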

THIS WEEK’S PLATFORM PSYCHOLOGY INSIGHT

From last week’s Metric Makeover responses:

87% of you are tracking vanity metrics that hide real problems.

But here’s the connection to self-service: Teams with high self-service scores had the BEST business metrics.

Why? When developers own their deployments, they care about the metrics that actually matter. When platform teams do everything, developers disconnect from the consequences.

The uncomfortable truth: Your platform team might be shielding developers from the feedback loops that create great software.


THE SELF-SERVICE SUCCESS FORMULA

Instead of: “Submit a ticket for deployment”
Build: One-click deployment with automated rollback (see the sketch below)

Instead of: “Platform team will investigate”
Build: Self-service debugging tools and runbooks

Instead of: “We’ll provision your resources”
Build: Resource templates with cost visibility

Instead of: “Platform team owns production”
Build: Developer ownership with platform guardrails

Instead of: “Come to us for help”
Build: Documentation so good help isn’t needed
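“One-click deployment with automated rollback” sounds grand, but the control flow is simple. Here’s a skeleton with the deploy, health-check, and rollback steps stubbed out, because every stack wires those differently (GitOps commit, pipeline trigger, whatever you run):

  # Skeleton of a one-click deploy with automated rollback.
  # The three helpers are stubs - swap in your own CI/CD or GitOps calls.
  import time

  def deploy(version: str) -> None:
      print(f"deploying {version} ...")         # e.g. bump the image tag in the GitOps repo

  def healthy(version: str) -> bool:
      return True                               # e.g. probe /healthz, watch the error rate

  def rollback(previous: str) -> None:
      print(f"rolling back to {previous} ...")  # e.g. revert the manifest commit

  def one_click_deploy(version: str, previous: str, checks: int = 5) -> bool:
      deploy(version)
      for _ in range(checks):                   # give the release time to settle
          time.sleep(1)                         # shortened for the sketch
          if not healthy(version):
              rollback(previous)
              return False
      return True

  print("release kept" if one_click_deploy("1.4.2", "1.4.1") else "rolled back")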


READER SUCCESS: THE TEAM THAT MADE THEMSELVES OBSOLETE

“Steve, we followed your self-service framework. In 3 months, we went from 67 tickets/week to 3. Our platform team shrunk from 12 to 4 people, but those 4 are now building tools that the entire company uses. My CEO asked how we ‘magically’ became so efficient. The secret: We stopped being the bottleneck and became the highway.”

- Platform Director, SaaS (Edinburgh)


YOUR SELF-SERVICE CHALLENGE

This week, pick ONE manual process and automate it:

  1. Deployment approvals → Automated compliance checks
  2. Resource provisioning → Self-service templates
  3. Environment setup → One-click creation
  4. Monitoring setup → Automatic instrumentation
  5. Access requests → Role-based automation

Track the time saved. Multiply by 52 weeks. That’s your annual ROI from ONE automation.
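The math is simple enough to sanity-check in a few lines; the hourly rate below is an illustrative assumption, so plug in your own fully loaded cost:

  # Back-of-envelope ROI for one automation: hours saved/week x 52 x loaded rate.
  hours_saved_per_week = 6        # your measurement from this week's challenge
  loaded_hourly_rate_gbp = 75     # assumption - use your own fully loaded cost
  annual_saving = hours_saved_per_week * 52 * loaded_hourly_rate_gbp
  print(f"~£{annual_saving:,} per year from one automation")  # -> ~£23,400 per year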


WHAT’S COMING NEXT WEEK

Issue #010: “The Platform Migration That Saved Christmas”

  • When everything goes wrong at the worst possible time
  • Crisis management for platform leaders
  • Your disaster recovery playbook

Plus: Results from this week’s Self-Service Challenges!


📢 FROM THE NEWSLETTER TO THE STAGE

I'll be at BitSummit Hamburg (Sept 4th) sharing the full story behind our biggest platform transformation - the one that started with pink post-its and ended with GitOps clarity.

"From Console Chaos to GitOps Clarity: A FinTech Transformation Tale"

Newsletter readers get 15% off with code: STEVE_BITSUMMIT
Register: https://bitsummitapp.eventify.io/t2/tickets

See you in Hamburg? Reply and let me know!


Build products, not processes.

Steve

P.S. That platform team that “fired themselves”? They’re now the most requested team for new projects. Turns out, when you solve problems instead of managing them, everyone wants to work with you.

P.P.S. In the masterclass, I’ll show you the exact automation that saved one team 847 hours per month. Spoiler: It wasn’t complex.


© 2025 Steven Wade Consulting Ltd

113 Cherry St #92768, Seattle, Washington 98104-2205

