SaaS Platform Performance Overhaul
10x faster, 3x more customers, same infrastructure cost
A growth-stage B2B SaaS client
A growth-stage SaaS was bleeding customers and didn't know it. Renewals quietly stalled, support tickets rose, and exit interviews kept saying the same thing: the product was slow. Page loads averaged 8+ seconds, dashboards timed out for the largest tenants, and the database was the obvious villain - except every fix made things worse. I came in for a focused 12-week engagement, did a top-to-bottom performance audit, and shipped a layered fix: indexes and query rewrites first, a Redis cache where it actually paid off, a Next.js frontend with proper code splitting, and edge caching for the long tail of static and semi-static routes.
This is a representative architecture study based on real project patterns. Specific metrics and client details have been generalized to protect confidentiality.
Results
What changed, in numbers
The metrics the engagement is measured by.
800ms
Page Load
from 8+ seconds average
-40%
Churn Rate
performance-related churn reduction
3x
User Growth
user capacity on same infrastructure
1.2s
LCP
Core Web Vitals in green
Challenge
What was broken
Eight-second page loads and frequent dashboard timeouts were directly tied to churn. The team had spent a year adding database replicas and bigger Redis instances, with diminishing returns. The real problems were N+1 queries, a heavily used table indexed on the wrong column, a frontend that shipped 4MB of JavaScript on every route, and zero edge caching on routes that were trivially cacheable.
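To make the N+1 shape concrete, here is a minimal sketch of the pattern and its fix. The in-memory array stands in for the client's database, and the names (Widget, widgetsPerTenantNaive) are illustrative, not the real schema; in production each loop iteration in the naive version is a separate network round-trip.

```typescript
// Hypothetical in-memory "table" standing in for the real database.
type Widget = { tenantId: number; name: string };

const widgets: Widget[] = [
  { tenantId: 1, name: "a" },
  { tenantId: 1, name: "b" },
  { tenantId: 2, name: "c" },
];

let queryCount = 0;

// N+1 shape: one query per tenant in a loop.
function widgetsPerTenantNaive(tenantIds: number[]): Map<number, Widget[]> {
  const out = new Map<number, Widget[]>();
  for (const id of tenantIds) {
    queryCount++; // each iteration would be a database round-trip
    out.set(id, widgets.filter((w) => w.tenantId === id));
  }
  return out;
}

// Fixed shape: one query with an IN clause, then group in memory.
function widgetsPerTenantBatched(tenantIds: number[]): Map<number, Widget[]> {
  queryCount++; // a single round-trip
  const rows = widgets.filter((w) => tenantIds.includes(w.tenantId));
  const out = new Map<number, Widget[]>();
  for (const id of tenantIds) out.set(id, []);
  for (const row of rows) out.get(row.tenantId)!.push(row);
  return out;
}
```

The query count is the whole point: the naive version scales linearly with tenants per page, the batched version stays constant.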
Solution
The shape of the fix
A multi-layered strategy: database indexing and query optimization first, Redis caching for the genuinely hot paths, Next.js server components and aggressive code splitting on the frontend, and CDN edge caching for everything else - with performance budgets in CI to keep it that way.
Approach
How I tackled it
The concrete moves that took the project from broken to shipped.
Ran a full performance audit with real-user monitoring (RUM) and traced every slow request to a specific layer
Killed N+1 queries, added covering indexes, and rewrote the three worst dashboard queries with materialized views
Introduced Redis caching only on hot paths with measurable hit rates, not as a blanket cure
Refactored the Next.js app to use server components and route-level code splitting, cutting initial bundle size by 70%
Pushed cacheable routes to the CDN edge with stale-while-revalidate and surgical purges
Wrote a Core Web Vitals budget into CI so regressions get caught at the PR stage
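The "Redis only on hot paths" step follows the cache-aside pattern: check the cache, fall through to the source of truth on a miss, and track the hit rate so every cache has to earn its keep. A minimal sketch, with a Map standing in for Redis (in production these would be Redis GET/SETEX calls); the names are illustrative:

```typescript
// Map stands in for Redis; entries carry their own expiry.
const cache = new Map<string, { value: string; expiresAt: number }>();
let hits = 0;
let misses = 0;

// Cache-aside: serve from cache when fresh, otherwise load and store.
function cached(key: string, ttlSeconds: number, load: () => string): string {
  const entry = cache.get(key);
  if (entry && entry.expiresAt > Date.now()) {
    hits++;
    return entry.value; // hot path: the database is never touched
  }
  misses++;
  const value = load(); // fall through to the source of truth
  cache.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  return value;
}

// The hit rate decides whether a path keeps its cache at all.
function hitRate(): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}
```

If a path's hit rate stays low, the cache is just hiding latency variance and should be removed, which is exactly why the caches were introduced per-path rather than as a blanket layer.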
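The edge-caching step boils down to a Cache-Control policy: let the CDN serve from cache for a window, then serve stale content while revalidating in the background. A sketch of the header construction, with illustrative TTLs rather than the client's real numbers:

```typescript
// Builds the CDN caching policy: s-maxage controls how long the edge serves
// the cached copy; stale-while-revalidate lets it keep serving a stale copy
// while it refetches in the background, so users never wait on origin.
function edgeCacheControl(maxAgeSeconds: number, swrSeconds: number): string {
  return `public, s-maxage=${maxAgeSeconds}, stale-while-revalidate=${swrSeconds}`;
}

// Hypothetical usage in a route handler:
//   res.setHeader("Cache-Control", edgeCacheControl(300, 86400));
const example = edgeCacheControl(300, 86400);
```

The "surgical purges" half of the approach is the complement: when content actually changes, purge only that route's cache key at the CDN instead of waiting out the TTL.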
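The CI budget from the last step can be sketched as a simple gate: compare measured vitals against fixed thresholds and fail the PR on any breach. The budget values below mirror the published Core Web Vitals "good" thresholds (LCP 2.5s, CLS 0.1, INP 200ms); the measured numbers would come from a Lighthouse run or RUM export, and the function name is illustrative:

```typescript
type Vitals = { lcpMs: number; cls: number; inpMs: number };

// Core Web Vitals "good" thresholds used as the CI budget.
const BUDGET: Vitals = { lcpMs: 2500, cls: 0.1, inpMs: 200 };

// Returns a list of human-readable failures; CI fails the PR when non-empty.
function overBudget(measured: Vitals): string[] {
  const failures: string[] = [];
  if (measured.lcpMs > BUDGET.lcpMs) {
    failures.push(`LCP ${measured.lcpMs}ms exceeds budget ${BUDGET.lcpMs}ms`);
  }
  if (measured.cls > BUDGET.cls) {
    failures.push(`CLS ${measured.cls} exceeds budget ${BUDGET.cls}`);
  }
  if (measured.inpMs > BUDGET.inpMs) {
    failures.push(`INP ${measured.inpMs}ms exceeds budget ${BUDGET.inpMs}ms`);
  }
  return failures;
}
```

Wiring this into CI means a regression is a red PR check, not a support ticket two months later.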
Outcomes
What shipped, and what it changed
Measured results from the engagement, told as a story rather than a scoreboard.
Reduced p75 page load from 8.0s to 800ms across the most-visited routes
Cut performance-attributed churn by 40% over the following two quarters
Tripled active user capacity on the same infrastructure spend
Brought all Core Web Vitals into the green on every monitored route, with LCP at 1.2s
Reduced average database CPU by 55%, deferring a planned hardware upgrade
Stack
Technologies used
Linked entries open the technology page with related studies, playbooks, and notes.
Services
How I helped
The specific services involved in this engagement. Each links to a deeper breakdown.
Lessons
What I would tell the next team
The takeaways I carry into every similar engagement.
Performance work is rarely about a single silver bullet. The wins compound across layers
Adding caches before fixing the underlying queries just hides the problem until traffic doubles
If performance is not a budget in CI, it will regress within a quarter
Related
Other studies you might recognize
Engagements with overlapping problem shapes, industries, or stacks.
Have a similar challenge?
If any of this looks like the project on your desk, the conversation is the cheapest part. You can also browse other SaaS work or the full service list.