Postgres as Everything: The 2026 Reality
I keep replacing 'modern' tooling with Postgres extensions. The pattern keeps winning.
Vector search? Postgres has pgvector. Job queue? Postgres + SKIP LOCKED. Cache? Postgres unlogged tables. Search? Postgres FTS. The boring database keeps eating the modern stack.
Every six months I get pulled into a system with too many backing services: Postgres for relational data, Pinecone for vectors, Redis for cache, Elasticsearch for search, RabbitMQ for queues, ClickHouse for analytics. Six things to operate, monitor, secure, and version.
For startups with under ~50M rows of operational data, this is overengineering. Postgres handles every one of those concerns capably.
The replacements I've actually made
Vector search → pgvector. pgvector with an HNSW index handles up to ~10M vectors comfortably. Recall is competitive with Pinecone for most use cases. You give up some operational scale; you gain operational simplicity.
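A minimal sketch of the setup, assuming the pgvector extension is available; the documents table, its 1536-dimension embedding column, and the bound query parameter are illustrative, not from the original:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Hypothetical table of documents with precomputed embeddings.
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    body      text NOT NULL,
    embedding vector(1536) NOT NULL
);

-- HNSW index for approximate nearest-neighbor search.
-- vector_cosine_ops matches the <=> (cosine distance) operator below.
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- Top 10 most similar documents; bind the query embedding as $1.
SELECT id, body
FROM documents
ORDER BY embedding <=> $1::vector
LIMIT 10;
```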
Job queue → Postgres SKIP LOCKED. A table with a status column and a worker that runs SELECT ... FOR UPDATE SKIP LOCKED LIMIT 1 to claim work. Handles 10K+ jobs/second on commodity hardware. Reliable, replayable, queryable.
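A minimal sketch of the claim-and-complete cycle; the jobs table and its columns are illustrative assumptions:

```sql
-- Hypothetical job table; the partial index keeps claiming fast
-- as completed jobs accumulate.
CREATE TABLE jobs (
    id         bigserial PRIMARY KEY,
    payload    jsonb NOT NULL,
    status     text  NOT NULL DEFAULT 'pending',
    claimed_at timestamptz
);
CREATE INDEX jobs_pending_idx ON jobs (id) WHERE status = 'pending';

-- Worker: atomically claim one pending job. SKIP LOCKED means
-- concurrent workers skip rows another transaction already holds.
UPDATE jobs
SET status = 'running', claimed_at = now()
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;

-- After the work succeeds, mark the job done (bind the id as $1).
UPDATE jobs SET status = 'done' WHERE id = $1;
```

Because failed jobs are just rows, a retry is an UPDATE back to 'pending', and the whole queue is inspectable with plain SQL.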
Cache → Postgres + UNLOGGED tables. UNLOGGED tables skip the WAL, so insert/lookup latency is comparable to Redis for small payloads; the trade-off is that they're truncated after a crash, which is acceptable for a cache. You also give up TTL semantics and need to add a cleanup job.
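A minimal sketch of a key-value cache with hand-rolled TTLs; the cache table and the 5-minute TTL are illustrative:

```sql
-- UNLOGGED skips the WAL: fast writes, contents lost on crash.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);

-- Set with a TTL, overwriting any existing entry (bind $1, $2).
INSERT INTO cache (key, value, expires_at)
VALUES ($1, $2, now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Get, treating expired entries as misses.
SELECT value FROM cache WHERE key = $1 AND expires_at > now();

-- The cleanup job standing in for Redis's TTL eviction;
-- run it periodically, e.g. from cron or pg_cron.
DELETE FROM cache WHERE expires_at <= now();
```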
Full-text search → Postgres tsvector / pg_trgm. tsvector handles ranked full-text queries; pg_trgm handles fuzzy, typo-tolerant matching. Good enough for most product search. When it isn't, you'll know.
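A minimal sketch combining both, with an illustrative products table; the tsvector is maintained as a generated column so it can't drift out of sync with the source text:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE products (
    id     bigserial PRIMARY KEY,
    name   text NOT NULL,
    search tsvector GENERATED ALWAYS AS (to_tsvector('english', name)) STORED
);

CREATE INDEX ON products USING gin (search);             -- full-text
CREATE INDEX ON products USING gin (name gin_trgm_ops);  -- trigram

-- Ranked full-text search.
SELECT id, name, ts_rank(search, query) AS rank
FROM products, websearch_to_tsquery('english', 'wireless keyboard') AS query
WHERE search @@ query
ORDER BY rank DESC
LIMIT 20;

-- Typo-tolerant fallback via trigram similarity.
SELECT id, name
FROM products
WHERE name % 'keybord'
ORDER BY similarity(name, 'keybord') DESC
LIMIT 20;
```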
Time-series → TimescaleDB extension. If you need pure time-series, this is more capable than vanilla Postgres but lighter than running ClickHouse.
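A minimal sketch, assuming the timescaledb extension is installed; the metrics table and its columns are illustrative:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE metrics (
    time   timestamptz NOT NULL,
    device text NOT NULL,
    value  double precision
);

-- Turn the plain table into a hypertable partitioned by time.
SELECT create_hypertable('metrics', 'time');

-- Typical rollup: per-device hourly averages.
SELECT time_bucket('1 hour', time) AS bucket, device, avg(value)
FROM metrics
GROUP BY bucket, device
ORDER BY bucket;
```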
When this stops working
- Vectors above ~50M. pgvector starts hurting. Move to Pinecone, Weaviate, or Qdrant.
- Search with complex relevance tuning. Elasticsearch and the new generation (Typesense, Meilisearch) win.
- Analytics with TB-scale fact tables. ClickHouse, BigQuery, Snowflake.
- Sustained queue throughput >50K msg/sec. Move to Kafka or SQS.
The architectural lesson
The cost of adding a new backing service isn't just the runtime bill; it's the cognitive load on the team forever after. Onboarding gets harder, debugging gets harder, security review gets harder, observability gets harder.
You should add a new service when (and only when) Postgres genuinely can't do the job. Not when "best practice" says you should. Not when a vendor convinces you to.
I'd much rather operate one bigger Postgres than seven smaller services.
What I'd actually recommend
Pre-PMF startup (<10 engineers): Postgres for everything. Add a single managed Redis only when caching in Postgres becomes a bottleneck.
Series A (10-50 engineers): Postgres for primary state. Add specialized services only when you've measured Postgres being the bottleneck. Most teams don't measure first.
Series B+ (>50 engineers): You'll naturally have services. The question becomes which Postgres concerns are worth migrating off. Usually: analytics first, then cache, then vector search.
Boring tech wins. Flashy modern stacks kill more startups than scale problems do.