How we re-architected a property search platform across the US and Canada — handling millions of MLS listings with sub-second response times.
Real Estate SaaS · Platform Architecture
A growing real estate brokerage needed a modern, white-label website platform that could serve individual agents across the United States and Canada. Each agent needed their own branded IDX website with MLS-integrated property search, lead capture, and CRM capabilities — all powered by a single, centralized backend.
The initial version worked for 5 agents. But as demand grew, the platform buckled: tangled multi-tenant data, property searches that slowed to a crawl, and infrastructure that wasn't built to scale. They came to us to rebuild the core architecture and turn a fragile MVP into a production-grade SaaS platform.
When we audited the existing system, we identified four critical bottlenecks that were preventing the platform from scaling beyond a handful of agents.
Direct MongoDB queries for property search took 3–5 seconds under load, and auto-suggestions were unusable for agents showing properties to buyers in real time.
The cron-based sync was running as a single process with no error recovery. One failed MLS feed would stall updates for all agents across the platform.
All agents shared configuration, leads, and property data in flat collections. One agent's misconfiguration could affect every other agent's website.
Sold and active properties lived in the same collection with no clear status management. Agents were showing sold properties as active listings on their sites.
We designed a three-layer data architecture that separates concerns cleanly: MongoDB as the source of truth, Elasticsearch for fast search, and Redis as a high-speed cache layer sitting on top. Each layer does what it's best at — nothing more.
For individual property detail pages, we bypass both Redis and Elasticsearch entirely and query MongoDB directly. This ensures agents always see the most current property data — including status changes, price updates, and newly uploaded photos — without waiting for the cache to refresh.
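The split read path above can be sketched in a few lines. This is an illustrative stand-in, not the platform's actual code: the three stores are plain in-memory maps, and the listing shape and query are hypothetical, so only the routing logic itself is being shown.

```typescript
// Sketch of the read path: search traffic reads through Redis into
// Elasticsearch; detail pages bypass both and hit MongoDB directly.
type Listing = { id: string; status: string; price: number };

const mongo = new Map<string, Listing>();   // source of truth
const elastic = new Map<string, Listing>(); // search index, refreshed by cron
const redis = new Map<string, Listing[]>(); // cached search results

// Seed a listing that was just marked sold in MongoDB but whose
// index entry is still stale until the next sync cycle.
mongo.set("p1", { id: "p1", status: "sold", price: 450000 });
elastic.set("p1", { id: "p1", status: "active", price: 450000 });

// Search requests: Redis first, fall through to the search index.
function searchListings(query: string): Listing[] {
  const cached = redis.get(query);
  if (cached) return cached;
  const results = [...elastic.values()]; // stand-in for a real ES query
  redis.set(query, results);
  return results;
}

// Detail pages: straight to MongoDB, so status and price changes
// are visible immediately, with no cache refresh in the way.
function getListingDetail(id: string): Listing | undefined {
  return mongo.get(id);
}
```

The trade-off is deliberate: search results may lag by up to one sync cycle, but the page a buyer actually scrutinizes is always current.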
Full codebase audit. Mapped existing data models, identified bottlenecks, designed the three-layer architecture (MongoDB → Elasticsearch → Redis). Defined multi-tenant data isolation strategy.
Deployed Elasticsearch per environment. Built property search indexing pipeline, auto-suggestion engine, and geo-based filtering. Implemented Redis as a read-through cache with 2-hour TTL synced to the cron schedule.
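One detail worth making concrete is how a cache TTL stays "synced to the cron schedule." A minimal sketch of that idea, assuming a fixed 2-hour cycle anchored at the epoch (the helper name is ours, not the platform's): rather than a flat TTL from write time, each entry expires at the next sync boundary, so a cached result never outlives the index it was built from.

```typescript
// Cron-aligned TTL: entries written late in a sync window expire soon,
// entries written just after a sync get the full window.
const SYNC_INTERVAL_MS = 2 * 60 * 60 * 1000; // cron runs every 2 hours

// Seconds from `now` (ms since epoch) until the next sync boundary.
function ttlUntilNextSync(now: number): number {
  const elapsed = now % SYNC_INTERVAL_MS;
  return Math.ceil((SYNC_INTERVAL_MS - elapsed) / 1000);
}
```

In practice the returned value would be passed as the `EX` argument when setting the Redis key.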
Rewrote the MLS sync engine with delta detection, retry logic, and per-feed error isolation. Restructured MongoDB collections for clean multi-tenant separation. Split environment configs for USA Live, Canada Live, and Staging.
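The shape of that sync engine can be sketched as follows. Feed names, the retry count, and the helper signatures are illustrative; the point being demonstrated is that each feed gets its own retry loop, so one persistently failing MLS feed is marked failed and skipped instead of stalling every other feed.

```typescript
type FeedResult = { feed: string; ok: boolean; attempts: number };

// Delta detection: only re-index listings whose feed timestamp moved.
function isChanged(
  prev: { modified: string } | undefined,
  next: { modified: string },
): boolean {
  return !prev || prev.modified !== next.modified;
}

// Retry a single feed; a persistent failure is recorded, not thrown.
function syncFeed(
  feed: string,
  fetchFeed: (f: string) => void,
  maxRetries = 3,
): FeedResult {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      fetchFeed(feed);
      return { feed, ok: true, attempts: attempt };
    } catch {
      // swallow and retry; only this feed is affected
    }
  }
  return { feed, ok: false, attempts: maxRetries };
}

// Feeds are isolated: a failure in one never stalls the others.
function syncAllFeeds(
  feeds: string[],
  fetchFeed: (f: string) => void,
): FeedResult[] {
  return feeds.map((f) => syncFeed(f, fetchFeed));
}
```

A real implementation would add backoff between attempts and alerting on failed feeds, but the isolation boundary is the part that fixed the original single-process stall.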
Optimized Next.js frontend with ISR, dynamic imports, image optimization via S3 + CDN. Deployed white-label agent themes. Load-tested the full stack with simulated 200-agent traffic before go-live.
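For the ISR piece, a plausible route-segment sketch (the exact value and helper are our illustration, assuming regeneration is pinned to the same 2-hour sync cadence as the cache):

```typescript
// Next.js route segment config: ISR regenerates a search page at most
// once per sync cycle, since the underlying index only changes then.
export const revalidate = 7200; // seconds; matches the 2-hour MLS sync

// Freshness check mirroring the ISR rule: a page generated within the
// current window never serves data older than one sync cycle.
export function isStale(generatedAtMs: number, nowMs: number): boolean {
  return nowMs - generatedAtMs >= revalidate * 1000;
}
```

Detail pages, by contrast, would opt out of static regeneration entirely, consistent with their direct-to-MongoDB reads.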
Within 3 months of the new architecture going live, the platform handled a 40x increase in agent count without any degradation in performance or reliability.
Real estate SaaS platforms fail when they treat scaling as a hosting problem. The real challenge is data architecture — how property data flows from MLS feeds through your search layer and into the agent's browser. Getting that pipeline right is what separates platforms that cap out at 10 agents from ones that can serve hundreds.
The combination of Elasticsearch for search, Redis for cache, and MongoDB for source-of-truth isn't novel — but configuring them to refresh in sync with a 2-hour cron cycle, while keeping property detail pages on direct-to-DB reads, is the kind of nuance that only comes from building in this space repeatedly.
Every environment — USA Live, Canada Live, and Staging — runs its own local Elasticsearch and Redis instances. This isolation means a Canadian MLS sync failure never impacts US agent websites, and staging changes never touch production data.
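That isolation boils down to environments never sharing an endpoint. A sketch of what such a config map might look like (every hostname and database name below is hypothetical):

```typescript
// Per-environment infrastructure: each environment owns its own
// Elasticsearch, Redis, and database, so failures cannot cross over.
type EnvConfig = { es: string; redis: string; mongoDb: string };

const environments: Record<string, EnvConfig> = {
  "usa-live": {
    es: "http://es.usa.internal:9200",
    redis: "redis://redis.usa.internal:6379",
    mongoDb: "listings_us",
  },
  "canada-live": {
    es: "http://es.canada.internal:9200",
    redis: "redis://redis.canada.internal:6379",
    mongoDb: "listings_ca",
  },
  "staging": {
    es: "http://es.staging.internal:9200",
    redis: "redis://redis.staging.internal:6379",
    mongoDb: "listings_stg",
  },
};

// Fail loudly on unknown environments rather than falling back to a
// shared default — a silent fallback is how cross-environment leaks start.
function configFor(env: string): EnvConfig {
  const cfg = environments[env];
  if (!cfg) throw new Error(`unknown environment: ${env}`);
  return cfg;
}
```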
We help PropTech companies scale from MVP to production-grade SaaS. Let's talk about your architecture.