How we built a supply chain visibility platform for 2M+ shipments across 12 carriers — with carrier API integration, ETA prediction, exception alerts, and 45% fewer customer inquiries.
Carrier APIs · ETA · Exception Alerts
A 3PL and freight brokerage managing shipments for hundreds of shippers was drowning in “Where's my shipment?” calls. Each carrier had a different tracking portal — FedEx, UPS, DHL, and a handful of regional LTL carriers. Customer service reps looked up tracking numbers manually, one by one.
They wanted a unified visibility platform where shippers could see all shipments in one dashboard, get proactive exception alerts (delays, damage, customs hold), and receive predicted ETAs — with multi-tenant isolation so each customer sees only their data.
Twelve carriers, twelve different APIs — different auth, rate limits, and response formats. Some offered webhooks, others required polling. Error handling and retry logic varied wildly.
Carrier ETAs were often wrong — especially for cross-border or multi-leg shipments. Customers wanted more accurate delivery windows, but there was no historical data to train prediction models on.
Delays, damaged goods, and customs issues went unnoticed until customers called. No proactive notification. Support had no single view of exceptions across carriers.
Hundreds of shippers — each should only see their own shipments. The old system used shared spreadsheets. No role-based access or data isolation.
We built a supply chain visibility platform with a carrier abstraction layer that normalizes APIs from 12 carriers. Tracking events are ingested, stored in TimescaleDB for time-series queries, and used for ETA prediction. Exception rules trigger alerts. Multi-tenant isolation is enforced at query time.
The carrier abstraction layer was the foundation. We defined a canonical tracking event schema (timestamp, location, status, description) and built adapters for each carrier. Polling runs on a schedule; webhooks push when available. Deduplication ensures we don't store the same scan twice. Failed API calls go to a retry queue with exponential backoff.
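The canonical event, the deduplication key, and the retry schedule can be sketched as follows — field names and the backoff base are illustrative assumptions, not the client's exact schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class TrackingEvent:
    """Canonical tracking event (field names are illustrative)."""
    carrier: str
    tracking_number: str
    timestamp: datetime
    location: str
    status: str          # normalized status code, e.g. "IN_TRANSIT"
    description: str     # free-text carrier description

    def dedup_key(self) -> tuple:
        # The same physical scan reported twice (e.g. via webhook AND a
        # polling run) maps to the same key; description text may differ
        # between sources, so it is excluded on purpose.
        return (self.carrier, self.tracking_number, self.timestamp, self.status)

def backoff_delays(base: float = 2.0, retries: int = 5) -> list[float]:
    """Exponential backoff schedule for the retry queue: 2s, 4s, 8s, ..."""
    return [base * (2 ** i) for i in range(retries)]
```

Storing events keyed by `dedup_key()` (or an equivalent unique index) is what lets webhooks and polling coexist without double-counting scans.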
Documented 12 carrier APIs — auth, endpoints, rate limits. Designed canonical tracking event schema. Defined multi-tenant data model.
Built adapter for each carrier with retry and rate limiting. Implemented webhook receivers and polling jobs. Stored events in TimescaleDB. Built shipment and tenant management.
Built ETA prediction using lane history. Implemented exception rule engine (delay, damage, customs). Integrated email and in-app notifications.
Built shipper portal with tracking and alerts. Migrated historical shipments. Onboarded 50 pilot shippers, then full rollout. Measured inquiry reduction.
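The exception rule engine from phase three can be sketched as a small set of checks evaluated against each normalized event, with the delay threshold as the per-tenant sensitivity knob. Status names and the default threshold here are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Statuses that immediately raise an alert (illustrative vocabulary).
EXCEPTION_STATUSES = {"DAMAGED", "CUSTOMS_HOLD"}

def evaluate_exceptions(event: dict, promised_eta: datetime,
                        delay_threshold_hours: int = 24) -> list[str]:
    """Return the alert types triggered by one normalized tracking event.

    delay_threshold_hours is the per-tenant sensitivity setting.
    """
    alerts = []
    if event["status"] in EXCEPTION_STATUSES:
        alerts.append(event["status"].lower())
    # Delay rule: the shipment is still generating scans well past its ETA.
    if event["timestamp"] > promised_eta + timedelta(hours=delay_threshold_hours):
        alerts.append("delay")
    return alerts
```

Each triggered alert type then fans out to the email and in-app notification channels for the shipment's tenant.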
Carrier API diversity is the main integration challenge. Each carrier has different auth (API key, OAuth, certificate), rate limits (per second, per day), and payload structure. The abstraction layer hides this — our application works with a single normalized format. Adding a new carrier means writing an adapter plus configuration, not changing core code.
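The adapter contract can be sketched as a shared interface plus a per-carrier status map. The interface, map contents, and status codes below are illustrative assumptions, not any carrier's documented vocabulary:

```python
from typing import Protocol

class CarrierAdapter(Protocol):
    """Interface every carrier adapter implements (illustrative)."""
    carrier_code: str

    def fetch_events(self, tracking_number: str) -> list[dict]:
        """Call the carrier's API and return events in the canonical format."""
        ...

# Each adapter maps carrier-specific status codes onto the canonical
# vocabulary, so the rest of the platform never sees raw carrier codes.
# These example codes are hypothetical.
EXAMPLE_STATUS_MAP = {"DL": "DELIVERED", "IT": "IN_TRANSIT", "DE": "EXCEPTION"}

def normalize_status(raw: str, mapping: dict[str, str]) -> str:
    """Map a raw carrier status to the canonical vocabulary."""
    return mapping.get(raw, "UNKNOWN")
```

With this shape, onboarding a new carrier is one new class implementing `fetch_events` plus one status map entry in config, which matches the "adapter plus configuration" claim above.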
TimescaleDB was critical for tracking event volume. We ingest millions of scan events per month. PostgreSQL alone would struggle with time-range queries. TimescaleDB's hypertables and compression cut storage and query time significantly. We partition by carrier and time.
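The hypertable setup can be sketched in DDL. Table and column names are illustrative; partition counts and the compression window are assumptions to tune against real volume:

```sql
-- Illustrative schema: hypertable partitioned by time, with carrier as a
-- space dimension, and compression on chunks older than 30 days.
CREATE TABLE tracking_events (
    time            TIMESTAMPTZ NOT NULL,
    carrier         TEXT        NOT NULL,
    tracking_number TEXT        NOT NULL,
    tenant_id       TEXT        NOT NULL,
    status          TEXT        NOT NULL,
    location        TEXT,
    description     TEXT
);

SELECT create_hypertable('tracking_events', 'time',
                         partitioning_column => 'carrier',
                         number_partitions   => 4);

ALTER TABLE tracking_events SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'carrier, tenant_id'
);
SELECT add_compression_policy('tracking_events', INTERVAL '30 days');
```

Segmenting compression by carrier and tenant keeps the common dashboard query — "all recent events for one tenant" — fast even on compressed chunks.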
ETA prediction improved over time as we collected lane-level history. The initial version used carrier ETAs when available; we then layered on our own model using historical delivery times per origin-destination pair. Exception rules were built from common status patterns — delayed, out for delivery, exception (damage, hold). Configurable thresholds let each tenant tune sensitivity.
We help logistics companies build production-grade tracking and visibility systems. Let's talk about your architecture.