A logistics startup came to us with a simple ask: replace a WhatsApp-and-spreadsheet dispatch process with a proper on-demand courier platform that could scale across cities. Six months later, the platform was running 5,000+ bookings a day with 200+ courier partners — and cut average delivery time by 35%.
The client runs a last-mile courier business that started with a handful of riders in a single city. Growth was capped by operations: a dispatcher manually matched incoming WhatsApp and phone requests with available riders, routes were decided by guesswork, and billing was reconciled from paper slips once a week. Each new city added a dispatcher, a new WhatsApp group, and another layer of spreadsheet reconciliation.
By the time they came to us, the founders knew exactly what was wrong but did not have an internal tech team to fix it. They needed a technology partner who could take the idea from whiteboard to production — not just a code shop. Their specific pain points: dispatch that could not scale past one coordinator per city, routing decided by guesswork, and billing reconciled from paper slips a week after the fact.
We scoped the platform as three tightly integrated products plus a shared backend. This split is standard for on-demand logistics, but the devil is in the details — especially around real-time state sync and failure handling.
The customer app: a clean booking flow with source address (Google Places autocomplete), destination, parcel type/weight, pickup slot, delivery slot, and payment. Live tracking once assigned, ratings on completion, saved addresses, booking history, and re-book. Push notifications at each status change (assigned, picked up, in transit, delivered).
The rider app: the rider logs in, goes online, and receives job offers based on proximity and current load. On accepting, they get turn-by-turn navigation, pickup/delivery OTP verification, an in-app earnings wallet, daily and weekly reports, and support chat. It is offline-tolerant: the app queues state changes when the network drops and syncs on reconnect.
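The offline-tolerant sync can be sketched as follows. This is a minimal illustration assuming status changes are small idempotent events keyed by job ID; `OfflineQueue`, `StatusEvent`, and the `send` callback are illustrative names, not the production API:

```typescript
// Minimal sketch of the rider app's offline-tolerant sync, assuming status
// changes are small idempotent events. All names here are illustrative.
type StatusEvent = { jobId: string; status: string; at: number };

class OfflineQueue {
  private pending: StatusEvent[] = [];

  // `send` attempts delivery and returns false when the network is down.
  constructor(private send: (e: StatusEvent) => boolean) {}

  record(e: StatusEvent): void {
    if (!this.send(e)) this.pending.push(e); // queue on failure
  }

  // Called on reconnect: replay in order, keep whatever still fails.
  flush(): number {
    const stillPending = this.pending.filter((e) => !this.send(e));
    const delivered = this.pending.length - stillPending.length;
    this.pending = stillPending;
    return delivered;
  }

  get queuedCount(): number {
    return this.pending.length;
  }
}
```

In practice Firebase's built-in offline persistence covers most of this; a thin queue like the above is only needed for side effects (like OTP verification calls) that go through the REST API rather than the realtime database.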
The admin panel is operations' control tower: a live map of all active jobs and riders, manual override for tricky assignments, a dispute-resolution queue, rider onboarding with KYC, zone/pricing configuration, payout approval, MIS reports, and configurable alert rules (e.g. "page me if any job is unassigned for 90 seconds").
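A configurable alert rule like the 90-second example above boils down to a small predicate over job state. The types and field names below are assumptions for illustration, not the production schema:

```typescript
// Illustrative shape for a configurable alert rule such as "page me if any
// job is unassigned for 90 seconds". Types and field names are assumptions.
type Job = { id: string; status: string; statusSince: number }; // epoch ms
type AlertRule = { status: string; maxAgeSeconds: number };

// Returns IDs of jobs that have sat in `rule.status` longer than allowed.
function firingJobs(jobs: Job[], rule: AlertRule, nowMs: number): string[] {
  return jobs
    .filter((j) => j.status === rule.status)
    .filter((j) => (nowMs - j.statusSince) / 1000 > rule.maxAgeSeconds)
    .map((j) => j.id);
}
```

Run on a short interval against the hot job set, a check like this is cheap enough to evaluate every few seconds.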
The shared backend: a REST API for CRUD, Firebase Realtime Database for live job and location streams, and Cloud Functions for event-driven workflows (SMS, payment capture, invoicing, payout triggers). Razorpay handles payments; Google Maps Platform handles geocoding, directions, and ETA. All services sit behind an API gateway with rate limiting and observability (Datadog).
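The event-driven fan-out can be illustrated in plain TypeScript. The handler names below are stand-ins; in production each would be a separate Cloud Function triggered by a database write:

```typescript
// Sketch of the event-driven fan-out: one job event triggers independent
// workflows, mirroring how Cloud Functions react to database writes. The
// handler names are stand-ins, not the production functions.
type Handler = (jobId: string) => string;

const workflows: Record<string, Handler[]> = {
  delivered: [
    (id) => `sms:${id}`,     // stand-in for the delivery SMS function
    (id) => `invoice:${id}`, // stand-in for invoicing
  ],
  assigned: [(id) => `push:${id}`], // stand-in for the push notification
};

function dispatch(event: string, jobId: string): string[] {
  return (workflows[event] ?? []).map((h) => h(jobId));
}
```

Keeping each workflow independent matters: an SMS provider outage should delay notifications, not block invoicing or payout triggers.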
We picked the stack for three reasons: shared code between apps, real-time without building it ourselves, and cheap to operate at scale. Here is the breakdown:
Flutter for both apps: one codebase, two stores. Shaved ~35% off what a native iOS + native Android build would have cost. Performance on mid-range Android — the most common rider device — was indistinguishable from native in benchmarks.
Firebase for real-time: building reliable presence and location streaming from scratch is a multi-month project in itself. Firebase gives us that off the shelf, with battle-tested offline sync. Cost stayed predictable because we kept the hot real-time tree small and archived cold data to Firestore.
Node.js on the backend: the team's dominant language, plus easy interop with Firebase Functions. Typed end to end with TypeScript for schema safety.
The hard part was job assignment. Everything else is table stakes; assignment is what makes or breaks an on-demand platform.
The naive approach — "assign the nearest available rider" — fails in three ways: it overloads hotspot zones, it ignores rider load (a driver with 3 pending jobs is not "available"), and it doesn't handle declined offers gracefully. Our scoring function therefore combines travel time to pickup, current rider load, and each rider's recent decline rate.
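A sketch of that kind of scoring function follows. The weights are invented for illustration; the production values were tuned against live data:

```typescript
// Hypothetical scoring sketch: lower score wins the job offer. The weights
// are invented for illustration, not the tuned production values.
type Candidate = {
  riderId: string;
  etaMinutes: number;  // travel time to pickup
  pendingJobs: number; // current load
  declineRate: number; // share of recent offers declined, 0..1
};

function score(c: Candidate): number {
  const W_ETA = 1.0, W_LOAD = 4.0, W_DECLINE = 6.0; // assumed weights
  return W_ETA * c.etaMinutes + W_LOAD * c.pendingJobs + W_DECLINE * c.declineRate;
}

// Best candidates first.
function rank(cands: Candidate[]): Candidate[] {
  return [...cands].sort((a, b) => score(a) - score(b));
}
```

The point of the load and decline terms is that a rider three minutes away with three pending jobs should lose to a free rider six minutes away.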
Offers are broadcast to the top 3 candidates simultaneously with a 12-second timer. First acceptance wins; the others are told the job is gone. If all 3 decline or time out, we expand the radius and retry. This single change cut unassigned jobs from 6% to under 1.5%.
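The offer rounds can be sketched like this. Production broadcasts each batch concurrently with the 12-second timer; this version is serialized for clarity, with `decide` standing in for the rider's accept/decline response, and all names illustrative:

```typescript
// Sketch of the offer rounds: top-N candidates per radius, first acceptance
// wins, expand the radius when a whole batch declines or times out.
type Rider = { id: string; distanceKm: number };
type OfferResult = { assigned: string | null; round: number };

function assignJob(
  candidatesByRadius: Rider[][],        // one candidate set per widening radius
  decide: (riderId: string) => boolean, // true = accepted within the window
  topN = 3,
): OfferResult {
  for (let round = 0; round < candidatesByRadius.length; round++) {
    const batch = [...candidatesByRadius[round]]
      .sort((a, b) => a.distanceKm - b.distanceKm)
      .slice(0, topN);
    for (const rider of batch) {
      // First acceptance wins; remaining riders are told the job is gone.
      if (decide(rider.id)) return { assigned: rider.id, round };
    }
  }
  return { assigned: null, round: candidatesByRadius.length };
}
```

Jobs that exhaust every radius land in the admin panel's manual-override queue rather than silently failing.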
The v1 launch was intentionally narrow: one city, two zones, 20 hand-picked riders, and a 90-day soft-launch with heavy ops involvement. This gave us a tight feedback loop to fix the embarrassing bugs (there were plenty — mostly around edge cases in payment capture and OTP timing) before scaling.
City expansion after month 3 was driven by a configuration-first architecture: new cities mean adding zones, pricing, and cut-off times in a database, not shipping code. The fourth city onboarded in four working days end to end.
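A configuration-first city record might look like the sketch below; the field names and currency are assumptions, not the production schema. The key property is that launching a city is a row in a table, not a deploy:

```typescript
// Sketch of a configuration-first city record. Field names and currency
// are assumptions for illustration.
type ZoneConfig = {
  name: string;
  baseFare: number;   // INR
  perKm: number;      // INR per km
  cutoffHour: number; // last same-day pickup slot, 24h clock
};

type CityConfig = {
  city: string;
  zones: ZoneConfig[];
  surgeEnabled: boolean;
};

// Pricing reads only from config, so new cities need no code changes.
function quoteFare(zone: ZoneConfig, distanceKm: number): number {
  return zone.baseFare + zone.perKm * distanceKm;
}
```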
"We went from spreadsheet chaos to running 5,000+ daily bookings on a single dashboard. Driver app dispatch alone saved us two full-time coordinators. The team didn't just ship features — they pushed back when we asked for the wrong thing."
Operations Director
On-Demand Courier Platform · Mumbai
Two things, with hindsight. First, we would invest in observability from day one — we added Datadog around month 4, and that month of production blindness cost us real money (duplicate payouts from a retry loop). Second, we would ship a dedicated courier-partner onboarding flow earlier. Manual KYC via ops was fine at 20 riders and painful at 200.
Whether you are scaling from spreadsheets or replacing a legacy aggregator, we can ship your v1 in 4–5 months. Every quote is scoped against real project data from builds like this one.
Get a Free Consultation