Logistics · Mobile App · Full-Stack

From Manual Dispatch to 5,000+ Daily Bookings in Six Months

A logistics startup came to us with a simple ask: replace a WhatsApp-and-spreadsheet dispatch process with a proper on-demand courier platform that could scale across cities. Six months later, the platform was handling 5,000+ bookings a day with 200+ courier partners — and had cut average delivery time by 35%.

5,000+ Daily Bookings
35% Faster Delivery
200+ Courier Partners
4.6★ App Store Rating

Project at a glance

Industry
Logistics & same-day courier
Scope
Customer mobile app, driver mobile app, admin web dashboard
Team size
8 engineers + 1 PM + 1 designer
Timeline
5 months to v1 launch · 6 months to 5,000 daily bookings
Regions
Launched across 4 Indian metros; architecture built for multi-city scale
Status
Live · on an ongoing AMC with ITD GrowthLabs

The client & the problem

The client runs a last-mile courier business that started with a handful of riders in a single city. Growth was capped by operations: a dispatcher manually matched incoming WhatsApp and phone requests with available riders, routes were decided by guesswork, and billing was reconciled from paper slips once a week. Each new city added a dispatcher, a new WhatsApp group, and another layer of spreadsheet reconciliation.

By the time they came to us, the founders knew exactly what was wrong but did not have an internal tech team to fix it. They needed a technology partner who could take the idea from whiteboard to production — not just a code shop. Their specific pain points:

  • No self-serve booking: customers had to chat or call, which capped volume to the dispatcher's bandwidth.
  • No driver visibility: riders had no live view of the job queue, earnings, or navigation.
  • No tracking for customers: every "where is my parcel?" query was handled manually.
  • Billing chaos: invoicing was a weekly, error-prone batch process that delayed payouts.
  • No data: no dashboard for order volume, partner utilisation, dispute rate, or unit economics.

The solution: a three-app platform with shared infrastructure

We scoped the platform as three tightly integrated products plus a shared backend. This split is standard for on-demand logistics, but the devil is in the details — especially around real-time state sync and failure handling.

1. Customer app (Flutter, iOS + Android)

Clean booking flow: source address (with Google Places autocomplete), destination, parcel type/weight, pickup slot, delivery slot, payment. Live tracking once assigned, ratings on completion, saved addresses, booking history, and re-book. Push notifications at each status change (assigned, picked up, in transit, delivered).

2. Driver (courier partner) app (Flutter)

Rider logs in, goes online, receives job offers based on proximity and current load. Accepts → gets turn-by-turn navigation, pickup/delivery OTP verification, in-app earnings wallet, daily and weekly reports, support chat. Offline-tolerant: the app queues state changes when the network drops and syncs on reconnect.
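The offline-tolerant sync described above follows a standard queue-and-flush pattern. A minimal sketch in TypeScript (type and class names are illustrative, not the production code — the real app persists the queue to device storage):

```typescript
// A rider-side status event, e.g. "picked_up" at a given timestamp.
type StatusEvent = { jobId: string; status: string; at: number };

// Queues state changes while offline; flushes them in order on reconnect.
class OfflineQueue {
  private pending: StatusEvent[] = [];

  constructor(
    private send: (e: StatusEvent) => Promise<void>,
    private isOnline: () => boolean,
  ) {}

  // Record a state change; deliver immediately if online, else queue it.
  async record(e: StatusEvent): Promise<void> {
    if (this.isOnline()) {
      try {
        await this.send(e);
        return;
      } catch {
        // Network dropped mid-request: fall through to the queue.
      }
    }
    this.pending.push(e);
  }

  // Called on reconnect: flush oldest-first, stopping at the first failure
  // so events are never delivered out of sequence.
  async flush(): Promise<number> {
    let delivered = 0;
    while (this.pending.length > 0) {
      try {
        await this.send(this.pending[0]);
        this.pending.shift();
        delivered++;
      } catch {
        break;
      }
    }
    return delivered;
  }

  get queuedCount(): number {
    return this.pending.length;
  }
}
```

Flushing oldest-first matters: a "delivered" event arriving before its "picked up" event would corrupt the job's state machine on the server.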

3. Admin dashboard (React web app)

The operations control tower. Live map of all active jobs and riders, manual override for tricky assignments, dispute resolution queue, rider onboarding with KYC, zone/pricing configuration, payout approval, MIS reports, and configurable alert rules (e.g. "pager me if any job is unassigned for 90 seconds").
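An alert rule like the one quoted above is just data plus a predicate. A hedged sketch of the shape (field names are assumptions, not the production schema):

```typescript
// One configurable alert rule, stored as data so ops can edit it
// without a deploy. Field names are illustrative.
type AlertRule = {
  metric: "unassigned_seconds";
  threshold: number;                 // e.g. 90
  channel: "pager" | "slack" | "email";
};

// Evaluate a rule against the current value of its metric.
function shouldFire(rule: AlertRule, unassignedSeconds: number): boolean {
  return unassignedSeconds >= rule.threshold;
}
```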

4. Backend (Node.js + Firebase)

REST API for CRUD, Firebase Realtime Database for live job and location streams, Cloud Functions for event-driven workflows (SMS, payment capture, invoicing, pay-out triggers). Razorpay for payments. Google Maps Platform for geocoding, directions, and ETA. All services behind an API gateway with rate-limiting and observability (Datadog).
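The event-driven workflows fan out from job status changes: one status transition triggers the relevant side effects (SMS, payment capture, invoicing, payout). A simplified sketch of that routing, decoupled from the Firebase SDK — handler names and the returned action strings are illustrative, not the production code:

```typescript
// Job lifecycle statuses, matching the push-notification stages
// in the customer app.
type JobStatus = "assigned" | "picked_up" | "in_transit" | "delivered";

// Each handler returns a description of the side effect it would enqueue.
type Handler = (jobId: string) => string;

// Map each status transition to its workflow steps.
const workflows: Record<JobStatus, Handler[]> = {
  assigned:   [(id) => `sms:customer:${id}`],
  picked_up:  [(id) => `sms:customer:${id}`, (id) => `capture_payment:${id}`],
  in_transit: [(id) => `sms:customer:${id}`],
  delivered:  [(id) => `invoice:${id}`, (id) => `payout_trigger:${id}`],
};

// In production this would run inside a Cloud Function triggered by a
// status write; here it just returns the actions that would be enqueued.
function onStatusChange(jobId: string, status: JobStatus): string[] {
  return workflows[status].map((h) => h(jobId));
}
```

Keeping the routing table as plain data makes it easy to unit-test the workflow wiring without spinning up the Firebase emulator.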

Tech stack & why

We picked the stack for three reasons: shared code between apps, real-time without building it ourselves, and cheap to operate at scale. Here is the breakdown:

Flutter 3.x · Dart · React 18 · Node.js (Express) · TypeScript · Firebase RTDB · Firestore · Firebase Cloud Messaging · Google Maps Platform · Razorpay · AWS S3 · Datadog · GitHub Actions CI/CD

Flutter for both apps: one codebase, two stores. Shaved ~35% off what a native iOS + native Android build would have cost. Performance on mid-range Android — the most common rider device — was indistinguishable from native in benchmarks.

Firebase for real-time: building reliable presence and location streaming from scratch is a multi-month project in itself. Firebase gives us that off the shelf, with battle-tested offline sync. Cost remained predictable because we kept hot collections small and moved cold data to Firestore.

Node.js on the backend: the team's dominant language, plus easy interop with Firebase Functions. Typed end to end with TypeScript for schema safety.

The hardest technical problem we solved

Job assignment. Everything else is table stakes. Assignment is what makes or breaks an on-demand platform.

The naive approach — "assign the nearest available rider" — fails in three ways: it overloads hotspot zones, it ignores rider load (a driver with 3 pending jobs is not "available"), and it doesn't handle declined offers gracefully. Our scoring function combines:

  • Distance to pickup (Haversine, then refined with actual road ETA from Maps).
  • Rider load (current active jobs, weighted by estimated completion time).
  • Rider rating (last 30 days) — a small bonus to protect CSAT.
  • Decline streak — a cooldown so repeat decliners get fewer offers.
  • Zone parity — small bonus for riders who help balance an underserved zone.

Offers are broadcast to the top 3 candidates simultaneously with a 12-second timer. First acceptance wins; the others are told the job is gone. If all 3 decline or time out, we expand the radius and retry. This single change cut unassigned jobs from 6% to under 1.5%.
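The factors above combine into a single score per candidate. A minimal sketch of the idea — the weights here are illustrative placeholders, not the tuned production values:

```typescript
// One candidate rider for a job offer. Field names are assumptions
// standing in for the production data model.
type Candidate = {
  etaMinutes: number;    // road ETA to pickup (after Haversine pre-filter)
  activeJobs: number;    // current load
  rating30d: number;     // 1–5, trailing 30 days
  declineStreak: number; // consecutive declined offers
  zoneDeficit: number;   // 0–1, how underserved the rider's zone is
};

// Higher score = better candidate. Weights are illustrative only.
function score(c: Candidate): number {
  return (
    -1.0 * c.etaMinutes      // closer is better
    - 4.0 * c.activeJobs     // loaded riders rank lower
    + 0.5 * c.rating30d      // small CSAT bonus
    - 2.0 * c.declineStreak  // cooldown for repeat decliners
    + 1.5 * c.zoneDeficit    // nudge toward underserved zones
  );
}

// The broadcast step: rank all candidates and offer to the top N.
// First acceptance wins; the rest are told the job is gone.
function topCandidates(cs: Candidate[], n = 3): Candidate[] {
  return [...cs].sort((a, b) => score(b) - score(a)).slice(0, n);
}
```

Note how the load penalty can outrank raw proximity: a nearby rider carrying three active jobs will score below a slightly farther rider who is free, which is exactly the failure mode the naive nearest-rider approach cannot express.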

Go-to-market & ramp

The v1 launch was intentionally narrow: one city, two zones, 20 hand-picked riders, and a 90-day soft-launch with heavy ops involvement. This gave us a tight feedback loop to fix the embarrassing bugs (there were plenty — mostly around edge cases in payment capture and OTP timing) before scaling.

City expansion after month 3 was driven by a configuration-first architecture: new cities mean adding zones, pricing, and cut-off times in a database, not shipping code. The fourth city onboarded in four working days end to end.
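What "configuration-first" means in practice: a new city is a record, not a release. A hedged sketch of what such a record might look like (field names, zones, and fares are invented for illustration):

```typescript
// Illustrative city config record. Launching a city is a database
// insert shaped like this, not a code deploy.
type CityConfig = {
  city: string;
  zones: { name: string; baseFare: number; perKm: number }[];
  bookingCutoffHour: number; // last same-day pickup slot (24h clock)
  live: boolean;
};

// Example record — all values are made up for illustration.
const exampleCity: CityConfig = {
  city: "Pune",
  zones: [
    { name: "West", baseFare: 40, perKm: 9 },
    { name: "East", baseFare: 40, perKm: 9 },
  ],
  bookingCutoffHour: 18,
  live: true,
};

// Pricing reads from config, so a fare change is a row update.
function fare(cfg: CityConfig, zone: string, km: number): number {
  const z = cfg.zones.find((x) => x.name === zone);
  if (!z) throw new Error(`unknown zone: ${zone}`);
  return z.baseFare + z.perKm * km;
}
```

Because every city-specific behaviour reads from records like this, "onboarding a city" reduces to data entry plus ops verification — which is how the fourth city went live in four working days.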

Results after six months

5,000+ Daily Bookings (steady-state)
35% Reduction in Avg Delivery Time
200+ Courier Partners Onboarded
4.6★ App Store Rating (3,400+ reviews)
<1.5% Unassigned Jobs (from 6%)
4 Cities Live · architected for more
0 Dispatchers Required per City
22% Rider Retention Improvement

"We went from spreadsheet chaos to running 5,000+ daily bookings on a single dashboard. Driver app dispatch alone saved us two full-time coordinators. The team didn't just ship features — they pushed back when we asked for the wrong thing."

Operations Director

On-Demand Courier Platform · Mumbai

What we would do differently

Two things, with hindsight. First, we would invest in observability from day one — we added Datadog around month 4, and the production blindness before that cost us real money (a retry loop caused duplicate payouts). Second, we would ship a dedicated courier-partner onboarding flow earlier. Manual KYC via ops was fine at 20 riders and painful at 200.

Planning a similar logistics build?

Whether you are scaling from spreadsheets or replacing a legacy aggregator, we can ship your v1 in 4–5 months. Every quote is scoped against real project data from builds like this one.

Get a Free Consultation

Get Digital Growth Tips in Your Inbox

Weekly insights on app development, web design, SEO, and marketing. No spam — just actionable advice.

Join 2,500+ business owners. Unsubscribe anytime.