Field force automation is a category in which the distance between a compelling demonstration and a reliable production system is considerable. The demo environment has good connectivity, a small dataset, and cooperative conditions. The production environment has intermittent network coverage, hundreds of thousands of records, and field representatives who cannot afford to have their tools fail mid-route.

We have built and operated SFA platforms at this scale — including a system managing more than 376,000 retail outlets across Bangladesh for a major telecommunications operator. What follows reflects what we have learned about the decisions that determine whether a field force system actually works in the conditions it was designed for.

Offline capability is an architectural commitment, not a feature

The most consequential decision in field force platform design is made before a single line of application code is written: whether the system is designed to function without a network connection, or whether it assumes one will be available.

In South Asia, Southeast Asia, and large portions of Africa and the Middle East, connectivity in field conditions is not reliable. A system that requires a live server connection to function will fail precisely where it matters most — in a rural district, in a densely built commercial area with poor signal, in a basement stockroom. When a field representative cannot complete a visit record or submit an order because the application will not respond, the data is lost, the coverage metric is incorrect, and the representative's time has been wasted.

Offline-first is not a feature that can be added after the fact. It is an architectural position that shapes the data model, the synchronisation strategy, and the conflict resolution logic from the outset. The key components are:

Local-first data storage. All working data must reside on the device. SQLite is the appropriate choice for most mobile field force applications — mature, reliable, and well-supported across both iOS and Android. The application reads from and writes to the local store. Network availability determines only when that data synchronises with the server, not whether the application can function.
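As a sketch of this pattern, the Python below pairs every local write with an entry in a sync outbox, so the application commits immediately and uploads whenever connectivity allows. The table layout, column names, and function names are illustrative assumptions, not a prescribed schema:

```python
import json
import sqlite3
import time

# Local-first store with a sync outbox (illustrative schema).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE outlets (
        id TEXT PRIMARY KEY,
        name TEXT,
        updated_at REAL
    );
    CREATE TABLE outbox (          -- changes awaiting upload
        seq INTEGER PRIMARY KEY AUTOINCREMENT,
        payload TEXT
    );
""")

def record_visit(outlet_id: str, name: str) -> None:
    """Write locally and queue the change for sync in one transaction."""
    now = time.time()
    with db:  # atomic: the app can treat this as committed immediately
        db.execute(
            "INSERT OR REPLACE INTO outlets (id, name, updated_at) VALUES (?, ?, ?)",
            (outlet_id, name, now),
        )
        db.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (json.dumps({"id": outlet_id, "name": name, "updated_at": now}),),
        )

def drain_outbox(upload) -> int:
    """Call when connectivity returns; `upload` posts one change to the server."""
    sent = 0
    rows = db.execute("SELECT seq, payload FROM outbox ORDER BY seq").fetchall()
    for seq, payload in rows:
        upload(json.loads(payload))  # if this raises, the row stays queued
        with db:
            db.execute("DELETE FROM outbox WHERE seq = ?", (seq,))
        sent += 1
    return sent
```

The essential property is that the user-facing write and the queued upload commit together: a crash or a dead radio can delay synchronisation, but never lose the record.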

A well-defined conflict resolution strategy. When two representatives update the same outlet record independently while offline and both subsequently synchronise, the system must have a deterministic rule for which version is authoritative. This rule must be defined, documented, and tested before deployment — not resolved ad hoc when the first conflict occurs in production.
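One deterministic rule that works for many field force records is last-write-wins with a stable tiebreaker. The sketch below, with illustrative field names, compares client edit timestamps and breaks exact ties on a device identifier, so every replica resolves the same conflict to the same winner:

```python
from typing import TypedDict

class OutletRecord(TypedDict):
    # Illustrative fields; real records carry far more.
    outlet_id: str
    stock_level: int
    updated_at: float   # client-side timestamp of the edit
    device_id: str      # stable per-device identifier

def resolve(a: OutletRecord, b: OutletRecord) -> OutletRecord:
    """Deterministic last-write-wins.

    The newer edit wins; exact timestamp ties are broken by device_id,
    so the outcome never depends on which record synchronised first.
    """
    if a["updated_at"] != b["updated_at"]:
        return a if a["updated_at"] > b["updated_at"] else b
    return a if a["device_id"] > b["device_id"] else b
```

Last-write-wins discards the losing edit, which is acceptable for some fields and not for others; the point is that whichever rule is chosen, it must be this explicit and this testable before go-live.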

Optimistic UI throughout. The interface should respond immediately to every user action. Confirmation that data has reached the server is a background event. Requiring a server round-trip before the UI responds is not an acceptable design in a low-connectivity environment.
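The pattern behind an optimistic interface can be sketched independently of any UI framework. In the illustrative Python below (the state names and rollback bookkeeping are assumptions), a submission updates local state immediately and is undone only if the server later rejects it:

```python
import uuid

# Sketch: apply the change locally at once, reconcile with the server later.
pending: dict[str, dict] = {}   # change_id -> snapshot for rollback
orders: dict[str, str] = {}     # order_id -> status shown to the user

def submit_order(order_id: str) -> str:
    """Called from the UI. Returns immediately; no server round-trip."""
    change_id = str(uuid.uuid4())
    pending[change_id] = {"order_id": order_id, "previous": orders.get(order_id)}
    orders[order_id] = "submitted"      # reflected to the user instantly
    return change_id

def on_server_ack(change_id: str) -> None:
    pending.pop(change_id, None)        # confirmed; nothing to undo

def on_server_reject(change_id: str) -> None:
    change = pending.pop(change_id, None)
    if change:                          # roll back and surface the error
        prev = change["previous"]
        if prev is None:
            orders.pop(change["order_id"], None)
        else:
            orders[change["order_id"]] = prev
```

Rejections are rare relative to successes, which is why optimistic handling is the right default: the common path is instant, and the rare path is an explicit, recoverable event rather than a frozen screen.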

GPS data collection: accuracy, battery, and integrity

Every field force platform captures location data. The questions that matter in practice are how frequently, with what accuracy, and at what cost to device battery life.

Continuous high-accuracy GPS polling on a mid-range Android device — the dominant hardware class in South Asian field operations — will exhaust the battery in under four hours. That is not viable for a representative who begins their route at eight in the morning. The architecture must balance the legitimate need for location verification against the operational reality of the devices it runs on.

The approach that works at scale:

Location should be captured at the point of action — when a representative initiates an outlet visit, submits a compliance record, or closes a task — rather than on a continuous polling interval. This provides the location verification the business requires without the battery cost of persistent tracking.

Network-based location should serve as the primary positioning source, with GPS invoked only when greater accuracy is specifically required. Network triangulation is faster, substantially less battery-intensive, and accurate to within 50 to 100 metres — sufficient for outlet-level verification in most contexts.

Geofence validation should be performed server-side, not on the device. Storing geofence parameters locally creates an obvious vector for location spoofing. Server-side validation ensures that the verification logic cannot be circumvented at the device level.
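A server-side geofence check reduces to a great-circle distance test against the outlet's stored coordinates. The sketch below uses the haversine formula; the 150-metre default radius is an illustrative figure chosen to absorb network-location error, not a recommendation:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def within_geofence(report_lat: float, report_lon: float,
                    outlet_lat: float, outlet_lon: float,
                    radius_m: float = 150.0) -> bool:
    """Run server-side so the radius and the logic never leave the backend."""
    return haversine_m(report_lat, report_lon, outlet_lat, outlet_lon) <= radius_m
```

Because both the stored coordinates and the radius live on the server, a tampered client can fabricate a position but cannot learn where the boundary sits or alter how the check is applied.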

Reporting that drives decisions rather than displaying data

Field force platforms generate substantial volumes of data — visit records, order submissions, compliance photographs, GPS trails, attendance logs. The question that is rarely asked with sufficient precision at the design stage is: what decisions does this data need to enable?

The failure mode is common. The platform captures the data. A business intelligence tool is integrated. Dashboards are configured. After three months, the dashboards that nobody has learned to use are quietly abandoned, and the operational value of all that data collection is never realised.

Reporting that drives field operations works differently. It starts from the decision — what will a regional manager do differently because of this information — and works backward to the data required to support it. The output is not a configurable dashboard with forty available metrics. It is a small number of clear indicators, surfaced proactively, that tell the manager what requires attention today.

A system that notifies a regional manager that outlet coverage on a particular route has declined by thirty percent this week, and attributes that decline to specific representatives with attendance anomalies, is more operationally valuable than a system that makes all the underlying data available for the manager to analyse if they choose to.
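The alerting half of that example can start from something very simple. The sketch below, in which the data shape and the thirty percent threshold are illustrative, flags routes whose completed visits fell sharply week over week:

```python
def coverage_alerts(this_week: dict[str, int], last_week: dict[str, int],
                    threshold: float = 0.30) -> list[str]:
    """Routes whose completed visit count fell by more than `threshold`.

    Inputs map route id -> completed visits for the period. The threshold
    is an assumed figure; in practice it is tuned to the operation.
    """
    flagged = []
    for route, prev in last_week.items():
        if prev == 0:
            continue  # no baseline to compare against
        drop = (prev - this_week.get(route, 0)) / prev
        if drop > threshold:
            flagged.append(route)
    return flagged
```

The output of a function like this is what gets pushed to the regional manager; the attribution step (joining flagged routes against attendance anomalies) follows the same shape.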

Raw data export in standard formats — CSV, Excel — should be built from the start. Finance, supply chain, and HR functions will require access to the underlying data in formats they can work with directly, and retrofitting export capability is a common and avoidable source of friction.
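A minimal export path needs nothing beyond the standard library. In the sketch below, the column names are assumptions that would in practice be agreed with the consuming teams:

```python
import csv
import io

def export_visits_csv(rows: list[dict]) -> str:
    """Flatten visit records into a CSV string Finance or HR can open directly.

    `fields` is an illustrative column set; extra keys in a row are ignored
    rather than raising, so schema growth does not break the export.
    """
    fields = ["outlet_id", "rep_id", "visited_at", "order_value"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Building this early costs very little; the expensive version is the retrofit, done under pressure, against a schema that was never designed to be flattened.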

Data architecture for production volumes

Performance at development-time data volumes provides no indication of performance at production scale. An outlet query that executes in milliseconds against ten thousand records may take twelve seconds against four hundred thousand. By the time that is discovered in production, tens of thousands of representatives are experiencing a slow application, and the remediation options are constrained.

The database schema should be designed for the data volumes the system will carry in its third year of operation, not its first week. This means understanding which queries will be executed most frequently, what the join patterns are, and where indexes are required. It means testing with production-representative datasets before go-live, not after.
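SQLite, for instance, makes this checkable during development with EXPLAIN QUERY PLAN. In the sketch below (table and index names are illustrative), a composite index covers the hot query's filter columns, and the plan output confirms the planner actually uses it:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE visits (
        id INTEGER PRIMARY KEY,
        outlet_id INTEGER,
        rep_id INTEGER,
        visited_at TEXT
    )
""")
# The hot query is "a rep's visits for a date range", so the composite
# index covers exactly the columns that query filters on, in that order.
db.execute("CREATE INDEX idx_visits_rep_date ON visits (rep_id, visited_at)")

plan = db.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM visits WHERE rep_id = ? AND visited_at >= ?",
    (42, "2024-01-01"),
).fetchall()
# The plan should report a search using idx_visits_rep_date,
# not a full table scan.
```

The same discipline applies on any server-side database: enumerate the hottest queries, index for them deliberately, and verify the plans against a production-representative dataset before go-live.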

Slow query monitoring should be instrumented before launch. Performance degradation in production is easier to diagnose and address when you have a baseline and a history of query execution times. Without that instrumentation, the first indication of a problem is often user complaints.
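A minimal version of that instrumentation is a timing wrapper around query execution. In the sketch below, the half-second threshold is an illustrative figure that would be tuned against the measured baseline:

```python
import logging
import sqlite3
import time

log = logging.getLogger("slow_queries")
SLOW_QUERY_THRESHOLD_S = 0.5   # illustrative; tune to your baseline

def timed_query(db: sqlite3.Connection, sql: str, params=()) -> list:
    """Run a query, logging it if execution exceeds the slow-query threshold."""
    start = time.perf_counter()
    rows = db.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_QUERY_THRESHOLD_S:
        log.warning("slow query (%.3fs): %s", elapsed, sql)
    return rows
```

Production databases typically offer this natively (slow query logs, `pg_stat_statements`, and the like); the wrapper simply illustrates the principle that every query path should be measurable from day one.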


Building a field force platform that operates reliably at enterprise scale requires a series of early decisions — on offline architecture, GPS strategy, reporting design, and data modelling — that cannot be effectively revisited once the system is in production.

We build and operate field force platforms across telecoms, FMCG, and managed services. If you are planning a new deployment or working through the limitations of an existing system, we are happy to have a direct conversation.