Kenya Moves Toward Always-On Network Oversight and Operators Feel the Pressure
A plan to monitor performance in real time could redraw the balance between public frustration and corporate accountability

Telecom performance in Kenya has long been measured through drive tests, with engineers travelling across towns and highways collecting samples intended to represent everyday network use. The process produced delayed snapshots rather than a picture of current conditions. Reports arrived after conditions had changed, leaving regulators assessing problems that users had already experienced and moved past.
The Communications Authority now wants something closer to permanence. Its proposed Network Performance Monitoring System would sit within networks themselves, drawing performance data continuously rather than periodically. Alongside it, a mobile application and web portal would collect user experience data directly from subscribers’ devices, feeding the regulator a stream of real-time information about speeds, call quality, message delays, and streaming performance.
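For illustration only, here is a minimal sketch of what one device-side measurement report might look like, assuming a simple structured payload. The field names, values, and format are hypothetical and are not drawn from the authority's tender documents.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class MeasurementReport:
    """One hypothetical device-side sample of the metrics described above."""
    timestamp: float           # when the sample was taken (Unix seconds)
    county: str                # coarse location, matching county-level assessment
    download_mbps: float       # measured download speed
    upload_mbps: float         # measured upload speed
    call_setup_seconds: float  # how long a test voice call took to connect
    sms_delay_seconds: float   # delay before a test message was delivered
    video_buffer_events: int   # stalls observed during a short streaming test

# Example payload an app of this kind might send to a regulator's portal.
sample = MeasurementReport(
    timestamp=time.time(),
    county="Nairobi",
    download_mbps=24.6,
    upload_mbps=8.1,
    call_setup_seconds=2.3,
    sms_delay_seconds=1.4,
    video_buffer_events=0,
)
print(json.dumps(asdict(sample), indent=2))
```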
While the change looks technical, it actually alters the relationship between regulator, operator, and subscriber. Oversight moves from sampling toward surveillance of performance, from annual assessment toward continuous judgment. The regulator is no longer asking how networks performed last quarter. It is asking how they perform now.
When compliance becomes a moving target
The timing is not accidental. The authority has proposed raising the service quality threshold from 80 percent to 90 percent, while introducing penalties tied to county-level performance. Under existing measurements, no operator cleared the 90 percent mark in the last assessment cycle. That alone explains the urgency.
Continuous monitoring changes incentives. Operators accustomed to managing performance around known measurement periods now face evaluation without clear pauses. Network congestion, local outages, or temporary capacity constraints become visible in real time. A weak week in one county could carry regulatory consequences even if national averages remain acceptable.
The proposed penalty structure reinforces this pressure. Fines can reach 0.2 percent of annual revenue for failure to meet standards; on annual revenue of, say, KSh 100 billion, that works out to KSh 200 million. For large operators, that is not symbolic. It introduces a financial argument for network investment that goes beyond customer churn or competitive positioning.
Yet there is an underlying tension. Networks are complex systems influenced by geography, device quality, weather patterns, and user behaviour. Continuous measurement risks treating fluctuating technical environments as regulatory failure. The distinction between structural underperformance and temporary degradation becomes harder to maintain once measurement never stops.
Quality of service meets quality of experience
The more interesting element sits outside the network infrastructure itself. The proposed mobile application aims to measure quality of experience rather than just quality of service. That distinction has been gaining ground in telecom regulation worldwide.
Quality of service deals with technical metrics. Latency, accessibility, dropped calls, throughput. Engineers understand these. Quality of experience moves into perception. How long a video buffers. Whether a message feels delayed. Whether a connection feels reliable even when the technical thresholds are met.
This introduces subjectivity into regulation. A network may pass engineering benchmarks while users remain dissatisfied. Conversely, perception can improve even when underlying metrics remain unchanged. The regulator is attempting to bridge that gap by turning subscriber devices into measurement points.
There is an institutional logic here. Annual surveys are expensive and often outdated by the time results appear. Automated data collection promises scale and immediacy. But it also raises questions about interpretation. User experience varies widely depending on handset quality, application optimisation, and local congestion patterns that no operator fully controls.
The risk is that perception data, once quantified, acquires the authority of engineering data. Numbers tend to flatten context.
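To make the gap concrete, here is a toy sketch, not the regulator's methodology: the same connection can clear plausible engineering thresholds while a simple experience heuristic still flags it. All thresholds and values below are invented for the example.

```python
def passes_qos(download_mbps: float, latency_ms: float, drop_rate: float) -> bool:
    """Hypothetical quality-of-service check on raw network metrics."""
    return download_mbps >= 10.0 and latency_ms <= 100.0 and drop_rate <= 0.02

def feels_acceptable(video_buffer_events: int, page_load_seconds: float) -> bool:
    """Hypothetical quality-of-experience check on what the user perceives."""
    return video_buffer_events == 0 and page_load_seconds <= 3.0

# A connection that is technically compliant but still frustrating to use:
# speed, latency, and drop rate pass, yet streaming stalls and pages load slowly.
print(passes_qos(download_mbps=12.0, latency_ms=80.0, drop_rate=0.01))   # True
print(feels_acceptable(video_buffer_events=3, page_load_seconds=6.5))    # False
```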
Data, consent, and the expanding regulatory perimeter
Any system that gathers performance data from personal devices inevitably crosses into questions of data governance. The proposed application would capture speed tests, voice performance, messaging delays, and streaming behaviour. Even when anonymised, such datasets reveal patterns about usage habits, location density, and digital behaviour.
Kenya’s telecom sector already operates within an environment where trust in institutions fluctuates. Introducing a regulator-linked application that continuously gathers device-level data requires careful handling of consent and transparency. Adoption will depend less on technical design and more on public confidence that the data will be used narrowly for performance assessment.
There is also a competitive dimension. Operators may argue that embedding monitoring tools inside networks grants the regulator unprecedented visibility into operational performance. Some will see accountability. Others may see regulatory overreach, especially if data interpretation becomes contested.
Globally, regulators attempting similar approaches have discovered that measurement is rarely neutral. The choice of metrics influences investment priorities. Networks begin optimising for what is measured rather than what is experienced more broadly.
The economics beneath the compliance debate
Kenya’s telecom market has matured into a space where growth increasingly depends on data consumption rather than subscriber expansion. Network upgrades require significant capital expenditure, particularly outside major urban centres where returns are slower.
Raising compliance thresholds to 90 percent while increasing measurement frequency effectively raises the cost of maintaining regulatory approval. Operators may respond by accelerating investment in dense urban areas where performance improvements are most visible in aggregated metrics. Rural performance gaps, already persistent, could become harder to close if compliance penalties outweigh commercial incentives.
At the same time, the regulator faces its own constraint. Public frustration with connectivity remains high. Complaints about speeds, dropped calls, and inconsistent service have accumulated over years. A more assertive monitoring framework allows the authority to demonstrate action in an environment where digital connectivity increasingly shapes economic participation.
The outcome depends on balance. Excessive enforcement risks discouraging investment. Weak enforcement risks preserving the status quo.
Automation changes expectations
Once performance data becomes continuous, expectations follow. Users begin to assume that poor service should trigger immediate consequences. Regulators may feel pressure to intervene more frequently. Operators lose the space that periodic measurement once provided to correct problems before they became official findings.
This is where the project moves beyond technology. It reflects a broader evolution in regulation itself. Digital infrastructure is increasingly treated as an essential public utility rather than merely a competitive service. Reliability becomes a civic expectation.
Whether the new system improves everyday connectivity will depend less on software than on how the data is used. Measurement can expose problems. It does not automatically solve them. Enforcement decisions, investment responses, and public communication will determine whether automation produces better networks or simply more visible dissatisfaction.
The authority’s tender marks the beginning of that transition. What follows will reveal how Kenya defines accountability in a sector that underpins almost every part of modern economic life.


