Kenya Draws a Line Around AI Use but Not Around Its Value
Kenya is building oversight for AI at home while the real leverage sits in systems developed elsewhere
Kenya’s proposed Artificial Intelligence Bill, 2026 places a regulator at the centre of a fast-moving sector. It sets rules, defines risk, and attaches penalties of up to Sh5 million in fines or two years in prison. On paper, it reads as an attempt to bring order to systems that increasingly influence credit, hiring, logistics, and public services.
Yet the core tension does not sit inside the law. It sits outside it.
The Bill governs how artificial intelligence is used within Kenya. The economic weight of AI, however, is built earlier, long before any system reaches a local market. Data is gathered, models are trained, and products are refined across jurisdictions that Kenyan law does not fully touch. By the time an AI system is deployed in Nairobi, most of the value has already been locked in.
That gap is not accidental. It reflects how the global AI economy is structured.
Data Leaves Early, Value Returns Late
AI systems depend on scale. Large volumes of text, images, and behavioural traces are drawn into training pipelines, often from publicly accessible sources. Much of that material originates in countries that do not control the infrastructure where models are built.
Kenya is part of that pipeline. Its digital footprint feeds systems developed elsewhere, then reappears as subscription services, enterprise tools, or embedded software. The cycle runs outward when data is collected and inward when products are sold.
The proposed law does not interrupt that flow. It regulates the endpoint, not the origin.
This creates an uneven arrangement. Local companies may face strict compliance requirements while relying on systems whose underlying data and design remain outside domestic reach. Oversight becomes concentrated at the point of use, while the earlier stages of value creation stay diffuse and largely external.
There is a practical consequence here. If data continues to move without clear ownership or licensing rules, Kenya participates in the AI economy as a contributor of raw material, not as a co-owner of the finished product.
The Missing Architecture of Ownership
Data sovereignty is often described in territorial terms. In practice, AI complicates that idea. Once data is absorbed into a model, it is no longer retrievable in its original form. It becomes part of a statistical structure that resists simple claims of ownership.
The Bill does not yet define how Kenya might assert rights over data that enters these systems. There are no clear provisions on whether companies operating locally must disclose how training data is sourced, or whether certain categories of data require local storage or licensing.
That absence leaves the question open. If a dataset drawn from Kenyan users contributes to a model trained abroad, what claim, if any, does the country retain over that output?
Without a defined answer, negotiation tends to favour those who control infrastructure and capital. Legal ambiguity rarely benefits the weaker party.
| Friction Point | The Bill’s Approach | The Ground Reality |
|---|---|---|
| Data Sovereignty | Requires record-keeping of training data | Data is often scraped or “donated” through free apps long before regulation applies |
| Liability | Places responsibility on the deployer | Providers are often insulated by cross-border jurisdictional layers |
| Local Context | Mandates fairness and non-discrimination | Models trained on external datasets miss informal financial systems such as chamas and local credit patterns |
Liability Travels in Fragments
Responsibility in AI systems rarely sits with a single actor. A model developed in one country may be adapted by another firm and deployed by a third. When something goes wrong, tracing accountability becomes less straightforward.
The proposed framework places obligations on deployers, particularly in sectors classified as high risk. Annual compliance reporting is expected, with some disclosures made public. That creates a clear line at the point of use.
What remains less settled is how responsibility extends beyond that point.
If a hospital relies on a system built on a foreign model and an error occurs, the immediate exposure rests with the local institution. The upstream developer, often beyond Kenyan jurisdiction, sits further away from the consequences. That imbalance shapes behaviour. Firms become cautious, not only about adopting AI, but about how much risk they are willing to carry on behalf of systems they do not fully control.
In time, this could slow integration in sectors where reliability carries legal weight.
Compliance Costs Land Unevenly
The Bill introduces penalties designed to be felt. A fine of up to Sh5 million is not trivial, and the possibility of a two-year sentence adds a personal dimension to corporate decisions.
For large firms, these costs can be absorbed or planned for. For smaller companies, they alter the equation. Compliance becomes a barrier that arrives early in the lifecycle, before products have matured or revenue has stabilised.
There is also the matter of enforcement frequency. Inconsistent enforcement creates a different kind of uncertainty: companies begin to factor in not just what the law requires, but how often those requirements are likely to be tested.
Over time, that shapes the market. Some firms hold back. Others proceed, betting on low visibility. Neither outcome produces a stable environment for growth.
Local Context Remains an Afterthought
AI systems trained on global datasets do not always translate neatly into local conditions. Language, informal economic activity, and social context can all affect how a system performs.
The Bill addresses fairness in general terms. It does not set out detailed requirements on dataset composition or contextual testing. That leaves room for systems to meet regulatory standards while still missing the nuances of the environments in which they operate.
The gap shows up in subtle ways. A credit model may misread income patterns outside formal employment. A language system may struggle with code-switching or regional phrasing. These are not edge cases. They are routine features of the Kenyan context.
Without stronger requirements on localisation, compliance risks becoming procedural rather than practical.
A Regulator With Reach, a System Without Grip
The proposed Office of the Artificial Intelligence Commissioner is equipped with inspection powers and access to records on notice. It can request information, review systems, and enforce penalties. The structure is clear.
Execution is less certain.
Auditing modern AI systems requires technical depth. Models are complex, often proprietary, and not always transparent even to their creators. Assessing them involves more than reviewing documentation. It requires the ability to interrogate how they function in real-world conditions.
That capability takes time to build. Until it does, enforcement may lean on surface-level compliance rather than deep technical scrutiny.
There is a risk here. Authority can be established faster than capacity. When that happens, regulation exists in form before it fully exists in practice.
External Alignment, Internal Limits
Kenya’s framework draws from international models, including the European Union Artificial Intelligence Act. Alignment with such regimes has advantages. It can ease cross-border trade and make the country more predictable to external investors.
At the same time, the conditions are not identical. The European Union regulates within a bloc that has its own technological base. Kenya is applying a similar structure in a setting where many advanced systems originate elsewhere.
That difference shapes outcomes. A framework designed for internal capacity does not automatically transfer to a context of external dependence.
Where the Next Pressure Builds
The law brings AI into formal regulation. That alone marks a turning point. Systems that once operated in a grey zone now sit within a defined legal boundary.
The next set of questions is less about oversight and more about position.
If data continues to flow outward without clear ownership, the economic returns will follow the same path. If liability remains concentrated at the point of use, adoption will move cautiously. If local context is not built into system design, performance will remain uneven even when compliance is achieved.
None of these outcomes is fixed. They depend on how the law evolves and how it is applied.
Kenya has set the terms for governing AI within its borders. The harder task is shaping how it participates in the wider system that AI has already become.
Go to TECHTRENDSKE.co.ke for more tech and business news from the African continent and across the world.