For years, Meta has sold itself as a company determined to connect people, not exploit them. Yet inside its own records, another picture emerges — one where ad revenue tied to scams, fake shops, and banned products quietly underwrites its financial health. Internal papers reviewed in recent months reveal that as much as one in ten dollars Meta made in 2024 came from ads it classified as fraudulent or prohibited. That’s roughly sixteen billion dollars of business that, by the company’s own metrics, shouldn’t have existed.
What makes that number sting isn’t just its scale. It’s that Meta’s systems knew much of it was suspect. In internal presentations, the company estimated that Facebook, Instagram, and WhatsApp together served users fifteen billion high-risk scam ads every day. Each click and impression fed the same machine that now bankrolls Meta’s enormous investments in artificial intelligence and data infrastructure.
The company’s position is complicated. Executives argue that the figures were rough and inflated, and that many of those ads weren’t truly fraudulent. But the pattern inside the documents points to something larger — a business model entangled with the very misconduct it claims to fight.
A System That Tolerates What It Detects
Meta’s own enforcement policies reveal a careful calculus. Its internal systems block an advertiser outright only when they are at least 95 percent certain it is running scams. Anything below that bar, even an account judged likely fraudulent, is allowed to keep buying ad space, just at a higher rate. The company calls this a deterrent strategy, charging “penalty” prices to make scams less profitable. In practice, it means Meta earns even more from the accounts it suspects of deceit.
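The enforcement rule described above can be sketched as a simple decision function. This is purely illustrative, not Meta's actual code: the 95 percent block threshold comes from the documents, while the penalty multiplier and the "likely fraudulent" band below it are assumed values for the sake of the example.

```python
# Illustrative sketch of a confidence-threshold enforcement rule.
# BLOCK_THRESHOLD reflects the reported 95% certainty bar; the other
# numbers are assumptions, not figures from the documents.

BLOCK_THRESHOLD = 0.95    # reported internal bar for an outright ban
SUSPECT_THRESHOLD = 0.5   # assumed "likely fraudulent" band (hypothetical)
PENALTY_MULTIPLIER = 1.5  # hypothetical surcharge on suspected accounts

def enforce(scam_confidence: float, base_ad_price: float) -> tuple[str, float]:
    """Return (action, price charged) for one advertiser."""
    if scam_confidence >= BLOCK_THRESHOLD:
        return ("block", 0.0)  # high certainty: banned, no revenue
    if scam_confidence >= SUSPECT_THRESHOLD:
        # suspected but below the ban threshold: still served, at a markup
        return ("penalty_price", base_ad_price * PENALTY_MULTIPLIER)
    return ("serve", base_ad_price)  # low risk: normal auction price
```

Note what falls out of this rule: an advertiser flagged at, say, 90 percent confidence generates more revenue per ad than a clean one, which is exactly the contradiction the documents expose.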
The logic may make sense on a spreadsheet. It avoids false positives and protects ad revenue. Yet it also leaves the company profiting from the very risk it warns regulators it’s trying to curb. That contradiction defines much of the story here: a company that measures integrity against quarterly revenue impact.
Internally, managers even placed a cap on how much money could be lost to stricter policing. In early 2025, the limit was set at 0.15 percent of total revenue. Anything more aggressive required executive sign-off. It’s an odd guardrail for a company facing global scrutiny over online fraud — a reminder that for Meta, enforcement isn’t just a moral question but a budget line.
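To see the scale of that 0.15 percent guardrail in dollar terms, a back-of-envelope calculation helps. The revenue base here is an assumption: Meta's publicly reported total revenue for 2024, roughly 164.5 billion dollars, which is consistent with the article's figure of about sixteen billion at the ten percent mark.

```python
# Rough dollar value of the 0.15% enforcement-loss cap (illustrative).
# Assumes ~$164.5B total 2024 revenue as the base; the cap fraction
# is the figure cited in the documents.
total_revenue_usd = 164.5e9
cap_fraction = 0.0015  # 0.15% limit on revenue lost to stricter policing

cap_usd = total_revenue_usd * cap_fraction
print(f"Enforcement-loss cap: ${cap_usd / 1e9:.2f}B")  # about $0.25B
```

In other words, anything costing more than roughly a quarter of a billion dollars in forgone ad revenue needed executive approval, against a scam-linked revenue stream some sixty times larger.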
The Human Toll of a Scalable Problem
Behind those percentages sit ordinary users — people who believe they’re investing in a friend’s cryptocurrency scheme, buying discounted products from a familiar brand, or joining a job network that looks legitimate. In one case detailed in the documents, even a military recruiter’s hacked account drew in colleagues who lost thousands before the page was finally taken down.
The same algorithms that personalize ads also amplify these traps. A user who clicks one scam is likely to see more of them. Meta’s personalization engine interprets interest, not intent. Fraud, when viewed through the lens of engagement data, looks like demand.
That feedback loop has quietly turned social media into a central hub for online fraud. Meta’s own safety researchers estimated that a third of successful scams in the United States involved its platforms in some way. Regulators in the United Kingdom traced more than half of all payment-related scam losses in 2023 to Meta apps. Those numbers are now drawing the attention of financial watchdogs and lawmakers on both sides of the Atlantic.
Profit, Penalties, and Public Pressure
The company faces investigations from the U.S. Securities and Exchange Commission over financial scam advertising. In the U.K., regulators are exploring how Meta’s systems handle deceptive content that doesn’t technically violate its written policies. The company expects fines of up to one billion dollars — a substantial penalty, but still far smaller than the revenue it makes from scam ads in half a year.
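The arithmetic behind that comparison is straightforward. Taking the article's own figure of roughly sixteen billion dollars a year in scam-linked ad revenue (an estimate from the internal documents, not an audited number):

```python
# Back-of-envelope check of the fine-vs-revenue comparison (illustrative).
annual_scam_revenue_usd = 16e9  # figure cited earlier in the article
expected_fine_usd = 1e9         # upper bound of anticipated fines

half_year_revenue = annual_scam_revenue_usd / 2
ratio = half_year_revenue / expected_fine_usd
print(f"Half a year of scam ad revenue covers the fine {ratio:.0f}x over")
```

By that measure, even the maximum anticipated penalty amounts to well under a month of scam-linked revenue, which is why critics describe it as a cost of doing business rather than a deterrent.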
That arithmetic defines Meta’s risk posture. The documents suggest leadership agreed to pursue a gradual reduction in fraudulent ad revenue — from just over ten percent in 2024 to under six percent by 2027. The plan wasn’t driven by ethics or user harm. It was designed to protect growth projections while satisfying regulators. In other words, reform paced by accounting.
Meanwhile, Meta continues to pour tens of billions into AI and data centers, including a new complex in Ohio that will span as much ground as Central Park. The irony isn’t lost on investors or employees: the very infrastructure that promises better detection tools is financed, in part, by the money flowing in from the fraud it’s meant to prevent.
A Culture of Managed Exposure
What emerges from these papers is not a company unaware of the problem, but one managing it within acceptable financial limits. Even the internal “Scammiest Scammer” leaderboard — a darkly humorous weekly memo highlighting top offenders — shows a kind of normalization. Scams are treated as data points, not moral breaches.
When staff in Singapore flagged hundreds of fraudulent accounts tied to local users, Meta’s systems determined that three-quarters didn’t technically break its rules. The policies themselves had gaps so wide that fake offers and impersonations slipped through untouched. The company later acknowledged those loopholes but moved slowly to close them.
Some employees appear frustrated by that inertia. Others note that layoffs and reorganization stripped safety teams of the people and computing resources needed to respond faster. Fraud enforcement was told to “keep the lights on,” a phrase that suggests survival rather than improvement.
The Cost of Doing Business
Meta’s current strategy — to moderate, not eradicate — is rooted in the economics of scale. Policing billions of ads daily is not just technically complex; it’s expensive. Each additional layer of scrutiny slows the flow of revenue. And in a market where investors expect double-digit growth to fund AI and virtual reality projects, slowing that flow risks more than reputation.
That tension may be Meta’s defining story of this decade. The company knows how to detect much of the fraud on its platforms but treats the resulting user losses as acceptable collateral damage in the pursuit of growth. Regulators are beginning to understand this dynamic, framing their inquiries not around morality but around accountability — whether a company that profits from deception can be trusted to police it.
If the internal numbers are accurate, Meta’s biggest challenge isn’t detection or technology. It’s deciding what kind of business it wants to be when honesty starts to cost too much.
Go to TECHTRENDSKE.co.ke for more tech and business news from the African continent.