Behind a veil of secrecy, Sweden's social insurance agency deploys discriminatory algorithms in search of a fraud epidemic it has invented

Sweden is often celebrated as a model welfare state. It consistently ranks at the top of global transparency indexes and enjoys high levels of public trust. Yet behind this reputation for openness, one of the country’s most important public institutions — the Social Insurance Agency (Försäkringskassan) — has for years been quietly running large-scale algorithmic experiments on welfare recipients.
These predictive systems, designed to score citizens on the likelihood of committing fraud, have flagged hundreds of thousands of people. Many of those affected — parents, migrants, low-income workers — have suddenly found their benefits suspended or faced humiliating investigations. Few know that a machine-learning model, not human suspicion, put them under scrutiny.
The Secret Machines
Internal sources describe the fraud prediction models as the agency’s “best-kept secret.” For years, Lighthouse Reports and Svenska Dagbladet pursued answers through Sweden’s freedom-of-information system. The agency rejected nearly every request. Even basic details — such as how many people were flagged, what data was used, or whether the models were randomised — were denied under the justification that disclosure would help “fraudsters evade detection.”
In one email accidentally copied to a reporter, a senior official wrote dismissively: “Let’s hope we are done with him!”
The stonewalling continued despite the fact that watchdogs and even the agency’s own data protection officer had raised alarms. A 2016 Integrity Committee report called the practice “citizen profiling” and warned of risks to personal privacy. In 2020, an internal memo questioned the legality of the system.
The Data Trail
A breakthrough came when reporters obtained an unpublished dataset from the agency’s fraud detection program targeting Sweden’s temporary child support scheme, which provides assistance for parents caring for sick children.
The dataset contained records of over 6,000 people flagged by the algorithm in 2017, along with demographic information. With the support of eight academic experts, Lighthouse and Svenska Dagbladet conducted statistical fairness tests.
The findings were stark:
- Women were disproportionately flagged.
- Migrants and people without university education were overrepresented.
- Low-income earners were more likely to be labelled suspicious.
- Those wrongly flagged were overwhelmingly from these vulnerable groups.
In short, the algorithm didn’t just find “fraud”; it replicated social bias.
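The reporters have not published their test code, but the kind of fairness check described above can be sketched in a few lines of Python. The counts below are invented purely for illustration (they are not drawn from the 2017 dataset), and the comparison shown, a demographic-parity ratio of flag rates between groups together with a simple two-proportion significance test, is only one of several tests such an analysis might use.

```python
import math

# Hypothetical counts, for illustration only -- not the actual Försäkringskassan data.
# "flagged" = selected by the model for investigation; "total" = group size in the caseload.
groups = {
    "women":        {"flagged": 420, "total": 3000},
    "men":          {"flagged": 240, "total": 3000},
    "foreign_born": {"flagged": 300, "total": 1500},
    "swedish_born": {"flagged": 360, "total": 4500},
}

def flag_rate(group):
    """Share of a group that the model flagged for investigation."""
    return group["flagged"] / group["total"]

def parity_ratio(a, b):
    """Demographic-parity ratio: flag rate of group a over group b (1.0 means parity)."""
    return flag_rate(groups[a]) / flag_rate(groups[b])

def two_proportion_p_value(a, b):
    """Two-sided p-value for the difference in flag rates between two groups."""
    ga, gb = groups[a], groups[b]
    p1, p2 = flag_rate(ga), flag_rate(gb)
    n1, n2 = ga["total"], gb["total"]
    pooled = (ga["flagged"] + gb["flagged"]) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

for a, b in [("women", "men"), ("foreign_born", "swedish_born")]:
    print(f"{a} vs {b}: ratio = {parity_ratio(a, b):.2f}, "
          f"p = {two_proportion_p_value(a, b):.4f}")
```

A ratio well above 1.0 with a small p-value indicates that one group is flagged significantly more often than the other, which is the pattern the investigation reported for women, migrants, low-income earners and people without university education.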
Lives Disrupted
The first part of the joint series told the human stories: parents left without money for basic essentials, families whose only “crime” was having characteristics the model deemed risky.
The flagged cases are handled by a special investigative unit inside Försäkringskassan, operating behind closed doors. These investigators have sweeping powers, and while a human makes the “final decision,” the process often delays benefit payments for weeks or months.
As one parent put it: “It felt like we were guilty until proven innocent. We didn’t even know why.”
Baseless Claims of Fraud
The agency has long justified its surveillance system by citing estimates of widespread welfare fraud. Media outlets repeated these figures, reinforcing a narrative that welfare abuse was systemic.
But analysis by Lighthouse revealed the estimates rested on baseless assumptions. The definition of fraud often did not distinguish between honest mistakes and intentional deception. Even in cases referred to the justice system, Swedish courts rarely upheld intentional fraud charges.
Opacity and Accountability
The final part of the investigation focused on accountability. Experts were scathing:
- Virginia Dignum, professor of AI at Umeå University and UN advisor, criticised the agency’s refusal to publish details.
- David Nolan, from Amnesty’s Algorithmic Accountability Lab, noted: “How are individuals expected to effectively challenge a decision made about them — as is their right — when they are not even aware an automated system flagged them?”
Yet Anders Viseth, who supervises the agency’s fraud algorithms, dismissed calls for it to be more transparent: “I don’t think we need to be.”
Welfare, Justice, and Trust
Sweden’s welfare state is founded on social solidarity and trust. But the hidden deployment of “suspicion machines” risks undermining that very trust.
By quietly embedding algorithmic profiling into the heart of its welfare system, Sweden has created a new form of inequality: one where the most vulnerable are more likely to be flagged, investigated, and deprived of support — not for what they did, but for who they are.