Heerlen deploys AI against crime – but experts warn of profiling
Heerlen, Saturday, 28 February 2026.
The municipality of Heerlen is launching Pulse-Twin, an AI project that aims to predict crime using data from the police, Statistics Netherlands (CBS), and social services. The system is meant to enable intervention before crimes occur, but experts report serious concerns: there is as yet no evidence that such systems work, and the risk of ethnic profiling is high. Combined datasets can lead to structural discrimination against specific neighbourhoods. Amnesty International calls the move risky. Maastricht University will provide ethical oversight. Still, the question remains whether the gain in safety outweighs the loss of privacy. Mayor Roel Wever emphasizes that the city will never switch off its own brains. The project is funded in part by five million euros from the European Union.
Heerlen launches AI-driven crime prediction project
The municipality of Heerlen has initiated Pulse-Twin, an artificial intelligence project designed to predict and combat urban crime [1]. The system integrates anonymized data from police records, social services, and Statistics Netherlands (CBS) to identify areas at heightened risk of incidents such as arson, burglary, and vandalism [2]. Funded with a €5 million grant from the European Urban Initiative, the pilot aims to enhance public safety through proactive interventions before crimes occur [3]. Officials emphasize its experimental nature and stress that human oversight remains central to all decision-making [2].
Experts raise alarm over bias and privacy risks
Despite official optimism, data ethics experts warn there is currently no proven evidence that predictive policing systems reduce crime effectively [1]. Iris Muis of Utrecht University cautions that deploying digital twins for law enforcement carries significant risks of privacy violations and discriminatory outcomes [1]. She argues that historical data patterns may reinforce existing biases, leading to disproportionate scrutiny of marginalized neighborhoods [2]. According to Muis, such systems often become self-fulfilling prophecies: by directing more patrols to already heavily monitored areas, they increase the number of incidents recorded there [1].
Amnesty warns against ethnic profiling in algorithmic policing
Human rights organization Amnesty International has voiced strong concern over the potential for ethnic profiling embedded in automated prediction models [1]. Alexander Laufer, policy advisor on technology and human rights at Amnesty, states these systems inherently amplify societal inequalities because they rely on historically biased enforcement data [2]. “When predictive tools are used to guide police action, certain communities remain permanently under suspicion,” he explains [1]. Amnesty urges local authorities to reconsider the deployment of Pulse-Twin until robust safeguards against discrimination are legally binding and independently auditable [2].
System design includes ethical oversight but questions persist
To address mounting criticism, the city has commissioned Maastricht University to conduct independent ethical monitoring throughout the trial period [2]. The initial version of Pulse-Twin is scheduled for completion by December 31, 2026, followed by a closed testing phase involving selected municipal departments until March 2028 [3]. A full-scale rollout across Heerlen depends on evaluation results from this controlled period [2]. While officials insist that no personal identities are processed and that decisions remain human-led, critics question whether aggregated location-based data can truly avoid indirect identification or neighborhood stigmatization [1][2].
Past failures fuel skepticism around predictive algorithms
Skepticism toward Pulse-Twin builds upon previous national experiences with flawed predictive technologies [1]. In 2023, the Risk Assessment Instrument Violence (RIOG) was discontinued due to inaccuracies and a lack of transparency [2]. Similarly, the Crime Anticipation System (CAS) was recently halted after audits revealed outdated datasets and unchecked algorithmic bias [2]. These precedents underscore the challenge of balancing innovation with accountability. Critics argue that without strict legislative boundaries, even well-intentioned pilots risk normalizing mass surveillance practices under the guise of public order [1].
City leadership defends innovation amid civil society pushback
Mayor Roel Wever maintains that Heerlen’s approach prioritizes responsibility alongside technological experimentation [3]. “We will never turn off our own brains,” he asserts, emphasizing that AI outputs serve only as advisory inputs for human judgment [3]. He describes the project as part of a broader smart-city vision that could eventually support traffic optimization and energy management in buildings [3]. Nevertheless, civil society groups caution against expanding surveillance infrastructures regardless of secondary benefits [1]. For now, the debate centers on whether preventive security justifies systemic data aggregation in residential communities [1][2].