In her north London home, Maria Julia Assis was having dinner when her six-year-old son ran into the dining room, pale-faced. The puzzle game on his Android phone had been interrupted. In its place was a message from the Israeli Ministry of Foreign Affairs, warning in capital letters that those who cause harm would pay a heavy price, alongside footage of Hamas militants, frightened Israeli families, and blurred graphic images. Assis deleted the game immediately. Her son, she said, was shaken. He asked why a graphic advertisement had appeared inside his game. It was a reasonable question. No one had a clear answer.
What happened in that north London dining room in October 2023 turned out to be one of at least six documented instances across Europe, in Britain, France, Austria, Germany, and the Netherlands, in which the same government-funded video found its way onto screens in children's video games. The video was part of a $1.5 million digital advocacy campaign launched by the Israeli Ministry of Foreign Affairs after the October 7 attacks. Angry Birds. Subway Surfers. Solitaire. Stack. Games built for light entertainment, for distraction, for commutes and waiting rooms. The ads were not made for those spaces. They showed up there anyway.
David Saranga, head of digital at the Israeli Ministry of Foreign Affairs, confirmed that the government had promoted the advertisement and said officials had given advertisers explicit instructions that it should not be shown to anyone under eighteen. He said he did not know how it ended up in children's games. That admission, that the government issued the proper instructions yet had no control over where its most graphic content landed, captures the exact problem that has been building inside the digital advertising ecosystem for years. Instructions go out. Algorithms distribute. No single company knows where the content will finally appear. To find out who placed the ad in Angry Birds, Reuters contacted 43 advertising companies listed as Rovio's data partners. Twelve responded. None accepted responsibility.

The World Organization for Early Childhood Education, known as OMEP, had already been watching the larger pattern closely. Its official statement on the Israeli-Palestinian conflict, issued through its World Executive Committee, called for an immediate end to all aggression, condemned attacks on civilian populations, children above all, and declared that there are no just wars and that violence is the worst possible means of resolving social, economic, or political conflicts. The statement said nothing about gaming advertisements. It was about something bigger: even in the most politically charged settings, governments and international organizations consistently fail to treat children's protection as a non-negotiable priority.
The gaming advertisement incident made that abstraction concrete. Here was a government operating lawfully, spending its own funds, and using mainstream ad networks, including Google, which ran more than 90 ads for the ministry, to push footage of armed conflict into spaces where six-year-olds play puzzle games. The technology that enabled it is not specific to any one conflict or any one government. It is infrastructure. And according to UNICEF's October 2025 report on protecting children in online gaming, with more than 3.4 billion players worldwide and nearly nine in ten children in middle- and high-income countries playing online, that infrastructure now represents one of the largest unregulated points of contact between children and adult content anywhere in the world.
The global response has been building slowly. Civil society organizations have pressed tech companies to rein in advertising networks carrying conflict-related content, focusing on how ad-serving algorithms distribute material across gaming networks where age verification is weak or nonexistent. Rovio acknowledged that the ads had appeared in error and blocked them manually; that is better than nothing, but it also underscores how passive the system is. Distribution is the default. Correction is done by hand. The burden falls on a human being to notice and act, not on the algorithm to prevent the placement in the first place.
It is hard to ignore that Google, Apple, and the major ad networks, the companies closest to the problem, have offered no structural fixes. They have referred inquiries to developers. Developers have referred questions back to ad partners. Ad partners have denied responsibility. The circle closes, and the child is still standing in a London dining room holding a phone, asking a question that adults with billion-dollar platforms cannot clearly answer. Watching early childhood organizations and advocacy groups coalesce around the issue suggests that pressure is mounting for something to be done. What that something is remains genuinely unclear: regulation, voluntary industry standards, liability, or some combination of the three. But the coalition is larger than it was two years ago, and the documented record of what happened will not go away.
