Israel’s elite surveillance unit 8200 has developed an AI tool trained on vast amounts of intercepted Palestinian communications, aiming to transform its spying capabilities, a joint investigation by the Guardian, +972 Magazine and Local Call has revealed.
The system, similar to ChatGPT, was built using extensive surveillance of phone calls and messages in the occupied territories. Sources said it is designed to answer questions about monitored individuals and analyze huge volumes of data. Development accelerated after the Gaza war began in October 2023, though it remains unclear if the model has been deployed.
Former intelligence technologist Chaked Roger Joseph Sayedoff, who oversaw the project, admitted the model was fed “psychotic amounts” of Arabic-language data. Three ex-officials confirmed the LLM’s existence, while others described how earlier AI tools were already used for monitoring Palestinians.
“AI amplifies power,” one source said. “It’s not just about preventing attacks – I can track human rights activists, monitor construction in Area C. I have more tools to know what every person in the West Bank is doing.”
Details of the new model’s scale shed light on Unit 8200’s large-scale retention of the content of intercepted communications, enabled by what current and former Israeli and western intelligence officials described as its blanket surveillance of Palestinian telecommunications.
The project also illustrates how Unit 8200, like many spy agencies around the world, is seeking to harness advances in AI to perform complex analytical tasks and make sense of the huge volumes of information they routinely collect, which increasingly defy human processing alone.
Zach Campbell, a senior surveillance researcher at Human Rights Watch (HRW), expressed alarm that Unit 8200 would use LLMs to make consequential decisions about the lives of Palestinians under military occupation. “It’s a guessing machine,” he said. “And ultimately these guesses can end up being used to incriminate people.”
Mistakes will be made
For Israel’s Unit 8200, the appeal of an AI foundation model is its ability to process “everything that has ever been collected” and spot hidden patterns, said Ori Goshen, co-founder of AI21 Labs, whose employees contributed to the project while on reserve duty. But Goshen warned: “These are probabilistic models … often, the answer makes no sense. We call this ‘hallucination.’”
Experts caution that such flaws could have grave consequences. Brianna Rosen, a former White House national security official now at Oxford, said the tools might help detect threats early but could also produce false connections. “Mistakes are going to be made, and some of those mistakes may have very serious consequences,” she said.
Concerns grew after the Associated Press reported AI was likely used to select a target in a November 2023 airstrike in Gaza that killed four people, including three teenage girls. A message seen by AP suggested the strike had been conducted in error.
The Guardian asked the IDF about safeguards, but the military declined to comment on how it prevents inaccuracies or protects Palestinian privacy. A spokesperson said only that the army follows a “meticulous process” and ensures “professional personnel” are involved in intelligence decisions.
Source: First published in The Guardian