In modern warfare, control over digital infrastructure has become as critical as control over physical territory. Recent investigations have exposed how Israel has used Big Tech's tools to commit war crimes in Gaza, and how Big Tech has played a decisive role in enabling and enforcing digital repression, from AI-driven military targeting to widespread internet shutdowns.
A series of reports, including one from The Associated Press (AP), have confirmed that Israeli forces are using artificial intelligence and cloud computing systems—provided by U.S. tech giants—for mass surveillance and indiscriminate killings of Palestinians.
The siege of Gaza has exposed the growing role of Big Tech in warfare, surveillance, and digital repression, setting a dangerous precedent for future conflicts.
How Internet Shutdowns Became a Tool of Digital Repression
Gaza is facing widespread communications blackouts, severely impacting its 2.3 million residents amid relentless bombardment. The near-total disruption of internet services has restricted access to emergency response, separated families, and prevented critical information from reaching the outside world.
Most of Gaza's internet providers have been shut down entirely, while the remaining four suffer severe disruptions. According to measurements from IODA, Cloudflare Radar, and RIPEstat, these blackouts were not incidental but systematically executed, demonstrating how internet infrastructure can be wielded as a weapon of war.
Big Tech plays a crucial role in this connectivity crisis, as many ISPs in Gaza rely on upstream providers controlled by Israeli and foreign corporations. The dependence of key ISPs—including PalTel, DCC, and SpeedClick—on international providers makes Gaza’s digital infrastructure inherently vulnerable to targeted shutdowns, a tactic that can be replicated in future conflicts.
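The kind of outage monitoring that IODA and RIPEstat perform can be reproduced at a small scale: when an ISP's address blocks disappear from the global BGP routing table, its network becomes unreachable from the outside. The sketch below queries RIPEstat's public Data API for the prefixes a given autonomous system announces. Note that AS12975 is used here only as an illustrative ASN for PalTel and should be verified independently; this is a minimal sketch, not how the cited observatories actually operate.

```python
import json
import urllib.request

RIPESTAT = "https://stat.ripe.net/data/announced-prefixes/data.json"

def parse_prefixes(payload: dict) -> list[str]:
    """Extract the prefix strings from a RIPEstat 'announced-prefixes' response."""
    return [entry["prefix"] for entry in payload["data"]["prefixes"]]

def announced_prefixes(asn: str) -> list[str]:
    """Prefixes `asn` currently announces in global BGP, as seen by RIPEstat's
    route collectors. An empty list suggests the network has been withdrawn
    from the routing table, i.e. a blackout."""
    with urllib.request.urlopen(f"{RIPESTAT}?resource={asn}", timeout=30) as resp:
        return parse_prefixes(json.load(resp))

if __name__ == "__main__":
    # AS12975 is an illustrative placeholder ASN for PalTel; verify before use.
    prefixes = announced_prefixes("AS12975")
    if prefixes:
        print(f"{len(prefixes)} prefixes announced; network visible in global BGP")
    else:
        print("no prefixes announced; network withdrawn from global BGP")
```

Polling a query like this at regular intervals, for every ASN registered to Gaza's ISPs, is essentially how third-party observers were able to document that the blackouts were coordinated rather than collateral damage.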
Following its violation of ceasefire agreements, Israel intensified its repression by escalating both airstrikes and digital censorship, ensuring that Palestinian voices are silenced while the genocide continues unabated.
Censorship & Narrative Control on Social Media
Social media platforms have been accused of suppressing documentation of war crimes and civilian casualties. Instagram, Facebook, X, and other major platforms—alongside tech giants like Google, Microsoft, and Apple—have been scrutinized for algorithmic censorship that disproportionately targets content exposing human rights violations.
Reports indicate that posts documenting destruction, displacement, and civilian casualties have been flagged, shadow-banned, or removed under opaque content moderation policies. AI-driven moderation tools have exacerbated the issue, filtering out vital footage and testimonies under vague pretexts such as “sensitive content” or “violating community guidelines.”
Artificial Intelligence (AI) is being leveraged for an unprecedented level of surveillance. Independent journalist Antony Loewenstein revealed that corporate tech giants are compiling massive databases on Palestinians, tracking every aspect of their lives—including their movements, communications, and even their fears and desires.
“Palestinians are guinea pigs—but this ideology and work doesn’t stay in Palestine,” Loewenstein warned. “Silicon Valley has taken note, and the new Trump era is heralding an ever-tighter alliance among Big Tech, Israel, and the defense sector. There’s money to be made, as AI currently operates in a regulation-free zone globally.”
These practices raise broader concerns about the role of AI in controlling narratives during conflicts. The reliance on automated systems for content moderation not only suppresses critical reporting but also limits accountability by erasing digital evidence of war crimes.
Big Tech’s Deep Involvement
Israeli forces purchase access to advanced AI models from OpenAI through Microsoft's Azure cloud platform. OpenAI, which initially prohibited military applications, quietly removed that restriction from its usage policy in early 2024. Additionally, Google and Amazon have been supplying the Israeli military with cloud computing and AI services through Project Nimbus, a $1.2 billion contract signed in 2021.
Other major tech companies involved in Israel’s genocidal operations include:
- Microsoft – Provides AI models and Azure cloud computing.
- OpenAI – Supplies AI technology, despite initially prohibiting military use.
- Google – Supplies AI and cloud computing through Project Nimbus.
- Amazon – Provides cloud services alongside Google under the Project Nimbus contract.
- Cisco – Provides server farms and data centers for IDF operations.
- Dell – Supplies computing infrastructure.
- HP (Hewlett-Packard) – Supplies biometric identification and surveillance technologies used at Israeli military checkpoints, facilitating apartheid policies against Palestinians.
- Oracle – Provides cloud computing and data analytics for Israeli security operations.
- Intel – Develops military-grade AI chips used in IDF targeting systems.
- Nvidia – Supplies high-performance AI chips and computing infrastructure for Israeli defense projects.
- Red Hat (IBM subsidiary) – Sells cloud computing services to the Israeli military.
- Palantir Technologies – Has a “strategic partnership” with the Israeli military for predictive policing and surveillance.
Despite facing employee protests, these companies continue to support and supply Israel with the tools necessary to sustain its war machine. Google, in particular, has been “directly assisting” the IDF while simultaneously attempting to distance itself from the Israeli government. The company even fired dozens of employees who protested against its complicity in war crimes through the “No Tech for Apartheid” movement. One such employee, who interrupted a presentation by Google’s Israel managing director, stated, “I refuse to build technology that powers genocide, apartheid, or surveillance.”
The Role of AI in Military Targeting and Surveillance
Israel’s use of Microsoft and OpenAI technology “skyrocketed” following Hamas’ October 7, 2023, attack, according to the AP. AI systems are being leveraged to select bombing targets at an unprecedented rate, leading to the slaughter of over 50,000 people in Gaza and Lebanon.
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief artificial intelligence scientist at the AI Now Institute. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”
Israeli military officials have openly referred to AI as a “game changer” in enabling faster targeting. The brutal efficiency of these AI-powered systems is evident: In the past, Israeli forces might have selected 50 targets in a year; now, they generate over 100 in a single day.
Big Tech corporations, including Google, Amazon, and Microsoft, have played a significant role in supplying AI-driven surveillance and military targeting systems. Projects like Project Nimbus, a $1.2 billion cloud computing contract involving Google and Amazon, have been linked to the development of advanced AI tools that enhance military precision targeting.
Warfare automation is no longer hypothetical; it is happening in Gaza now. Media investigations of the genocide revealed that the Israeli government used the AI targeting systems “Lavender” and “The Gospel” to automate mass slaughter and destruction across the Gaza Strip. This is the culmination of a series of rights-abusing AI trends, from biometric surveillance systems to predictive policing tools.
This is not the first time Israel has used Palestinians as test subjects for weaponry. The Gaza Strip had long served as a testing ground for Israeli defense startups. From suicide drones and smart guns to AI-powered armored vehicles, Israel has turned Gaza into an experimental laboratory for military technology.
The AI system Habsora (Hebrew for “The Gospel”) is a critical tool in Israel’s campaign of indiscriminate killing. According to a +972 Magazine report, the system enables mass assassinations by prioritizing volume over accuracy, allowing mid-ranking officers to authorize airstrikes with minimal oversight.
Early in the war, these officers were permitted to sacrifice up to 500 civilian lives per day. Days later, even this limit was lifted, leading to catastrophic attacks such as the IDF’s bombing of the Jabalia refugee camp in October 2023. That single AI-driven airstrike, using U.S.-supplied 2,000-pound bombs, killed at least 126 people, including 68 children, and wounded 280 others.
These AI-driven targeting systems have raised concerns about algorithmic bias and the potential for disproportionate civilian harm. AI-powered facial recognition and predictive targeting tools are increasingly becoming staples of modern warfare, with minimal oversight or accountability.
The Future of Digital Warfare & Ethical Concerns
Gaza has become a testing ground for digital warfare strategies, including AI-powered surveillance, automated military decision-making, and large-scale internet blackouts. These developments highlight an urgent need for global scrutiny over how Big Tech’s innovations are used in conflict zones.
As AI militarization accelerates and digital repression becomes a norm, the ethical, legal, and humanitarian implications must be addressed. Corporations’ ability to profit from technologies that can be weaponized for warfare underscores the need for transparency, accountability, and stronger regulatory frameworks to prevent the misuse of digital tools in future conflicts.
The genocide in Palestine serves as a stark reminder of how technology is reshaping the landscape of modern warfare. If left unchecked, the growing entanglement of Big Tech with military operations could redefine the ethics of war, blurring the lines between technological innovation and digital oppression.
Rimsha Salam is a tech-enthusiast, writer, blogger, ex-quality assurance engineer, and freelancer. She writes on the latest tech trends, gadgets, Information technology, and more. Always eager to learn and ready for new experiences, she is a self-proclaimed tech geek, bookaholic, introvert, and gamer.
Image: Pixabay