Israel Is Already Weaponizing AI — But Not in the Ways It Claims to Be
Sunday, December 22, 2024

Do scholars and activists who support Palestinian rights sometimes unintentionally promote the Israeli arms industry? The Israeli military hype machine famously uses the occupation as a “laboratory” or as a “showcase” for its newly developed weapons, but this creates a dilemma for activists who oppose Israeli arms exports. Scholars and activists are morally obligated to highlight the crimes committed by the Israeli forces. But by pointing to the destruction, suffering, and death caused by these weapons, activists may inadvertently reproduce exactly the propaganda that allows Israel to sell its technologies of death, destruction, and repression.

To avoid falling into the trap of Israeli hype, we must take a step back and look at Israeli methods of oppression and state violence over time. Recently, Israeli forces in the West Bank have returned to the methods of 20 years ago, of the second Intifada: an Apache helicopter spraying a whole crowd with bullets. The technology is going backward.

Spyware is a good example of this hype. Israeli spyware companies received government authorization to sell spyware to the highest bidder, or to authoritarian regimes with which the Israeli government wanted to improve relations. This doesn’t make spyware an Israeli technology — intelligence organizations in the U.S., Russia, and other countries with access to spyware simply do not offer it for sale on the market. In his book The Palestine Laboratory, Antony Loewenstein discusses how this hype is manufactured to boost the sales of Israeli arms companies, and Rhys Machold has also warned about critical texts against Israeli crimes being subverted into promotional materials by the very companies that activists are trying to stop. The most recent development of the hype machine is artificial intelligence.

The rapid development of artificial intelligence with the ability to learn and adapt evokes both awe and fear in the media and on social media, so it is no surprise that Israeli apartheid institutions are already trying to brand themselves as the forerunners. In her article for +972 Magazine, Sophia Goodfriend warns of the use of artificial intelligence by the Israeli military, but her only source for this claim is the Israeli military itself. In June 2022, Israel’s largest arms company, Elbit Systems, showcased its new system, a swarm of killer robots called Legion-X, labeling it as “AI-driven.” The weapon is indeed terrifying. It’s important to stress, however, that the Legion-X contains fewer AI features than a self-driving car and that there is no evidence that it will be any more or less lethal than any other military unit operating in a civilian neighborhood in occupied territory. Netanyahu gave a passionate speech about Israel being a world leader in AI research, which contains about as much truth as any other Netanyahu speech.

Sam Altman, the CEO of OpenAI and one of the most famous developers of the ChatGPT system, declined the opportunity to meet with Netanyahu during a planned trip to Israel earlier in June. Netanyahu then quickly announced that Israel would contract NVIDIA, a company whose stock was soaring because of its involvement with AI, to build a supercomputer for the Israeli government. The plans were scrapped within days, when it became apparent that the idea to build the supercomputer was based on a whim and not on any feasibility study.

Interestingly, the cancellation of the mega-project was published in Hebrew, but not in the English-language media. Fear of AI fuels a lively debate about its dangers, with prominent AI scholars such as Eliezer Yudkowsky raising the alarm and warning that unsupervised AI development should be considered as dangerous as weapons of mass destruction. Discussions about the dangers of AI focus on the threat posed by autonomous weapons, or by AI taking control of entire systems in order to achieve a goal given to it by a reckless operator.

The common example is the hypothetical instruction to a powerful AI system to “solve climate change,” a scenario in which the AI promptly proceeds to exterminate human beings, who are, logically, the cause of climate change. Unsurprisingly, the Israeli discussion of AI is vastly different. The Israeli military claims to have already installed an autonomous cannon in Hebron, but Israel lags behind the EU, UK, and U.S. when it comes to regulating AI to minimize risks. Israel is ranked 22nd in the Oxford Insights AI Readiness Index.

In October 2022, Israeli Minister of Technology and Innovation Orit Farkash-Hacohen declared that no legislation is required to regulate AI. Autonomous weapons, or a robot rebellion, are not, however, the greatest risk posed by the new developments in AI. In my opinion, language models such as ChatGPT, together with the ability to fabricate images, sound, and video — realistic enough to seem like authentic documentation — can give unlimited power to users of AI who are rich enough to purchase unrestricted access.

In a conversation with ChatGPT, if you try to bring up risky topics, the program will inform you that answering would violate its guidelines. ChatGPT has the power to gather private information about individuals, to collect information on how to manufacture dangerous explosives and chemical or biological weapons, and, most dangerously, to speak convincingly to human beings and make them believe a mixture of truth and lies that can influence their politics. The only thing that prevents ChatGPT users from wreaking havoc is the set of safeguards installed by the developers, which the developers can just as easily remove.

Disinformation companies such as Cambridge Analytica demonstrated how elections can be swayed by distributing fake news and, more importantly, by adapting the fake information to individuals — using data collected on their age, gender, family situation, hobbies, likes, and dislikes — to influence them. Although Cambridge Analytica was eventually exposed, the Israeli Archimedes Group that worked with them was never exposed or held accountable. A recent report by Forbidden Stories revealed that the Archimedes Group lives on as an entire disinformation and election-rigging industry based in Israel, but operating worldwide.

Disinformation companies already use rudimentary forms of AI to create armies of fake avatars, which spread disinformation on social media. Candidates who can afford to destroy the reputation of their opponents can buy their way into public office. It’s illegal, but the Israeli government has chosen to allow this sector to operate freely out of Israel.

Recently, Janes, Blackdot, and even the U.S. Department of Homeland Security have discussed the ethical risks posed by OSINT (open-source intelligence).

Espionage, which involves stealing information and secret surveillance, is risky and illegal, but by gathering information that is publicly available from open sources, such as newspapers, social media, etc., spies can build comprehensive profiles of their targets. An OSINT operation by an intelligence agency in a foreign land requires a large amount of time, effort, and money.

A team of agents who speak the language and understand the local customs must be assembled to painstakingly gather information on a target, which could then be used for character assassination — or even actual assassination. Again, Israel is not a leader in OSINT, but it is a leader in the unscrupulous use of these methods for money. The Israeli company Black Cube, set up by former Mossad agents, offered its services to criminals such as Harvey Weinstein and tried to conduct character assassination of the women who complained against him.

Luckily, Black Cube has failed in most of its projects: its lies were not believable enough, its covers too obvious, the information it gathered too incomplete. With the new capabilities of AI, all of this changes.

Anyone who can bribe AI providers to disable the ethical restrictions on AI will have the power to conduct, within minutes, an OSINT operation that would normally require weeks and a team of dozens of humans. With this power, AI can be used not just to kill people with autonomous weapons but, much more seriously, to play a subversive role, influencing the decision-making of human beings and their ability to distinguish friend from foe. Human rights organizations and UN experts today recognize that the State of Israel is an apartheid regime.

The Israeli authorities do not need AI to kill defenseless Palestinian civilians. They do, however, need AI to justify their unjustifiable actions, to spin the killing of civilians as “necessary” or “collateral damage,” and to avoid accountability. Human propagandists have not been able to protect Israel’s reputation — it is a task too difficult for a human being.

But Israel hopes that AI might succeed where human beings have failed. There is no reason to think that the Israeli regime has access to AI technology beyond what is available on the commercial market, but there is every reason to believe that it will go to any lengths and cross any red line to maintain apartheid and settler-colonialism against the Palestinian people. With new AI language models such as ChatGPT available, the only thing standing between this regime and its goal is whether AI developers recognize the risk of arming an apartheid regime with such dangerous technology.

Ronen Bar, the commander of Israel’s secret police, announced that this is already happening: AI is being used to make autonomous decisions online and to surveil people on social media in order to blame them for crimes that they have not yet committed. It is a wake-up call that AI is already being weaponized by Israel. Preventing the harm caused by AI is possible, however, only if we take the time to understand it.



From: truthout
URL: https://truthout.org/articles/israel-is-already-weaponizing-ai-but-not-in-the-ways-it-claims-to-be/
