Olson: Claude AI helped bomb Iran. But how exactly?

The same artificial-intelligence model that can help you draft a marketing email or a quick dinner recipe has also been used to attack Iran. U.S. Central Command used Anthropic’s Claude AI for “intelligence assessments, target identification and simulating battle scenarios” during the strikes on the country, according to a report in the Wall Street Journal.

Hours earlier, President Donald Trump had ordered federal agencies to stop using Claude after a dispute with its maker, but the tool was so deeply baked into the Pentagon’s systems that it would take months to untangle in favor of a more compliant rival. It was used, too, in the January operation that led to the capture of Nicolás Maduro.

But what do “intelligence assessments” and “target identification” mean in practice? Was Claude flagging locations to strike or making casualty estimates? Nobody has made that disclosure and, alarmingly, no one has an obligation to.

Artificial intelligence has long been used in warfare for things like analyzing satellite imagery, detecting cyber threats and guiding missile-defense systems. But chatbots — the same underlying technology that billions use for mundane tasks like writing emails — are now being deployed on the battlefield.

Last November, Anthropic partnered with Palantir Technologies Inc., a data-analytics company that does a lot of work for the Pentagon, turning its large language model Claude into the reasoning engine inside a decision-support system for the military.

Then, in January, Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarming technology, Bloomberg News reported. The company’s pitch: Use Claude to translate a commander’s intent into digital instructions to coordinate a fleet of drones.

Its bid was rejected, but the contest called for much more than just summarizing intelligence reports, as you might expect a chatbot to do. This contract was to develop “target-related awareness and sharing,” and “launch to termination” for potentially lethal drone swarms.

No man’s land

Remarkably, all of this has been happening in a regulatory vacuum and with technology that is known to make errors. Hallucinations by large language models are a result of their training, when they are rewarded for grasping for an answer instead of admitting uncertainty. Some scientists say the persistent challenge of AI confabulation may never be fixed.

This would not be the first time unreliable AI systems have been used in warfare. Lavender was an AI-driven database used to help identify military targets associated with Hamas in Gaza. It was not a large language model but analyzed vast amounts of surveillance data, such as social connections and location history, to assign each individual a score from 1 to 100. When someone’s score passed a certain threshold, Lavender flagged them as a military target.

The problem was that Lavender was wrong 10% of the time, according to an investigative report published by the Israeli-Palestinian outlet +972. “Around 3,600 people were targeted by mistake,” Mariarosaria Taddeo, a professor of digital ethics and defense technology at the Oxford Internet Institute, tells me.

“There are such incredible vulnerabilities in these systems and such extreme unreliability… for something so dynamic, sensitive and human as warfare,” says Elke Schwarz, a professor in political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies.

Schwarz points out that AI is often used in war to speed things up, a recipe for unwanted outcomes. Faster decisions are made at a greater scale and with less human scrutiny. The last decade and a half has seen military use of AI become even more opaque, she says.

And secrecy is baked into how AI labs operate even before the warfare applications. These companies refuse to disclose what data their models are trained on or how their systems reach conclusions.

Of course, military operations often have to be kept under wraps to protect combatants and keep enemies off the scent. But defense is heavily regulated by international humanitarian law and weapons testing standards, which in theory should also address the use of artificial intelligence. Yet such standards are missing or woefully inadequate.

Rules outdated

Taddeo notes that Article 36 of the Geneva Convention requires new weapons systems to be tested before deployment, but an AI system that learns from its environment becomes a new system every time it updates. That makes it almost impossible to apply the rule.

In an ideal world, governments like the U.S. would disclose how these systems are used on the battlefield, and there is a precedent. The Americans started using armed drones after 9/11 and expanded their use under the Barack Obama administration, all while refusing to acknowledge that such a program existed.

It took nearly 15 years of leaked documents, sustained pressure from the press and lawsuits from the American Civil Liberties Union before the Obama White House finally published in 2016 the casualty numbers from drone strikes. They were widely seen as under-counting, but they allowed the public, Congress and media to hold the government accountable for the first time.

Policing AI will be harder still, requiring even more public and legislative pressure to force a recalcitrant Trump administration to create a similar kind of reporting framework.

The goal wouldn’t be to disclose exactly how Claude was used in something like Operation Epic Fury, but to release the broad contours, according to Schwarz. And, especially, to disclose when something goes wrong.

The current public debate about the Anthropic-Pentagon feud — about what is legal and ethical for AI when it comes to the mass surveillance of Americans or creating fully autonomous weapons — is missing the bigger question about the lack of visibility of how the technology is already being used in war. With such new and untested systems prone to making mistakes, this is sorely needed. “We haven’t decided as a society if we’re fine with a machine deciding if a human being should be killed or not,” says Taddeo.

Pushing for that transparency is critical before AI in warfare becomes so routine that nobody thinks to ask anymore. Otherwise we may find ourselves waiting for a catastrophic mistake, and imposing transparency only after the damage is done.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.” ©2026 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency.
