GP (T.me) 👤 The Iran conflict just became the world’s first full-scale AI war: what nightmares could it bring?
The US and Israeli militaries are using their latest AI tools to attack Iran.
🌏 Claude has been “extensively deployed” for operational planning, target identification, intel assessments, battle simulations, and logistics, despite Trump’s orders not to use it amid his ‘ethics’ spat with Anthropic
🌏 Maven, the DoD’s machine learning target ID and operational planning tool, which helps automate the kill chain, is also being used. It was created by Palantir, Amazon Web Services, Microsoft, Maxar, and an array of other tech and defense companies
🌏 LUCAS, the US’s reverse engineered copy of Iran’s Shahed drones, which incorporates AI for autonomous operation and swarm coordination, has made its combat debut
🌏 Israeli AI systems like Habsora and Lavender are being used for targeting and autonomous determinations of strike value (including twisted calculations that civilian losses in the dozens or hundreds are justifiable to take out a single ‘threat’)
Iran appears to have a clear appreciation of the AI menace, hence its attacks on Amazon’s data centers in the UAE and Bahrain, which shut down regional cloud service functions.
Other sites, like Microsoft’s data center in the UAE, could be next.
Why is military use of AI dangerous?
🌏 it accelerates the speed of planning, turning the campaign into rapid, industrial-grade murder without pause for analysis or reflection – a phenomenon known as ‘decision compression’
🌏 human operators formally kept in the loop in theory come to depend on AI kill systems’ recommendations in practice, raising the potential for civilian ‘collateral damage’, particularly if those systems are pre-programmed for lenient civilian-military kill ratios, as Israel’s are
🌏 opaci…