As the U.S.-Israeli war on Iran continues, we look at how the Pentagon is using artificial intelligence in its operations. The system, known as Project Maven, relies on technology from Palantir and also incorporates Claude, the AI model built by Anthropic. Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon.

Craig Jones, an expert on modern warfare, says AI technology is helping militaries speed up the “kill chain,” the process of identifying, approving and striking targets. “You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re reducing workflows, and you’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions,” says Jones.
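
To make that compression concrete, here is a minimal sketch of the three-stage pipeline Jones describes (identify, approve, strike). Everything in it is hypothetical: the names, the functions, and the matching logic are invented for illustration and correspond to no real system.

```python
# Hypothetical sketch only; every name here is invented and stands in for
# no real system. The point is where the automation happens.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    source: str        # e.g. a sensor feed or an intelligence report
    description: str

def human_review(c: Candidate) -> bool:
    """Stands in for analyst review: hours of work per candidate."""
    return "confirmed" in c.description.lower()

def model_review(c: Candidate) -> bool:
    """Stands in for a model call: milliseconds per candidate, with no
    reviewable reasoning trace attached to the answer."""
    return "target" in c.description.lower()

def kill_chain(candidates: List[Candidate],
               approve: Callable[[Candidate], bool]) -> List[Candidate]:
    # Binding `approve` to model_review instead of human_review is the
    # whole "tens of thousands of hours into seconds" compression, and
    # also the point where each decision loses an accountable human author.
    return [c for c in candidates if approve(c)]
```

The questions Jones raises all live in which function gets bound to `approve`, and in who answers for its output.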

  • GuyIncognito@lemmy.ca · 12 points · 9 hours ago

    After WW2, the industrialists who supported the Nazis mostly got off scot-free. This was a terrible mistake and should not be repeated.

  • 🍉 Albert 🍉@lemmy.world · 40 points · 14 hours ago

    Israel did the same thing: by using AI, they can hallucinate as many targets as they want, all with “plausible deniability”.

    Like the absolute worst plausible deniability in human history.

    • audaxdreik@pawb.social · 24 points · 13 hours ago

      There are just so many things to be said about the ills of AI, but one of them is that it is very purposefully a liability-laundering machine. The decisions and thought process are black-boxed and unauditable. We’ve been trained to dismiss any oopsies as an inevitable part of the system, both while it’s still “rapidly developing” and as something just inherent to the technology. Absolutely none of this is acceptable, and yet here we are.
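
In code terms, the gap that comment points to can be sketched as the contrast below; a minimal hypothetical illustration with invented field names, not any vendor’s actual API.

```python
# Hypothetical contrast with invented field names; not any vendor's real
# API. An explicit rule leaves an auditable rationale; a bare score does not.
import json

def rule_based_decision(record: dict) -> dict:
    # Every criterion that fired is recorded, so the decision can be
    # audited, contested, and attributed to whoever wrote the rule.
    reasons = [k for k, v in record.items() if k.startswith("flag_") and v]
    return {"approved": bool(reasons), "rationale": reasons}

def black_box_decision(model_score: float) -> dict:
    # An auditor gets back a number. The "thought process" that produced
    # it cannot be reconstructed from the output.
    return {"approved": model_score > 0.5, "rationale": None}

print(json.dumps(rule_based_decision({"flag_geo": True, "flag_pattern": False})))
# -> {"approved": true, "rationale": ["flag_geo"]}
```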

      • KurtVonnegut@mander.xyz · 4 points · 11 hours ago

        But how does it change liability? Isn’t the person who decides to run this system ultimately responsible for its effects?

          • themurphy@lemmy.ml · 1 point · 7 hours ago

            Everyone in the chain is liable.

            This is the same Nazi argument: “I only transported the Jews, I only watched them, I only built the concentration camps,” and so on.

    • baguettefish@discuss.tchncs.de · 2 points · 7 hours ago

      Militaries slaughtering civilians on very shaky justifications that only hold up because they have the bigger fist or a strong backer probably isn’t that unique, so you can’t exactly use extreme words like “only,” “worst,” or “ever” about current events. The IDF is only some of the worst scum possible.

      • 🍉 Albert 🍉@lemmy.world · 1 point · 7 hours ago

        I was referring to using BS AI to justify killing everyone: an AI that was designed to point at civilians said that civilians are targets.

        It’s the most obvious bullshit justification, one that makes absolutely no sense. But here we are.

  • SpruceBringsteen@lemmy.world · 42 points · 15 hours ago

    Later, we’ll find out the whole identifying part is a bit glossed over, and it’s really about using the AI as a rubber-stamping, plausible-deniability shield to commit whatever atrocity you feel like.

  • mub@lemmy.ml · 10 points · 14 hours ago

    Did I miss something? I thought Anthropic chose not to allow Claude to be used for military shit.

    • queermunist she/her@lemmy.ml · 12 points · 13 hours ago

      They chose not to change their tools to comply with the new Pentagon demands. It’s the Pentagon that then decided to cancel contracts and declare them a “supply chain risk” in retaliation. Claude is still integrated into their systems and it’ll take time to switch.

      • UnimportantHuman@lemmy.ml · 3 points · 9 hours ago

        You think it was for show? I’m genuinely curious about someone else’s view, because I’ve been skeptical myself. The fact that they were even working with the Pentagon in the first place was a red flag to me.

        • eldavi@lemmy.ml · 5 points · edited · 5 hours ago

          They had to be deeply involved with the Pentagon for the Anthropic AI to function at all, so they knew it would be used this way. The fact that they said no the day before the bombing started is suspicious timing on its own, when they could have said something much sooner.

        • sakuraba@lemmy.ml · 2 points · 8 hours ago

          Not sure. I think this is just Anthropic trying to keep their tech closed (they’ve already started limiting what you can do with their API).

          Anthropic will keep providing Claude until the Pentagon finishes transitioning to other providers, which will take around six months.