As policymakers in the UK weigh how to regulate the AI industry, Nick Clegg, former UK deputy prime minister and former Meta executive, claimed a push for artist consent would “basically kill” the AI industry.

Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn’t feasible to ask for consent before ingesting their work.

“I think the creative community wants to go a step further,” Clegg said, according to The Times. “Quite a lot of voices say, ‘You can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”

“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” Clegg said. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”

  • Riskable@programming.dev · 2 days ago (edited)

    From a copyright perspective, you don’t need to ask for permission to train an AI. It’s no different than taking a bunch of books you bought second-hand and throwing them into a blender. Since you’re not distributing anything when you do that, you’re not violating anyone’s copyright.

    When the AI produces something, though, that’s when it can run afoul of copyright. But only if its output matches an existing copyrighted work closely enough that a judge would say it’s a derivative work.

    You can’t copyright a style (writing, art, etc.), but you can violate a copyright if you copy, say, a mouse in the style of Mickey Mouse. So then the question—from a legal perspective—becomes: Do we treat AI like a Xerox copier or do we treat it like an artist?

    If we treat it like an artist, the company that owns the AI will be responsible for copyright infringement whenever someone makes a derivative work by way of a prompt.

    If we treat it like a copier, the person who wrote the prompt would be responsible (if they then distribute whatever was generated).

    • jjjalljs@ttrpg.network · 2 days ago

      “no different than taking a bunch of books you bought second-hand and throwing them into a blender.”

      They didn’t buy the books. They took them without permission.

    • BlameThePeacock@lemmy.ca · 2 days ago

      A realistic take on the situation.

      I fully agree. However much people hate AI, training itself isn’t infringement based on how copyright laws are written.

      I think we need to treat it as the copier situation: the person distributing the copyright-infringing material is at fault, not the tool used to create it.

      • OfCourseNot@fedia.io · 2 days ago

        I agree with both of you but it’s a bit more nuanced than that: what if someone not familiar with the original IPs asks for a ‘space wizard’ or an ‘Italian plumber cartoon’, it outputs Obi Wan or Mario, and they use it in their work? Who’s getting sued by Disney or Nintendo?