As policy makers in the UK weigh how to regulate the AI industry, Nick Clegg, former UK deputy prime minister and former Meta executive, claimed a push for artist consent would “basically kill” the AI industry.
Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn’t feasible to ask for consent before ingesting their work.
“I think the creative community wants to go a step further,” Clegg said, according to The Times. “Quite a lot of voices say, ‘You can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”
“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” Clegg said. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”
Well then maybe the AI industry deserves to die.
This is true almost every time someone says, “but without <obviously unethical thing>, these businesses couldn’t survive!” Same deal with all the spyware that’s part of our daily lives now. If it’s not possible for you to make a smart TV without spying on me, then cool, don’t make smart TVs.
If your business model crumbles under the weight of ethics, then fuck your business model and fuck you.
Related: https://www.eff.org/deeplinks/2019/06/felony-contempt-business-model-lexmarks-anti-competitive-legacy
There’s a big difference between generative image AI and AI for, let’s say, the medical industry, DeepMind, etc.
And yes, you can ban the first without the other.
Going after AI as a whole makes no sense, and this politician also makes it seem like it’s all the same thing.
Saying “AI” is like saying “the internet” when what you actually want to ban is a specific site.
There is a very interesting dynamic occurring, where things that didn’t used to be called AI have been rebranded as such, largely so companies can claim they’re “using AI” to make shareholders happy.
I think it behooves all of us to stop blanketly referring to things as AI and instead name the specific technologies and companies that are the problem.
Just call it ML then, like we used to; that’s what describes it best.