

“too dumb to understand code requirements in every industry and profession.”
Or selfish. Unfortunately Hanlon’s razor can only cut so deep.
I find it cool that Dolly Parton’s stage persona is so thick a mask of glamour that when she’s not “in character”, she can go out and live like a normal person whom no-one recognises. I don’t know if this was by design, but it sure seems to be a smart choice.
I agree that baby steps are important. So many of the less techy people I know have become so accustomed to being annoyed at tech that they just suppress it, thinking that there is no alternative. I’ve been told a few times that my freely incandescent rage at technology is validating because “if even [I] am frustrated at things, then it’s not just a problem of [them] being bad at tech”.
Step one is acknowledging the problem
Something that I find interesting with Rome is that arguably one of the ways it managed to keep going for so long is that it kept pushing its borders outwards through conquest. Assimilating a land and its people into the Republic/Empire is one way of dealing with the problem of invading “barbarians” (even if that just transmutes the problem, such that your external threat is a new group of “barbarians”, and the old invaders now pose a potential threat from within).
Continuing to push outwards is also a way to keep developing the military, and to distract it from the option of seizing power for itself. But there’s only so far you can push before the borders you need to secure are too long to defend effectively, and the sheer area to be administered is too large, even for Rome.
As you highlight, it’s a common misconception that the Fall of Rome was a single event; in reality, it was a far more protracted and complex process. I think that’s a shame, because I find it so much more interesting that historians can’t even agree on when the Fall of Rome even was.
“Marked by opulence and a distracted upper class, depending on foreign born nationals and the impoverished to defend them from the mob.”
I’m not sure how linked to the Fall of Rome these things are when they existed throughout basically the entire history of the Roman Empire (and even the Republic before it). The “secession of the plebs” was effectively a general strike of the commoners that happened multiple times between the 5th century BCE and the 3rd century BCE — many centuries before the Fall of Rome.
Commenting to echo my agreement. Rome was bloody huge, and it was hard to administer. Things like high quality roads and advanced administrative systems help to manage it all, but when you’re that big, even just distributing food across the empire is a challenge. Rome only became as large as it was because it was supported by many economic, military and political systems, but the complexity of this means that we can’t even point to one of them and say “it was the failure of [thing] that caused Rome to fall.”
An analogy that I’ve heard that I like is that it’s like a house falling into disrepair over many years. A neglected house will likely become unliveable long before it collapses entirely, and it’ll start showing the symptoms of its degradation even sooner than that. The more things break, the more that the inhabitants may be forced to do kludge repairs that just make maintaining the whole thing harder.
Thanks for the podcast recommendation, I’ll check it out. I learned about a lot of this stuff via my late best friend, who was a historian, so continuing to learn about it makes me feel closer to him.
Is it? I didn’t get that sense. What causes you to think it’s written by chatGPT? (I ask because whilst I’m often good at discerning AI content, there are plenty of times when I don’t notice it until someone points out the things they noticed that I didn’t.)
Sometimes, I feel like writers know that it’s capitalism, but they don’t want to actually call the problem what it is, for fear of scaring off people who would react badly to it. I think there’s probably a place for this kind of oblique rhetoric, but I agree with you that progress is unlikely if we continue pussyfooting around the problem.
I find it odd that you seem more comfortable thinking about the impact this will have on the paragliders dropping bombs on people than about the innocent people bombed in this attack. I get that being a paraglider must be scary, because it inevitably comes with the risk of being shot, but this is a story about civilian deaths due to a bombing, not paraglider deaths due to gunfire.
I’m of the view that there’d be more productive discussions if we collectively started to use the word “terrorism” in a more nuanced way that allowed for the possibility that not all terrorism is necessarily morally bad.
What got me started thinking this was that there is a character in Star Trek: Deep Space 9 who is open about the fact that she used to be a terrorist — except this was in the context of resisting a brutal occupation of her planet. I have recently been rewatching the show, and it’s interesting to see how the narrative frames this as an overall morally good thing whilst also reckoning with the aspects of the resistance that were morally bad. Makes me wistful for that kind of nuance in real world discussions of violent resistance.
It might also make it easier to vehemently condemn senseless acts of state-sanctioned terrorism such as this bombing. Though based on the long history of international inaction towards multiple genocides, that probably wouldn’t make much difference.
Thanks for this better source. It’s hard to find quality journalism nowadays, so I appreciate when people like you make it easier to be well informed.
I recently watched the satirical video “Honest Government Ad | Visit Myanmar!” and it was surprisingly informative. I knew that the junta was bad, but through this, I learned a heckton more about how deep the problem goes.
I installed Linux on a friend’s old laptop in an attempt to wring a few more years out of it, and they told me that they were surprised at how easy to use it was. I think most people just struggle with feeling intimidated. There is a bit of a learning curve, but the main obstacle is getting over that initial inertia.
“not that hard to do”
Eh, I’m not so sure on that. I often find myself tripping up on the xkcd Average Familiarity problem, so I worry that this assumption is inadvertently a bit gatekeepy.
It’s the unfortunate reality that modern tech makes it pretty hard for a person to learn the kind of skills necessary to customise one’s own tools. As a chronic tinkerer, I find it easy to underestimate how overwhelming it must feel for people who want to learn but have only ever learned to interface with tech as a “user”. Coming from that kind of background, it takes a high level of curiosity and drive to learn, and that’s a high bar to clear. I don’t know how techy you consider yourself to be, but I’d wager that anyone who cares about whether something is open source is closer to a techy person than the average person.
Sidestepping the debate about whether AI art is actually fair use, I do find the fair use doctrine an interesting lens through which to look at the wider issue — in particular, how deciding whether something is fair use is less a matter of comparing a case against a straightforward checklist and more a fairly dynamic spectrum.
It’s possible that something could be: only slightly transformative, yet still fair use because it barely affects the market for the original; or highly transformative, yet still infringing because of the sheer amount copied and the damage done to that market.
I’m no lawyer, but I find the theory behind fair use pretty interesting. In practice, it leaves a lot to be desired (for example, the way that YouTube’s Content ID infringes on what would almost certainly be fair use: Google wants to avoid being taken to court by rights holders, so it preempts the problem by being overly harsh towards potential infringement). However, my broad point is that whether a court decides something is fair use relies on a holistic assessment that considers all four pillars of fair use (the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market), including how strongly each applies.
AI trained on artists’ works is different to making a collage of art because of the scale of the scraping — a huge amount of copyrighted work has been used, and entire works of art were used, even if the processing of them is considered to be transformative (let’s say for the sake of argument that training an AI is highly transformative). The pillar that AI runs up against the most, though, is “the effect of the use upon the potential market”. AI has already had a huge impact on the market for artistic works, and it is having a hugely negative impact on people’s ability to make a living through their art (or other creative endeavours, like writing). What’s more, the companies pushing AI are making inordinate amounts of revenue, which makes the whole thing feel especially egregious.
We can draw on the ideas of fair use to understand why so many people feel that AI training is “stealing” art whilst being okay with collage. In particular, it’s useful to ask what the point of fair use is. Why have a fair use exemption to copyright at all? The reason is that one of the purposes of copyright is to encourage people to make more creative works — if you’re unable to make any money from your efforts because you’re competing with people selling your own work faster than you can, then you’re pretty strongly disincentivised from making anything at all. Fair use is a pragmatic exemption, carved out in recognition that overly restrictive copyright ends up making it disproportionately hard to make new stuff. Fair use is as nebulously defined as it is because it is, in theory, guided by the principle of upholding the spirit of copyright.
Now, I’m not arguing that training an AI (or generating AI art) isn’t fair use — I don’t feel equipped to answer that particular question. As a layperson, it seems like current copyright laws aren’t really working in this digital age we find ourselves in, even before we consider AI. Though perhaps it’s silly to blame computers for this, when copyright wasn’t really helping individual artists much even before computers became commonplace. Some argue that we need new copyright laws to protect against AI, but Cory Doctorow makes a compelling argument about how this will just end up biting artists in the ass even worse than the AI. Copyright probably isn’t the right lever to pull to solve this particular problem, but it’s still a useful thing to consider if we want to understand the shape of the whole problem.
As I see it, copyright exists because we, as a society, said we wanted to encourage people to make stuff, because that enriches society. However, that goal was in tension with the realities of living under capitalism, so we tried to resolve that through copyright laws. Copyright presented new problems, which led to the fair use doctrine, which comes with problems of its own, with or without AI. The reason people consider AI training to be stealing is that they understand AI as a dire threat to the production of creative works, and they attempt to articulate this through the familiar language of copyright. However, that’s a poor framework for addressing the problem that AI art poses. We would do better to strip this down to its ethical core so we can see the actual tension that people are responding to.
Maybe we need a more radical approach to this problem. One interesting suggestion that I’ve seen is that we should scrap copyright entirely and implement a generous universal basic income (UBI), along with other social safety nets. If creatives were free to make things without worrying about meeting basic living needs, the problem of AI scraping would be far lower stakes for individual creatives. One problem with this is that most people would prefer to earn more than what even a generous UBI would provide, so they would probably still feel cheated by Generative AI. However, the argument is that Generative AI cannot compare to human artists when it comes to producing novel or distinctive art, so the most reliable way to obtain meaningful art would be to give financial support to the artists (especially if an individual is after something of a particular style). I’m not sure how viable this approach would be in practice, but I think that discussing more radical ideas like this is useful in figuring out what the heck to do.
I get what you’re saying.
I often find myself being the person in the room with the most knowledge about how Generative AI (and other machine learning) works, so I tend to be the one who answers questions from people wanting to check whether their intuition is correct. Yesterday, someone asked me whether LLMs have any potential uses or whether the technology is fundamentally useless, and the way they phrased it allowed me to articulate something better than I had previously been able to.
The TL;DR was that I actually think that LLMs have a lot of promise as a technology, but not like this; the way they are being rolled out indiscriminately, even in domains where it would be completely inappropriate, is actually obstructive to properly researching and implementing these tools in a useful way. The problem at the core is that AI is only being shoved down our throats because powerful people want to make more money, at any cost — as long as they are not the ones bearing that cost. My view is that we won’t get to find out the true promise of the technology until we break apart the bullshit economics driving this hype machine.
I agree that even today, it’s possible for the tools to be used in a way that’s empowering for the humans using them, but it seems like the people doing that are in the minority. It seems pretty hard for a tech layperson to do that kind of stuff, not least of all because most people struggle to discern the bullshit from the genuinely useful (and I don’t blame them for being overwhelmed). I don’t think the current environment is conducive to people learning to build those kinds of workflows. I often use myself as a sort of anti-benchmark in areas like this: I am an exceedingly stubborn person who likes to tinker, and if even I find it exhausting to learn, it seems unreasonable to expect the majority of people to manage it.
I like the comic’s example of Photoshop’s background remover, because I doubt I’d know as many people who make cool stuff in Photoshop without helpful bits of automation like that (“cool stuff” in this case often means amusing memes or jokes, but for many, that’s the starting point for continuing to grow). I’m all for increasing the accessibility of an endeavour. However, the positive arguments for Generative AI often feel like they’re reinforcing gatekeeping rather than actually increasing accessibility; they implicitly divide people into the static categories of Artist and Non-Artist, and then argue that Generative AI is the only way for Non-Artists to make art. It seems to promote a sense of defeatism by suggesting that it’s not possible for a Non-Artist to ever gain worthwhile levels of skill. As someone who sits squarely in the grey area between “artist” and “non-artist”, this makes me feel deeply uncomfortable.
I liked it, personally. I’ve read plenty of AI bad articles, and I too am burnt out on them. However, what I really appreciated about this was that it felt less like a tirade against AI art and more like a love letter to art and the humans that create it. As I was approaching the ending of the comic, for example, when the argument had been made and the artist was just delivering his closing words, I was struck by the simple beauty of the art. It was less the shapes and the colours themselves that I found beautiful than the sense that I could practically feel the artist straining against the pixels in his desperation to make something that he found beautiful — after all, what would be the point if he couldn’t live up to his own argument?
I don’t know how far you got through, but I’d encourage you to consider taking another look at it. It’s not going to make any arguments you’ve not heard before, but if you’re anything like me, you might appreciate it from the angle of a passionate artist striving to make something meaningful in defiance of AI. I always find my spirits bolstered by work like this because whilst we’re not going to be able to draw our way out of this AI-slop hellscape, it does feel important to keep reminding ourselves of what we’re fighting for.
I’ve been practising being a better writer, and one of the ways I’ve been doing that is by studying the writing that I personally really like. Often I can’t explain why I click so much with a particular style of writing, but studying and attempting to learn how to copy the styles that I like feels like a step towards developing my own “voice” in writing.
A common adage around art (and other skilled endeavours) is that you need to know how to follow the rules before you can break them, after all. Copying is a useful stepping stone to something more. It’s always going to be tough to learn when your ambition is greater than your skill level, but there’s a quote from Ira Glass that I’ve found quite helpful:
“Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know it’s normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take awhile. It’s normal to take a while. You’ve just gotta fight your way through.”
Biochemistry — specifically protein structure. It’s so cool.
My favourite protein is Green Fluorescent Protein (GFP). It was first extracted from a jellyfish, and it’s super useful in research. The bit in the middle (the chromophore) is responsible for the coloured glow, and the rest of it (the barrel-type structure) is basically just there to stop the emitted energy from being immediately absorbed by the solvent.
If anyone wants me to nerd out more about proteins, hit me up.