• 4 Posts
  • 825 Comments
Joined 3 years ago
Cake day: June 9th, 2023




  • Yeah, I knew that, it’s super cool, and it came to mind as I was writing my earlier comment.

    What’s neat about the website stuff is that even if it’s not as good now (idk, I haven’t looked), the value they created is still there in the older case study — there were so many good resources. I was the disability rep in a few student societies, as well as in a few volunteer orgs after uni, and we referenced the guidelines a few times. Good resources like that are especially useful in those contexts — because they helped turn “that would be nice, but we don’t have the resources to implement accessibility in our materials” into “okay, let’s put our money where our mouth is and do our best to make something as accessible as we can”





  • Despite being so shit in many different respects (a chronic use of external consultants and contractors means the UK seems less likely than other European countries to make progress on a sovereign tech stack), the UK is pretty good with its data. There’s a surprising amount of data that’s released, and it’s in a sensible format.

    During the teachers’ strikes last year, I ended up playing around with making visualisations from the data about the number of teachers in various parts of the country, and I was pleased to see how much was there and how clearly it was documented. There are very few things I’m proud of the UK for, so I am glad to have this as one




  • “About not delegating your brain to machines, that’s a fair point, and I would encourage people to consciously choose where to use machines and where to use their brain”

    Yeah, big agree on this front. We should be using technology as a tool to aid us to do the stuff we care about, rather than letting ourselves be made subordinate to the tech itself. For some people, that kind of agency means using an open source system like Listenbrainz, and for some, like the person you’re replying to, that means continuing to discover music in their own way. Both of these approaches are fine — indeed, the whole point of building tech that serves as tools is that if our experience tells us that we have a task that wouldn’t benefit from the tool, we can just leave it in the box.

    Personally, I enjoy going for a combination approach — I sometimes use Listenbrainz as a catalyst to help me discover new stuff beyond my experience, but once I have a few new artists I’m interested in, I then go and do some manual digging around them. I don’t need to do that manual part, but it’s a key part of my enjoyment of the music discovery process — so I can somewhat relate to the preference of the person you’re replying to


  • Yessss! I am so jazzed to see other people in this thread who love Listenbrainz as much as I do.

    I will always love it because it was my first ever contribution to open source software. It was only documentation, because I’m a mediocre programmer, but documentation is a big deal for projects like these.

    What I really liked about contributing is that I felt a real sense of contributing to something bigger than myself. I mean, I get that feeling from the fact that my listening data gets added to the pool itself, but I felt it even more so when helping with the documentation.

    It was only something small, but I liked the idea that I was helping future tinkerers experience a little less frustration than I felt when I struggled with the outdated documentation. It made me happy to think that I was facilitating more people to tinker. I may only be a mediocre programmer, but that just means I am well placed to help pave the way for people more skilled than I am. This is the kind of project that I want to exist in the world, and so helping to support it genuinely makes me feel a little more hopeful in the face of this increasingly enshittified world


  • Yeah, it’s pretty low on the social side of things. However, having watched the massive progress the project has made over the last few years, I’m hopeful that it’ll continue to improve. They seem to be quite smart about how they go about developing new features, which is wise for an open source project. It’s been pretty cool to watch how good their recommendation algorithm has been getting, compared to when I first joined


  • I felt a bit weird about it at first, but the one thing keeping me tied to Spotify was how useful it was for discovering new music (though even that had been degrading by the time I cancelled it).

    If you’re someone who either prefers to listen to music you already know and love, or who enjoys discovering new music through manual effort, then Listenbrainz isn’t for you

    However, if you’re currently relying on the recommendations of a service like Spotify, then it’s at least worth considering. I became a lot more at ease with Listenbrainz when I realised that this kind of music recommendation simply isn’t possible without other people’s data — and that part of the “price” for being able to access recommendations built from that data is that my listening history gets added to the pool of listening data used by the recommendation system.

    If it’s Spotify’s pool that I’m contributing to, then I feel like I’m getting a pretty bad deal, because they hoard that data like a digital dragon, and then use it to further entrench their monopolistic position in the market. I don’t like that — it makes me feel complicit in the grossness.

    Whereas with Listenbrainz, I’m contributing to a data commons of sorts. Listenbrainz’s recommendation algorithm has gotten so much better in the couple of years that I’ve been using it, and that wouldn’t be possible without a growing pool of data. Independent researchers and developers are able to benefit from it, and the more people we have making stuff in this space, the more we chip away at Spotify’s power.

    Like I said, having my data be so public does make me feel a tad uneasy, but with data like this, it tends to only be valuable in bulk (meaning the system doesn’t care about any individual’s sad drinking songs), or hypothetically, to individuals who are excessively concerned with another individual (such as stalkers, I guess). However, that last point doesn’t concern me, because I made my Listenbrainz account under a username that’s unconnected to any of my others, and my profile shows no indication of who I am on Spotify.

    I’m sure that someone dedicated and skilled enough could retrieve my Spotify account name from the system, because I linked my account way back when I did have Spotify, but I trust Listenbrainz with my data a hell of a lot more than I do Spotify. Spotify definitely have way more money to hire cybersecurity folk to prevent exfiltration of user data, but they’re so opaque that even if there were a breach, I wouldn’t trust them to tell me. I’ve been following Listenbrainz’s development for a while, and they’re pretty cautious and transparent with how they go about things.

    To be clear, I’m not formally affiliated with Listenbrainz in any way. I have contributed to improving documentation a few times (because that’s usually the best way I can support open source projects, as a mediocre programmer), but that stems from the same thing that made me write this comment: I just really like what they’re trying to do, and I think the world would be a little better if more people joined it. (also, I am just a huge nerd for metadata schema, and the affiliated musicbrainz project has so much cool stuff for me to learn about)


  • When people complain about AI, it’s often the scale of it they have beef with: the fact that it’s being shoved into their face everywhere they look, and mandated for use in their job by management, even if it does not make them more productive. Shoving it everywhere also leads to the larger problems that make people angry, such as the excessive resource use of AI data centres.

    I agree that LLMs are here to stay — I understand enough about how the tech works to know that there is tremendous potential for their use (I originally got into learning about machine learning because I wanted to better understand AlphaFold, a protein structure prediction model made by Google Deepmind (not sure I’d count it as an LLM, but under the hood, it works pretty similarly)). However, the problem with AI is more about how the technology functions at a societal level than about the technology itself.

    I believe that the current societal impact of the AI boom far exceeds the actual technological impact of LLMs. Whilst I get your point about the dotcom bubble analogy, I think that in that case, the ratio of “harms caused by the dotcom bubble” to “genuine societal impact of the technology once the bubble has popped” is much smaller. I grant that we have the benefit of hindsight with the internet, because the tech has had so much time to mature and become integrated with society, whereas we’re still in the middle of the AI hype bubble, but I don’t believe that LLMs/AI are capable of being anywhere near as transformative to society as the internet. There may be niche fields that are overturned or even functionally destroyed, but there are few genuine use-cases of LLMs. They’ll still exist after the bubble has popped, and they’ll have their uses, but I don’t believe they’ll be anywhere near as ubiquitous as they are now.

    Regardless of whether you agree with me on this, one thing we are in accord on is that the bubble is bullshit and harmful. Personally, something that frustrates me about it is that I am genuinely curious to see progress on the real use cases for LLMs — I’m open to the possibility that in 10–20 years’ time, my predictions in the previous paragraph may have been proven wrong. However, the bubble is just delaying that kind of meaningful integration into society, as well as hindering areas of research that could improve LLMs

    (as well as crowding out other areas of AI research that are based on different architectures and methods, which may get us much closer to the sci-fi sense of AI than LLMs ever could. Song-Chun Zhu is an example of a researcher who used to work in this field of AI, but got burnt out by how the economic pressures on research meant that it was hard to do research that wasn’t based on this one dominant method. He’s one of many who is nowadays more interested in researching AI in a “small-data for big tasks” paradigm)



  • I personally don’t use AI, but I concede that for some people, it can be useful for them, if they use the AI as a tool for their own thinking, rather than subordinating themselves to the chatbot. Mostly, this means ensuring that they’re able to check whether the AI is right or not.

    When I dabbled in using coding AI, there were a few basic tasks that it was useful for. There were a few hallucinations, but because the task was basic and well within my proficiency to scan, I was able to set it right; even with these corrections, it still saved me time overall. However, when I tried to use it on tasks that were beyond my own technical expertise, things got messy really quickly. Things weren’t working, so I felt sure that there must be some hallucinated errors, but I couldn’t tell what they were because the task was at or beyond the limit of my own technical competency. A couple of times, I managed to eventually figure out how to fix the error, but it was so exhausting compared to how solving a coding problem normally feels, and I felt dissatisfied by the lack of learning involved.

    Ordinarily, struggling through a complex code problem leaves me with a greater understanding of my domain, but I didn’t get that this time. I guess I did get a little better at prompting the AI, but I felt like I learned far less than if I had solved the problem myself. Battling through to build a thorough understanding of my problem and my tools takes a long time upfront, but the next time I do this task or a similar one, I’ll be quicker, and these time improvements will build and build as my proficiency continues to grow. That’s why I stopped dabbling with AI coding assistants/agents — because even though using them for this complex task still saved me time compared to usual, in the long term, the time savings from using an AI are negligible compared to the time savings from increasing my own proficiency.

    Now I hear what you’re saying about how much more effective AI coding agents are becoming, and how the hallucination rate is lower than it was. I haven’t had much first hand experience for quite a few months now, but I have no doubt that I would be incredibly impressed at the progress in such a relatively short time. The time savings from using AI would likely be larger today than it was when I tested it, and in a year, it’ll be even better. However, in my view, that will still not be able to compete with the long term time savings of a human gaining proficiency. You might disagree with me on that.

    But the thing is, that human proficiency isn’t just a means to save time on their regular task, but a valuable end in and of itself. That proficiency is how we protect ourselves when things go wrong in unexpected ways. Even if the AI models we’re using now could perfectly capture and reproduce the sum of our collected knowledge, I don’t believe they can come close to rivalling humans in the realm of creating new knowledge, or adapting to completely novel circumstances. Perhaps some day, that might be possible for AI, but that’s not going to be possible with any of the AI architectures that we have today. In the meantime, creative and proficient humans will continue to find ways to exploit the flaws in AI systems, possibly for nefarious ends. A society that relies heavily on AI will need more technical expertise, not less.

    “Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do.”

    The crux of my argument is “how does someone who isn’t proficient in bash tell whether the bash script that AI has generated is a good one or a bad one?”. Even if the hallucination rate continues to drop, it will always be non-zero. Sure, humans are also far from perfect, but that’s why so many of our systems include oversight mechanisms that put many sets of eyes on critical systems; junior developers are mentored by more experienced devs, who help ensure they don’t break stuff with their inexperience (at least, in an ideal world. In practice, many senior devs are so overworked and stretched thin that they can’t give the guidance they should. Again, this is a case for more proficient humans). Replacing proficient humans with AI will build a culture of unquestioningly following the AI, and even if its error rate ends up a fraction of the human error rate, it will still be non-zero, and therefore there will be disasters.
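    The “always non-zero, therefore disasters” point is really an argument about scale, which a quick back-of-the-envelope calculation can show (the error rate and script count below are made-up numbers, purely illustrative):

```python
# Illustrative only: assumed per-script error rate and deployment volume.
p_bad = 0.001        # chance any one generated script is subtly wrong
n_scripts = 10_000   # unreviewed scripts deployed across an organisation

# Probability that at least one deployed script is bad.
p_at_least_one = 1 - (1 - p_bad) ** n_scripts
print(round(p_at_least_one, 5))  # prints 0.99995
```

    Even at a one-in-a-thousand error rate, a failure somewhere is effectively guaranteed once nothing proficient is left in the review loop.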

    And when it all goes to shit, who will fix it if we have allowed human proficiency to wither away and die?


  • Something that a friend pointed out to me as a possible factor is the religious backdrop of US Christianity. I’ve forgotten the specific phrase (it was something like “prosperity Christianity”; I think the actual term is “prosperity gospel”), but it’s basically the idea that good fortune (from hard work) is a sign of God’s blessing in this life. It’s pretty deeply tied to the Protestant work ethic, which is pretty pervasive in US culture, even in ostensibly secular institutions.

    The original idea was more or less “as well as having faith, you should also work very hard, because that’s part of your duty. Then you will be blessed and will have good fortune”. However, that has been increasingly distorted and subject to a logical fallacy that means people get it backwards. For instance, let’s say we took this doctrine to be an absolute fact: that if you diligently do good work, then you will be blessed, and have good fortune. I.e.:

    If good work, then blessed

    If blessed, then good fortune

    ∴ If good work, then good fortune

    However, under this doctrine, people often commit the fallacy of “affirming the consequent”. For instance, if the only lamp in a room breaks, then the room will be dark. However, the room being dark doesn’t necessarily mean that the lamp is broken (it could be switched off, stolen, or covered). So what people do is they go “I am wealthy. People who are blessed have good fortune, and I have good fortune, so therefore I must be blessed”. This logic has been used to justify all sorts of awful, awful crimes against humanity. For instance, enslaved people must be bad people because they clearly do not have good fortune. But the person who owns those slaves surely must be blessed, because he has good fortune.
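    The fallacy can even be checked mechanically. Here’s a toy sketch (my own illustration, treating the claims as simple material implication) that enumerates every truth assignment and finds exactly where “if good work, then good fortune” holds but the reversed inference fails:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Find assignments where the doctrine "if good work, then good fortune" holds,
# yet the reversed claim "if good fortune, then good work" does not.
counterexamples = [
    (work, fortune)
    for work, fortune in product([True, False], repeat=2)
    if implies(work, fortune) and not implies(fortune, work)
]
print(counterexamples)  # [(False, True)]: good fortune without good work
```

    That single counterexample is exactly the slave-owner case above: fortune without the antecedent that supposedly produced it.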

    This way of thinking is so deeply embedded into US culture that even devout atheists end up absorbing a lot of this logic. This is only one small part of the puzzle as to why billionaires are so dumb, but applying this lens really helped me to understand the self-validation cycle that a lot of billionaires and powerful people get into.

    The way I imagine this cycle going is that someone who is quite successful under capitalism (often due to advantages like inherited wealth) has a brief moment of self reflection where they wonder “am I actually doing well here? Do I have anything of value to add? I was given a lot of opportunities to succeed (e.g. inherited wealth), but have I effectively utilised those opportunities? How would I know if I had actually done well? Sure, I’ve grown my wealth a heckton, but maybe a different person with these same opportunities would have done far better than I did?”

    With those questions comes a heckton of dread. And like, I actually really sympathise with that dread, because it’s a fairly universal feeling, I suspect. For instance, I dropped out of university due to a heckton of external extenuating circumstances. When I’m feeling bad about this, people who knew me during this period often reassure me that it was not my fault, and that it’s a testament to my strength that I held out as long as I did. Certainly, that’s what I’d like to believe, but the terrifying question that I’ll never be able to answer is “what if those external circumstances didn’t exist? What if I would’ve dropped out even without all that, and I’m actually just not smart enough to study what I wanted?”. We can’t see alternative timelines.

    What’s different about billionaires though is that they have so much money that they can ignore the uncomfortable dread, rather than sitting with it and doing some useful self reflection, before setting it aside. They push it out of mind and distract themselves by throwing themselves into work or hedonism, or both (I have never known a billionaire, but I have known some very wealthy CEO types, and they worked themselves to the bone, potentially to avoid feeling this imposter syndrome dread. I’m inclined to view their hyper working habits as being irrational in this way because a lot of the excess work they did seemed to be bullshit work (in the sense of David Graeber’s “bullshit jobs” — that is, it was work done to make themselves feel useful)).

    Another thing that I have that billionaires don’t is friends that I trust to guide me on my self reflection. I trust my friends when they tell me my university disaster wasn’t my fault because they have shown that they are more than willing to call me out when I make poor choices. Even in scenarios where I am clearly the victim of some fucked up thing, if I have made things worse for myself by making poor choices (something I’m prone to doing if I’m in a fatalistic depression spiral), they hold me accountable for my choices, in addition to sympathetically supporting me.

    Instead, billionaires are surrounded by people who they can’t trust. Sycophants everywhere, who don’t care about who you are as a person, but what you can do for them. You’re less likely to have people calling you out for things, but you also won’t get much affirmation for the genuinely good things about your personality. Like, let’s imagine Sam Altman had an aspect of his personality that was a really good quality that was distinctly him, and thus the kind of thing that would be productive to view as part of his self identity, because it could help him focus on that as a direction of future growth. And let’s say he had a genuine, non-sycophantic friend who tried to highlight this to him — how would he be able to tell that this was a genuine compliment coming from a genuine friend, and not just another bullshit sycophant? He couldn’t, not really.

    It’s tragic really. The ultra rich have basically gatekept themselves from genuine human connection. They burn out from being on guard all the time, and so they surround themselves with people in their own wealth class (people who are also extremely poorly adjusted). I find it quite sad, because this isolation seems to be an inevitable consequence of being mega-rich. This is why when I say things like “billionaires should not exist”, I’m not just speaking in favour of peons like us, but also out of compassion for the billionaires. I resent them like hell, but I also deeply pity them. I’d love to be financially comfortable enough to not worry about whether I’ll have to sleep in my car next month, but I’d rather be in my position than theirs. If by some weird twist of fate, I suddenly became mega rich, I would do everything I possibly could to give away money until I was “merely” financially comfortable.

    I got a bit off track with my ranting because I am procrastinating getting food, so I’ll bring it back to your question. Basically, billionaires get dumb because they are emotionally maladjusted and often deeply insecure. Wealth becomes a thing by which they measure their own self worth, but no amount of wealth can fill the vacuous chasm in their hearts caused by a deep isolation and lack of genuine fulfillment. Occasionally they do get slices of this fulfillment — see Mark Zuckerberg getting heavily into MMA.

    But if they ever have moments of self reflection where they experience that normal and healthy self doubt, they are too socially isolated and maladjusted to actually reflect. Their wealth means they can afford to never be uncomfortable, and that applies here too. So to escape their dread, they build a narrative of how they deserve it. They’re not just lucky — they are actually very smart and good and they deserve their wealth. And the sycophants around them will tell them they’re absolutely right. Meanwhile, the people they respect as their peers (other billionaires) are also prone to spouting pseudointellectual bullshit whilst pretending to be smart, so this validates their own dumbassery.

    The pseudointellectual stuff is another reason I pity them. I was a Gifted Kid™, and because I didn’t have friends in school, my intelligence was basically my entire identity. This meant I was so desperately scared of losing it that I would bullshit about what I did or didn’t know. Nowadays, I’m a lot better at being open when people ask me about something I either haven’t heard of, don’t understand, or can’t quite remember. I often say “I got a hell of a lot smarter when I let myself be more dumb”, because learning to be more vulnerable meant I had the opportunity to learn a heckton from loads of cool people (rather than being preoccupied with appearing smart).

    Billionaires are dumb because they’re cosplaying smart people, and they’re so deep in the role that they forget they’re cosplaying. They’re also surrounded by other dumbasses spouting pseudointellectual bullshit, but they will never call them out on this, because they’re so pathetically insecure that they fear this would out them as an imposter — they don’t realise that their peers are also cosplaying. It’s an absurd echo chamber of the worst kind.


  • "Fargo police did not cover Angela’s expenses to get home after her release from jail. Local defense attorneys gave her money to pay for a hotel room and food on Christmas Eve and Christmas Day.

    The day after Christmas, F5 Project founder Adam Martin drove Lipps to Chicago so she could get home to Tennessee. Fargo-based F5 Project is an organization providing services and resources to individuals struggling with incarceration, mental health and addiction."

    It’s bittersweet to read bits like this. It reminds me of the Mr Rogers line about how, in a disaster, you should “look for the helpers” if you need reminders of goodness to avoid becoming demoralised.

    I am glad that there are so many good people who are fighting for real justice — even people who have committed crimes don’t deserve the inhumane treatment they experience under our legal system. I wish it weren’t necessary though. These small kindnesses don’t make up for all the ways this imprisonment fucked up her life.


  • I do agree that there is much that remains. Indeed, I have found a lot of joy by discovering all the weird little personal websites that people are building as an act of rebellion. However, the culture has irrevocably changed. It makes me think of the line “man cannot step into the same river twice, for it is not the same river, and he is not the same man”.

    Many of us who grew up on a more free and chaotic internet have become jaded over time. If I went back in time, I wouldn’t be able to enjoy the internet in the same way I used to because I’d be too acutely aware of what lies ahead. That’s why I prefer to focus on moving forwards — it feels like a kind of healing