- cross-posted to:
- technology@lemmy.ml
My bingo board didn’t have “replacing actual research with a liar-box” on it, but here we are. I’ve noticed, to my increasing discomfort, a trend of using ChatGPT (and similar) to replace actual research into topics, or of people using them to “summarize” articles. As someone with actual research training, it’s pretty alarming to realize how little people understand about what research is.
I’ve seen how badly these things mangle my area of academic interest, because they can’t write reasonable citations.
Not on the list:
- Memes (not just hoaxes, also jokes and puns)
- Profile pictures
- Spotting AI fakes (Google assistant → reverse image search)
- Filtering out CSAM
A lot of things, actually. A ton of materials science breakthroughs lately. Science in general. I’ve read about recent astronomical discoveries, too.
Yes, machine learning is great! Unfortunately the terms ML and AI got conflated recently.
ChatGPT actually pretends to be clever. An app that takes a bunch of pictures and spits out a percentage of how much “kitten” might be in each one is not quite what one would think of when talking about AI.
yet it’s machine learning so some people call it AI nowadays
This is funny, because you’re the one treating LLMs as the only option when it comes to AI.
ChatGPT is just an example most people are familiar with. But if you’d like to put examples of other tech, go for it
Nice article 😆 And I wonder if it’s going to stay that way, or if it’s like a new invention still missing a (good) application. I have some other good use cases that are missing from the list: image classification and description, speech-to-text and text-to-speech, and machine translation. I think those are massively useful. But as pointed out in the article, generative AI does lots of things that harm people and society. I mean, the promise is that it’s going to get better and stop lying so much, so we can have some proper applications as well. But that’s not a thing yet. And personally - I’m still waiting for AI to merge with robotics and do real hands-on work. Which could be very helpful in some professions. Or lead to a more dystopian future.
And I believe the acceleration of everything - spreading misinformation and making it super cheap and easy to manipulate and spam - is here to stay. That’s something we need to deal with, and it’s not easy or straightforward. If I were a tech bro, I’d advertise my AI solution to deal with the issues that arise from AI 😅
The only positive thing on that list is automating repetitive coding tasks. Did I miss anything?
Meh. If you’re writing repetitive code, you’re probably doing it wrong anyway.
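For illustration (everything here is a made-up example, not from the article): the kind of repetition people ask an AI to churn out can usually just be folded into a loop or comprehension instead, which is what “doing it wrong” means here.

```python
# Repetitive version: the same mean-of-a-column logic copy-pasted per key.
def report_repetitive(data):
    mean_a = sum(data["a"]) / len(data["a"])
    mean_b = sum(data["b"]) / len(data["b"])
    mean_c = sum(data["c"]) / len(data["c"])
    return {"a": mean_a, "b": mean_b, "c": mean_c}

# Refactored version: one expression handles every key, so there is
# nothing repetitive left to automate in the first place.
def report(data):
    return {key: sum(vals) / len(vals) for key, vals in data.items()}
```

Same output, but the second version also handles new keys without another copy-paste round.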
Poor Java devs
Didn’t Edwin Starr sing something about this?
I think I most align with this take: https://youtu.be/-opBifFfsMY
I grieve the loss of what was before.
Content farms have been a thing since the early 2000s, no AI needed: just stuff hastily written by outsourced workers for less than minimum wage, then poorly translated and turned into templates to generate thousands of pages, in what some called “SEO”.
In particular, results for “file format” or “extension” have been a hot mess for the last 20 years or so; there was never a clean search… and yet, searching right now for “glb file format specification”, the second link is the canonical Khronos spec and the third is the Wikipedia entry with links to the spec.
That’s way better than it used to be.