Reading this comment section is so strange. Skepticism about generative AI seems to have become some kind of professional sport on the internet.
Consensus in our group is that generative AI is a great tool. Maybe not perfect, but the comparison to the metaverse is absurd: no one asked for the metaverse or needed it for anything, whereas GPT has literally bailed us out of several difficult situations. For example, a proof of concept needed to be written in a programming language no one in the group had much experience with. Without GPT, that could easily have cost someone a week. With GPT assistance, the proof of concept was ready in less than a day.
Generative AI does suffer from a host of problems: hallucinations, jailbreaks, injections, reality-101 failures. Believe me, I’ve encountered all of these intimately, as I’ve had to use GPT for some of my day-job tasks, often against its own better judgment and despite its own woefully lacking capacity for the task. What I think is interesting is a candid discussion: why do these issues persist? What have we tried? What techniques can we try next? Are these issues intractable in some profound sense, constituting a hard ceiling on where generative AI can go? Is there an “impossibility theorem for putting AI on autopilot”? Or are these limitations just artifacts we can engineer away and route around?
It seems like instead of having this discussion, it has become in vogue to wave the issues around triumphantly and implicitly declare the field successfully dunked on and the discussion over. That’s, to be blunt, reductive. Smartphones had issues; the early internet had issues. Sure, “they also laughed at Bozo the Clown” and all that, but without a serious discussion of the landscape right now – of how far we are from mitigating these issues and why – a lot of this “ha ha, suck it, AI” discourse strikes me as deeply performative. Suppose a year from now OpenAI solves hallucinations and the issue is just gone. Do all the cool kids who sneered at the invented legal precedents, who crafted their image as knowing better than the OpenAI dweebs, who elegantly implied that hallucinations prove the entire field is a stupid, useless dead end – do they lose any face? I think they don’t. And I think that’s why this sneering has become such a lucrative online professional sport.
Some of the skepticism is just a reaction to the excessive hype with which generative AI has been pushed over the past few months. If you’ve seen tech hype cycles before, you know the hype itself breeds skepticism. Plus there are plenty of dubious cases where companies shove ChatGPT or something similar into their products just so they can advertise them as “AI-powered”, and those poorly thought-out, marketing-driven moves deserve criticism.
Come on now, next you’ll be saying the tech industry consistently overplays its incremental improvements as Earth-shattering paradigm shifts purely for the investment money!
This message posted from the metaverse
3 months ago: Everyone’s going to lose their jobs!
Today: Generative AI’s dead!
More realistically: Generative AI is a tool that will gradually get better over time. It is not universally applicable, but it does have a lot of potential applications. It is not going to take over the world, nor will it just suddenly go away.
That’s pretty much been my take from the beginning. My main concerns were and still are:
- IP law, specifically copyright infringement
- correctness - ChatGPT makes stuff up
- detection - esp for school
My main fear was that it would be more useful for scammers and fraudsters than legitimate uses because of the above issues. I still have those concerns.
With any new technology that people say will change the world overnight, take a step back and think it through. For example:
- self-driving cars - we still have taxis, Uber, etc., so they haven’t taken over despite being on the road for years
- robotics in manufacturing - it’s incredibly expensive to put together an end-to-end robotic factory, so there are still plenty of manufacturing jobs
- automated fast food - again, the most I’ve seen is an increased number of kiosks, that’s it
And so on. People freak out about new tech, then a couple of years later they realize it isn’t “finished” and there will be plenty of time to adapt. Unless we recover an alien spaceship or something, that’s just not how technology progresses. Eventually generative AI will radically change our society, but it’ll take decades, so by the time your job is threatened, you’ll be ready to retire.
Genuine question: how hard is it to fix AI hallucinations?
Isn’t ChatGPT less than 6 months old or something…
Reminds me of the article saying OpenAI is doomed because it can only last about thirty years at its current level of expenditure.
OpenAI will have to evolve into serving something beyond generative AI.
OpenAI’s compute bills are crazy. They would need a lot more paying customers just to keep the service somewhat viable.
https://futurism.com/the-byte/chatgpt-costs-openai-every-day