This is what Ilya saw…
As long as they don’t fuck it up in a similar fashion to seemingly every other thing they have tried for a couple decades.
Assuming it takes its answer from search results, and the search results are all affiliate marketing sites that just want you to click on a link and buy something, this makes perfect sense.
I’m not saying it’s good (because it’s not) but I’m unfortunately pretty certain they’re correct.
Is language conscious?
Are atoms?
I don’t know if LLMs of a large enough size can achieve (or sufficiently emulate) consciousness, but I do know that we barely know anything about consciousness, let alone its limits.
The thing is, LLMs can be used for something like this, but just like if you asked a stranger to write a letter for your loved one and only gave them the vaguest amount of information about them or yourself, you’re going to end up with a really generic letter.
…but to give it the amount of info and detail you would need to provide, you would probably end up writing 3/4 of the letter yourself, which defeats the purpose of being able to completely ignore and write off those you care about!
I think there is a significant distinction between “regular” working class and “earning above €400,000 per year” working class.
I think the guy you’re responding to is more talking about the distinction between income and capital gains, with income making up far less of the wealthy’s worth than existing investments.
But yes, a lot of people also have no concept of how tax brackets work.
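To make that concrete, here’s a minimal Python sketch of how marginal brackets actually work; the rates and thresholds are made up for illustration, not any real country’s tax schedule:

```python
# Made-up brackets for illustration: (upper bound of bracket, rate).
# Only the income that falls *within* each bracket is taxed at that
# bracket's rate -- crossing a threshold never taxes your whole income
# at the higher rate.
BRACKETS = [
    (10_000, 0.10),
    (40_000, 0.20),
    (float("inf"), 0.40),
]

def tax_owed(income: float) -> float:
    owed = 0.0
    lower = 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        # Tax only the slice of income that falls inside this bracket.
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# Earning 41,000 does NOT mean paying 40% on everything:
# 10,000*0.10 + 30,000*0.20 + 1,000*0.40 = 7,400
print(tax_owed(41_000))  # 7400.0
```

So going €1,000 over a threshold only taxes that extra €1,000 at the higher rate, which is the part people constantly get wrong.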
Low and middle-income countries in Asia face significant disparities in scientific capacity and ability to influence public policy, which is likely to affect responses to future pandemics, climate change and technological advancements such as Artificial Intelligence, according to the International Network for Government Science Advice (INGSA).
I originally read the title as “it is more difficult to influence public policy in least-developed countries based on this study” but it appears it’s actually “it is more difficult for scientists to advocate for science-backed policies in least-developed countries.”
I keep forgetting that that’s an option
The problem with hearing when a note isn’t right is that by the time you hear it you’ve already played it…
As someone who could never get used to just kinda eyeballing where a note is supposed to be, I strongly disagree about the trombone.
Well now how am I supposed to enjoy the sensation of someone else’s sweaty hand sliding down the pole to slowly touch mine while they remain oblivious of the entire situation?
I mean, it would probably be a good opportunity for a handful of really rich people to further their control and ownership globally…so as long as our billionaire overlords value human life over their own personal power we should be good.
This would explain the other article I saw about a US-Clooney $20 billion arms deal.
No clue? Somewhere between a few years (assuming some unexpected breakthrough) and many decades? The consensus among experts (of which I am not one) seems to be somewhere in the 2030s/40s for AGI. I’m guessing accuracy will probably improve on a topic-by-topic basis; LLMs might never even get there, or only for things they’ve been heavily trained on. If predictive text doesn’t do it, then I would bet on whatever Yann LeCun is working on.
Perhaps there is some middle ground between assuming infinite growth and declaring that a technology that is not quite good enough right now will therefore never be good enough?
Blindly assuming no further technological advancements seems equally as foolish to me as assuming perpetual exponential growth. Ironically, our ability to extrapolate from limited information is a huge part of human intelligence that AI hasn’t solved yet.
GPT-2 came out a little more than 5 years ago; it answered 0% of questions accurately and couldn’t string a sentence together.
GPT-3 came out a little less than 4 years ago and was kind of a neat party trick, but I’m pretty sure it answered ~0% of programming questions correctly.
GPT-4 came out a little less than 2 years ago and can answer 48% of programming questions accurately.
I’m not talking about morality, or creativity, or good/bad for humanity, but if you don’t see a trajectory here, I don’t know what to tell you.
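For what it’s worth, here’s a toy Python sketch that just fits a line through the rough numbers above (release dates are approximate, the linear fit is an arbitrary choice, and nothing guarantees progress stays linear); it’s an illustration of the trajectory argument, not a forecast:

```python
import numpy as np

# Approximate release years and the rough scores quoted above.
years = np.array([2019.0, 2020.0, 2023.0])   # GPT-2, GPT-3, GPT-4
scores = np.array([0.0, 0.0, 48.0])          # % of programming questions correct

# Fit a straight line and extrapolate a couple of years out.
slope, intercept = np.polyfit(years, scores, 1)
for y in (2025, 2027):
    print(y, round(slope * y + intercept, 1))
```

Even a crude fit like this points steeply upward, which is the only claim being made; whether the curve bends, plateaus, or keeps going is exactly what we don’t know.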
This is very much the type of case that settles out of court for an undisclosed amount of money.
So… nihilism?