I have one in my car, but the glovebox is where things get forgotten.
Maybe a small Swiss Army knife would be a good investment.
You get banned for random views, though. In particular, lemmy.world is heavy on the censorship.
If Lemmy gets more popular, then corporate-influenced mods will appear.
I understand the logic, but I’ve never been in a situation where I thought, “I wish I had a pocket knife.”
At mine, the person in charge of IT procurement is an ex-Microsoft salesman.
Yes. His financial advisors have probably limited his liability. He can lose the value of his holdings, but is unlikely to go into debt himself.
Musk has pledged $62.5 billion in Tesla stock as collateral for margin loans of $12.5 billion.
Giacomo Santangelo, a senior lecturer in economics at Fordham University, said, “A 20% stock decline on a 60% loan-to-value loan means the borrower must immediately post additional collateral or face forced liquidation. This creates cascade risk, where small declines trigger margin calls, forcing either more pledging or open-market sales, putting more pressure on the stock.”
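The arithmetic behind that cascade risk is easy to check. Here is a rough sketch in Python using the figures above and the quote’s hypothetical 60% LTV cap; the numbers are illustrative only, not the actual loan terms:

```python
def ltv(loan: float, collateral: float) -> float:
    """Loan-to-value ratio of a margin loan."""
    return loan / collateral

def extra_collateral(loan: float, collateral: float, max_ltv: float) -> float:
    """Collateral to post so LTV comes back under max_ltv."""
    return max(0.0, loan / max_ltv - collateral)

pledged, loan = 62.5e9, 12.5e9            # figures from the comment above
print(f"Starting LTV: {ltv(loan, pledged):.0%}")              # 20%

# The quote's scenario: a loan written right at a 60% LTV cap, stock falls 20%.
collateral = loan / 0.60                  # collateral value at exactly 60% LTV
after_drop = collateral * 0.80            # after a 20% decline
print(f"LTV after drop: {ltv(loan, after_drop):.0%}")         # 75%
print(f"Must post: ${extra_collateral(loan, after_drop, 0.60) / 1e9:.2f}B")  # ~$4.17B

# With the actual figures, the stock must fall ~67% before hitting a 60% cap.
print(f"Cushion before a margin call: {1 - (loan / 0.60) / pledged:.0%}")
```

So at the reported 20% starting LTV there is a lot of cushion; the cascade only bites once the loan is written near the cap.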
I think they were alluding to Israel.
It’s a fairytale town, isn’t it? How’s a fairytale town not somebody’s fucking thing? How can all those canals and bridges and cobbled streets and those churches, all that beautiful fucking fairytale stuff, how can that not be somebody’s fucking thing, eh?
Wait, wait. Better quote.
What’s Belgium famous for? Chocolates and child abuse, and they only invented the chocolates to get to the kids.
Try searching for French maid.
At first I suspected this was the French government demanding that 40% of the videos be made in Europe using the French language.
I don’t know why you’re playing semantic games.
I’m trying to highlight the goal of this paper.
This is a knock-them-down paper by Apple justifying (to their shareholders) their non-investment in LLMs. It is not a build-them-up paper aiming for meaningful change and a better AI.
Other way around. The claimed meaningful change (reasoning) has not occurred.
This paper does provide a solid proof by counterexample of reasoning (following an algorithm) not occurring when it should.
The paper doesn’t need to prove that reasoning never has or will occur. It only demonstrates that current claims of AI reasoning are overhyped.
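For context, the paper’s counterexamples are controlled puzzles such as Tower of Hanoi, and if I recall correctly the models failed to execute the solution procedure reliably even when it was handed to them in the prompt. A minimal sketch of that procedure (my own Python rendering, not the paper’s exact prompt):

```python
def hanoi(n: int, src: str, aux: str, dst: str, moves: list) -> None:
    """Recursive Tower of Hanoi: move n disks from src to dst via aux."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)  # park n-1 disks on the spare peg
    moves.append((src, dst))            # move the largest remaining disk
    hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 disks back on top

moves: list[tuple[str, str]] = []
hanoi(3, "A", "B", "C", moves)
print(len(moves), moves)  # 2**3 - 1 = 7 moves
```

The move count grows as 2^n − 1, which is exactly the kind of scaling where the paper reports accuracy collapsing.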
The architecture of these LRMs may make monkeys fly out of my butt. It hasn’t been proven that the architecture doesn’t allow it.
You are asking to prove a negative. The onus is to show that the architecture can reason, not to prove that it can’t.
You were starting a new argument. Let’s stay on topic.
The paper implies “reasoning” is the application of logic. It shows that LRMs are great at copying logic but can’t follow simple instructions that haven’t been seen before.
Sure. We weren’t discussing if AI creates value or not. If you ask a different question then you get a different answer.
Not “This particular model”. Frontier LRMs such as OpenAI’s o1/o3, DeepSeek-R1, Claude 3.7 Sonnet Thinking, and Gemini Thinking.
The paper shows that Large Reasoning Models as defined today cannot interpret instructions. Their architecture does not allow it.
A well-trained model should consider both types of lime. Failure is likely down to temperature and other model settings. This is not a measure of intelligence.
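Temperature is purely a sampling knob: it rescales the logits before softmax, so low values make the output nearly deterministic and high values flatten the distribution. A minimal sketch of the standard formulation (the tokens and logit values are hypothetical, not any specific model’s internals):

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Softmax sampling with temperature: low T sharpens, high T flattens."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())                           # subtract max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    r = random.uniform(0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point edge-case fallback

logits = {"citrus lime": 2.0, "mineral lime": 1.0}
print(sample_with_temperature(logits, 0.1))  # ~always the higher-logit answer
print(sample_with_temperature(logits, 5.0))  # close to a coin flip
```

So whether a given run mentions one lime or both can change with the sampling settings without the underlying model changing at all.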
Software is priced at Apple levels.