

Not any lazier. Script kiddies didn’t write the code themselves, either.
It was already known before the whistleblower that devices with the wake word feature enabled were listening locally for the wake word, and that activations - including false ones - were sent to Apple for processing.
The “sinister” thing that we learned was that Apple was reviewing those activations to see if they were false, with the stated intent (as confirmed by the whistleblower) of using them to reduce false activations.
There are also black-box methods to verify that data isn’t being sent and that particular hardware (like the microphone) isn’t being used, and there are people who look for vulnerabilities as a hobby. If the microphones on the two most popular phone brands (Apple, Samsung) were secretly recording all the time, evidence of that would be easy to find and would be a huge scoop - why haven’t we heard about it yet?
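As a rough sketch of what the black-box approach looks like (here on a laptop with psutil, since a phone would need something like an intercepting router to capture its traffic): compare how much gets uploaded while the room is silent versus while you’re talking. A secretly always-on microphone stream would show up as a persistent upload that tracks with audio activity.

```python
import time

import psutil  # pip install psutil

def bytes_sent_over(seconds: float) -> int:
    """Measure total upstream bytes across all interfaces for an interval."""
    start = psutil.net_io_counters().bytes_sent
    time.sleep(seconds)
    return psutil.net_io_counters().bytes_sent - start

# Run once in a silent room, once while talking continuously; a hidden
# audio stream would make the second number consistently, measurably larger.
print("silence:", bytes_sent_over(30), "bytes sent")
print("talking:", bytes_sent_over(30), "bytes sent")
```

Researchers doing this for real isolate the device on its own network segment and capture every packet, but the principle is the same.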
Snowden and Wikileaks dumped a huge amount of info about governments spying, but nothing in there involved always on microphones in our cell phones.
To be fair, an individual phone is a single compromise away from actually listening to you, so it still makes sense to avoid having sensitive conversations within earshot of a wirelessly connected microphone. But generally that’s not the concern most people should have.
Advertising tracking is much more sinister, more complicated, and harder to wrap your head around than “my phone is listening to me,” and as a result it makes for a much less glamorous story. But there are dozens, if not hundreds or thousands, of stories out there about how invasive advertising companies’ methods are, about how they know too much, etc. Think about what LLMs can do with text - the level of prediction they’re capable of. That’s what ML algorithms can do with your behavior.
If you’re misattributing what advertisers know about you to the phone listening and reporting back, then you’re not paying attention to what they’re actually doing.
So yes - be vigilant. Just be vigilant about the right thing.
proven by a whistleblower from apple
Assuming you have an iPhone. And even then, the whistleblower you’re referencing was part of a team who reviewed utterances by users with the “Hey Siri” wake word feature enabled. If you had Siri disabled entirely or had the wake word feature disabled, you weren’t impacted at all.
This may have impacted only users who also had some option like “Improve Siri and Dictation” enabled, but it’s not clear. Today, the Privacy Policy explicitly says that Apple can have employees review your interactions with Siri and Dictation (my understanding is that the settlement happened because they were not explicit that human review was occurring). I strongly recommend disabling that setting, particularly if you have a wake word enabled.
If you have wake words enabled on your phone or device, your phone has to listen to be able to react to them. At that point, of course the phone is listening. Whether it’s sending the info back somewhere is a different story, and there isn’t any evidence that I’m aware of that any major phone company does this.
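To make that distinction concrete, here’s a minimal sketch of how on-device wake word detection typically works. `wake_word_detected` and `send_to_assistant` are hypothetical stand-ins (a real device would use a small local model like Porcupine or Apple’s on-device detector); the point is that audio lives in a short ring buffer and nothing touches the network until the wake word fires.

```python
from collections import deque

def wake_word_detected(index: int, frame: bytes) -> bool:
    # Hypothetical: a real detector scores each frame with a tiny local model.
    return index == 120  # pretend the wake word lands on frame 120

def send_to_assistant(audio: bytes) -> None:
    # Hypothetical: the first and only point where audio leaves the device.
    print(f"uploading {len(audio)} bytes, only after the wake word")

ring = deque(maxlen=50)  # ~1 second of audio, constantly overwritten
fake_mic = (b"\x00" * 320 for _ in range(300))  # stand-in for 20 ms mic frames

for i, frame in enumerate(fake_mic):
    ring.append(frame)  # "listening," but the audio never leaves this buffer
    if wake_word_detected(i, frame):
        send_to_assistant(b"".join(ring))
        break
```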
Sure - Wikipedia says it better than I could hope to:
As English-linguist Larry Andrews describes it, descriptive grammar is the linguistic approach which studies what a language is like, as opposed to prescriptive, which declares what a language should be like.[11]: 25 In other words, descriptive grammarians focus analysis on how all kinds of people in all sorts of environments, usually in more casual, everyday settings, communicate, whereas prescriptive grammarians focus on the grammatical rules and structures predetermined by linguistic registers and figures of power. An example that Andrews uses in his book is fewer than vs less than.[11]: 26 A descriptive grammarian would state that both statements are equally valid, as long as the meaning behind the statement can be understood. A prescriptive grammarian would analyze the rules and conventions behind both statements to determine which statement is correct or otherwise preferable. Andrews also believes that, although most linguists would be descriptive grammarians, most public school teachers tend to be prescriptive.[11]: 26
You might be interested in reading up on the debate of “Prescriptive vs Descriptive” approaches in a linguistics context.
If you want to generate audiobooks using your own / a hosted TTS server, check out one of these options:
If you don’t have a decent GPU, Kokoro is a great option as it’s fast enough to run on CPU and still sounds very good.
If you’re going to use Kokoro, Audiblez (posted by another commenter) looks like it makes that more of an all-in-one option.
If you want something that you can use without building the audiobook up front, of the above options, only OpenReader-WebUI supports that. RealtimeTTS is a library that handles it, but I don’t know whether any apps out there already integrate it.
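For the hosted-server route, many of these projects expose an OpenAI-compatible speech endpoint. A sketch, assuming a local Kokoro-FastAPI instance on its default port 8880 (adjust the URL, model, and voice names to whatever your server actually exposes):

```python
import requests  # pip install requests

resp = requests.post(
    "http://localhost:8880/v1/audio/speech",  # assumed local TTS server
    json={
        "model": "kokoro",
        "voice": "af_sky",  # voice names vary by server/model
        "input": "Chapter One. It was the best of times...",
    },
    timeout=300,
)
resp.raise_for_status()

# Save one chapter's audio; loop over chapters and stitch the files
# together (e.g., with ffmpeg) to build the full audiobook.
with open("chapter_01.mp3", "wb") as f:
    f.write(resp.content)
```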
If you have the audiobook generation handled and just want to be able to follow along with text / switch between text and audio, check out https://storyteller-platform.gitlab.io/storyteller/
What would moving your account here from Reddit entail? What would transfer with you? Your posts? Comments? Followed and moderated subreddits? Upvotes and downvotes?
You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.
I think the better question than “Does the experience system sound like it has potential,” then, is “Does the overall concept / system have potential?”
My gut says probably, but it depends a lot more on what you’re willing to put into it and what you want out of it. What’s your metric for success? If it’s something you want to run yourself and share online so a few groups use it, that’s a lot more achievable than landing a publishing deal, for example. In between, publishing on DriveThruRPG or something similar at a nominal cost (like $2-$5) would take more effort than the former and less than the latter. And the higher the price and the larger the audience you want, the more effort you need to put in - and a lot of that isn’t just system building, but art, community building, marketing, etc.
From what you’ve shared, it sounds like an interesting system. I could especially see it working in an academy setting where grinding skills to pass practical exams is one of the players’ goals. I also could see it working well in a loosely GMed play-by-post system, with the players self-enforcing (or possibly leveraging tools built into the site to track resource pools, experience, rolling, etc.), though I haven’t played in a forum game myself, so I might be way off-base.
Did your system have classes or was it completely free-form in terms of gaining access to those skill trees?
I run a Monster of the Week game and my players get experience throughout sessions, as well as at the end. The mechanics are basically:

- Whenever you roll a miss (a 6 or less), you mark experience.
- Some moves tell you to mark experience when they trigger.
- At the end of each session/mystery, everyone marks experience based on a few end-of-session questions.
I think other PbtA (Powered by the Apocalypse - systems inspired by Apocalypse World) systems do something similar.
I grew increasingly frustrated with the system of only distributing advancement/experience points at the end of a session.
Isn’t the simple fix to this to just distribute experience points as soon as they’re earned?
At some point, I started to devise a play system that relied on a split experience attribution system, with players being able to automatically rack up experience points from directly using their skills/abilities, while the DM would keep a tally of points from goals/missions achieved, distributable at session end.
Your system sounds like the way that skill-based video game RPGs (Elder Scrolls games and Arcanum come to mind) handle experience.
In a lot of games I’ve played, I’d rather get experience for in-game accomplishments immediately and be able to train skills like this during downtime - generally between games.
To those with more experience in TTRPGs: would this be feasible? Or enticing? Interesting?
I could see people being interested in it. You get instant gratification and a bit of extra crunchiness. A lot of players enjoy that.
With the right skill system I could see this being useful. My main concern is that if you put this on top of a system with relatively few skills, it could encourage people to game it by grinding. There are ways to mitigate that, though.
In a system with fewer skills, instead of just being experience points, the “currency” you earned this way could be used for temporary power ups related to the skill in question.
You could also limit it so you only rewarded players for story-related tasks.
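To make the split-attribution idea concrete, here’s a minimal sketch of the bookkeeping (all names illustrative, not from any published system): skill use pays out immediately, while goal/mission points accrue to a GM tally that’s distributed at session end.

```python
class Character:
    def __init__(self, name: str):
        self.name = name
        self.xp = 0            # end-of-session pool from the GM
        self.skill_xp = {}     # immediate, per-skill accrual

    def use_skill(self, skill: str, success: bool) -> None:
        # Automatic XP for exercising a skill; failure still teaches a little.
        self.skill_xp[skill] = self.skill_xp.get(skill, 0) + (2 if success else 1)

class GMTally:
    def __init__(self):
        self.pending = 0

    def record_goal(self, points: int) -> None:
        self.pending += points  # tracked quietly during play

    def distribute(self, party: list) -> None:
        for pc in party:        # paid out when the session wraps
            pc.xp += self.pending
        self.pending = 0
```

Capping per-skill gains per session would be one way to blunt the grinding problem mentioned above.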
I’d recommend Colemak Mod-DH, personally - it seems ergonomically superior and switching later is a bit of a pain.
You don’t have to finish the file to share it, though; that’s a core part of BitTorrent. Each peer shares the parts of the files that it has already partially downloaded. So Meta didn’t need to finish and share the whole file to have technically shared some parts of copyrighted works. Unless they just had uploading completely disabled,
The argument was not that it didn’t matter if a user didn’t download the entirety of a work from Meta, but that it didn’t matter whether a user downloaded anything from Meta, regardless of whether Meta was a peer or seed at the time.
Theoretically, Meta could have disabled uploading but not blocked their client from signaling that it could upload. According to that argument, this would still count as reproducing the works, under the logic that signaling that a file is available is the same as “making it available.”
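For context on what that signaling looks like at the protocol level: right after the handshake, a BitTorrent peer sends a “bitfield” message - one bit per piece - advertising what it can serve (per the spec, the high bit of the first byte is piece 0). A minimal decoder:

```python
def parse_bitfield(payload: bytes, num_pieces: int) -> list[bool]:
    """Expand a bitfield message payload into per-piece availability flags."""
    return [bool(payload[i // 8] & (0x80 >> (i % 8))) for i in range(num_pieces)]

# A peer that has pieces 0 and 2 of a 5-piece torrent:
print(parse_bitfield(bytes([0b10100000]), 5))  # [True, False, True, False, False]
```

A client with uploading disabled could still send a truthful bitfield - that advertisement is the “making available” step this argument hinges on.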
but they still “reproduced” those works by vectorizing them into an LLM. If Gemini can reproduce a copyrighted work “from memory” then that still counts.
That’s irrelevant to the plaintiff’s argument. And beyond that, it would need to be proven on its own merits. This argument about torrenting wouldn’t be relevant if Llama were obviously a derivative creation that wasn’t subject to fair use protections.
It’s also irrelevant if Gemini can reproduce a work, as Meta did not create Gemini.
Does any Llama model reproduce the entirety of The Bedwetter by Sarah Silverman if you provide the first paragraph? Does it even get the first chapter? I highly doubt it.
By the same logic, almost any computer on the internet is guilty of copyright infringement. Proxy servers, VPNs, basically any computer that routed those packets temporarily had (or still has, for caches, logs, etc.) copies of that protected data.
There have been lawsuits against both ISPs and VPNs in recent years for being complicit in copyright infringement, but that’s a bit different. Generally speaking, there are laws, like the DMCA, that specifically limit the liability of network providers and network services, so long as they respect things like takedown notices.
Retrieval-Augmented Generation (RAG) is probably the tech you’d want. It basically involves building a knowledge library from the documents you upload; the library is then searched when you ask a question, and the most relevant passages are fed to the model alongside it.
NotebookLM by Google is an off-the-shelf tool that specializes in this, but you can upload documents to ChatGPT, Copilot, Claude, etc., and get the same benefit.
If you self-host, Open WebUI with Ollama supports this, but it’s far from the only option.
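If you want to see the moving parts, here’s a minimal sketch of the RAG loop using sentence-transformers for embeddings. `ask_llm` is a hypothetical stand-in for whatever model or API you’d actually call (Ollama, OpenAI, etc.), and the chunks are toy data.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

# The "knowledge library": chunks of your uploaded documents, embedded once.
chunks = [
    "The warranty covers parts and labor for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "The device charges over USB-C at up to 65 W.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec  # cosine similarity, since vectors are normalized
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long is the warranty?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = ask_llm(prompt)  # hypothetical LLM call
print(prompt)
```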
If you dislike decentralization, then that’s like being on an instance that’s banned every other instance.
Do you mean that you dislike defederation?
Wow, there isn’t a single solution in here with the obvious answer?
You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.
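Since a home IP can change, the “dynamic” part means something has to periodically push your current address to your DNS host. A sketch against Cloudflare’s v4 API - the token, zone ID, and record ID are placeholders you’d look up in the dashboard, and the hostname is hypothetical; run it on a cron or systemd timer:

```python
import requests  # pip install requests

API = "https://api.cloudflare.com/client/v4"
TOKEN = "YOUR_API_TOKEN"      # placeholder
ZONE_ID = "YOUR_ZONE_ID"      # placeholder
RECORD_ID = "YOUR_RECORD_ID"  # placeholder: the A record to keep updated

# Discover the network's current public IP.
current_ip = requests.get("https://api.ipify.org", timeout=10).text

# Point the A record at it.
resp = requests.put(
    f"{API}/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"type": "A", "name": "jellyfin.example.com",  # hypothetical hostname
          "content": current_ip, "ttl": 300, "proxied": False},
    timeout=10,
)
resp.raise_for_status()
print(f"A record now points at {current_ip}")
```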
Then, you can either set up Let’s Encrypt on device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach) or you can do what I do:
On your router, forward port 443 to your Pi’s secure port (which, for simplicity’s sake, should also be 443). You’ll likely also need to forward port 80 for Let’s Encrypt’s HTTP-01 verification.
If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, then you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port on the router. You’ll have unencrypted local transfers if you do this, though.
Make sure you have secure passwords in Jellyfin. Note that you are vulnerable to a Jellyfin or Traefik vulnerability if one is found, so make sure to keep your software updated.
If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS service - with docker-compose.
Why should we know this?
Not watching that video, for a number of reasons. Ten seconds in, they hadn’t said anything of substance. Their first claim was incorrect: Amazon does not prohibit the use of gen AI in books, nor does it require that its use be disclosed to the public, no matter how much you might wish it did. And there was nothing of substance in the description, which in instances like this generally means the video will be largely devoid of substance too.
What books is the Math Sorcerer selling? Are they the ones on Amazon linked from their page? Are they selling all of those or just promoting most of them?
Why do we think they were generated with AI?
When you say “generated with AI,” what do you mean?
And what’s the result? Are the books misleading in some way? That’s the most legitimate concern I can think of (I’m sure the people screaming that AI isn’t fair use would disagree, but if that’s the concern, settle it in court).
Look up “LLM quantization.” The idea is that each parameter is a number; by default each uses 16 bits of precision, but if you scale them down to smaller sizes, you use less space and lose some precision while keeping the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)
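A toy example of what quantization does to the numbers themselves - same weights, fewer distinct values, and rounding error that grows as the bit width shrinks (this is a simplification of real schemes like the K-quants):

```python
import numpy as np

def quantize(w: np.ndarray, bits: int):
    """Symmetric round-to-nearest quantization to a given bit width."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(w).max() / qmax
    return np.round(w / scale), scale

weights = np.random.randn(1000).astype(np.float32)
for bits in (8, 4, 2):
    q, scale = quantize(weights, bits)
    err = np.abs(weights - q * scale).mean()
    print(f"{bits}-bit mean rounding error: {err:.4f}")  # grows as bits shrink
```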
If you’re using a 4-bit quantization, you need roughly half the parameter count in GB of VRAM (4 bits is half a byte per parameter), plus overhead for context. Q4_K_M is better than plain Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama 3.3 70B, which has 70.6 billion parameters, comes in at roughly 43 GB at Q4_K_M, 58 GB at Q6_K, and 75 GB at Q8_0.
This is why I run a lot of Q4_K_M 70B models on two 3090s.
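The back-of-the-envelope math, using llama.cpp’s approximate effective bits per weight for each quantization type (the figures are approximate, and you need a few extra GB for the KV cache/context on top):

```python
params = 70.6e9  # Llama 3.3 70B

# Approximate effective bits per weight for common GGUF quantizations.
bits_per_weight = {"Q4_K_M": 4.85, "Q5_K_M": 5.69, "Q6_K": 6.59, "Q8_0": 8.5, "FP16": 16}

for name, bpw in bits_per_weight.items():
    print(f"{name}: ~{params * bpw / 8 / 1e9:.0f} GB")
# Q4_K_M: ~43 GB -> fits across two 24 GB 3090s with room left for context.
```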
Generally speaking, there’s no perceptible quality drop going from 8-bit quantization to Q6_K (though I have heard this is less true with MoE models). Below Q6, there’s a bit of a drop going to Q5 and again to Q4, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.
TheBloke on Hugging Face has a lot of GGUF quantization repos, and most, if not all, of them have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find it there.
You said, and I quote, “Find a better way.” I don’t agree with your premise - this is the better way - but I gave you a straightforward, reasonable way to achieve something important to you… and now you’re saying that “this is a discussion of principle.”
You’ve just proven that it doesn’t take a moderator to turn a conversation into a bad joke - you can do it on your own.
It’s a discussion of principle.
This is a foreign concept?
It appears to be a foreign concept for you.
I don’t believe that it’s a fundamentally bad thing to converse in moderated spaces; you do. You say “giving somebody the power to arbitrarily censor and modify our conversation is a fundamentally bad thing” like it’s a fact, indicating you believe this, but you’ve been given the tools to avoid giving others the power to moderate your conversation and you have chosen not to use them. This means that you are saying “I have chosen to do a thing that I believe is fundamentally bad.” Why would anyone trust such a person?
For that matter, is this even a discussion? People clearly don’t agree with you and you haven’t explained your reasoning. If a moderator’s actions are logged and visible to users, and users have the choice of engaging under the purview of a moderator or moving elsewhere, what’s the problem?
It is deeply bad that…
Why?
Yes, I know, trolls, etc…
In other words, “let me ignore valid arguments for why moderation is needed.”
But such action turns any conversation into a bad joke.
It doesn’t.
And anybody who trusts a moderator is a fool.
In places where moderators’ actions are unlogged and they’re not accountable to the community, sure - and that’s true on mainstream social media. Here, moderators are performing a service for the benefit of the community.
Have you never heard the phrase “Trust, but verify?”
Find a better way.
This is the better way.
It’s the new hyped up version of “no-code” or low-code solutions, but with AI so you have more flexibility to footgun.