- cross-posted to:
- linux@lemmy.world
- security@lemmy.ml
- linux@lemmy.ml
Related discussion:
- https://news.ycombinator.com/item?id=39865810
- https://news.ycombinator.com/item?id=39877267
Advisories:
We’ve had a lot of trust among open-source projects, where people just kind of assume that everyone else is doing the right thing, but there are some very, very large openings where a potential attacker might manage to get maintainership of a library, if they’re willing to spend a long time slowly gaining access.
I’d figured that one day, we’d have a really big apocalypse that would cause some of that to break down, and we’d lose our innocence and have to do things differently.
I mean, let’s say that I’m an important security researcher, and I use R, a common statistical tool that has nothing directly to do with security. That pulls in all kinds of libraries from various online statistics archives, and the people working on those aren’t really security people; they probably don’t know how to vet things effectively even if they wanted to. Perl and Python and other tools have similar ecosystems. If someone can target that security researcher through one of those libraries, it could take nothing more than an intentionally induced parsing bug in a library they use, and then the attacker can get things like that researcher’s private keys, or maybe get hold of signing keys for software packages and the like.
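To get a feel for how wide that trust surface is, here is a minimal sketch (not from the post, and the package name `requests` is just an arbitrary illustrative choice) that uses Python’s standard `importlib.metadata` to count how many distinct projects, each with its own maintainers, one installed package pulls in transitively:

```python
import re
from importlib.metadata import requires, PackageNotFoundError

def transitive_deps(name, seen=None):
    """Collect the names of installed distributions pulled in transitively."""
    seen = set() if seen is None else seen
    try:
        reqs = requires(name) or []   # declared dependencies of this distribution
    except PackageNotFoundError:
        return seen                   # not installed locally; stop descending here
    for req in reqs:
        # Strip version specifiers and environment markers, keeping only the name.
        m = re.match(r"[A-Za-z0-9._-]+", req)
        if not m:
            continue
        dep = m.group(0).lower()
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, seen)
    return seen

if __name__ == "__main__":
    deps = transitive_deps("requests")  # hypothetical example package
    print(f"requests transitively depends on {len(deps)} other projects:")
    for d in sorted(deps):
        print("  -", d)
```

Every project in that output is maintained by strangers the end user has implicitly decided to trust, which is exactly the opening the xz attacker went after.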
And in the xz case, it looks like social engineering efforts were used against both the maintainer and the packagers. The open-source community has a lot of well-meaning strangers collaborating in good faith, built on a lot of extended trust, and the attacker aimed to exploit exactly that.
All of these problems get a lot harder to deal with when the attacker is willing to spend a lot of time and use sophisticated tactics.