• DasSkelett@discuss.tchncs.de

    Except you’re still trusting a lot of people and systems there: those who programmed, compiled and/or packaged the software you use (e.g. the cryptographic libraries themselves, the OS, random user-space applications you’re running that might be able to access your mail one way or another…), the hardware you use, and the software, hardware and OpSec of the recipient…

    The number of people who actually have the resources, time and knowledge to eliminate all these points (i.e. reviewing the entire source code of all the software they use, and all the diffs of every new release, somehow checking all the firmware blobs for their hardware or managing to get a fully de-blobbed system running and connected to the internet, and otherwise making sure their keyboard doesn’t send a copy of every keystroke to “the enemy”, …) is very low. And the number of people who actually do it might be zero? Not even a person at the NSA will have done all of this themselves. They’re trusting coworkers for some of these parts…

    • teawrecks@sopuli.xyz

      All of that can be publicly audited. When we talk about “trust” we’re referring to what happens server side, which we have to assume can never be publicly audited. The importance of e2e encryption is that whatever happens server side doesn’t matter. There’s a massive gulf between trusting a binary you’re able to inspect and trusting one you can’t.
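      To make the “server side doesn’t matter” point concrete, here’s a minimal toy sketch (my own illustration, not anyone’s real protocol, and deliberately NOT real cryptography): the two endpoints share a key, each message is encrypted before it ever reaches the server, and the server can only relay opaque bytes. Real systems would use vetted primitives (e.g. an X25519 key exchange plus an AEAD cipher) instead of this HMAC-based toy stream cipher.

```python
# Toy end-to-end encryption sketch. Assumption: the two endpoints have
# already shared a 32-byte key out-of-band. NOT real crypto -- it only
# illustrates that the relay server never sees plaintext.
import hashlib
import hmac
import itertools
import os


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key+nonce via HMAC-SHA256."""
    out = b""
    for counter in itertools.count():
        if len(out) >= length:
            break
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
    return out[:length]


def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Client-side: XOR plaintext with the keystream, prepend the nonce."""
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))


def decrypt(key: bytes, blob: bytes) -> bytes:
    """Recipient-side: split off the nonce and reverse the XOR."""
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))


def server_relay(blob: bytes) -> bytes:
    # The server just forwards the blob. Without the key it holds only
    # random-looking bytes, so nothing it does (or logs) can reveal the
    # message content -- which is exactly why server-side trust drops out.
    return blob


key = os.urandom(32)  # shared between sender and recipient, never the server
message = b"meet at noon"
delivered = server_relay(encrypt(key, message))
assert decrypt(key, delivered) == message
```

      The asymmetry in the thread follows from this: the client code doing the encryption is a binary you can inspect and audit, while whatever the relay does with the ciphertext no longer needs auditing at all.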

      What you said is valid, though: if you want or need privacy, you need to put in effort, but you also have to assume there’s someone smarter than you who will be able to outsmart your own audit. The absolute best you can hope for is that the binary is at least publicly reviewable and that the attacker isn’t smarter than every pair of eyes that reviews it. That’s basically the backbone of open source security.