It seems to me to be a recurring pattern: once freedom of thought, speech, and expression is limited for essentially any reason, it has unintended consequences.

Once the tools are in place, they will be used, abused and inevitably end up in the hands of someone you disagree with, regardless of whether the original implementer had good intentions.

As such, I’m personally very averse to restrictions. I’ve thought about the question a fair bit – there isn’t a clear-cut or obvious line to draw.

Please elaborate and motivate your answer. I’m genuinely curious about getting some fresh perspectives.

  • Ice@lemmy.world (OP) · 1 day ago
    > …calls to violence, hate speech, and medical misinformation in the name of protecting its citizenry. I don’t think it can ethically suppress other kinds of expression, especially political expression, most especially criticism of the government.

    …and yet political expression overlaps with both “calls to violence” and “hate speech”. Is a call to revolution not the ultimate criticism of the government (but also inherently violent)?

    Who gets to decide what is hateful, violent, or misinformation? How do we prevent the tools used to regulate the dissemination of these types of expression from being applied to other kinds of expression, or the definitions of the terms from being changed or drifting over time? (Consider, for instance, statements regarding transgender individuals somehow becoming covered by medical misinformation laws…)

    > I think a voluntary community, however, can ethically set much narrower limits on expression within community space.

    I agree, and I think this could even apply to non-voluntary spaces.

    However, once a forum counts a sufficiently large share of the population among its members, I believe it should be considered a public space (with these freedoms applying), which would take away the power of those who control large platforms to dictate, limit, or direct public discourse.