National and regional legislative measures and proposals that dramatically enhance platform liability for user-generated content, such as the German Network Enforcement Act (NetzDG) and the EU’s proposed Digital Services Act (DSA), place free speech at risk and potentially shrink civic space. Such measures render private companies, which are not bound by International Human Rights Law (IHRL), arbiters of fact and law. To meet their obligations and avoid hefty fines, social media platforms (SMPs) are adopting a “better safe than sorry” approach, increasingly relying on Artificial Intelligence (AI) to proactively remove even contested categories of speech such as hate speech. Against the backdrop of current developments in the form of the proposed DSA, this paper provides an overview of the challenges that emanate from the current European approach, with particular emphasis on contested areas of speech such as hate speech and disinformation, and puts forth proposals that can be taken into consideration during negotiations and discussions. Whilst cognizant that the structural composition of the DSA, in particular its platform liability approach, will not change (for now), this paper puts forth ideas that could feed into the negotiation process, namely a rights-based approach to content moderation. We make specific reference and recommendations regarding the European Commission’s recent proposal of a mechanism for addressing disinformation in exceptional circumstances (in reaction to the ongoing Ukrainian crisis and related pro-Russian information manipulation).