In recent years, big tech has come under intense criticism for the range of ways in which its digital platforms are used to spread extreme messages and abusive content. Social media companies in particular have struggled to develop and enforce appropriate community standards to address issues ranging from the proliferation of hate speech and incitement to harm to a rise in xenophobic, racist, and sexist language in online interactions. Against this backdrop, panelists will review current and proposed efforts to monitor, regulate, and curate content online. Who should define what sort of speech constitutes misinformation, hatred, and violence on digital platforms, and how? Are technological solutions and alternative business models sufficient to solve the current information crisis? What are the risks when private actors develop their own decision-making frameworks for content removal?
“If we’re aiming for global governance on online content, we must get accustomed to the fact that there will be lots of content that will offend us. It’s a price worth paying for the freedom of expression.” – Jacob Mchangama