By Eric Vandenbroeck and co-workers
The Risks of Internet Regulation
On March 14, the U.S.
House of Representatives passed legislation that would, if enacted, force Chinese
media conglomerate ByteDance to divest itself of
TikTok or find the popular social media site banned in the United States. The
fear that Beijing has access to the data of 170 million U.S. TikTok users and
thus the ability to influence their information diet allowed the bill to sail
through the House. The legislation’s approval demonstrates the adversarial
nature of the U.S.-China relationship, but it also spotlights a trend in the
democratic world of governments promising to transform the Internet from a zone
of danger and disinformation to one of safety and trust.
This U.S.
legislation, though draconian in its attempt to ban an entire platform, is not
isolated. The European Union, the United Kingdom, and many other countries are
also targeting online harms, including risks to children’s mental and physical
health, hate-fueled incitement, and interference with democratic debate and
institutions. These harms are compelling subjects of regulation, but tackling
them must be consistent with democratic norms. Human rights law—the framework
applicable in all democracies—requires at a minimum a demonstration of the
necessity and proportionality of any restriction, along with precision and
clarity to limit state discretion to enforce a rule at will. Although a focus
on safety and trust is legitimate, it alone cannot succeed if it puts at risk
individual rights, including the rights of minors, to freedom of expression,
privacy, association, and public participation. When not carefully crafted or
constrained, online speech regulation could be used to silence disfavored
opinions, censor marginalized groups, limit access to information, and diminish
privacy and individual security.
In response to public
pressure to clean up the Internet, policymakers in Brussels, London,
Washington, and beyond are following a path that, in the wrong hands, could
lead to censorship and abuse. Some in Brussels, including in EU institutions
and civil society, speak of “an Orban test,”
according to which lawmakers should ask themselves whether they would be
comfortable if legislation were enforced by Hungary’s authoritarian and
censorial Prime Minister Viktor Orban or someone like him. This is a smart way
to look at things, particularly for those in the United States concerned about
the possibility of another term for former U.S. President Donald Trump (who
famously, and chillingly, referred to independent media as enemies of the
people). Rather than expanding government control over Internet speech,
policymakers should focus on the kinds of steps that could genuinely promote a
better Internet.
Europe’s Mixed Messages
Europe has made the
most promising and comprehensive effort to achieve its goal of an Internet that
works for the public interest. The EU has for years been buffeted by arguments
that the behemoths of social media—the most prominent of which are U.S. companies
including Google, Meta, and X (formerly Twitter)—irresponsibly control the
European information environment. Indeed, Brussels has long been concerned with
corporate control of personal data, leading the European Court of Justice to
establish an individual right to be forgotten against Internet companies in
2014. In 2015, the ECJ invalidated U.S.-European agreements on cross-border
personal data transfers, and the temperature rose still further when terrorist and extremist violence, allegedly facilitated by online communications, broke out in Belgium, France, and Germany. Some leaders, including law enforcement officials in the United States and Europe, pressed for restrictions on digital security tools such as encryption, whereas
others, including European Commissioner Věra Jourová, wrestled with the desire to impose rules on U.S. firms without creating, as they said repeatedly, a Brussels-based Ministry of Truth. The EU has thus taken a broad approach, combining privacy law,
competition policy, media protection, and social media regulation to address
the entire sector.
At the heart of
Brussels’ approach to online content is the Digital Services Act (DSA). When
negotiations over the DSA concluded in April 2022, European Commission
Executive Vice President Margrethe Vestager exulted that “democracy is back.”
For Vestager and her allies, the DSA asserts the EU’s public authority over
private platforms. It restates existing EU rules that require platforms to take
down illegal content when they are notified of its existence. In its detailed
bureaucratic way, the DSA also goes further, seeking to establish how the
platforms should deal with speech that, though objectionable, is not illegal.
This category includes disinformation, threats to “civic discourse and
electoral processes,” most content deemed harmful to children, and many forms
of hate speech. The DSA disclaims specific directives to the companies. It does
not require, for instance, the removal of disinformation or legal content
harmful to children. Instead, it requires the largest platforms and search
engines to introduce transparent due diligence and reporting, giving the Commission the oversight power to evaluate whether these companies pose systemic risks to the public.
Politicization,
however, threatens the DSA’s careful approach, a concern that emerged soon
after Hamas’s October 7 terrorist attacks on Israel. Posts glorifying Hamas or,
conversely, promising a brutal Israeli vengeance immediately began circulating
online. Thierry Breton, the European commissioner responsible for implementing
the DSA, saw an opportunity and, three days after the attacks, sent a letter to
X owner Elon Musk and then to Meta, TikTok, and YouTube. “Following the terrorist
attacks carried out by Hamas against Israel,” Breton wrote to Musk, “we have
indications that your platform is being used to disseminate illegal content and
disinformation in the EU.” He urged the platforms to ensure that they had in
place mechanisms to address “manifestly false or misleading information” and
requested a “prompt, accurate, and complete response” to the letters within 24
hours. Breton gave the impression that he was acting under the DSA, but he went
much further, taking on a bullying approach that seemed to presuppose that the
platforms were enabling illegal speech. The DSA authorizes Commission action
only after careful, technical review.
Breton’s concerns
were legitimate: Musk has decimated X’s content and public policy teams and
disseminates hateful speech and disinformation; Facebook has a history of
failing societies in the face of genocidal incitement; and YouTube has long
been accused of allowing the worst sorts of disinformation to gain traction and
go viral on its platform. Still, a global coalition of nearly 30 leading online
free speech organizations, including ARTICLE 19, European Digital Rights, and Access Now, responded quickly to Breton’s letters,
expressing concern that he was opting for an approach of political demand over
the DSA’s careful public assessment. The organizations argued that Breton
conflated illegal content with disinformation, which is not subject to
automatic removal but, instead, to risk assessment and transparency. The DSA
requires platforms to act proportionately and with an eye on the protection of
fundamental rights—not to act rashly in times of crisis or based on
unsubstantiated claims. Breton’s urgency, these organizations argued, could
cause the platforms to remove evidence of war crimes, limit legitimate public
debate, and censor marginalized voices. His approach reminded them of how authoritarian governments behave, issuing regular demands for content removal, rather than of what the DSA promises.
Breton showed that
the DSA’s careful bureaucratic design can be abused for political purposes.
This is not an idle concern. Last July, during riots in France following the
police shooting of a youth, Breton also threatened to use the DSA against
social media platforms if they continued to carry “hateful content.” He said
that the European Commission could impose a fine and even “ban the operation
[of the platforms] on our territory,” which are steps beyond his authority and
outside the scope of the DSA.
European legal norms
and judicial authorities, and the Commission rank and file’s commitment to a
successful DSA, may check the potential for political abuse. But this status
quo may not last. June’s European Parliament elections may tilt leadership in directions
hostile to freedom of expression online. New commissioners could take lessons
from Breton’s political approach to DSA enforcement and issue new threats to
social media companies. Indeed, Breton’s actions may have legitimized a politicized style of enforcement that could be used to limit public debate, bypassing the careful, if technical, DSA mechanisms of risk assessment, researcher access, and transparency.
Save The Children
Whereas Europe
focuses on process, the United Kingdom’s attention is more directly on content.
Its Online Safety Act, enacted in October 2023, places Internet harms at its
center. The UK government began considering online safety legislation in 2017
and, since then, has repeatedly pledged “to make Britain the safest place in
the world to be online.” Although the Online Safety Act seeks to address many
online harms, including terrorist content and harassment, nothing consolidated
consensus in favor of the legislation as much as threats to children. In part,
this is because of the attention generated by the 2017 suicide of 14-year-old
Molly Russell. A 2022 inquest concluded that her death had been influenced by online content that romanticized self-harm and discouraged her from seeking help. Michelle Donelan, UK Secretary of State for Science, Innovation and
Technology, responded to the inquest by declaring that “we owe it to Molly’s
family to do everything in our power to stop this happening to others. Our
Online Safety Bill is the answer.”
It was partly to this end that the UK Parliament passed the act, a massive piece of
legislation that orders technology companies to “identify, mitigate and manage
the risks of harm” from content that is either illegal—by promoting terrorism,
for example—or harmful, particularly to children. The act delegates some of the
hardest questions to Ofcom, the independent British telecommunications
regulator. One particularly oblique provision requires companies to act against
content where they have “reasonable grounds to infer” that it may be illegal.
As has been widely documented, social media companies have a notoriously uneven
record in their ability to moderate content, stemming from their inability to
assess large volumes of it, let alone evaluate the intent of the user
generating it. As the legal analyst Graham Smith has noted, putting new
pressure on these platforms could simply cause them to take down potentially
controversial content—for instance, robust conversations about the war in Gaza,
or any other number of contentious topics—in order to steer clear of
controversy or penalties.
One concern is that
the UK legislation defines content harmful to children so broadly that it could cause companies to block legitimate health information, such as material related to gender identity or reproductive health, that is critical to young people’s development and to those who study it. Moreover, the act requires companies to conduct age
verification, a difficult process that may oblige a user to present some form of official identification or to undergo age assurance, perhaps through biometric measures. This is a complicated area involving a range of approaches, and it will have to be the focus of Ofcom’s attention since the act does not specify how companies should comply. But, as the French data privacy regulator has
found, age verification and assurance schemes pose serious privacy concerns for
all users, since they typically require personal data and enable tracking of
online activity. These schemes also often fail to meet their objectives,
instead posing new barriers to access to information for everyone, not just
children.
The Online Safety Act
gives Ofcom the authority to require a social media platform to identify and
swiftly remove publicly posted terrorist or child sexual abuse content. This is
not controversial, since such material should not be anywhere on the Internet;
child sexual abuse content, in particular, is vile and illegal, and there are
public tools designed to facilitate its detection, investigation, and removal.
But the act also gives Ofcom the authority to order companies to apply
technology to scan private, user-to-user content for child sexual abuse
material. This may sound legitimate, but such scanning would require monitoring private communications, at the risk of breaking the encryption that is fundamental to Internet security. If enforced, the provision would open the door to precisely the kind of surveillance that authoritarians covet as a means of gaining access to dissidents’ communications. The potential for such interference
with digital security is so serious that the heads of Signal and WhatsApp, the
world’s leading encrypted messaging services, indicated that they would leave
the British market if the provision were to be enforced. For them, and those
who use the services, encryption is a guarantee of privacy and security,
particularly in the face of criminal hacking and interference by authoritarian
governments. Without encryption, all communications would be potentially
subject to snooping. So far, it seems that Ofcom is steering clear of such
demands. Yet the provision stands, leaving many uncertain about the future of
digital security in the UK.
Legislating For The Culture Wars
In Washington,
meanwhile, U.S. Senators Richard Blumenthal and Marsha Blackburn proposed the
Kids Online Safety Act (KOSA) in February 2022, which combines elements of both
the EU and the UK approaches. After its latest modification to address some
criticisms, the bill has now received the support of enough senators to give it
its best chance of adoption. To be sure, KOSA has some positive elements
designed to protect children online. For instance, it has strong rules to
prevent manipulative targeted advertising to minors and borrows from some of
the DSA’s innovations to boost platforms’ transparency. Rather than demanding
specific age verification approaches, KOSA would require the government’s National Institute of Standards and Technology (NIST) to study alternatives and propose appropriate methods.
Yet at its core, KOSA
regards the Internet as a threat from which young people ought to be protected.
The bill does not develop a theory for how an Internet for children, with its
vast access to information, can be promoted, supported, and safeguarded. As
such, critics including the Electronic Frontier Foundation, the American Civil
Liberties Union, and many advocates for LGBTQI communities still rightly argue
that KOSA could undermine broader rights to expression, access to information,
and privacy. For example, the bill would require platforms to take reasonable
steps to prevent or mitigate a range of harms, pushing them to filter content
that could be said to harm minors. The threat of litigation would be ever
present as an incentive for companies to take down even lawful, if awful,
content. This could be mitigated if enforcement were in the hands of a
trustworthy, neutral body that, like Ofcom, is independent. But KOSA places
enforcement not only in the hands of the Federal Trade Commission but also, for some provisions, in those of state attorneys general—elected officials who have become
increasingly partisan in national political debates in recent years. Thus, politicians in each state could wield power over KOSA’s enforcement. When Blackburn said that her bill pursued the goal of “protecting
minor children from the transgender in this culture,” she was not reassuring
those fearing politicized implementation. U.S. Senator Ron Wyden, a longtime
champion of Internet speech and privacy, warned that KOSA would enable state
officials to “wage war on important reproductive and LGBTQ content.” If
enforced by culture warriors in government, KOSA could deny young people access to information essential to their own development and ideas.
Even apart from KOSA,
states, individual litigants, and the courts are also getting deeply involved
in Internet regulation. For example, Texas and Florida adopted laws in 2021 aimed at limiting companies’ ability to remove content from their platforms; both prohibit platforms from “censoring” political content. Texas bars companies from moderating lawful expression even when it violates platform terms of service (for example, certain hateful content, disinformation, and harassment), and it authorizes actions brought by users and by the attorney general. Florida’s law
imposes stringent penalties on companies that “de-platform” political
candidates, among other actions, and also delegates power to individuals and
executive departments to enforce the law, with the possibility of significant damage awards for individual claimants. Both states have thus created processes by which politicians can demand that companies leave up or remove particular content, putting government directly in the
middle of content moderation. This is, then, a demand for regular government
monitoring of speech. Both laws are before the Supreme Court, which very well
may strike them down. But the trend toward government speech regulation is
unlikely to go away.
Wrong Way
There is no doubt
that a long-awaited reckoning for Internet platforms has arrived. Yet, for all
the harms platforms cause or facilitate, they remain sources of
information and debate for people of all ages in democracies around the world.
For all the talk of platform safety, it is user speech that is at issue in
these laws, not the underlying business model that makes the platforms so
dominant in democracies’ information environments. The fundamental risk of
safety-driven efforts is that they fail to protect or promote what is valuable
about online speech and access to information. In guarding against this, the EU
is far ahead of the others. Brussels has largely taken a process-driven,
transparency-focused, risk-assessment approach that, although it imposes
pressures on platforms to behave well, mostly avoids giving governments the
tools to demand takedowns of specific legal content. It is for this reason that
Breton’s political posturing raises such deep concerns among DSA supporters.
Not only did he risk company and civil society support for transparency and
risk-assessment measures, but he also provided a precedent for others, with
possibly nefarious intentions, to weaponize the DSA for their political
purposes.
The picture is not
altogether dark. The backlash against Breton’s threats may put the DSA on a
sturdy footing as the European Commission works on its implementation, and the
legislation could push the platforms toward transparency and risk assessment
globally. In the UK, Ofcom’s early efforts to implement the Online Safety Act
suggest it is alert to the risks of over-enforcement and thus may seek to
interpret the rules in ways that promote access to information and increase the
platforms’ attention to online harms. The U.S. Supreme Court could put the
brakes on politicized online speech rules at the state and federal levels,
forcing legislators to look for new ways of holding platforms to account.
Without other
guardrails, however, there will be a continual risk that government content
rules will simply provide new tools for politically motivated parties to crack
down on speech they dislike. Instead of focusing single-mindedly on safety,
governments should introduce reforms aimed at protecting individuals’ rights, especially privacy and freedom of expression, while also limiting the power and dominance of major platforms. They should be
focusing on how to promote diversity of choice and information sources for
people online, interoperability among platforms to enable user autonomy, and the health, vitality, and accessibility of independent media, particularly public service media.
Certainly, the
European Union deserves credit as a first-mover for a comprehensive approach to
Internet harms. But others that are far behind must still introduce a genuine package of reforms. At a minimum, it should include comprehensive privacy regulation
that addresses the industry’s business model by banning its behavioral tracking
of users, marketing of their preferences, and sales of their data to data brokers. Sound and
creative antitrust policy must also be introduced to address the legitimate
concern that companies are limiting user choice and autonomy, and turbocharging
their addictive features. Finally, there must be a transparent human rights
risk assessment and public oversight of any such legislation to limit political intimidation and empower civil society and independent agencies. With its content-only focus, and all the tradeoffs that entails, the current race to crack down on Internet harms is unlikely to solve these problems and may lead only to new forms of politicized speech control.