LONDON — The European Union is nearing agreement on a set of new rules aimed at protecting internet users by forcing big tech companies like Google and Facebook to step up their efforts to curb the spread of illegal content, hate speech and disinformation.
EU officials were negotiating today over the final details of the legislation, dubbed the Digital Services Act. It’s part of a sweeping overhaul of the 27-nation bloc’s digital rulebook, highlighting the EU’s position at the forefront of the global movement to rein in the power of online platforms and social media companies.
The rules still need formal approval by the European Parliament and the European Council, which represents the 27 member countries. Even so, the bloc is far ahead of the United States and other countries in drawing up regulations that would force tech giants to protect people from harmful content proliferating online.
Negotiators from the EU's executive Commission and the member countries, led by France, which holds the rotating EU presidency, were working to hammer out a deal before the end of Friday, ahead of France's presidential election on Sunday.
The new rules, which are designed to protect internet users and their “fundamental rights online,” would make tech companies more accountable for content on their platforms. Social media platforms like Facebook and Twitter would have to beef up mechanisms to flag and remove illegal content like hate speech, while online marketplaces like Amazon would have to do the same for dodgy products like counterfeit sneakers or unsafe toys.
These systems would be standardized so that they work the same way on any online platform.
That means “any national authority will be able to request that illegal content is removed, regardless of where the platform is established in Europe,” the EU’s single market commissioner, Thierry Breton, said on Twitter.
Companies that breach the rules face fines amounting to as much as 6% of their annual global revenue, which for tech giants would mean billions of dollars. Repeat offenders could be banned from the EU market.
Google and Twitter declined to comment. Amazon and Facebook didn’t respond to requests for comment.
The Digital Services Act also includes measures to better protect children by banning advertising targeted at minors. Online ads targeted to users based on their gender, ethnicity and sexual orientation would be prohibited.
There also would be a ban on so-called dark patterns — deceptive techniques to nudge users into doing things they didn’t intend to.
Tech companies would have to carry out regular risk assessments on illegal content, disinformation and other harmful information and then report back on whether they’re doing enough to tackle the problem.
They would also have to be more transparent, providing information on content moderation efforts to regulators and independent researchers. This could mean, for example, making YouTube turn over data on whether its recommendation algorithm has been directing users to more Russian propaganda than normal.
To enforce the new rules, the European Commission is expected to hire more than 200 new staffers. To pay for it, tech companies will be charged a “supervisory fee,” which could be up to 0.1% of their annual global net income, depending on the negotiations.
The EU reached a similar political agreement last month on its Digital Markets Act, a separate piece of legislation aimed at reining in the power of tech giants and making them treat smaller rivals fairly.
Meanwhile, Britain has drafted its own online safety legislation that includes prison sentences for senior executives at tech companies who fail to comply.