A PROMINENT nationalist blogger is arrested over allegations of harassment on Twitter. Two Scottish Tory councillors are reinstated after apologising for Twitter posts that were deemed offensive to Catholics and people of colour. Yvette Cooper, the chair of the House of Commons Home Affairs Committee, says that social media websites are failing to deal with “sexist, racist and misogynistic abuse”. The Director of Public Prosecutions in England, Alison Saunders, has launched a crackdown on online hate speech. It’s been another difficult week for the social networks.

Clearly, public attitudes are hardening against online abuse and against the social media giants that dominate the internet. In a landmark move in June, the German parliament passed what is being called the “Facebook Law”, imposing fines of up to €50 million on social media companies that fail to take down hate speech, defamation and incitements to violence within 24 hours. The European Union has produced similar guidelines against fake news and terrorist videos. It can surely be only a matter of time before other governments impose similar regulation on web companies, if only because it will take international action to suppress online terrorism and racial hate crime.

Social media is heading for the greatest crisis in its short and turbulent history, and it is an open question whether the tech giants which dominate the online world are going to survive it. Companies like Twitter and Facebook, it seems to me, are roughly where Volkswagen was when the diesel emissions scandal broke two years ago. They insist that they are cracking down on emissions of fake news and hate speech, but everyone knows they aren’t serious. The reason they aren’t serious is that social media companies dare not take formal responsibility for the content posted on their sites. If they did, they’d have to admit that they are publishers and not tech companies; that they are part of the media, like newspapers and broadcasting.

Web behemoths are protected by what is called the “safe harbour” provision in the US Communications Decency Act of 1996. The Clinton administration wanted to build the internet superhighway, and was persuaded to clear away any regulatory roadblocks. So it gave web “platforms” an extraordinary legal immunity. Section 230 of the Act says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. At a stroke, this freed the web from laws on defamation, plagiarism, race hatred, and incitement that apply to the conventional media.

Newspapers, for all their many faults, cannot publish content that is libellous, racist or threatening, either in print or on their websites. The comments on the online version of The Herald are very closely monitored, unlike similar posts on Twitter, for the very simple reason that the editor of this paper, or the publishers, could be fined or even jailed.

The law does not permit newspapers or news websites like Buzzfeed to display damaging lies or racist abuse on their pages for 24 hours, let alone weeks and months. Yet, in May, the Commons Home Affairs Committee said it had found neo-Nazi material and terrorist recruitment videos remaining online on Twitter and Facebook even after MPs had complained about them. Posts encouraging child abuse and sexual images of children were accessible months after the social media companies had been notified by journalists.

YouTube is the biggest broadcaster in the world, but it takes no responsibility for Islamic State (IS) recruitment videos or the racist and anti-Semitic material which you can access in a matter of minutes. The BBC is an international broadcasting brand of similar stature, but it could not transmit Islamist hate videos on its iPlayer website, and it is increasingly difficult to see why there is a difference. The days are long gone when the world wide web was a creaky set of bulletin boards and discussion groups run by enthusiastic academics and internet idealists. Social media companies are among the wealthiest on the planet, and they are essentially monopolies. Facebook’s market capitalisation in July was $500 billion and, like Google, it makes billions from selling advertising space around content which it does not generate and for which it accepts no liability.

Defenders of tech companies say you wouldn’t sue a public library for content on its shelves, so why should Google be responsible for content in its search results? There is something in this argument, but it is also fatally flawed. I certainly don’t recall seeing jihadi beheading videos on my local library’s DVD shelves, and nor do you see posters on the walls of the Mitchell Library threatening to kill politicians. Moreover, libraries do not make billions out of selling advertising around their bookshelves.

Indeed, the only time social media companies appear to be genuinely concerned about the content on their websites is when advertisers complain about it, as was the case when adverts for Mercedes-Benz, Waitrose and Marie Curie were found to be appended to IS and neo-Nazi videos on YouTube. Google, which owns YouTube, moved like lightning to protect its business interests. It is the money that web companies make from their websites that makes them unable, ultimately, to shrug off responsibility.

Regulation is coming to the internet whether the tech giants like it or not, but we need to be sure that this will not throw the internet baby out with the social media bathwater. The world wide web represents a massive advance in human civilisation. The world’s knowledge is now at the fingertips of anyone with a mobile phone. Humanity has become interconnected almost in real time. Machine intelligence, and big data, could take on most of the boring and repetitive administrative tasks, and liberate us from bureaucracy. At its best, the internet has been a great engine of free speech, allowing different views to collide and coalesce over the digital pathways.

We can’t allow the handful of companies which dominate the internet to endanger all this by harnessing their business models to the promotion of the very worst aspects of the human condition: the dissemination of lies and hate, fear and terror. It is time to review Section 230, apply editorial standards, and restore order to the wild west of the world wide web.