The Twitter Compromise is Much More than a Cybersecurity Hack …

And it’s all about CRUD

Social Media Companies are Hosters of Content and Not Publishers!

In cybersecurity, we define a thing called CRUD … create, read, update and delete. It defines the rights that someone has over a resource, and where the owner of that resource has the right to define the access policy. It is extremely important to get this right: someone might have the rights to read some information, but not the rights to delete or update it. It's fundamental.

And so, as a high-level policy, you would think that Twitter (and Facebook) would be able to "Read" … that's a given for their platform … and "Delete" … to stop abuse. But "Create" and "Update"? Surely these would be so privileged that even Jack Dorsey himself couldn't do that without it being checked by every government in the world? What we saw with the CryptoForHealth hack was that Twitter has full CRUD rights, and that's a worry for our democracy, and goes much deeper than social engineering or password hacks.
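The kind of rights matrix described above can be sketched in a few lines. This is a minimal illustration, not Twitter's actual access model: the role names and the split of rights are assumptions made for the example.

```python
# A minimal sketch (not Twitter's real access model) of a CRUD rights
# matrix for content on a social media platform. The roles and the
# rights assigned to them are illustrative assumptions.

CRUD = {"create", "read", "update", "delete"}

RIGHTS = {
    "owner":    {"create", "read", "update", "delete"},  # the account holder
    "platform": {"read", "delete"},   # enough to serve and moderate content
    "public":   {"read"},             # everyone else
}

def allowed(role: str, action: str) -> bool:
    """Return True if the role holds the given CRUD right."""
    assert action in CRUD, f"unknown action: {action}"
    return action in RIGHTS.get(role, set())

print(allowed("platform", "delete"))  # True: moderation is possible
print(allowed("platform", "create"))  # False: no posting as the user
```

Under a policy like this, a compromised platform-side account could delete tweets but could never have posted the CryptoForHealth scam messages in the first place.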

The flaw in this is the "admin" account, where the administrator has the complete rights to overrule any policy, or always has complete rights to do whatever they want. We did a research project a few years ago (which is now a very successful spin-out called Symphonic) that related to the gathering of health care data from patients, and which gave them the rights to share their data with whoever they wanted. By default, no one apart from the citizen had rights to their data. And the first thing we did was to remove the administrator account from gaining any access to the data, as it was the core security risk.
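The "no admin back door" idea can be sketched as follows. This is an illustrative sketch of the design principle, not the Symphonic product's API: all the names here are assumptions, and the point is simply that there is no code path which grants an administrator access the owner has not delegated.

```python
# Sketch of owner-defined access with no administrator override:
# access is granted only by grants the owner has made, and there is
# deliberately no "admin" role that bypasses the check.
from dataclasses import dataclass, field

@dataclass
class Record:
    owner: str
    grants: dict = field(default_factory=dict)  # requester -> set of rights

    def grant(self, by: str, to: str, rights: set):
        # Only the owner can delegate access; no one else, admin included.
        if by != self.owner:
            raise PermissionError("only the owner can grant access")
        self.grants[to] = set(rights)

    def can(self, who: str, right: str) -> bool:
        if who == self.owner:
            return True
        # No special case for "admin": unknown requesters get nothing.
        return right in self.grants.get(who, set())

rec = Record(owner="alice")
rec.grant(by="alice", to="dr_bob", rights={"read"})
print(rec.can("dr_bob", "read"))   # True: explicitly granted
print(rec.can("admin", "read"))    # False: no override exists
```

The security property falls out of the structure: because no branch of `can()` mentions an administrator, compromising an admin credential gains an attacker nothing.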

It’s more fundamental than a cybersecurity compromise

While some might say that the Twitter compromise is just another in a long line of hacks, it actually cuts into the core of our democracy on the Internet. As cybersecurity professionals, we might wave a finger and shake our heads, and say that this should have been done, or that. But not in this case, as social media now has a worldwide power that few governments would ever come near. It is a power which is increasing by the day, and which is becoming the window into our world.

At the core of this is power. Social media now has more power than most governments in the world. In fact, social media has the power to bring down a government in an instant, and many of the prompts for change now come from social media rather than from respected commentators. But if a social media company wants, it can take away anyone's voice, with little in the way of recourse.

Like it or not, we are replacing our ancient democratic structures (built on ancient borders, the law of the land, and respected media sources) with new digital governance, and the power now lies with the social media companies. For them, there is little in the way of physical borders to separate us. And at the core of the debate are the demands of governments around the world to regulate the social media providers and make them responsible for the content posted on their platforms.

Why do Twitter have access to our accounts?

Well, that’s an easy question. Twitter needs to strongly police its platform, as Twitter has so much power for influence and political gain. As we saw with the live streaming of the shootings in New Zealand, terrible abuse can be propagated in an instant. Social media companies must thus detect abuse on their platforms, and stop its spread.

And so they have a tightrope to walk. They want to protect the right of freedom of speech, but they must protect their users. Whenever a new event happens these days, we see people immediately turning to Twitter to see the true (or fake) events. Broadcast media does it too. And so Twitter must prime itself with a whole raft of media filters and detectors, in the way that companies now have to use Security Operations Centres (SOCs) to detect cybersecurity threats. These then try to detect abuse or fake news as it happens. While some of this can be done with machine learning, we still need humans there to make sense of it all.

The dilemma for social media companies is this … how can they “police” the Internet without taking away too many freedoms? So who decides what it is possible to post on social media? Well, increasingly, it is a new police force … the social media moderators. While the print and broadcast press is under increasing regulation on the publishing of its content, the social media networks have generally been given free rein on posts that would never be allowed in other forms of media. Most people now turn to Twitter whenever a serious news event happens, as the traditional media channels are often limited in the things they can broadcast.

To be seen to be doing something about the distribution of objectionable content, Facebook recently reacted by recruiting over 3,000 moderators. In order to train them, the company has over 100 manuals which inform the moderators about posts that are too violent, sexual, racist or hateful, or which support terrorism.

Most automated methods use sentiment analysis, but those posting the content can overcome this by using calming language. Facebook is also asking its users to become its policing force and inform it about objectionable content. A recent study in the UK found that, over a three-week period, over 6,000 women in the UK received abusive and misogynistic tweets with terms such as “slut” and “whore”.
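The evasion problem is easy to demonstrate. The sketch below is a deliberately crude stand-in for a real classifier (no platform uses anything this simple): a block-list filter catches the overtly abusive wording quoted above, while an abusive post written in calm language sails straight past it.

```python
# Illustration of why naive keyword/sentiment filtering is easy to
# evade. The block list is a tiny illustrative stand-in for a real
# abuse classifier, using the terms quoted in the UK study above.

BLOCKLIST = {"slut", "whore"}

def flag(post: str) -> bool:
    """Flag a post if any of its words appears on the block list."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

print(flag("You are a whore!"))                    # True: overt abuse caught
print(flag("Women like you should stay silent."))  # False: calm wording slips past
```

This is why machine learning alone is not enough, and why the human moderators described above remain part of the pipeline.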

Recently, too, in the UK, the Labour MP Anna Turley proposed the Malicious Communications (Social Media) Bill, under which objectionable social media posts could carry a penalty of up to £2 million (or 5% of global turnover). The Bill proposed that Ofcom would widen its scope and regulate social media companies. Along with this, it proposed an automatic filter on social media for those under the age of 18, and that social media companies would have to verify the age of their users.

Section 230

And so, for the 30 years of the Web’s existence, it has basically run wild and policed itself, but now new laws may be coming its way, and the days of self-policing are gone. At the core of this freedom is Section 230, which defined that ISPs were just hosters of content and not publishers. It ruled that ISPs were not responsible for the content they hosted, and led to the 26 words that made the Internet:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

But the EARN IT (Eliminating Abusive and Rampant Neglect of Interactive Technologies) Act of 2019 aims to overcome this.

Basically, we have now reached a point where the rights of citizens to privacy are pitched against the rights of society to protect itself. With the ever-increasing problem of cybercrime, we must now fix the broken Internet we have created and build it properly, with security at its core. This is done with data encryption, but encryption is increasingly closing the door on the ability to investigate data on devices.

The development of the Internet saw great opportunities for Internet service providers, and Section 230, passed in the 1990s, provided a way for them to develop without being held responsible for what was said or done on their platforms. In this way, Twitter could host a site where its users abused each other, without being held responsible. It provided immunity for these companies. Along with this, the CALEA (Communications Assistance for Law Enforcement Act) supported wiretapping within communication providers. But this wiretapping has little effect, as most communications involve encryption tunnels, and there is no requirement for Web hosting companies to store the encryption keys involved. With the increasing use of end-to-end encryption, the only entities with the keys are the two endpoints.

But now we are in a phase of “techlash”, where we are railing against the overpowering control of the Internet in our lives and the growth of surveillance capitalism. For many, the immunity of the tech companies is now a threat, and the providers of these Internet services should now be held accountable for the bad things that their platforms host. For every billion posts of family pictures on Facebook, there is a gunman who films himself live.

Humans are sometimes not nice people, and that is not new in our world. But their actions are made a whole lot worse in an Internet-enabled world.

Fake News!

We have built an almost completely untrusted Internet, where little that we see can really be fully trusted. But now the countries of the world are fighting back: Singapore has ordered Facebook to tag online falsehoods from repeat offenders. This is part of its online falsehood law, the Protection from Online Falsehoods and Manipulation Act (POFMA), which aims to address the rise in fake news. The penalties are severe: up to three to five years’ imprisonment, a SG$30,000/SG$50,000 fine, or both. If a bot is involved, or where the falsehood is spread by unauthorized accounts, the fine is doubled. Intermediaries could also be fined up to SG$1 million, with an additional daily fine of SG$100,000.

The Singapore government gives the example of a Facebook post by the States Times Review (“STR”) on 13 February 2020, which made the following claims about the COVID-19 situation:

  1. The Government is unable to trace the source of infection for any of the infected COVID-19 cases in Singapore.
  2. The Government is “the only one” telling the public not to wear a mask.
  3. Each “China worker” will also get S$100 a day for 14 days of Leave of Absence, fully paid for by the Singapore government.
  4. Minister for Manpower, Mrs Josephine Teo, said that she was working hard to bring more workers from China into Singapore.
  5. Seven countries have since banned travel to Singapore, citing lack of confidence in the Singapore government’s public health measures

The government reported each of these claims as completely inaccurate.

Conclusions

The fundamental question is not whether Twitter should have access to accounts in order to detect and disable abuse, but that it was able to actually post things from every account, without much in the way of safeguards. Used in a bad way, we become a Big Brother society, where social media defines who should and shouldn't have a voice. The sooner we have a distributed Internet, the better, as we increasingly have the power of the Internet in the hands of a few. Twitter’s policy should be that they can read and delete (within strict rules), and cannot create or update (unless they want to become a proper media outlet, of course).