Wednesday, July 31, 2013

Can Twitter be blamed for bad behaviour?


The case of the abusive tweets sent to campaigner Caroline Criado-Perez has once again highlighted the difficult issues that surround user-generated content.
Ms Criado-Perez's situation clearly demonstrates the potential for social networks to be used as a vehicle for harassment. After her involvement in the successful campaign for a woman to be featured on a forthcoming banknote, she received a torrent of abusive messages, including threats of rape.
When Ms Criado-Perez reported the matter to Twitter, the response was not what she expected – she was told to report the matter to the police, who have now arrested a 21-year-old man on suspicion of harassment offences.
Twitter now faces a welter of criticism about its reaction to the situation and its policy about reporting abuse. It’s now reported to be planning to introduce a “report abuse” button similar to those seen on many news sites or forums.
At the end of 2012, the then Director of Public Prosecutions, responding to prison sentences being handed out to Twitter users, gave guidance that prosecutions would only be sought where the published material was grossly offensive or criminal. In Ms Criado-Perez's case, the inclusion of threats clearly pushed the messages over that line.
However, as with the argument around the blocking of pornography by ISPs, the ability to report abuse on social networking sites would require the companies to become arbiters of a complex and finely shaded area of law, and to take on the cost of the huge resource needed to monitor the flow of content.
The problem, however, is not completely new. Threatening phone calls and text messages have been creating misery for users for many years. Networks always advise reporting the case to the police, and their ability to support the abused customer is limited (changing a phone number, for example). Companies like Twitter could perhaps be forgiven for wondering why they are required to go further than long-established mobile and telecoms operators.
The treatment of people who express strong views in public is cause for concern, and this seems to be a particular issue when the views expressed are those of a woman, but there is a risk of confusing the abuse itself with the medium that carries it.
As the medium of much modern discourse, it’s right to expect social networks to work closely with law enforcement when abuse takes place. This means that records must be kept to enable investigations, but this is quite different from a further growth of privatised censorship.
Just as with the debate around other forms of abuse, the answer here is to attack the problem at its root – in this case the irrational reaction of some people toward women who express strong opinions in public – rather than trying to sweep the problem under the carpet by moderating a messaging system carrying over 33,000 messages per second.

Monday, July 22, 2013

No Budget to Block Porn? Confuse the Public and Rope In ISPs...

For the past month or so, the UK government has increased its hot-air output on the subject of online pornography. I hope their aims are admirable (and I have to assume they are), but there seems to be relatively little method and much more madness right now. Where are they going wrong, and what can be done about it?

Not all porn is child abuse. Following two recent, high-profile cases where child murderers were found to have viewed child abuse images, there were a number of hasty pronouncements, fuelled in large part by "enthusiastic" press coverage. Most of these centred on "regular", legal pornography.

This is a problem. Even if most viewers of abuse imagery do also view legal porn, it doesn't follow that viewing legal porn leads to viewing child abuse imagery. Users of illegal drugs also purchase headache tablets in the supermarket - should we ban all painkillers because users might turn to illegal drugs? I fear, however, that good sense makes poor headlines, so we're probably stuck with this crooked thinking.

It is difficult to decide what is "porn": in order to protect children, there is a suggestion that ISPs block access to porn "by default" (though there seems to be some weaselling on the cards here around the word "default"). However this is implemented, the question will arise: "who decides what is pornography?". In this case, it won't be the government, as they've devolved responsibility to a private organisation (your ISP), who will further devolve it to a filtering company.

I know a little about the inner workings of one such filtering company - we at Smoothwall put quite some effort into making sure things are as well categorised as they can be. It's a difficult question - one US judge famously came up with an interesting answer: "I know it when I see it." Our lists aren't perfect, but the "lowest bidder" is likely to be some faceless off-shore corporate who frankly won't give a <censored> if your favourite sports forum has been misidentified as pornographic.

Update: The BBC have picked up on this outsourcing of filtering and identified TalkTalk's filtering partner as Huawei, who have been stuck with the "they must be up to no good because they're from China" tag - a nasty generalisation, but one prevalent in the media right now. It's interesting to note that TalkTalk themselves appeared to distance themselves from Huawei by playing up their links with Symantec (having spoken with industry insiders, this is not news...). This shows that we're already seeing a company viewed as "undesirable" making moral decisions on behalf of TalkTalk's customers. See also, wedge: thin end.

Many very popular sites have plenty of porn, and ISP-level blocking is going to be pretty brutal. I will have a good old nibble of my hat if we get anything better than domain blocking, but if there's full HTTPS inspection, I'll eat the thing whole, and the matching gloves, before moving to a country with a less invasive government (and preferably hot weather, as I will have ingested my hat & gloves).

Let's take an example of why filtering needs granularity to be any good: Twitter. Whilst indulging in a spot of online ornithology, you might enter the search term "great tits". There you go: plenty of porn-over-HTTPS on a domain you can't block. Time to legislate seven shades out of Twitter, and the next site, and the next...
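To see why domain-only blocking is so blunt, here's a minimal sketch of the decision an ISP filter faces. The blocklist and function names are hypothetical, invented for illustration; the underlying point is real: without HTTPS inspection, the ISP sees only the hostname of an encrypted connection, never the search path or query, so its only options for a site like Twitter are "block everything" or "allow everything".

```python
from urllib.parse import urlparse

# Hypothetical domain blocklist, for illustration only.
BLOCKED_DOMAINS = {"example-porn-site.com"}

def isp_would_block(url: str) -> bool:
    """Sketch of an ISP-level filter's visibility into a request."""
    parts = urlparse(url)
    if parts.scheme == "https":
        # For HTTPS, only the hostname is visible (via DNS/SNI).
        # Per-page decisions are impossible: block the domain or nothing.
        return parts.hostname in BLOCKED_DOMAINS
    # For plain HTTP the full URL is visible, so finer-grained
    # (if still crude) path matching is at least possible.
    return parts.hostname in BLOCKED_DOMAINS or "porn" in parts.path.lower()

# The ornithology search above sails through, because stopping it
# would mean blocking all of twitter.com:
print(isp_would_block("https://twitter.com/search?q=great+tits"))  # False
```

Anything short of full HTTPS inspection leaves exactly this gap, which is why "block porn by default" and "domain blocking" sit so awkwardly together.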

Finally, let's touch on an old favourite hobby horse of mine: the Internet is not the Web. There are plenty of non-web services out there, from the old school like NNTP newsgroups, to the more modern like encrypted peer-to-peer, and a bunch in between where some of the worst images are found. If we aim at Google, we're preaching to the choir: they already work with the relevant bodies to keep their results as clean as possible. Again, this is focusing on the wrong place if the real aim is to clean up child abuse imagery.

My suggestion? Make sure the bodies responsible for this sort of thing are adequately funded. I would like to see the creation and distribution of child abuse images come to a complete stop. These latest proposals take aim at two targets, though, and when you try to aim at two things at once, at least one shot is likely to miss the target, let alone the bull's-eye.

Friday, July 5, 2013

Meet the sarcasm monitor - coming to a social network near you...

Okay, so we already know our personal details are 'out there', in the hands of companies who collect our data to sell to third parties. Big Data is big business!

Tracking technologies like marketing analytics, digital footprinting, and cookies all help to build a detailed picture of you: what you had for breakfast, where you ate last night and even your home address.

Spotter, a French company, has reportedly taken things a step further with the development of a tool that detects if a comment posted online has a “sarcastic” tone. Presumably their clients will use the findings as some form of business intelligence.

Obviously, the tool's usefulness depends on where your company does business. For an international company like Smoothwall it could be relevant for tracking our British customers, because sarcasm is a staple of our humour. However, it will probably be next to useless for monitoring comments from customers in parts of the world where sarcasm isn't part of daily conversation. It would also be interesting to see whether it can identify the full spectrum of irony.
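Spotter haven't published how their tool works, so purely as a toy sketch, here is the kind of surface-cue heuristic a naive sarcasm detector might rely on. Every cue phrase and rule below is invented for illustration; the point it makes is the one above: cues like these are culturally specific, which is why such a tool travels badly.

```python
# Toy sarcasm heuristic - NOT Spotter's algorithm, which is unpublished.
# It flags common surface cues of (largely British) sarcasm.
SARCASM_CUES = ("oh great", "just brilliant", "how wonderful", "thanks a lot")

def looks_sarcastic(comment: str) -> bool:
    text = comment.lower()
    # Cue phrases that often signal mock enthusiasm.
    has_cue = any(cue in text for cue in SARCASM_CUES)
    # Positive sentiment sitting next to a complaint is another weak signal.
    mixed_tone = ("love" in text or "great" in text) and "delayed" in text
    return has_cue or mixed_tone

print(looks_sarcastic("Oh great, another train delayed. Thanks a lot."))  # True
print(looks_sarcastic("The train arrived on time today."))                # False
```

A detector built like this would shrug at deadpan irony, understatement, or any sarcasm that doesn't announce itself with a stock phrase - which rather supports the scepticism above.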

The UK sales director at Spotter, Richard May, assures us that the company only monitors material that is "publicly available". Thanks for the reassurance! (Did you get that one, Spotter?) Seriously though, how can we be sure?


Search giant Google was slammed for circumventing the default settings in Apple's Safari browser, installing tracking cookies even for users who had opted out of third-party cookies. Facebook is not so friendly either, reportedly scanning users' personal messages to increase its "like" counter.

Spotter's chosen time to come to market doesn't seem so good. People are already more aware than ever that Big Brother is watching: in a global survey by Big Brother Watch, 79% of respondents said they were concerned about their online privacy. Wherever we are, we must watch what we say online; there have been plenty of cases in the media of people being disciplined or fired for being vocal online about things that happen at work.


The Edward Snowden revelations have made us more worried still. Just how much do they know? The answer: a lot! As I write, GCHQ could be trawling through your Facebook posts, internet history and phone calls. It's for our own good, you know – to protect our freedom, says William Hague. How free do you feel? Not so much?