Friday, May 15, 2015

Game of 72 Myth or Reality?

I can’t pretend that, in the mid-90s, I didn't pester my mum for a pair of Adidas popper joggers. Or that I didn't, against my better judgement, strut around in platform sneakers in an attempt to fit in with the in-crowd. But emulating popular fashion was as far as I got. I don’t remember ever doing stupid or dangerous dares to impress my classmates. Initially I thought maybe I was just a good kid, but a quick straw poll around Smoothwall Towers showed that my colleagues don’t recall hurting themselves or anyone else for a dare either. The closest examples of a prank we could come up with between us were knock-and-run and egg-and-flour; hardly show-stopping news.
But now, teenagers seem to be taking daring games to a whole new level through social media, challenging each other to do weird and even dangerous things. Take the #cinnamonchallenge on Twitter (where you dare someone to swallow a mouthful of cinnamon powder in 60 seconds, without water). A quick check of the hashtag shows it’s still a thing today, despite initially going viral in 2013, and despite doctors having warned teens about the serious health implications.

Now, apparently, there’s another craze doing the rounds. #Gameof72 dares teens to go missing for 72 hours without contacting their parents. The first suspected case was reported in a local French newspaper in April, when a French student disappeared for three days and later told police she had been playing Game of 72. Then, in a separate incident on 7 May, two schoolgirls from Essex went missing for a weekend in a suspected Game of 72 disappearance. Police later issued a statement to say the girls hadn't been playing the game.

So why, despite small incident numbers and the absence of any actual evidence that Game of 72 is real, are parents and the authorities so panicked? Tricia Bailey from the Missing Children’s Society warned kids of the “immense and terrifying challenges they will face away from home.” And Stephen Fields, a communications coordinator at the Windsor-Essex Catholic District School Board, said “it’s not cool”, and warned that students who participate could face suspension.

It’s completely feasible that Game of 72 is actually a myth, created by a school kid with the intention of worrying the adults. And it’s worked; social media has made it seem even worse, when in reality it’s probably not going to become an issue. I guess the truth is we’ll probably never know, unless a savvy web filtering company finds a way of making these Twitter-mobile games trackable at school, where peer pressure is often at its worst. Wait a minute... we already do that.
Smoothwall allows school admins to block specific words and phrases, including Twitter hashtags. Say students were discussing Game of 72, or any other challenge, by tweet, and that phrase had been added to the list of banned words or phrases: the school’s administrator would be alerted, and the students' parents could be notified. Sure, it won’t stop kids getting involved in online challenges altogether, because they could take the conversation to direct messages and we’d lose it. But, I think you’ll probably agree, the ability to track what students are saying in tweets is definitely a step in the right direction.
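As a rough sketch of the idea (the phrase list and function name here are invented for illustration, not Smoothwall's actual implementation), a banned-phrase check can be as simple as a case-insensitive substring match over each message:

```python
# Hypothetical banned-phrase list; a real deployment would let admins edit this.
BANNED_PHRASES = {"#gameof72", "game of 72", "#cinnamonchallenge"}

def flag_message(text):
    """Return any banned phrases found in a message, matched case-insensitively."""
    lowered = text.lower()
    return sorted(p for p in BANNED_PHRASES if p in lowered)
```

A hit on `flag_message` would then trigger the admin alert; for example, a tweet containing "#GameOf72" would be flagged regardless of capitalisation.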

Wednesday, May 6, 2015

Bloxham Students Caught Buying Legal Highs at School


It’s true what they say: history repeats itself. This is especially true in the world of web security, where tech-savvy, inquisitive students try to find loopholes in school filters to get to where they want to be, or to what they want to buy.

Back in September we blogged about two high-profile web filtering breaches in the US, highlighting the cases of Forest Grove and Glen Ellyn Elementary District. Both made the headlines because students had successfully circumvented web filtering controls.

Now the media spotlight is on Bloxham School in Oxfordshire, England, after pupils were caught ordering legal highs from their dorms. See what I mean about history repeating itself? Okay, so the cases aren’t identical, but there is a unifying element. The Forest Grove student was found looking at erotica on Wattpad, students from Glen Ellyn were caught looking at pornography, and at Bloxham it’s “legal” highs. The unifying factor in all three cases is that they were facilitated by a failure in the school’s web filter.

The difficulty, though, is working out what exactly went wrong with Bloxham’s filter, because none of the technical details have been announced. Were students allowed access to websites selling recreational drugs, or was there an oversight on the part of the web filtering management? In the original story broken by The Times, a teenage pupil was reported to have been expelled, and other students disciplined, following an investigation by the school which found they had been on said websites.

Without knowing the details, it is probably wrong to speculate; however, I’m going to do it anyway! It’s entirely possible Bloxham chose a more corporate-focused web filter. In a corporate environment, “legal” highs may not present as much of an issue as in an education setting. With a strong focus on education, Smoothwall’s content filter has always been good at picking up these types of sites. This is aided by the real-time content filter, which is not reliant on a domain list: these sites are always on the edge of the law, and move rapidly. Because the law differs depending on where you live, and indeed is rapidly changing regarding these substances, Smoothwall doesn’t attempt to differentiate between the grey area of “legal highs” and those recreational substances on the other side of the law. All of them come under the “drugs” category. This gives a solid message across all age ranges, geographies and cultures: it’s best not to take chances with your health!
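To make the distinction concrete, here is a toy sketch of content scoring versus a domain list. The term weights, names and threshold are entirely invented; a real classifier is far more sophisticated, but the principle is that the page's own text is scored at request time, so a brand-new domain can't slip through:

```python
# Invented, illustrative term weights; a real filter would use a much
# larger model. The point: classification keys off page content, not domain.
DRUG_TERMS = {
    "legal high": 3,
    "herbal incense": 3,
    "research chemical": 2,
    "not for human consumption": 2,
}

def drugs_score(page_text):
    """Sum the weights of drug-related terms found in the page text."""
    text = page_text.lower()
    return sum(w for term, w in DRUG_TERMS.items() if term in text)

def categorise(page_text, threshold=3):
    """Assign the 'drugs' category when the score crosses the threshold."""
    return "drugs" if drugs_score(page_text) >= threshold else "unclassified"
```

A freshly registered storefront full of this vocabulary would be categorised on first sight, whereas a domain list would only catch it after someone had added the site.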

Wednesday, April 22, 2015

A new option to stem the tide of nefarious Twitter images...

Smoothwall's team of intrepid web-wranglers have recently noticed a change in Twitter's behaviour. Where once it was impossible to differentiate the resources loaded from twimg.com, Twitter now includes some handy sub-domains, so we can tell the optional user-uploaded images apart from the CSS, buttons, etc.

This means it's possible to prevent Twitter from loading user-content images without doing HTTPS inspection. Blocking images this way is a bit of a broad brush but, given the fairly hefty amount of adult content swilling around Twitter, it's far from being the worst idea!

Smoothwall users: Twitter images are considered "unmoderated image hosting" - if you had previously made some changes to unblock CSS and JS from twimg, you can probably remove those now.
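Under the hood, a decision like this can key off the hostname alone (visible in the TLS SNI), with no decryption needed. A minimal sketch follows; the specific sub-domains and their roles are assumptions for illustration, since Twitter may reorganise them at any time:

```python
# Assumed sub-domain roles (illustrative; subject to change by Twitter):
USER_CONTENT_HOSTS = ("pbs.twimg.com",)   # user-uploaded media
STATIC_ASSET_HOSTS = ("abs.twimg.com",)   # CSS, JS, buttons

def verdict(sni_hostname):
    """Block user-content image hosts; allow static assets and everything else."""
    if sni_hostname.endswith(USER_CONTENT_HOSTS):
        return "block"   # treated as unmoderated image hosting
    if sni_hostname.endswith(STATIC_ASSET_HOSTS):
        return "allow"   # the site still renders correctly
    return "allow"
```

The design point is that the browser sends the hostname in the clear during the TLS handshake, so a filter can act on it without ever touching the encrypted payload.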

Tuesday, March 31, 2015

Pukka Firewall Lessons from Jamie Oliver

In our office I’m willing to bet that food is discussed on average three times a day. Monday mornings will be spent waxing lyrical about the culinary masterpiece we’ve managed to prepare over the weekend. Then at around 11 someone will say, “Where are we going for lunch?” Before going home that evening, maybe there’s a question about the latest eatery in town. 

I expect your office chit-chat is not too dissimilar to ours, because food and what we do with it has skyrocketed in popularity over the past few years. Cookery programmes like Jamie Oliver's 30 Minute Meals, The Great British Bake Off and MasterChef have been a big influence.

Our food obsession, however, might be putting us all at risk, and I don’t just mean from an expanded waistline. Cyber criminals appear to have turned their attention to the food industry, targeting Jamie Oliver’s website with malware. This is the second time that malware has been found on the site. News originally broke back in February, and the problem was thought to have been resolved. Then, following a routine site inspection on 13 March, webmasters found that the malware had returned, or had never actually been completely removed.

It’s no surprise that cyber criminals have associated themselves with Jamie Oliver, since they’ve been leeching on pop culture and celebrities for years. Back in 2008, typing a star’s name into a search engine and straying away from the official sites was a sure-fire way to get malware. Now it seems they’ve cut out the middleman and gone straight to the source: this malware was planted directly onto JamieOliver.com.

Apart from bad press, Jamie Oliver has come away unscathed. Nobody has been seriously affected and the situation could have been much worse had the malware got into an organisational network. 

Even with no real damage done, there’s an important lesson to be learned: keep your firewall up to date so it can identify nefarious code contained within web pages or applications. If such code tries to execute itself on your machine, a good firewall will identify this as malware.

Wednesday, March 18, 2015

5 Important Lessons from the Judges Who Were Caught Watching Porn



I've never been in court before or stood in a witness box, and I hope I never do. If I am, however, called before a judge, I’d expect him or her to be donning a funny wig and a gown, to be of above-average intelligence, and to judge my case fairly according to the law of the land. What I would not expect is for that judge to be indulging while in the office, as these District Judges have done. Four of Her Majesty’s finest have been caught watching porn on judicially owned IT equipment. While the material didn't contain illegal content or child images, it’s easy to see why the case has attracted so much media attention. I mean, it’s the kind of behaviour you would expect from a group of lads on a stag do, not from a District Judge!

Now the shoe is on the other foot, and questions will be asked about how a porn culture was allowed to develop at the highest levels of justice. Poor web usage controls and a lack of communication were more than likely to blame. But speculation aside, the world may have passed the point where opportunity can remain unrestricted enough for things like this to happen. Employees, especially those in high positions, are more vulnerable and need protection. So here are 5 important lessons on web filtering from 4 District Judges:

1. Know Your Organisational Risk – The highest levels of staff pose the highest risk to the organisation. Failures on their part risk the credibility of the whole organisation.

2. Recognise Individual Risk – While not always the case, veteran leadership may be the least computer literate, and risk stumbling into ill-advised territory accidentally.

3. Communicate with Staff – Notification of acceptable use policies can go a long way to getting everyone on the same page, and helps with legal recourse when bad things do happen.

4. Be Proactive – Use a web filter to block what’s not acceptable instead of leaving that subject matter open to traffic. If you still want to give your staff some flexibility, try out a limit-to-quota feature.

5. Trust No One (Blindly) – Today’s internet environment makes a blind, trust-based relationship foolish. There is simply too much shady stuff out there, and much of it is cleverly disguised.

If there is anyone out there reading this and thinking "this would never happen in my organisation; my staff would never do that", think again, my friend. Nobody is perfect; the temptation to look at inappropriate content knows no bounds, including the heights of hierarchy. We’re all potential infringers, as proved by Judges Timothy Bowles, Warren Grant, Peter Bullock and Andrew Maw.

Thursday, March 5, 2015

Statement: Smoothwall and the "FREAK" Vulnerability

In light of the recent "FREAK" vulnerability, in which web servers and web browsers can be cajoled into using older, more vulnerable ciphers in encrypted communications, we would like to assure customers that the web server configuration on an up-to-date Smoothwall system is not vulnerable to this attack.

Similarly, if you are using "HTTPS Decrypt & Inspect" in Smoothwall, your clients' browsers will be afforded some protection from the attack, as their traffic will be re-encrypted by the web filter, which does not support downgrading to these "Export Grade" ciphers.
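For admins who want to spot-check a server of their own, the gist of a downgrade probe is to offer only export-grade cipher suites and see whether the handshake succeeds. The sketch below is illustrative only (the function name is invented, and a dedicated audit tool such as testssl.sh is far more thorough):

```python
import socket
import ssl

def accepts_export_ciphers(hostname, port=443, timeout=5):
    """Attempt a TLS handshake offering only export-grade cipher suites.

    Returns True only if the server completes the handshake with one,
    i.e. it looks FREAK-vulnerable. Returns False on any failure.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        # Restrict our ClientHello to export-grade suites only.
        ctx.set_ciphers("EXPORT")
    except ssl.SSLError:
        # Modern OpenSSL builds have removed export ciphers entirely,
        # so this client cannot even offer them.
        return False
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname):
                return True
    except (ssl.SSLError, OSError):
        return False
```

Note that on a recent OpenSSL build the probe short-circuits, because export ciphers have been stripped from the library itself, which is precisely the mitigation FREAK prompted.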

Wednesday, March 4, 2015

Searching Safely When HTTPS is Mandatory

Searching Safely when HTTPS is Mandatory


Nobody wants anyone looking at their search history. I get it. I mean, look at mine... oh wait, don't. That's quite embarrassing. Those searches were for a friend, honestly.

Fortunately for us, it's pretty difficult to dig into someone's search history. Google even forces you to log in again before you can view it in its entirety. Most search engines now encrypt our traffic by default, too, some even using HSTS to make sure our browsers always go secure. This is great news for consumers, and means our privacy is protected (with the notable exception of the search provider, who knows everything and owns your life, but that's another story).

This all comes a little unstuck, though: sometimes we want to be able to see inside searches. In a web-filtered environment it is really useful to be able to do this. Not just in schools, where it's important to prevent searches for online games during lessons, but also in the corporate world, where, at the very least, it would be prudent to cut out searches for pornographic terms. It's not that difficult to come up with a handful of search terms that give potentially embarrassing image results.

So, how can we prevent users running wild with search engines? The first option is to secure all HTTPS traffic with "decrypt and inspect" type technology. Your Smoothwall can do this, but you will need to distribute a certificate to everyone who wants to use your network to browse the web. This certificate tells the browser: "trust this organisation to look at my secure traffic and do the right thing". This gets back all the bells and whistles we were used to in the halcyon days of HTTP: SafeSearch, thumbnail blocking, and search term filtering and reporting.
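Once traffic is decrypted, search term filtering is straightforward, because the query sits right in the URL. As a minimal sketch (the blocked-term list and function names are invented; real filters match far more than a `q` parameter):

```python
from urllib.parse import parse_qs, urlsplit

# Hypothetical policy list; a real deployment would use admin-managed categories.
BLOCKED_TERMS = {"poker", "casino"}

def search_terms(url):
    """Pull the words out of a search URL's 'q' parameter.

    Only possible once HTTPS has been decrypted by the filter.
    """
    query = parse_qs(urlsplit(url).query)
    return query.get("q", [""])[0].lower().split()

def should_block(url):
    """Flag the request if any query word is on the blocked list."""
    return any(t in BLOCKED_TERMS for t in search_terms(url))
```

The same extracted terms feed reporting: the filter can log who searched for what, which is exactly what is lost when the query travels inside an uninspected TLS session.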

Full decryption isn't as easy when the device in question is user-owned. The alternative option here is to force SafeSearch (Google lets us do this without decrypting HTTPS), but it does leave you at Google's mercy in terms of what SafeSearch covers. It will block anything that's considered porn, but will leave a fair chunk of "adult" content, and doesn't set out to cover subjects such as gambling, or indeed online games. You won't be able to report on any of this either, of course.
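Google's documented mechanism for forcing SafeSearch is DNS-based: your resolver answers queries for www.google.com with a CNAME pointing at forcesafesearch.google.com, and Google then serves filtered results to those clients. As a sketch, in a BIND-style zone override it might look like this (illustrative; adapt the syntax to whatever resolver you run):

```
; Illustrative resolver override: answer queries for www.google.com
; with a CNAME pointing at Google's SafeSearch endpoint.
www.google.com    IN    CNAME    forcesafesearch.google.com.
```

Because the browser still believes it is talking to www.google.com, and the SafeSearch endpoint presents a valid Google certificate, this works without any certificate distribution.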

Some people ask, "Can we redirect to the HTTP site?" That is a downgrade attack, and it's exactly what modern browsers will spot and prevent us from doing. We also get asked, "Can we resolve DNS differently, and send the secure traffic to a server we have a certificate for?" Well, yes, you can, but the browser will spot this too. You won't get a certificate for "google.com", and that's where the browser thinks it is going, so that's where it expects the certificate to be for.
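The browser's check boils down to comparing the hostname it asked for against the names listed in the certificate. A much-simplified sketch of that comparison (the helper name is invented, and real browsers apply stricter wildcard rules, e.g. a wildcard only covers a single label):

```python
from fnmatch import fnmatch

def cert_covers(hostname, cert_names):
    """Simplified stand-in for a browser's certificate name check:
    the requested hostname must match one of the certificate's names."""
    return any(fnmatch(hostname, name) for name in cert_names)
```

So a filter server presenting a certificate for its own names can never satisfy a browser that thinks it is visiting google.com, which is why the DNS trick fails.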

In conclusion: ideally you MITM, or you force Google's SafeSearch and block access to other search engines. For more information, read our whitepaper, 'The Risks of Secure Google Search'. It examines the problems associated with mandatory Google HTTPS searches, and suggests methods which can be used to remedy these issues.