
Friday, June 12, 2015

Time For a Digital Detox or Better Filtering?



Being easily distracted has been a thorn in my side since Oldbury Park Primary School. I remember the day when mum and dad sat me down and read out my year 6 school report. Things were going so well, and then - boom - a comment from Mrs Horn that rained on my previously unsullied education record: "Sarah can organize herself and her work quite competently if she wishes, but of late has been too easily distracted by those around her." She had a point, but try telling that to a distraught eleven-year-old who valued the opinion of her teachers. I made a vow after that: I would never let my report card be sullied again. Working on my concentration in secondary school and college helped me to pass my GCSEs and A-levels.


Then, when I entered the world of work, I found an environment not too dissimilar to school. There were managers to impress, friends to win, and office politics instead of playground politics. Comme ci, comme ça. But I was better informed this time, and found ways to stay focused: wearing headphones (a great way to show you're otherwise engaged), meditation (limited to the park, never in the office), and writing to-do lists. But these are workplace tactics; if I were a student now, my report would probably be worse. I'd be lost with access to so many devices and so much time-wasting material.
So there, I’ve laid bare more than I should have, but I think my personal character assassination has been worth it, because it’s proved a point. Kids have always been distracted; tech has just made the problem worse. In addition to the usual classroom distractions, teachers now have to manage digital distractions, and it’s all affecting students’ progress.


For the head of the Old Hall School in Telford, Martin Stott, observing this trend was worrying. He said, "It seems to me that children's ability to take on board the instructions for multi-step tasks has deteriorated. For a lot of children, all their conversation revolves around these games. It upsets me to see families in restaurants and as soon as they sit down the children get out their iPads." Stott isn't the first to raise the issue of digital dependency (there are digital detox centres for adults who want a break from tech). He might, however, be the first to bring the issue to the education arena and get significant media coverage, by introducing a week's digital embargo at his school. Students have to put away their Xboxes and iPads and turn off the TV in an attempt to rediscover other activities like reading, board games and cards.

I'm split on the whole digital detox idea. The cynic in me asks how a one-week break can make any real change to the amount of time kids spend on devices. And restricting them completely is a sure-fire way to spark rebellion. But my optimistic side says it's a step in the right direction. It raises awareness by asking kids to realise that there's life outside Minecraft and social media. Now that's not so bad.

Nonetheless, I do think that the problems with device dependency at Old Hall School could be solved with better filtering instead of a digital detox. As existing users will tell you, there's a trusty little tool in our web filter known as 'limit to quota'. Admins can configure the amount of time users can spend on different types of material, including material classified as time-wasting. According to predefined rules, users can use their allocation in bite-sized chunks and be prompted every five or ten minutes with an alert stating how much they've used. That way there'll be no nasty shocks; when the timer eventually runs out after 60 minutes, they'll be able to continue using the safe parts of the web that support their educational needs, without the distractions. Now that's got to be more appealing than quitting the devices cold turkey, isn't it?
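To make the mechanics concrete, here's a minimal sketch of how a limit-to-quota rule could be tracked. It isn't Smoothwall's implementation; the 60-minute quota, the five-minute alert interval and the single "time-wasting" category are illustrative assumptions.

```python
# Illustrative quota tracker: not Smoothwall's code, just the concept.
from dataclasses import dataclass, field


@dataclass
class QuotaTracker:
    quota_seconds: int = 60 * 60      # assumed: 60 minutes of quota-limited content per day
    alert_interval: int = 5 * 60      # assumed: nudge the user every 5 minutes of use
    used: dict = field(default_factory=dict)        # user -> seconds consumed
    last_alert: dict = field(default_factory=dict)  # user -> usage at last alert

    def record(self, user: str, seconds: int) -> str:
        """Add browsing time and return 'allow', 'alert' or 'block'."""
        self.used[user] = self.used.get(user, 0) + seconds
        if self.used[user] >= self.quota_seconds:
            return "block"   # quota spent: only non-quota (safe) categories remain available
        if self.used[user] - self.last_alert.get(user, 0) >= self.alert_interval:
            self.last_alert[user] = self.used[user]
            return "alert"   # prompt: "you've used X minutes of your allowance"
        return "allow"


# A student browsing quota-limited content in one-minute chunks:
tracker = QuotaTracker()
for minute in range(1, 66):
    decision = tracker.record("student42", 60)
    if decision != "allow":
        print(f"minute {minute}: {decision}")
```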


Wednesday, March 18, 2015

5 Important Lessons from the Judges Who Were Caught Watching Porn



I've never been in court before or stood in a witness box, and I hope I never do. If I am, however, called before a judge, I'd expect him or her to be wearing a funny wig and a gown, to be of above-average intelligence, and to judge my case fairly according to the law of the land. What I would not expect is for that judge to be indulging while in the office, as these District Judges have done. Four of Her Majesty's finest have been caught watching porn on judicially owned IT equipment. While the material didn't contain illegal content or child abuse images, it's easy to see why the case has attracted so much media attention. I mean, it's the kind of behaviour you would expect from a group of lads on a stag, not from a District Judge! Now the shoe is on the other foot, and questions will be asked about how a porn culture was allowed to develop at the highest levels of justice. Poor web usage controls and a lack of communication were more than likely to blame. But speculation aside, the world may have passed the point where web access can be left wide open and incidents like this treated as inevitable. Employees, especially those in high positions, are more exposed and need protection. So here are 5 important lessons on web filtering from 4 District Judges:

1. Know Your Organisational Risk – The highest levels of staff pose the highest risk. Failures on their part put the credibility of the whole organisation at stake.

2. Recognise Individual Risk – While not always the case, veteran leadership may be the least computer literate and risk stumbling into ill-advised territory accidentally.

3. Communicate with Staff – Notifying everyone of acceptable use policies goes a long way towards getting them on the same page, and helps with legal recourse when bad things do happen.

4. Be Proactive – Use a web filter to block what's not acceptable instead of leaving that material open to traffic. If you still want to give your staff some flexibility, try out a limit-to-quota feature.

5. Trust No One (Blindly) – Today's internet environment makes a blind, trust-based relationship foolish. There is simply too much shady stuff out there, and much of it is cleverly disguised.

If there is anyone out there reading this and thinking "this would never happen in my organisation; my staff would never do that", think again, my friend. Nobody is perfect; the temptation to look at inappropriate content knows no bounds, including the heights of the hierarchy. We're all potential infringers, as proved by Judges Timothy Bowles, Warren Grant, Peter Bullock and Andrew Maw.

Wednesday, March 4, 2015

Searching Safely When HTTPS is Mandatory



Nobody wants anyone looking at their search history. I get it. I mean, look at mine - oh wait, don't - that's quite embarrassing. Those were for a friend, honestly.

Fortunately for us, it's pretty difficult to dig into someone's search history. Google even forces you to log in again before you can view it in its entirety. Most search engines now encrypt our traffic by default, too - some even use HSTS to make sure our browsers always go secure. This is great news for consumers, and means our privacy is protected (with the notable exception of the search provider, who knows everything and owns your life, but that's another story).

This all comes a little unstuck though - sometimes we want to be able to see inside searches. In a web filtered environment it is really useful to be able to do this. Not just in schools where it's important to prevent searches for online games during lessons, but also in the corporate world where, at the very least, it would be prudent to cut out searches for pornographic terms. It's not that difficult to come up with a handful of search terms that give potentially embarrassing image results.

So, how can we prevent users running wild with search engines? The first option is to secure all HTTPS traffic with "decrypt and inspect" technology - your Smoothwall can do this, but you will need to distribute a certificate to everyone who wants to use your network to browse the web. This certificate tells the browser: "trust this organisation to look at my secure traffic and do the right thing". This gets you back all the bells and whistles we were used to in the halcyon days of HTTP: SafeSearch, thumbnail blocking, and search term filtering and reporting.
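To give a feel for what decrypt-and-inspect makes possible once that certificate is trusted, here's a rough sketch written as a mitmproxy addon rather than anything Smoothwall-specific; the blocked-term list and the SafeSearch parameter handling are illustrative assumptions.

```python
# Sketch of "decrypt and inspect" search filtering as a mitmproxy addon
# (recent mitmproxy versions; clients must trust the proxy's CA certificate).
# Run with: mitmdump -s search_filter.py
from mitmproxy import http

BLOCKED_TERMS = {"example-bad-term", "another-bad-term"}  # placeholder list

class SearchFilter:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if "google." in host and flow.request.path.startswith("/search"):
            # Because the TLS session is re-signed by our CA, the search term is visible.
            query = flow.request.query.get("q", "").lower()
            if any(term in query for term in BLOCKED_TERMS):
                flow.response = http.Response.make(
                    403, b"Search blocked by policy", {"Content-Type": "text/plain"}
                )
                return
            # Force SafeSearch by rewriting the query string.
            flow.request.query["safe"] = "active"

addons = [SearchFilter()]
```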

Full decryption isn't as easy when the device in question is user-owned. The alternative option here is to force SafeSearch (Google let us do this without decrypting HTTPS), but it does leave you at the mercy of Google's idea of "safe". This will block anything that's considered porn, but will leave a fair chunk of "adult" content and makes no attempt to cover subjects such as gambling - or indeed online games. You won't be able to report on any of this either, of course.
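The no-decryption route works at the DNS layer: Google publishes a forcesafesearch.google.com address, and if your internal resolver answers queries for www.google.com (and the country domains) with that address, SafeSearch is locked on. A quick way to check whether your network already does this, assuming the dnspython package is available:

```python
# Check whether the local resolver maps www.google.com to Google's
# "forcesafesearch" address (requires dnspython 2.x).
import dns.resolver

def addresses(name: str) -> set[str]:
    return {rdata.address for rdata in dns.resolver.resolve(name, "A")}

forced = addresses("forcesafesearch.google.com")
actual = addresses("www.google.com")

if actual <= forced:
    print("SafeSearch appears to be forced via DNS on this network.")
else:
    print("www.google.com resolves normally; SafeSearch is not DNS-forced.")
    print("forcesafesearch:", forced, "| www.google.com:", actual)
```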

Some people ask "can we redirect to the HTTP site?" - this is a "downgrade attack", and it is exactly what modern browsers are designed to spot and prevent. We also get asked "can we resolve DNS differently, and send secure traffic to a server we have the cert for?" - well, yes, you can, but the browser will spot this too. You won't get a certificate for "google.com", and that's where the browser thinks it is going, so that's the name it expects the certificate to carry.
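You can watch the browser's objection happen with a few lines of Python: connect to any server that isn't Google (example.com stands in here) while telling the TLS layer you expected www.google.com, and certificate verification fails in just the way a redirected browser's would.

```python
# Demonstrate why DNS redirection of HTTPS fails: the certificate doesn't match.
import socket
import ssl

ctx = ssl.create_default_context()  # browser-like verification rules

try:
    # Pretend DNS sent "www.google.com" to example.com's server.
    with socket.create_connection(("example.com", 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.google.com") as tls:
            print("Unexpectedly succeeded:", tls.version())
except ssl.SSLCertVerificationError as err:
    # The server presents a certificate for example.com, not www.google.com.
    print("Certificate rejected, as a browser would:", err)
```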

In conclusion: ideally, you MITM or you force Google's SafeSearch & block access to other search engines. For more information read our whitepaper: 'The Risks of Secure Google Search'. It examines the problems associated with mandatory Google HTTPS searches, and suggests methods which can be used to remedy these issues.

Tuesday, February 11, 2014

Safer Internet Day: 4 Things You Might Not Realise Your Webfilter Can Do

Since it's Safer Internet Day today, I thought I'd use it as an excuse to write a blog post. Regular readers will know I don't usually need an excuse, but I always feel better if I do.

Yesterday, I was talking to our Content Filter team about a post on the popular Edugeek forum, where someone asked "is it possible to block adult content in BBC iPlayer?". Well, with the right web filter, the answer is "yes", but how many people think to even ask the question? Certainly we hadn't thought much about formalising the answer. So I'm going to put together a list of things your web filter should be capable of, but you might not have realised...


1. Blocking adult content on "TV catch up" services like iPlayer. With use of the service soaring, it's important that any use in education is complemented with the right safeguards. We don't need students in class seeing things their parents wouldn't want them watching at home. There's a new section of the Smoothwall blocklist now which will deal with anything on iPlayer that the BBC deem unsuitable for minors.

2. Making Facebook and Twitter "Read Only". These social networks are great fun, and it can be useful to relax the rules a bit rather than have students simply swarm onto 4G to get around a block. A read-only approach can help reduce the incidence of cyber-bullying and keep users more focused (one way a filter can approximate this is sketched after this list).

3. Stripping the comments out of YouTube. YouTube is a wonderful resource, and the majority of video is pretty safe (use YouTube for Schools if you want to tie that down further - your filter can help you there too). The comments on videos, however, are often at best puerile and at worst downright offensive. Strip out the junk and leave the learning tool - win-win!

4. Busting Google searches back down to HTTP and forcing SafeSearch. Everybody appreciates a secure service, but when Google moved their search engine to HTTPS by default, they alienated the education community. With SSL traffic it is much harder to vet search terms, log accesses in detail and, importantly, force SafeSearch. Google give you DNS trickery to force the site back into plain HTTP - but that's a pain to implement, especially on a Windows DNS server. Use your web filter to rewrite the requests, and have the best of both.
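On point 2, here's a rough idea of what a "read-only" social network rule can look like. This is a generic sketch as a mitmproxy addon, not how Smoothwall (or anyone else) necessarily implements it: page views (GETs) pass through, anything that would write gets refused, and it assumes HTTPS inspection is already in place.

```python
# Illustrative "read-only social media" rule as a mitmproxy addon
# (assumes HTTPS inspection; not any particular product's implementation).
from mitmproxy import http

READ_ONLY_DOMAINS = ("facebook.com", "twitter.com")

class ReadOnlySocial:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if any(host == d or host.endswith("." + d) for d in READ_ONLY_DOMAINS):
            # Allow reads, refuse writes (posts, likes, comments).
            if flow.request.method not in ("GET", "HEAD"):
                flow.response = http.Response.make(
                    403, b"This site is read-only on this network.",
                    {"Content-Type": "text/plain"},
                )

addons = [ReadOnlySocial()]
```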

Monday, July 22, 2013

No Budget to Block Porn? Confuse the Public and Rope In ISPs...

For the past month or so, the UK government has increased its hot-air output on the subject of online pornography. I hope their aims are admirable (and I have to assume they are), but there seems to be relatively little method and much more madness right now. Where are they going wrong, and what can be done about it?

Not all porn is Child Abuse. Following two recent, high-profile cases where child murderers were found to have viewed child abuse images, there were a number of hasty pronouncements, fuelled in large part by "enthusiastic" press coverage. Most of these centred on "regular" legal pornography.

This is a problem. Even if most viewers of abuse imagery do also view legal porn it doesn’t follow that viewing legal porn leads to viewing child abuse imagery. Users of illegal drugs also purchase headache tablets in the supermarket - should we ban all painkillers because users might turn to illegal drugs? I fear, however, that good sense makes poor headlines, so we're probably stuck with this crooked thinking.

It is difficult to decide what is "porn": in order to protect the children, there is a suggestion that ISPs block access to porn "by default" (though there seems to be some weaselling on the cards here with the word "default"). However this happens, the question arises: "who decides what is pornography?" In this case, it won't be the government, as they've devolved responsibility to a private organisation (your ISP), which will further devolve it to a filtering company.

I know a little about the inner workings of one such filter company - we at Smoothwall put quite some effort into making sure things are as well categorised as they can be. It's a difficult question - one US judge managed to come up with an interesting answer: "I know it when I see it." Our lists aren't perfect, but the "lowest bidder" is likely to be some faceless off-shore corporate who frankly won't give a <censored> if your favourite sports forum has been misidentified as pornographic.

Update: The BBC have picked up on this outsourcing of filtering and identified TalkTalk's filtering partner as Huawei, who have been stuck with the "they must be up to no good because they're from China" tag - a nasty generalisation, but one prevalent in the media right now. It's interesting to note that TalkTalk appeared to distance themselves from Huawei by overplaying their links with Symantec (having spoken with industry insiders on this, that's not news...). This shows that we're already seeing a company viewed as "undesirable" making moral decisions on behalf of TalkTalk's customers. See also, wedge: thin end.

Many very popular sites have plenty of porn, and ISP-level blocking is going to be pretty brutal. I will have a good old nibble of my hat if we get anything better than domain blocking, but if there's full HTTPS inspection, I'll eat the thing whole, and the matching gloves, before moving to a country with a less invasive government (and preferably hot weather, as I will have ingested my hat and gloves).

Let's take an example of why we need granularity for any of this to be any good: Twitter. Whilst indulging in a spot of online ornithology, you might enter the search term "great tits". There you go, plenty of porn-over-HTTPS on a domain you can't block. Time to legislate seven shades out of Twitter, and the next site, and the next...

Finally, let's touch on an old favourite hobby horse of mine: the Internet is not The Web - and there are plenty of non-web services out there, from the old school like NNTP news groups, to the more modern like encrypted peer-to-peer, and a bunch in between where some of the worst images are found. If we aim at Google, we're preaching to the choir; they already work with the relevant bodies to keep their results as clean as possible. Again, this is focusing on the wrong place if the real aim is to clean up child abuse imagery.

My suggestion? Make sure the bodies responsible for this sort of thing are adequately funded. I would like to see the creation and distribution of child abuse images come to a complete stop. These latest proposals take aim at two targets though, and when you try to aim at two things at once, one of those shots is likely to miss the target, let alone the bulls-eye.

Monday, November 19, 2012

Block or Unlock?

With Facebook's announcement that they're slowly opting all their users into HTTPS, yet another large chunk of the web gets a welcome layer of encryption.

Welcome, of course, because it helps protect users' highly personal data - often all too recoverable by network sniffing tools - and decreases the possibility of cookie hijacking. It's by no means perfect, but it's a great addition.

On the other hand, this SSLization of the web does pose a problem for businesses and schools alike - with more traffic going over HTTPS, the need for web filters to intercept and decrypt that traffic rises. In many instances, the stark choice is to either block a site completely or perform an intrusive "Man in the Middle" inspection. These issues are always going to be most keenly felt on BYOD devices, where MitM decryption would be more intrusive both technically and socially - hey, it's my device, my traffic, keep out!

There are no silver bullets here. Sure, we can identify most HTTPS traffic's ultimate destination (it's Facebook, it's Google), but many organisations need a finer level of policy if they are to allow these sites - forcing SafeSearch is an important one for schools; for businesses, maybe a restriction on Facebook posts.

The creeping tide of HTTPS is not going away - the only thing keeping more large sites from going fully SSL is the cost/speed trade-off (encryption on that scale can be computationally expensive) - and the need for web filtering across an ever more varied set of organisations shows no sign of waning either.

This is going to be a long and interesting ride... and I would welcome any comments from our readers on what they are doing to work around these problems, or what they think would be the ideal scenario.

Wednesday, July 27, 2011

Reverse Image Search

Google have recently launched a new set of reverse image search functionality for their image search service. For the uninitiated, “reverse” image search allows you to use an image as the jumping off point for your search, instead of boring textual keywords.

And why exactly would we want to do this? I can think of a few reasons:

In the simplest case, this can be a more interesting or intuitive way to search for images.
Perhaps you find a five-year-old JPG in your home area and you just can't remember where it came from. Maybe Google remembers?
You need to find an HD version of your desktop wallpaper for that shiny new monitor. No problem...
Maybe you're a rights-holder trying to track your own images. You wouldn't be the first.
Being scammed by online dating fakers? Reverse search that profile picture - oh yes, that *is* Pierce Brosnan.

Now this isn't an entirely new idea; an early player in the game was TinEye. TinEye are still operating, and hopefully they'll stick around for a good while yet, giving us double the image-searching fun.

Google’s new functionality comes in two pieces.
At the core is 'Search by Image' within Google Images. Using the search query box, you can now choose to search with an image instead of text. This can be a link to an image available on the web, or you can upload one from your local machine. Browser permitting, you can even drag and drop a file, which is cute.




As we can see the result set allows us to discover locations on the web where the desired image can be found. We can also specify a different size for the image and locate those too.
Google’s algorithm will make a best guess at the topic of your search and this “trail” can be followed in the normal way - using the suggestion as a search term.

Further down the page we find the second part of the functionality, ‘Visually similar images’. This is where it gets interesting. We can now search around other images found to be similar to our input image. Effectively we can “bootstrap” the image search process with an image of our choosing. This is a great way to find something very particular, or something hard to spell, or indeed... pornography.



Clearly this can be used to find content without stating your intention in the form of keywords. For Corporate or Education networks this might be an AUP circumvention risk. Hence, filters must move with the times. Here at Smoothwall we’ve added a new category for Reverse Image Search services, as it may not be appropriate for all users. We’ve also worked to ensure Force SafeSearch, Search term filtering and Deep URL Analysis are compatible with Google’s latest developments.
Screenshot 2 was generated behind Smoothwall Guardian, demonstrating those features. Just for fun, here’s a screenshot using A. N. Other web filter...

Note: Censored to be (semi) safe-for-work.

Tuesday, May 31, 2011

Google and Mozilla giving up on URLs?

In the past few weeks, there have been indications that two of the Internet's biggest browsers are reconsidering the central position of the URL in web browsing. Firefox and Chrome's designers are looking at ways to downsize, repurpose or remove the traditional "location bar" where traditionalists have been used to typing web addresses for years.
This comes as no great shock - even in the early days of the web, efforts were made by the likes of AOL to use keywords to navigate to websites. AOL failed, ultimately, but the concept succeeded. In today's web, entering a known URL is unusual for most people - we trust our search engines to bring back the content we require from our search terms, and we use our bookmarks to keep track of things we like, never needing to see the URL itself. Advertisers are starting to make more use of this too - it is increasingly difficult to get short, memorable domain names, and people make typos. If you can be sure your site ranks well for the name of your company, you don't need to worry about people mis-spelling your domain, or about your name being a bit tough to pronounce outside the English-speaking world... or even in it... yeah, we have always been called Smoothwall, so we're sticking with it, thanks!

With the web losing some of the location-based addressing that ties content to domains and URLs, and with more web applications taking content from a variety of sources, this move would seem to send a warning to some popular URL-(ab)users - who needs link shorteners in a world without typed links? If everything is sent with embedded links, or transferred to meatspace as keywords rather than URLs, these services may see a decline. Interestingly for Smoothwall, and our users, this could accelerate the demise of the URL filter. When we no longer need sites to identify themselves positively in the URL, things can become more ambiguous - for example, the BBC may no longer feel the need to keep all sport under /sport. They aren't doing that to benefit a URL filter, and if there's diminishing benefit for the consumer, need they maintain these syntactic niceties?

Interesting times ahead folks.

Monday, March 21, 2011

ICANN approve XXX, Domain Registrar In Line for $$$

ICANN have finally approved the controversial .xxx top-level domain. Apparently all the porn on the Web is suddenly going to up sticks and move to this new domain. Whilst our jolly pornographers get to grips with that, let's take a moment to leave fantasy island and consider the real-world implications of this move.

Who is the new TLD going to help? Will it help those of us trying to keep impressionable youngsters away from pornographic material? Not really. At Smoothwall we have been blocking this (until now non-existent) domain pretty much since we started making web content filters. It is not hard to do, but it certainly does not get you much traction. Most porn sites will keep their existing domains, and even if legislation eventually forces the US and EU sites to consolidate under .xxx, there are still the less salubrious porn sites whose owners are less than concerned about that sort of legal threat. Will it help the porn industry? Unlikely. It might lead to the odd fracas over a contested domain name, where two skin merchants try to muscle in on the same .xxx domain, having come from, say, a .com and a .tv. No, the only people it will help are those selling domains, and the really unimaginative self-abusers who have a hard time finding porn (if this is you, please write in at the usual address).

Entertainingly, whilst we've been fannying around trying to find a new home for our hardcore, looking at pics of naked people has finally been relegated to second spot in the internet usage high-score table. Yes, you guessed it: social networking, that digital white noise (which this blog almost certainly counts as), has overtaken porn in the UK web traffic stakes. I'm not sure what sort of a message this sends about our society as a whole. Idle chatter probably appeals to a wider demographic than the soon-to-be denizens of .xxx, and is less likely to be blocked at web filter level, despite contributing to huge levels of timewasting in offices the world over. Maybe we should lobby for a "social notworking" TLD - .poke? .trivial? .inane? .waste?