Archive for the 'Encryption' Category

Oct 14 2014

Wake up to a POODLE puddle

TL;DR – Disable SSLv3 immediately.

As of this morning SSL appears to be dead, or at least dying.  The POODLE vulnerability was disclosed last night, revealing a flaw in the way SSLv3 handles cipher padding that allows an attacker to recover plaintext from encrypted traffic.  This makes the third major vulnerability disclosed on the Internet this year, and it’s another warning that this pace of vulnerability discovery may be the new shape of things to come.

I’m not going to try to explain POODLE in detail, or give you a nice logo for it.  Instead I’ll just point to the better articles on the subject, a couple of which just happen to be written by my teammates at Akamai.  I’ll add more as I find them, but this should tell you everything you need to know for now.

Update: It’s estimated that SSLv3 accounts for between 1% and 3% of all Internet traffic.
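For most admins, “disable SSL” in practice means disabling SSLv3 anywhere a service terminates encrypted connections.  As a minimal sketch of what that looks like in code (here using Python’s standard ssl module; server configs like Apache and nginx have equivalent knobs):

```python
import ssl

# A minimal mitigation sketch: build a context that negotiates the best
# shared protocol version, but refuses ever to fall back to SSLv2/SSLv3.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
```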

And since there’s not an official logo for it yet, I present …. The Rabid Poodle!

[Image: Rabid Poodle]


Oct 05 2014

Understanding Apple’s new encryption model

I understand enough about encryption to get myself in trouble, but not much more.  I can talk about it intelligently in most cases, but when we get down to the nitty gritty, bit-by-bit discussion of how encryption works, I want someone who’s really an expert to explain it to me.  Which is why I’m glad that Matthew Green sat down and explained, in great detail, Apple’s claim of new encryption that they can’t open for law enforcement.

The Too Long; Didn’t Read (I often forget what tl;dr means) version is that there’s a unique ID (UID) buried deep in the hardware encryption chip on your phone that software has no access to.  That UID is mixed into your encryption key by the key-derivation process; it can’t be pulled out locally or remotely, which makes for a strong encryption key protecting your encrypted data.  Do keep in mind that not all of the interesting data on your phone is encrypted; there are still nooks and crannies that someone with physical access can look at.  And some of the most interesting stuff about your phone isn’t on it at all: it’s the list of who you’ve called, where you’ve been and the like, which can be had from the carrier.  That metadata is often at least as important as what’s on your phone, and much easier to get without ever having to see your phone.
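To make the idea concrete, here’s a toy sketch of the general concept only; this is not Apple’s actual construction, and the names are mine.  The point is that a device-unique secret that software can’t read gets mixed into the key derivation, so passcode guessing can only happen on the device itself:

```python
import hashlib
import os

# Toy sketch of the concept -- NOT Apple's actual algorithm.  The real
# UID is fused into hardware and unreadable; os.urandom() stands in here.
HARDWARE_UID = os.urandom(32)

def derive_key(passcode: str) -> bytes:
    # A slow KDF ties the passcode to this device's UID, so neither the
    # passcode nor the UID alone can reproduce the data-protection key,
    # and brute force has to run on-device at hardware-limited speed.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), HARDWARE_UID, 100000)
```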

I’m personally very glad that Apple (and Android as well) have begun encrypting phones by default.   Yes, police need the ability to get into phones and see what people have been doing on them, but the last two years have shown that this ability has been abused for quite some time.  Various government officials in the US have decried the move, saying they need the ability to catch pedophiles and terrorists.  Yet so far the count of cases where the information needed to catch anyone from either of those categories couldn’t have been gotten by other means is still in the single digits.  At the same time, the number of lawsuits against US police for abusing their ability to get into phones numbers in the hundreds.  Do the math and decide for yourself whether law enforcement having easy access is worth it.

We’ll be seeing more organizations of all types moving to encryption, partially to protect users and partially to defend themselves from the negative publicity that being open to the police brings.  There will be missteps: poor encryption methodology, and cases where people realize they can’t just get their backup from the cloud because they used serious encryption and lost the key.  There will be growing pains, and there will be examples of guilty people escaping because law enforcement doesn’t have easy access to phone data.  But we also need strong encryption to protect the privacy of average citizens who’ve done nothing more than catch the attention of the wrong person at the wrong time.  Our privacy is much more delicate, and more deserving of protection, than many in power believe it is.


Jul 21 2014

Can I use Dropbox?

Published under Encryption, Family, Privacy, Risk

I know security is entering the public awareness when relatives and friends start contacting me about the security of products beyond anti-virus.  It’s doubly telling when the questions aren’t about how to secure their home systems, but about the security of a product for their business.  Which is exactly what happened this week: a family member contacted me wanting to know if it was safe to use Dropbox for business.  Is it safe, is it secure, and will my business files be okay if I use Dropbox to share them between team members?

Let’s be honest: the biggest variable in the ‘is it secure?’ equation is what you’re sharing using this type of service.  I’d argue that anything capable of substantially impacting your business financially or reputationally shouldn’t be shared using any third-party service provider (aka The Cloud).  If it’s valuable enough to your business that you’d panic after leaving it on a USB memory stick in your local coffee shop, you shouldn’t be sharing it via a cloud provider in the first place. In many cases the security concerns of leaving your data with a service provider are similar to the dropped USB stick, since many of these providers have experienced security breaches at one point or another.

What raised this concern to a level where the general public noticed?  It turns out it was a story in the Guardian about an interview with Edward Snowden, in which he suggests that Dropbox is insecure and that users should switch to Spideroak instead.  Why?  The basic reason is that Spideroak is a ‘zero-knowledge’ product, whereas Dropbox maintains the keys to all the files that users place on its systems and could use those keys to decrypt any file.  This fundamental difference means that Dropbox could be compelled by law to provide access to an end user’s files, while Spideroak couldn’t, because they don’t have that capability.  From Snowden’s perspective, this is the single most important feature difference between the two platforms, and who can blame him for suggesting users move?

Snowden makes several excellent points in his interview, at least from the viewpoint of a security and privacy expert, but there’s one I don’t think quite holds up.  He states that Condoleezza Rice has been appointed to Dropbox’s board of directors and that she’s a huge enemy of privacy.  That argument seems more emotional than factual to me, since I don’t have much historical evidence on which to judge Rice’s views on privacy.  It feels a little odd to be arguing that a Bush-era official might not be an enemy of privacy, but I’d rather give her the benefit of the doubt than cast aspersions on Dropbox for using her experience and connections.  Besides, I’m not sure how much influence a single board member actually has on the direction of the product and the efficacy of its privacy controls.

On the technical front, I believe Snowden is right to be concerned.  We know for a fact that Dropbox holds the keys to decrypt users’ files; it uses them as part of a process that reduces the number of identical files stored on its systems, called deduplication.  The fact that Dropbox has these keys means a few things: it can decrypt the data if served with a lawful order, a Dropbox employee could conceivably access a key to get at the data, and Dropbox could potentially be feeding into PRISM or one of the many other governmental programs that want to suck up everyone’s data.  It also means Dropbox could make a mistake that accidentally exposes the data to the outside world, which has happened before.  Of course, vulnerabilities and misconfigurations that result in a lapse of security are a risk you face when using any cloud service, not just Dropbox.
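To see why holding the keys and deduplication go hand in hand, here’s a toy sketch in Python, nothing like Dropbox’s real pipeline: when the provider can get at the plaintext, two users uploading the same file collapse to one stored copy.  With client-held, zero-knowledge keys, the same file encrypts to different ciphertexts for each user and this trick stops working.

```python
import hashlib

# Toy content-addressed store.  Dedup only works because the provider
# can see (or decrypt to) the plaintext to compute a stable content ID.
store = {}

def upload(plaintext: bytes) -> str:
    content_id = hashlib.sha256(plaintext).hexdigest()
    store.setdefault(content_id, plaintext)  # identical files stored once
    return content_id

upload(b"quarterly-report contents")
upload(b"quarterly-report contents")  # second upload stores nothing new
assert len(store) == 1
```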

I’ve never seen how Dropbox handles and secures the keys used to encrypt data, and they haven’t done a lot to publicize their processes.  There may be considerable safeguards in place to protect the keys from internal employees and federal agencies; I simply don’t know.  But they do have the keys.  Spideroak doesn’t, so they have no access to the data end users store on their systems; it’s that simple.  The keys that unlock the data stay with the user, not the company, so neither employees nor governmental organizations can get at the data through Spideroak. Which is Snowden’s whole point: we should be exploring service providers who couldn’t share our data even if they wanted to.  From an end-user perspective, a zero-knowledge product is vastly preferable, at least if privacy is one of your primary concerns.

But is privacy a primary concern for a business?  I’d say no, at least in 90% of the businesses I’ve dealt with.  It’s an afterthought in some cases, and in many it’s not thought of at all until there’s been a breach.  What’s important to most businesses is functionality and getting the job done.  If that’s the case, Dropbox is likely good enough for them.  Most businesses have bigger concerns when dealing with the government than whether their files can be read: taxes, regulations, taxes, oversight, taxes, audits, taxes… the list goes on.  They’re probably more concerned with whether a hacker or a rival business can get to their data than whether the government can.  To which the answer is: probably not.

I personally use Dropbox all the time.  But I’m using it to sync pictures between my phone and my computer, to share podcast files with co-conspirators (also known as ‘co-hosts’) and to make sure I have access to non-sensitive documents wherever I am.  If it’s sensitive, I don’t put it in Dropbox; it’s that simple.  Businesses need to make the same risk evaluation about what they put in Dropbox or any other cloud provider: if having the file exposed would significantly impact your business, it probably doesn’t belong in the cloud encrypted with someone else’s keys.

If it absolutely, positively has to be shared with someone elsewhere, there’s always the option of encrypting the file yourself before putting it on Dropbox.  While the tools still need to be made simpler and easier, it’s possible to use something like TrueCrypt (or its successor) to encrypt sensitive files separately from Dropbox’s encryption.  Would you still be as worried about a lost USB stick if the data on it had been encrypted?
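For those comfortable with a little scripting, here’s a minimal sketch of the same idea using the third-party Python ‘cryptography’ library; the filenames and the function are hypothetical.  Encrypt locally with a key only you hold, and let Dropbox sync nothing but ciphertext:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # keep this key somewhere that never syncs

def encrypt_for_sync(in_path: str, out_path: str, key: bytes) -> None:
    # Encrypt before the file ever touches the synced folder; the
    # provider's own keys are irrelevant to opaque ciphertext.
    with open(in_path, "rb") as f:
        token = Fernet(key).encrypt(f.read())
    with open(out_path, "wb") as f:
        f.write(token)

encrypt_for_sync("client-list.xlsx", "Dropbox/client-list.xlsx.enc", key)
```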


Jul 10 2014

Illustrating the problem with CAs

You’d think that if there was any SSL certificate out there that’d be carefully monitored, it’d be Google’s.  And you’d be right; between the number of Chrome users and the Google team itself, the certs for Google properties are under a tremendous amount of scrutiny.  So when an impostor cert is issued anywhere in the world, it’s usually detected relatively quickly.  But the real question is, why are Certificate Authorities (CAs) able to issue false certs in the first place?  Mostly because we have to trust someone in the process of cert issuance, and in theory the CAs are the most trustworthy and best protected.  Unfortunately, there are still a lot of holes in the process and in the protection of even the best CAs.

Last week Google detected an unauthorized digital certificate issued in India by the National Informatics Centre (NIC). This week it was revealed that beyond the certs Google knew about, an indeterminate number of others had been issued by the NIC.  Their issuance process had been compromised in some way, and they’re still investigating the full scope of the compromise.  Users of Chrome were protected by certificate pinning, but users of IE and other browsers might not be so lucky. What was done with these certificates, no one knows.  What could be done with them is primarily man-in-the-middle attacks against users of any of the compromised certs, meaning whoever holds these certificates could intercept and decrypt email, files, etc.  There are plenty of reasons a government or criminal element would want control of a certificate that looks and feels like an authentic Google (or Microsoft or…) certificate.
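Certificate pinning is the defense that saved Chrome users here.  A rough sketch of the idea in Python (EXPECTED_PIN is a placeholder, not any real fingerprint): on top of normal CA validation, refuse any certificate whose fingerprint doesn’t match one you knew about ahead of time, so even a cert from a compromised CA fails.

```python
import hashlib
import socket
import ssl

# Hypothetical pinned fingerprint, shipped with the client in advance.
EXPECTED_PIN = "0123456789abcdef..."  # SHA-256 hex digest placeholder

def connection_is_pinned(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()  # normal CA + hostname checks
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    # The pin check catches certs that are CA-valid but not ours.
    return hashlib.sha256(der_cert).hexdigest() == EXPECTED_PIN
```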

There’s no clear, clean way to improve the CA process.  Extended Validation (EV) certs are one option, but they make the whole process of getting an SSL cert much more complex.  Given the value of privacy and the vital role certificates play in maintaining it, this may be the price the Internet has to pay.  Pinning certs helps, as will DANE and Sunlight (aka Certificate Transparency).  Neither DANE nor Sunlight is fully baked yet, but both should help make up for the weaknesses of current processes.  Then it’ll just take a year or three to get them into all the browsers, and even longer for older browsers to be retired.  And that’s not even taking into account the fact that we don’t use SSL everywhere.


Mar 18 2014

NSP Microcast – RSAC2014 – Utimaco

I spent a few minutes with the CEO of Utimaco, Malte Pollman, at RSAC this year.  Malte explains why Hardware Security Modules are important to the Internet’s web of trust, and why lawful interception is not in conflict with that web of trust.  As with all my interviews at RSAC, I asked Malte how the last year’s worth of spying revelations have affected his company and him personally.  Also, I have a problem pronouncing the company name, which for the record is you-tee-make-oh.

NSPMicrocast-RSAC2014-Utimaco


Mar 09 2014

Mt. Gox Doxed

I’ve never owned a bitcoin, I’ve never mined a bitcoin, and in fact I’ve never really talked to anyone who’s used them extensively.  I have kept half an eye on the larger bitcoin stories, though, and the recent disclosure that bitcoin exchange Mt. Gox was the victim of hackers who stole the entire contents of its vault, worth hundreds of millions of dollars (or pounds), has kept my interest.  I’m not the only one who’s smelled something more than a little off about the whole story.  Apparently a hacker, or hackers, who also felt something wasn’t right on the mountain decided to do something about it: they doxed* Mt. Gox and its CEO, Mark Karpeles.

We don’t yet know if the files that hackers exposed to the internet were actually legitimate files from Mt. Gox and Mr. Karpeles, but this isn’t the only disclosure the company is potentially facing.  Another hacker claims to have about 20 GB of information about the company, its users and plenty of interesting documents.  Between the two, if even a little of the data is valid, it’ll spell a lot of trouble for Mt. Gox and its users.  If I were a prosecutor with any remote possibility of being involved in this case, I’d be collecting every piece of information and disclosed file I could, with big plans for using them in court at a later date.

In any case, I occasionally read articles saying the Mt. Gox experience shows that bitcoin is an unusable and ultimately doomed form of currency because it’s a digital-only medium that will always be open to fraud and theft.  I laugh at those people.  Have they looked at our modern banking system and realized that 99% of the money in the world now exists only in digital format somewhere, sometimes with hard copy, but generally not?  Yes, we’ve had more time to figure out how to secure the banking systems, but they’re still mostly digital.  And eventually someone will do to a bank what was done to Mt. Gox.

*Doxed:  to have your personal information discovered or stolen and published on the Internet.


Jan 06 2014

Still going to RSA

In the last couple of weeks Mikko Hyppönen from anti-virus company F-Secure announced that he won’t be speaking at the RSA Conference in San Francisco at the end of February.  His reasoning is that the company, RSA, colluded with the NSA for a fee of $10 million to get a weakened random number generator included in the public standards, a move that makes the whole suite of encryption standards easier to crack.  As Mikko points out, RSA has not admitted to this accusation, but they haven’t denied it either.  So Mikko has pulled his talk and publicly stated that, as a foreigner, he doesn’t feel right supporting the conference.  I understand his sentiment and I see what he’s hoping to accomplish.  But I don’t think boycotting will do much, other than gain Mikko a little attention in the short term and harm his reputation in the long term.

The first problem with boycotting the conference is that RSAC is, for all intents and purposes, a side company of the RSA corporation.  It has its own management structure, its own bottom line, its own profit and loss reporting.  And it’s only a small fraction of the corporation’s overall revenue stream. As such, any impact a boycott might have will be highly diluted by the time it reaches the management of the central corporation.  Yes, at some point in a meeting it will be mentioned that a speaker has withdrawn over NSA concerns; maybe a dozen other speakers will even join in a show of allegiance.  But the conference organizers will simply pick from the dozens of alternative speakers of nearly equal capability and move on.  Senior management might lose two or three minutes of sleep that night, but nothing more.  And any impact a particular speaker’s boycott has can easily be written off as the result of other, much larger changes RSA is making to the conference layout this year.

The second problem I have is that while Mikko has stated he’ll be boycotting the RSA Conference, he’s said absolutely nothing about F-Secure boycotting.  As a vendor, I know that marketing departments have to commit to the conference at least a year in advance and I’ve heard that some commit to multi-year contracts in order to get better pricing.  The small booths at either end of the halls cost tens of thousands of dollars, while the big booths in the center of the floor cost the vendors several hundred thousand dollars when all is said and done.  If Mikko wanted to make a statement that would really be heard, he’d have F-Secure withdraw from the RSA Conference this year and for the next few years.  Except he can’t.  Any vendor that’s mid-size or larger in the security field has to be at the RSA conference.  In many cases, this conference is the keystone for the whole marketing effort of the year, and any talk of a boycott would be immediately quashed as an impossibility.  Quite frankly, if you’re a security vendor and you don’t have a presence at RSA, you’re not really a security vendor and everyone knows it.  

The third issue I have with the boycott has nothing to do with Mikko and is closely related to the vendor point: since Mikko’s announcement, it’s become a popular meme for security professionals to say they’re going to boycott RSA as well.  I’ll be honest, I’ve never paid to go to RSA; I’ve always had a press pass, gone as a vendor, or gone as a speaker, more than once as all three at the same time.  But even if I were paying, the money I’d spend to attend is insignificant compared to what the organization makes off the sponsors.  It would take a huge number of attendees failing to show up to make an impact.  Given the conference’s growth rate over the last few years, even a thousand people joining a boycott would most likely just lead to flat growth at best.  Additionally, similar to vendors, most attendees whose companies pay their way have already purchased their tickets, and a boycott at this point would hurt them more than it could hurt the RSA Conference.

If you think the NSA has been behaving badly and you really want to have an impact, go to the event and talk to people there.  If you’re a speaker, change your talk to include a slide or ten about what you believe RSA has done wrong.  You might be right or you might be wrong, but you’ll have a chance to tell your story to the several hundred people in your audience.  If you’re an attendee, talk to other attendees, tell them why you think the RSA Corporation has crossed the line and spread the word.  You gain almost nothing by throwing a temper tantrum and leaving the playground.  But if you attend, talk to people and raise awareness of the issues, you let others know that something isn’t right and something needs to change.

I wish Mikko the best, and maybe his boycott has raised some awareness.  But all the people saying “Me too!” aren’t going to have an impact.  They might feel better about themselves for a short period of time, but all they’re really doing is cutting themselves off from one of the biggest events in security.  It’s better to attend, be social and spread your opinions than to opt out and leave your voice unheard.  I’m attending as a blogger, as a podcaster, as a speaker (panelist, really) and as a vendor.  Boycotting would have more impact on me and my career than it ever would on the RSA corporation.

If you really want to send the RSA Corporation a message, quit buying their products and tell them why.  Now that’s a message they’ll hear loud and clear.


Dec 04 2013

Everyone’s moving to PFS

Last month I wrote about Perfect Forward Secrecy (PFS) for the Akamai corporate blog.  But if you’d asked me two months earlier what PFS was, you would have seen me madly scrambling for Google to find out more about it.  And I’m not alone; before this summer only a few deeply technical engineers had heard of PFS, and almost everyone else had either never encountered it or dismissed it as an unnecessary burden on their servers.  Except the NSA managed to change that perception over the summer.

Now most companies are looking at PFS, or looking at it again.  In a nutshell, PFS is a method used with SSL in which each session negotiates its own temporary key through an ephemeral key exchange, and that key is dumped from memory afterward.  You can throw around phrases like ‘ephemeral elliptic curve Diffie-Hellman’, but the important part is that PFS encrypts SSL communications without relying on the server’s master key to protect your traffic; it creates a new key every time.  This means that even if the master key is somehow compromised, it doesn’t give access to all the traffic for that SSL certificate; an attacker must crack each and every session individually.   Which means you need a lot more computing power at your disposal to crack more than a few conversations.
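On a server you control, enabling PFS largely comes down to preferring ephemeral key exchanges.  A minimal sketch using Python’s ssl module (real deployments would tune the equivalent settings in Apache, nginx or a load balancer):

```python
import ssl

# Restrict the cipher list to ephemeral (ECDHE/DHE) key exchanges so
# each session derives fresh keys that the server's long-term private
# key can never retroactively unlock.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!aNULL:!eNULL")
# A real server would also call ctx.load_cert_chain(certfile, keyfile).
```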

PFS is a good idea we should have implemented some time ago, but it has a downside: it adds a lot of server overhead. Having to view our own governments as the enemy, though, has given tech companies around the globe the impetus to make the change to PFS.  Google is moving toward encrypting all traffic by default, with PFS as part of that effort.  Facebook has moved in the same direction, with PFS also a critical piece of the protection puzzle.  And Twitter.  And Microsoft.  And … you get the picture.  Companies are moving to PFS across the board because it gives them something they can point to when telling users that they really do care about securing end-user communications.

I have to applaud these companies for taking this step, but even more, I have to hand it to Google, Yahoo, Facebook and Microsoft for challenging the current status quo of National Security Letters and the secrecy they entail.  There are more questions than answers about how NSLs are being used, whether they’re necessary, and whether they’re even something a country like the US should be allowing.  Technology is great and will help with some of the problems we’re just starting to understand, but long-term change will only come if we confront the current issues with the NSA and other agencies slurping up every available byte of data for later analysis.  Changes to the laws probably won’t stop anything immediately, but we have to have the conversation.

Using PFS is just the start of what will be fundamental changes to the Internet.  Encryption everywhere has to become an integral part of the Internet, something privacy boffins have been saying for years.  It may be too late for this to be a fully effective measure, but we have to do something. PFS makes for a pretty good first step.


Oct 14 2013

Your email won’t be any safer over here

I’m not sure why anyone has the illusion that their data would be safer in Europe than in the US.  While some European countries seem to have better laws for protecting email, it’s not clear-cut and there are always trade-offs.  A country might have better protections for data at rest while data in transit is fair game, or vice versa.  Plus, if you’re an American, you’re the foreigner to those nations, so many of the protections you think you’re getting are null and void for you.

Rather than simply speculate, as many of us do, Cyrus Farivar at Ars Technica has written an article, Europe Won’t Save You: Why Email is Probably Safer in the US.  If you examine the laws closely, you’ll find that while countries like Germany appear to have stronger privacy laws, some of the caveats and edge cases make a lie of that appearance.  In this particular example, German law puts a gag order in place by default, preventing your service provider from notifying you if it’s served with a subpoena or similar instrument.  Think on that for a moment: if your service provider is served, by default you’ll never hear about it at all, rather than only when a large intelligence agency takes an interest in you.

Since I moved to the UK I’ve been hip deep in similar arguments with regards to cloud service providers.  Many folks in and around Europe seem to think their own laws will somehow protect them from having their data raided by the NSA or some other, even more shadowy US organization.  But the reality is that in many countries they have less protection from their own governments than they do from the US.  And that barely scratches the surface of the fact that the core internet routers in many, if not all, countries are compromised by multiple governments, which are getting feeds of every packet that flows across the infrastructure.

The other concern I hear quite often is about US businesses and information leaving the European Union.  I find this concern interesting, and believe it’s likely to be a much more legitimate issue.  In the EU, the data protection laws appear to be much stronger than they are in the US, especially the Safe Harbor Principles.  But the reality is that businesses see the value of having as much personal information as they can get their hands on, so Safe Harbor is given lip service while businesses find ways around the requirements.  Or, in many cases, they ask users to opt out of some of the protections to get additional functionality out of a site.

Don’t think that where you host your email or other services is going to protect you if a government wants to get its digital fingers into your email.  As Farivar points out, the closest thing you’ll have to privacy is storing your email on your own devices and encrypting it with your own encryption keys.  Storing it anywhere else leaves you open to all sorts of questionable privacy laws between you and your hosting provider.  You can’t just consider the jurisdiction you’re in; you have to consider every route your data might take between point A and point Z.  And this being the Internet, you’ll never know exactly what route that will be.

Personally, I’m not pulling the plug on my Gmail account any time soon.  No government is worse than Google when it comes to intrusive monitoring of your email, let’s be honest.


Oct 06 2013

Invasive monitoring at next Winter Olympics

If you have plans to go to the next Winter Olympics in Sochi, Russia, prepare to have any and all of your electronic communications monitored.  The Guardian has found paperwork, including procurement documents and tenders, seeking the technology needed to monitor all communications to and from the Olympic venue.  We have to assume this means all phone calls and all wifi access, and it very likely includes ways to break into other, supposedly encrypted channels such as Skype and the Tor network.

It’s really nothing new to think of governments monitoring the communications going on at the Olympics, but the sheer size and depth to which the Russian government will be monitoring is more than a bit daunting.  Given the current environment and the fact that citizens from every walk of life are more sensitive than ever to being spied upon, it’s very likely that this will receive more attention than if it had happened at the London Olympics.  And because it’s Russia that’s doing the monitoring, rather than a western power, it makes it more suspect in many people’s eyes.

One of the scarier aspects the Guardian story hints at is that the monitoring won’t be aimed simply at the security and safety of Olympic attendees; it will also be aimed at political dissidents and ‘illegal’ activities, such as gay rights activism.  Add to that the probability that all data captured during the Olympics will be stored indefinitely and analyzed in depth, and anyone who holds views unpopular with the Russian government should be very, very nervous.  I won’t be surprised to see a number of Russian citizens who attend the Olympics arrested three to six months later, as the government gets around to analyzing their communications.  Or to see these communications surface years later to embarrass dissidents.

Yes, I’m paranoid.  But if I have an opportunity to attend the Olympics in Sochi, I’ll have to think twice before accepting it.  I’d take a number of precautions similar to what I’d take attending a big event in China: a burner phone with a local SIM, a laptop that will be retired after the event, an email address that only gets used during the Olympics, just for starters.  I’d also be very cognizant of the fact that I’m being monitored every moment, with my movements analyzed by computer algorithms as well as human agents.  Most importantly, I’d avoid any reading that would raise my paranoia level higher than it already was before or during the trip.

Most people will be oblivious to the monitoring at the Olympic games.  And for most people, that’s a price they’re willing to pay to see one of the biggest events in the world.  That could be the right decision for the average Joe.  But if you’re not the average Joe, if you have opinions or tendencies that are unpopular with the Russian government, think hard about taking some precautions before you head to the Olympics in 2014.

Last of all, remember that the monitoring of electronic communications will be just part of the equation.  There will be mics and cameras everywhere as well.  Probably even in the bathrooms.

