Jul 30 2014

Russia says “Hand over your code.”

Published by under Cloud,Privacy

Well, this should be interesting.  The Russian Communications Minister suggested, rather strongly, that Apple and SAP share their source code with the Russian government so that it could be reviewed to make sure it wasn’t being used to spy on Russian citizens.  Yes, Russia is playing the privacy card to sneak a peek at the crown jewels of two of the biggest high tech companies in the world.  Who says Russian politicians don’t have a sense of humor?

On the surface, the request for source code review in order to protect the privacy of Russian citizens from US spying has some merit.  Since the Snowden revelations last year, I think anyone not familiar with Apple and SAP would be willing to entertain the idea that either or both companies might have backdoors in their software.  But anyone who knows these companies understands they’re big enough that they can, and would, strongly resist any effort to introduce spy technologies into their software, probably very vocally.  Beneath the surface of the request, what Russia is more likely looking for is a way to compromise this software themselves and get access to company secrets in order to share them with their own corporations.  Historically speaking, there’s a fair amount of evidence to support this theory.  Or maybe I’m simply too cynical.

Irony aside, between recent laws requiring traffic to be logged inside Russia and additional laws requiring all Russian data to be stored in Russia, this shouldn’t be a surprising move.  In fact, I won’t be at all startled if the next move is a law requiring any software that’s being installed on hardware within Russia to require testing by the Russian government before deployment. The two current laws are already going to make any cloud deployment that relies on global distribution (meaning all of them) nearly impossible, but adding a code audit to those requirements will make doing business in that location unviable, to say the least.

Apple and SAP could make their source code available for ministry review, but I find that idea extremely unlikely.  Creating a review environment acceptable to both of these companies and the Communications Ministry is going to be next to impossible.  Apple is well known for how jealously it guards both its source code and its in-development hardware, and SAP isn’t all that far off the mark, philosophically speaking.  It’s unlikely either company would be willing to allow its software to be reviewed off company premises, or even reviewed in an environment that would allow the reviewer to copy the code in some way.  And it’s unlikely that any Russian officials are willing to settle for the restrictions the companies would mandate before a review is allowed.

The Reuters article suggests that the code review requested by the Russian Communications Minister is politically motivated, a response to the sanctions being put in place by the European Union and the US over the situation in Ukraine.  While there might be an element of that in the timing, I believe this request is part of a larger movement within Russia to tighten control over all data within its borders.  So far, the disclosure of source code is merely a request, without force of law behind it.  But don’t be surprised if that request becomes a legal requirement within the next year, one that encompasses any software being sold into Russia.

This situation has layers of complexity that I’m not comfortable covering in a blog post, and in fact I don’t believe I have the background to understand many of the political implications involved.  Russia has made many moves recently that seem to be inherently opposed to the openness of the Internet and to any sort of Cloud deployment.  Both of these seem like self-limiting actions by the Russian government that will keep the country from prospering in the future.  How many companies will decide the market in Russia is simply not big enough to take the risks of sharing source code or storing information inside of the country?  And how long will the companies that do share code be able to keep it secret without it being shared with Russian companies?  

I strongly suspect both Apple and SAP are currently telling the Russian Communications Minister to go pound sand in very nicely worded, politically correct ways.  And that the Minister is calmly telling them both that his request will soon carry the force of law behind it, so they’d better play nice or there will be sanctions in the future.  I would not want to be an employee of either of these companies working in Russia right now, that much I’m sure of.

No responses yet

Jul 29 2014

You’ve been reported … by an ad

Published by under Government,Malware,Risk

This looks like an interesting experiment; the City of London police have started placing ads on sites for pirated music warning that the visit to the site has been recorded and reported.  Called “Operation Creative”, this is an effort by the Police Intellectual Property Crime Unit (PIPCU) to educate people visiting sites that offer pirated music and videos that it’s illegal and could result in prosecution.  As if anyone who visits a pirate site didn’t already know exactly what they were doing and what the potential consequences are.  The City of London police call it education, though intimidation might be a better word for what they’re actually doing.

The folks over at TorrentFreak are concerned that they couldn’t get the actual banners to show up.  They created a story out of what they could get: ads for music sites that have reached agreements with the RIAA and the music labels.  While this is interesting, I’m more concerned with what the results of this type of ‘education’ will be.

Let’s be honest: anyone who’s using a pirate site has a pretty good idea of what they’re doing.  So the police banners aren’t going to be educational; they’re attempts to make users believe that their IP addresses have been logged for future prosecution.  While the ads don’t make the threat directly, it is implied by the word “reported”.  And who’s to say that the ad network supplying the ads isn’t using a cookie to gather IP addresses and various other information as well?  This definitely sounds more like a threat than most forms of education I’m familiar with.
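That logging worry doesn’t require anything exotic.  Every ad request hands the server the visitor’s IP address along with any cookie and referrer headers, which is all it takes to build a visit log.  A minimal, purely illustrative sketch, with all field names hypothetical:

```python
def record_visit(client_ip, headers):
    """Build the log entry an ad server could keep for one ad load.

    Illustrative only: a real ad network records far more, but the
    point is that these values arrive with every single request.
    """
    return {
        "ip": client_ip,
        "cookie": headers.get("Cookie", ""),          # ties repeat visits together
        "referer": headers.get("Referer", ""),        # the page that displayed the ad
        "user_agent": headers.get("User-Agent", ""),  # browser fingerprint material
    }
```

Nothing here is an attack; it’s just what any web server passively sees, which is exactly why “reported” is a plausible threat.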

The problem I have with this PIPCU exercise isn’t the intimidation, but rather its unintended consequences.  Scary warnings that the user is doing something illegal aren’t new; in fact, they’ve been used by malware authors for a long, long time.  Scareware saying the FBI is going to come knocking at your door for visiting illegal websites is a common tactic; it’s just whether they tell you you’ve been to porn sites with underage models or to pirate sites to download music that changes.  I’m certain the same groups who send these notifications already have fake ads telling users to “pay a fine of $500 or we’re coming to your house”.  If they aren’t in the ad networks, they’re definitely sending spam to users with the same messages, often using the exact same graphics and wording as official police web sites.

Rather than discouraging the average pirate site user from visiting the site, this police effort is likely to create the illusion that such scareware ads might be legitimate in the eyes of the user.  In other words, while there might be some impact on the number of people using pirate sites, it’s more likely this will increase the amount of fraud perpetrated against those same users, since it’ll be hard to tell if the warning is really the police or not.  The music companies are probably perfectly happy with this as an outcome, but I doubt the police will enjoy being used as a method for increasing fraud against anyone.

My second concern is less about the fraud and more about the futility of the exercise.  Brian Krebs recently wrote about services that allow an organization to click on banner ads in order to drain the money spent on those ads.  In other words, you pay a service to click on your competitor’s ads without giving them anything of value, using up the money they paid for those ads as quickly as possible, with little or no return.  I see no reason some of the more technically savvy users of pirate sites wouldn’t create scripts to do exactly the same to the police.  How hard would it be to use VPNs or Tor to disguise IP addresses and hit the same ads again and again?  In theory there are likely to be defenses in place to stop this type of targeted ad attack, but a motivated attacker can overcome almost any defense.

I’m purposefully not addressing the ethics of pirating music, nor am I addressing the efficacy of an outdated business model such as the music industry’s.  I’ll leave it to someone else to argue both sides of that argument.  What I’m concerned with is how effective the efforts are going to be and what the consequences of those efforts will be.  Does the PIPCU expect their ad campaign to have a direct effect on piracy, or do they realize this is a futile effort?  Have they thought of the negative consequences their efforts will have with regard to fraud?  Or is this simply an effort to be seen as doing *something* by the recording companies and the public, no matter how negligible the positive outcomes might be?

I’m not sure what would constitute an effective measure to stop piracy.  For the most part I think the ads we’ve seen in the past, both in movie theaters and online, have been heavy handed and annoyed most of the people they were aimed at rather than dissuading anyone.  This effort doesn’t seem much different, but it has the added disadvantage of making it easier for the authors of scareware to intimidate the public into giving up money for no good reason.  And that’s something that should be avoided whenever possible.

No responses yet

Jul 28 2014

“Your cons are just an excuse to drink and party”

Published by under General,Humor,Social Networking

I’m sure we’ve all heard it before when trying to get approval to travel to conventions:  “This is just a boondoggle and you’re going to party the week away!”  Many people believe that the only thing that gets done at security conferences is that a lot of alcohol gets consumed and people get silly at night.  If you go by some of the things we talk about publicly, it’s no surprise that managers might believe that.  While there’s a little bit of truth in the accusations, the reality is that there’s so much more going on at conferences that we don’t talk about.

There are, obviously, the talks.  While I personally only attend two or three talks per conference, I know people who spend their entire day running from talk to talk and wish they had time to see more.  There’s a lot of research being revealed at Security Summer Camp, some of it being seen for the first time there.  It’s valuable to know what’s up and coming, what’s new and interesting and what the trends are in the security field.  The talks given at conferences are one way to find out about all of these.

A second reason to attend conferences is the contacts.  Having connections amongst your peers is easily as important as having knowledge of your field when it comes to a career in security.  There’s too much going on to know everything, and there are times when you’re going to need help, so creating and cementing the relationships that will help you over the course of a career is fundamental to your success.  This happens in the hallway track between sessions, it happens during lunches and dinners, and it happens even more during the parties at night.  Conferences provide a means to be social with like-minded individuals that simply doesn’t exist in many other venues.

And finally there’s the break from the daily routine to de-stress and relax a little.  We need to get away from the daily routine from time to time, it’s a fact of life and why we have vacations.  Conferences provide a similar function, but in addition they give us an opportunity to gain new perspectives on our routine and exchange ideas with others that can be incredibly valuable in dealing with the problems in our normal work environment.  That shift of focus can make all the difference in the world in how you tackle a problem when you return to the routine.

So, yes, the conference parties are what a lot of people think of when they hear us asking to go to a conference.  But they’re only a small part of what’s going on, and even they serve an important role as a social lubricant.  Of course, that’s assuming you’re safe and sane when drinking and don’t do something that’s going to get you in deep trouble back at the office.  There are always a few people at every conference who don’t know when to stop.  Don’t be ‘that guy’.

No responses yet

Jul 27 2014

Balancing digital privacy

Published by under General

I had an interesting conversation with a relative this week about privacy.  Which is, of course, why I’m writing about it on the blog.  The irony of the situation doesn’t escape me.  

“I’ve been listening to you and it’s made me very careful about what I put on the Internet.  I have almost no digital presence, I’ve used very little social media and what few accounts I do have are under pseudonyms, with no direct link to me.  When I do a Google search on my name, it turns up a few hits on me, then the rest of the results are of you and a friend of yours who shares my name.  The few results about me that do turn up are from competitions I was in when I was younger and I’m not directly tagged in any of the pictures.”

First of all, it’s good to know my family is listening, or at least one member of my family is.  They understand the importance of limiting what you make available on the Internet and have consciously taken steps to make sure that the only information available is data they’ve decided is unavoidable and necessary.  But I have to wonder if they haven’t taken my advice too far and limited their footprint too much.

In this day and age, it’s important to have a presence on the Internet.  We know that businesses hiring new employees, colleges looking at potential candidates and even the people you might date search the Internet to learn about you as part of the process of dealing with strangers.  And while leaving a digital trail littered with detritus about when we got drunk or stupid is a negative, having no evidence that you exist on the Internet is nearly as bad to some people and organizations.  If there’s nothing out there about you, while you may not have done anything wrong, there’s no evidence you’ve done anything right either.  And some people take a lack of presence as evidence that you’ve been up to no good.

My suggestion to my relative was to carefully cultivate a digital presence.  Make some of the positives of what you do available for people to find.  Use social media sparingly, but maintain a presence.  It’s okay to have opinions and put yourself out there, as long as you’re aware that what you say will be searchable for the foreseeable future of the Internet.  Be a real person, but be a person who controls the image they present to the world.  I was very careful to also point out that I might not be the best example of limiting your presence.

The conversation degenerated from there into creating a ‘digital persona’, a search engine friendly front that presents exactly what you want to the world and nothing more.  We all wondered about the ethics of creating a persona that’s carefully crafted for future job searches and dating.  No one in the family had a good answer for that one.

3 responses so far

Jul 21 2014

Can I use Dropbox?

Published by under Encryption,Family,Privacy,Risk

I know security is coming to the public awareness when I start getting contacted by relatives and friends about the security of products beyond anti-virus.  I think it’s doubly telling when the questions are not about how to secure their home systems but about the security of a product for their business.  Which is exactly what happened this week; I was contacted by a family member who wanted to know if it was safe to use Dropbox for business.  Is it safe, is it secure and will my business files be okay if I use Dropbox to share them between team members?

Let’s be honest that the biggest variable in the ‘is it secure?’ equation is what are you sharing using this type of service.  I’d argue that anything that has the capability of substantially impacting your business on a financial or reputational basis shouldn’t be shared using any third-party service provider (aka The Cloud).  If it’s something that’s valuable enough to your business that you’d be panicking if you left it on a USB memory stick in your local coffee shop, you shouldn’t be sharing it via a cloud provider in the first place. In many cases the security concerns of leaving your data with a service provider are similar to the dropped USB stick, since many of these providers have experienced security breaches at one point or another.

What raised this concern to the level of the general public?  It turns out it was a story in the Guardian about an interview with Edward Snowden in which he suggests that Dropbox is insecure and that users should switch to Spideroak instead.  Why?  The basic reason is that Spideroak is a ‘zero-knowledge’ product, whereas Dropbox maintains the keys to all the files that users place on its systems and could use those keys to decrypt any of them.  This fundamental difference means that Dropbox could be compelled by law to provide access to an end user’s files, while Spideroak couldn’t, because they don’t have that capability.  From Snowden’s perspective, this is the single most important feature difference between the two platforms, and who can blame him for suggesting users move?

Snowden has several excellent points in his interview, at least from the viewpoint of a security and privacy expert, but there’s one I don’t think quite holds up.  He states that Condoleezza Rice has been appointed to the board of directors for Dropbox and that she’s a huge enemy of privacy.  This argument seems to be more emotional than factual to me, since I don’t have much historical evidence on which to base Rice’s opinions on privacy.  It feels a little odd for me to be arguing that a Bush era official might not be an enemy of privacy, but I’d rather give her the benefit of the doubt than cast aspersions on Dropbox for using her experience and connections.  Besides, I’m not sure how much influence a single member of the board of directors actually has on the direction of the product and the efficacy of its privacy controls.

On the technical front, I believe Snowden is right to be concerned.  We know for a fact that Dropbox has access to the keys to decrypt users’ files; they use the keys as part of a process that helps reduce the number of identical files stored on their systems, a process called deduplication.  The fact that Dropbox has access to these keys means a few things: they can decrypt the data if they’re served with a lawful order, a Dropbox employee could possibly access the keys to get to the data, and Dropbox could potentially be feeding into PRISM or one of the many other governmental programs that want to suck up everyone’s data.  It also means that Dropbox could make a mistake that accidentally exposes the data to the outside world, which has happened before.  Of course, vulnerabilities and misconfigurations that result in a lapse of security are a risk you face when using any cloud service, and they’re not unique to Dropbox.
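The deduplication idea is easy to sketch: if the provider can see plaintext (or derive the keys for it), identical files reduce to identical hashes, and only one copy needs to be stored.  A toy illustration of the principle, not Dropbox’s actual pipeline:

```python
import hashlib

def dedup_store(files):
    """Keep one copy of each unique file body, keyed by content hash.

    Simplified sketch of deduplication: a second upload of identical
    bytes costs the provider nothing, because it hashes to a value
    already in the store. This only works when the provider can see
    (or derive keys for) the plaintext.
    """
    store = {}
    for name, body in files.items():
        digest = hashlib.sha256(body).hexdigest()
        store.setdefault(digest, body)  # identical content is stored once
    return store
```

The storage savings are real, which is why providers want key access; the privacy cost is that the same access works for lawful orders and insiders alike.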

I’ve never seen how Dropbox handles and secures the keys used to encrypt data, and they haven’t done a lot to publicize their processes.  It could be that there are considerable safeguards in place to protect the keys from internal employees and federal agencies.  I simply don’t know.  But they do have the keys.  Spideroak doesn’t, so they don’t have access to the data end users are storing on their systems; it’s that simple.  The keys that unlock the data are stored with the user, not the company, so neither employees nor governmental organizations can access the data through Spideroak.  Which is Snowden’s whole point: we should be exploring service providers who couldn’t share our data if they wanted to.  From an end-user perspective, a zero-knowledge system is vastly preferable, at least if privacy is one of your primary concerns.
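The “zero-knowledge” property comes down to where the key lives.  A rough sketch of the client-side approach, assuming a PBKDF2-style derivation (Spideroak’s real scheme differs in detail; this just shows why the provider has nothing to hand over):

```python
import hashlib
import os

def derive_client_key(passphrase, salt=None):
    """Derive an encryption key on the user's machine from a passphrase.

    The key and passphrase never leave the client; the provider only
    ever receives ciphertext, so it cannot decrypt the data even
    under a lawful order. Iteration count is illustrative.
    """
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return key, salt  # salt can be stored server-side; the key cannot be derived from it alone
```

The trade-off is on the user: lose the passphrase and the provider genuinely cannot recover your files.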

But is privacy a primary concern for a business?  I’d say no, at least in 90% of the businesses I’ve dealt with.  It’s an afterthought in some cases and in many cases it’s not even thought of until there’s been a breach of that privacy.  What’s important to most businesses is functionality and just getting their job done.  If that’s the case, it’s likely that Dropbox is good enough for them.  Most businesses have bigger concerns when dealing with the government than whether their files can be read or not: taxes, regulations, taxes, oversight, taxes, audits, taxes… the list goes on.  They’re probably going to be more concerned with the question of if a hacker or rival business can get to their data than if the government can.  To which the answer is probably not.

I personally use Dropbox all the time.  But I’m using it to sync pictures between my phone and my computer, to share podcast files with co-conspirators (also known as ‘co-hosts’) and to make it so I have access to non-sensitive documents where ever I am.  If it’s sensitive, I don’t place it in Dropbox, it’s that simple.  Businesses need to be making the same risk evaluation about what they put in Dropbox or any other cloud provider: if having the file exposed would have a significant impact to your business, it probably doesn’t belong in the cloud encrypted with someone else’s keys.

If it absolutely, positively has to be shared with someone else, there’s always the option of encrypting the file yourself before putting it on Dropbox.  While the tools still need to be made simpler and easier, it is possible to use tools like TrueCrypt (or its successor) to encrypt sensitive files separately from Dropbox’s encryption.  Would you still be as worried about a lost USB key if the data on it had been encrypted?
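The encrypt-before-you-sync workflow can also be scripted.  A sketch using the third-party `cryptography` package; the function names are mine, and a real setup would also need a plan for sharing the key out of band:

```python
# Encrypt data yourself before it ever reaches a synced folder, so the
# cloud provider's own keys never matter for this file.
from cryptography.fernet import Fernet

def encrypt_for_sync(plaintext: bytes, key: bytes) -> bytes:
    """Return ciphertext safe to drop into a synced folder; only the
    holder of `key` (which stays off the cloud) can read it."""
    return Fernet(key).encrypt(plaintext)

def decrypt_from_sync(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the plaintext on any machine that has the key."""
    return Fernet(key).decrypt(ciphertext)
```

The same logic answers the USB-stick question: a lost ciphertext blob is an annoyance, not a breach.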


One response so far

Jul 17 2014

Root my ride

Published by under Government,Hacking,Risk

If you’ve never watched the anime Ghost in the Shell (GITS) and you’re in security, you’re doing yourself a great disservice.  If nothing else, watch the Stand Alone Complex series as a primer on what we might expect from Anonymous in the future.  I know my friend Josh Corman tries to sit down to watch it every year or two in order to refresh his memory and help him understand what might be coming down the pipeline from chaotic actors.  And the authors of the manga/anime have an impressive understanding of what the future of hacking might bring in the long term.  Probably a better idea than the FBI has, at least.

Earlier this week the Guardian got a copy of an unclassified document the FBI had written up exploring the future of driverless vehicles and the dangers they pose.  The big revelation is that driverless cars could let hackers do things they couldn’t do while driving a normal car.  In other words, since they wouldn’t have to actually be driving, they could hack while the car drove itself.  Which ignores the fact that it’s already pretty easy to get someone else to drive a car for you, presumably much better than a driverless car will be able to do for many years.  If I’m going to commit a crime, I’d rather have someone I can trust at the wheel, rather than take my chances that the police might have a back door (pun intended) into my car’s operating system.

The Guardian story also hints that the FBI is concerned about driverless cars being hacked to be used as weapons.  I have to admit that this is a concern; hacking a target’s car to accelerate at the wrong time or muck with the car’s GPS so that it thinks the road goes straight when it should follow the curve of the cliff wouldn’t be a massive logical stretch.  Also doing the same to use a car to plow into a crowd or run over an individual is a possibility.  However, both of these are things an unskilled operator could do with a real car by cutting the brake lines or driving the car themselves, then running from the scene of the crime.

I think it’ll be much more interesting when driverless cars start becoming commonplace and young hackers decide they don’t like the feature set and/or controls that are present in the car.  It’s a logical extension to think that the same people who root phones and routers and televisions will eventually figure out how to re-image a car so that it has the software they want, to give the vehicle the capabilities they want.  I know the Ford Focus has a whole community built around customizing the software in the vehicle, so why would it be any different for driverless cars in the future?

The difference with the driverless car is that I could strip out many if not all of the safety protocols that will be in place, as well as the limiters on the engine and braking systems.  I want to pull off a robbery and use a driverless car for the getaway?  Okay, ignore all stoplights, step on the gas and don’t brake for anything.  You’d probably be able to rely on the safety features of other driverless cars to avoid you, and you wouldn’t have to worry about the police issuing a kill signal to your car once they’ve read your license plate and other identifying codes.  I’d still rather have an old-fashioned car with an actual driver, but at some point those might be hard to get and using one would cause suspicion in and of itself.

On the point of a kill signal, I strongly believe this will be a requirement for driverless cars in the future.  I’m actually surprised a law enforcement kill switch hasn’t already been legislated by the US government, though maybe they’re waiting to see how the public accepts smart phone kill signals first.  Around the same time as the kill switch is being made mandatory, I expect to see laws passed to make rooting your car illegal.  Which, of course, means only criminals will root their cars.  Well, them and the thousands of gear heads who also like to hack the software and won’t know or care about the law.

The FBI hasn’t even scratched the surface of what they should be concerned with about driverless cars.  Back to my initial point about Ghost in the Shell: think about what someone could do if they hacked into the kill switch system that’s going to be required by law.  Want to cause massive chaos?  Shut down every car in Los Angeles or Tokyo.  Make the cars accelerate and shut down the brakes.  Or simply change the maps the cars’ GPS is using.  There are a lot of these little chaos-producing tricks used throughout the GITS series, plus even more that could be adapted easily to the real world.

Many of these things will never happen.  The laws will almost definitely be passed and you’ll have a kill switch in your new driverless car, but there’s little chance we’ll ever see a hack of the system on a massive scale.  On the other hand, given the insecurity we’re just starting to identify in medical devices, the power grid and home networks, I’m not sure that any network that supports driverless cars will be much better secured. Which will make for a very interesting future.

No responses yet

Jul 16 2014

Patching my light bulb?

Published by under Cloud,Hacking

You know things are getting a bit out of hand when you have to patch the light bulbs in your house.  But that’s exactly what the Internet of Things is going to mean in the future.  Everything in the household, from the refrigerator to the chairs you sit in to the lights, will eventually have an IP address (probably IPv6), will have functions that activate when you walk into the room and will communicate that back out to a database on the Internet.  And every single one of them will have vulnerabilities and problems with their software that will need to be patched.  So patching your lights will only be the start of the wonders of the Internet of Things.

We already know our televisions are tracking our viewing habits.  Not just what we watch from the cable boxes, but what shows we stream, what content we download and they’re enumerating all the shares on our networks to find what’s there as well.  For each new device we add to the home network, we’re also adding a new way for our networks to be compromised, to allow an outsider into our digital home.  How many home users are going to be able to set up a network that cuts these digital devices off from what’s important on the network?  How many security conscious individuals are going to bother?

It’s interesting to watch the ‘what we can do’ run amok with little or no regard for ‘what we should do’.  Ever since the first computers were built we’ve been fighting this battle.  But as it moves from the corporate environment as the battlefront to the home environment, it’ll be interesting to see how the average citizen reacts.  Will we start seeing pressure for companies to create stable, secure products or will we simply continue to see a race to be first to market, with the mentality that “we’ll fix it later”?

One response so far

Jul 13 2014

Impostor syndrome

Published by under General,Personal

What am I doing here?  When are they going to realize I don’t know what I’m doing?  How long until they fire me for faking it?  I don’t belong with these people, they’ve actually done something, while nothing I’ve done is remarkable or interesting.  I’m not worthy of this role, of being with these people, of even working in this environment.  I’m making it up as I go along and nothing I could do would ever put me on the same level as the people around me.  How did I end up here?

I know I’m not the only one who has these thoughts.  It seems to be common in the security community and not uncommon in any group of successful people.  It’s called ‘impostor syndrome’ and it’s often considered a subset of the Dunning-Kruger effect.  Basically it’s a form of cognitive dissonance where a successful person has a hard time acknowledging his or her success and overemphasizes the many mistakes everyone makes on a daily basis.  To put it simply, it’s the thought we all have from time to time that “I’m not good enough” writ large.

It’s not hard to feel this way sometimes.  In security, we create heroes and rock stars from within our community.  We look at the researchers who discover new vulnerabilities and put them on a stage to tell everyone how great their work is.  We venerate intelligence, we stand in awe of the technical brilliance of others and wish we could do what they do.  We all tend to wonder “Why can’t I be the one doing those things?”

It’s easy to feel like this, to feel you’re not worthy.  We know the mistakes we made getting to where we are.  We know how hard it was, how rocky the road has been, where the false starts and dead ends are and all the things we didn’t accomplish in getting to where we are.  When we look at other people we only see the end results and don’t see all the trials and tribulations they went through to get there.  So it’s all too common to believe they didn’t go through exactly the same road of mistakes and failures that we did.  As if they don’t feel just as out of their depth as we do.

I don’t think there’s a cure for impostor syndrome, nor do I think there should be.  We have a lot of big egos in the security community, and sometimes these feelings are the only thing keeping them from running amok.  The flip side of impostor syndrome, illusory superiority, the feeling that you have abilities that far outstrip those you actually have, is almost worse than thinking you’re an impostor.  And I’d rather feel a little inadequate while working to be better than feel I’m more skilled than I am and stop working to get better.

If you feel like an impostor in your role as a security professional, I can almost guarantee you’re not.  The feeling of inferiority is an indicator that you think you’re capable of more and want to be worthy of the faith and trust those around you have placed in you.  You might be faking it on a daily basis, making things up as you go, but the secret is that almost all of us are doing exactly the same thing.  It’s when you know exactly what you’re doing day in and day out that you have to fight complacency and beware of illusory superiority.  It’s better to think you’re not good enough and strive for more than to think you’ve made it and are the best you can be.


Jul 10 2014

Illustrating the problem with the CAs

You’d think that if there was any SSL certificate out there that’d be carefully monitored, it’d be Google’s.  And you’d be right; between the number of users of Chrome and the Google team itself, the certs that correspond to Google properties are under a tremendous amount of scrutiny.  So when an impostor cert is issued anywhere in the world, in most cases it’s detected relatively quickly.  But the real question is, why are Certificate Authorities (CAs) able to issue false certs in the first place?  Mostly because we have to trust someone in the process of cert issuance, and in theory the CAs are the ones who are most trustworthy and best protected.  Unfortunately, there are still a lot of holes in the process and the protection of even the best CAs.

Last week Google detected an unauthorized digital certificate issued in India by the National Informatics Centre (NIC).  This week it was revealed that beyond the certs Google knew about, an indeterminate number of others had been issued by the NIC.  Their issuance process had been compromised in some way, and they’re still investigating the full scope of the compromise.  Users of Chrome were protected by certificate pinning, but users of IE and other browsers might not be so lucky.  What was done with these certificates, no one knows.  What could be done with them is primarily man-in-the-middle attacks against users of the affected domains, meaning whoever now holds these certificates could intercept and decrypt email, files, etc.  There are plenty of reasons a government or criminal element would want control of a certificate that looks and feels like an authentic Google (or Microsoft or…) certificate.

There’s no clear, clean way to improve the CA process.  Extended Validation (EV) certs are one option, but they also make the whole process of getting an SSL cert much more complex.  Given the value of privacy and the vital role certificates play in maintaining it, though, this may be the price the Internet has to pay.  Pinning certs helps, as will DANE and Sunlight (aka Certificate Transparency).  Neither DANE nor Sunlight is fully baked yet, but both should help make up for the weaknesses of current processes.  Then it’ll just take a year or three to get them into all the browsers, and even longer for older browsers to be retired.  And that’s not even taking into account the fact that we don’t use SSL everywhere.
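The pinning idea that protected Chrome users is simple enough to sketch in a few lines: instead of trusting whatever a CA signs, you record the fingerprint of the certificate you expect ahead of time and reject anything else, even if it validates cleanly.  This is a minimal sketch using Python’s standard library; the pinned value would come from your own out-of-band record, and any real deployment (like Chrome’s) pins sets of keys with rotation support, not a single cert.

```python
# Minimal sketch of certificate pinning.  A rogue but CA-signed certificate
# passes normal chain validation, yet fails here because its fingerprint
# won't match the one recorded earlier out of band.
import hashlib
import socket
import ssl

def matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Compare the SHA-256 of a DER-encoded certificate against a stored pin."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

def verify_pinned(host: str, pinned_sha256_hex: str, port: int = 443) -> bool:
    """Connect over TLS, fetch the peer's certificate, and check it against the pin."""
    ctx = ssl.create_default_context()  # normal CA validation still happens first
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return matches_pin(der_cert, pinned_sha256_hex)
```

Note the trade-off this illustrates: the pin catches mis-issued certs the CA system misses, but it also breaks connections whenever the site legitimately rotates its certificate, which is why browser pinning is limited to a short list of high-value domains.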


Jul 09 2014

Civil disobedience against surveillance

Published by under Government,Privacy,Video

Last year I moved to the UK, and I now spend a considerable amount of time in London, which means I’m often on 10, 12, 16 or more cameras at any one time.  I dislike it intensely, but it was something I knew I’d have to deal with when I moved.  There’s no evidence that cameras prevent serious crimes, or even less serious ones, and there’s little evidence they’re very useful in catching perpetrators after the fact.  They do, however, cause a lot of innocent people to modify their behavior slightly because they know they’re on camera.  It’s a subtle societal shift that most people will never even notice.

But one group has noticed, and they’re very actively doing something about it.  It’s an anti-surveillance group called Camover that started in Germany and is working its way onto the global scene.  I’d never heard of them before yesterday, when Salon wrote a story highlighting their growth into the US.  I have mixed feelings about this group and their growth; part of me wants to work to change society through lawful means, while another part wants to join in on pulling down the cameras and destroying them wherever they intrude on my ever-disappearing privacy.  No, I’m not of an anarchist bent at all, am I?

The part that bothers me is that while the members of this group probably see much of what they’re doing as relatively harmless vandalism, law enforcement probably paints them as felons and terrorists.  Yes, terrorists.  They’ll be painted as destroying the cameras that protect our freedoms and help catch terrorists.  And when they’re caught, they’ll be treated as if they were terrorists, with all the extra-legal, non-judicial treatment that surrounds that designation.  It won’t be a fun adventure for them, that much is sure.

I see a need for anarchists like this to rise up and show us that surveillance can be fought.  I think we need more people to be aware of exactly how our society is being rapidly turned into a state where our every move is watched and judged.  But I don’t think it’s worth risking disappearing into a detention center somewhere, with all of your rights suspended because an agent somewhere decided to label you as a terrorist.

