Jul 21 2014

Can I use Dropbox?

Published under Encryption, Family, Privacy, Risk

I know security is coming into public awareness when I start getting contacted by relatives and friends about the security of products beyond anti-virus.  I think it’s doubly telling when the questions are not about how to secure their home systems but about the security of a product for their business.  Which is exactly what happened this week: I was contacted by a family member who wanted to know if it was safe to use Dropbox for business.  Is it safe, is it secure, and will my business files be okay if I use Dropbox to share them between team members?

Let’s be honest: the biggest variable in the ‘is it secure?’ equation is what you’re sharing using this type of service.  I’d argue that anything with the capability of substantially impacting your business on a financial or reputational basis shouldn’t be shared using any third-party service provider (aka The Cloud).  If it’s something valuable enough to your business that you’d be panicking if you left it on a USB memory stick in your local coffee shop, you shouldn’t be sharing it via a cloud provider in the first place. In many cases the security concerns of leaving your data with a service provider are similar to the dropped USB stick, since many of these providers have experienced security breaches at one point or another.

What raised this concern to the level of the general public?  It turns out it was a story in the Guardian about an interview with Edward Snowden, in which he suggests that Dropbox is insecure and that users should switch to Spideroak instead.  Why?  The basic reason is that Spideroak is a ‘zero-knowledge’ product, whereas Dropbox maintains the keys to all the files that users place on its systems and could use those keys to decrypt any of them.  This fundamental difference means that Dropbox could be compelled by law to provide access to an end user’s files, while Spideroak couldn’t, because they don’t have that capability.  From Snowden’s perspective, this is the single most important feature difference between the two platforms, and who can blame him for suggesting users move?

Snowden makes several excellent points in his interview, at least from the viewpoint of a security and privacy expert, but there’s one I don’t think quite holds up.  He states that Condoleezza Rice has been appointed to the board of directors of Dropbox and that she’s a huge enemy of privacy.  This argument seems more emotional than factual to me, since I don’t have much historical evidence on which to base an assessment of Rice’s views on privacy.  It feels a little odd for me to be arguing that a Bush-era official might not be an enemy of privacy, but I’d rather give her the benefit of the doubt than cast aspersions on Dropbox for using her experience and connections.  Besides, I’m not sure how much influence a single member of the board of directors actually has on the direction of the product and the efficacy of its privacy controls.

On the technical front, I believe Snowden is right to be concerned.  We know for a fact that Dropbox has access to the keys to decrypt users’ files; they use the keys as part of a process that helps reduce the number of identical files stored on their system, a process called deduplication.  The fact that Dropbox holds these keys means a few things: they can decrypt the data if they’re served with a lawful order, a Dropbox employee could possibly access the keys to get to the data, and Dropbox could potentially be feeding into PRISM or one of the many other governmental programs that want to suck up everyone’s data.  It also means that Dropbox could make a mistake that accidentally exposes the data to the outside world, which has happened before.  Of course, vulnerabilities and misconfigurations that result in a lapse of security are a risk you face when using any cloud service and are not unique to Dropbox.
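Deduplication is easiest to see with a toy model.  The sketch below is illustrative only; Dropbox hasn’t published the details of its actual pipeline, and the class and method names here are invented.  It shows why a provider that can see file contents (or keys derived from them) can store a single copy of identical files uploaded by different users:

```python
import hashlib


class DedupStore:
    """Toy content-addressed store: identical uploads are kept only once."""

    def __init__(self):
        self.blobs = {}  # content digest -> stored bytes

    def put(self, data: bytes) -> str:
        # The provider must be able to see the content (or a digest of it)
        # to recognize a duplicate -- which is exactly why provider-held
        # keys and deduplication go hand in hand.
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)
        return digest

    def get(self, digest: str) -> bytes:
        return self.blobs[digest]
```

If users encrypted files with their own keys before upload, identical files would produce different ciphertexts and this trick would stop working, which is one practical reason a provider might prefer to hold the keys itself.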

I’ve never seen how Dropbox handles and secures the keys that are used to encrypt data, and they haven’t done a lot to publicize their processes.  It could be that there are considerable safeguards in place to protect the keys from internal employees and federal agencies.  I simply don’t know.  But they do have the keys.  Spideroak doesn’t, so they don’t have access to the data end users are storing on their systems; it’s that simple.  The keys which unlock the data are stored with the user, not the company, so neither employees nor governmental organizations can access the data through Spideroak. Which is Snowden’s whole point: we should be exploring service providers who couldn’t share our data if they wanted to.  From an end-user perspective, a zero-knowledge product is vastly preferable, at least if privacy is one of your primary concerns.

But is privacy a primary concern for a business?  I’d say no, at least in 90% of the businesses I’ve dealt with.  It’s an afterthought in some cases, and in many cases it’s not even thought of until there’s been a breach of that privacy.  What’s important to most businesses is functionality and just getting the job done.  If that’s the case, it’s likely that Dropbox is good enough for them.  Most businesses have bigger concerns when dealing with the government than whether their files can be read or not: taxes, regulations, taxes, oversight, taxes, audits, taxes… the list goes on.  They’re probably going to be more concerned with whether a hacker or rival business can get to their data than whether the government can.  To which the answer is: probably not.

I personally use Dropbox all the time.  But I’m using it to sync pictures between my phone and my computer, to share podcast files with co-conspirators (also known as ‘co-hosts’) and to make sure I have access to non-sensitive documents wherever I am.  If it’s sensitive, I don’t place it in Dropbox; it’s that simple.  Businesses need to make the same risk evaluation about what they put in Dropbox or any other cloud provider: if having the file exposed would have a significant impact on your business, it probably doesn’t belong in the cloud encrypted with someone else’s keys.

If it absolutely, positively has to be shared with someone elsewhere, there’s always the option of encrypting the file yourself before putting it on Dropbox.  While the tools still need to be made simpler and easier, it is possible to use tools like TrueCrypt (or its successors) to encrypt sensitive files separately from Dropbox’s encryption.  Would you still be as worried about a lost USB key if the data on it had been encrypted?
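To make the encrypt-before-upload idea concrete, here’s a minimal sketch.  It builds a keystream from HMAC-SHA256 in counter mode purely for illustration; it has no authentication and hasn’t been reviewed, so for real files use a vetted tool (a TrueCrypt-style container, GPG, etc.) rather than anything like this:

```python
import hashlib
import hmac
import os


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream from HMAC-SHA256 in counter mode (illustrative PRF)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh per file; stored alongside the ciphertext
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))


def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))
```

Only the nonce-prefixed ciphertext ever reaches Dropbox; the key stays with you, which is the whole point.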


Jul 17 2014

Root my ride

Published under Government, Hacking, Risk

If you’ve never watched the anime Ghost in the Shell (GITS) and you’re in security, you’re doing yourself a great disservice.  If nothing else, watch the Stand Alone Complex series as a primer on what we might expect from Anonymous in the future.  I know my friend Josh Corman tries to sit down to watch it every year or two in order to refresh his memory and help him understand what might be coming down the pipeline from chaotic actors.  And the authors of the manga/anime have an impressive understanding of what the future of hacking might bring in the long term.  Probably a better idea than the FBI has, at least.

Earlier this week the Guardian got a copy of an unclassified document the FBI had written up exploring driverless vehicles and the dangers they pose. Their big revelation is that driverless cars could let hackers do things they couldn’t do while driving a normal car.  In other words, since they wouldn’t have to actually be driving, they could hack while the car drove itself.  Which ignores the fact that it’s already pretty easy to get someone else to drive a car for you, presumably much better than a driverless car will be able to for many years.  If I’m going to commit a crime, I’d rather have someone I can trust at the wheel than take my chances that the police might have a back door (pun intended) into my car’s operating system.

The Guardian story also hints that the FBI is concerned about driverless cars being hacked for use as weapons.  I have to admit that this is a concern; hacking a target’s car to accelerate at the wrong time, or mucking with the car’s GPS so that it thinks the road goes straight when it should follow the curve of the cliff, wouldn’t be a massive logical stretch.  Using the same tricks to make a car plow into a crowd or run over an individual is also a possibility.  However, both of these are things an unskilled operator could do with a real car by cutting the brake lines or driving the car themselves, then running from the scene of the crime.

I think it’ll be much more interesting when driverless cars start becoming commonplace and young hackers decide they don’t like the feature set and/or controls that are present in the car.  It’s a logical extension to think that the same people who root phones and routers and televisions will eventually figure out how to re-image a car so that it has the software they want, to give the vehicle the capabilities they want.  I know that the Ford Focus has a whole community built around customizing the software in the vehicle, so why would it be any different for driverless cars in the future?

The difference with the driverless car will be that I could strip out many if not all of the safety protocols that will be in place, as well as the limiters on the engine and braking systems.  I want to pull off a robbery and use a driverless car for the getaway?  Okay: ignore all stoplights, step on the gas and don’t brake for anything.  You’d probably be able to rely on the safety features of other driverless cars to avoid you, and you wouldn’t have to worry about the police issuing a kill signal to your car once they’ve read your license plate and other identifying codes.  I’d still rather have an old-fashioned car with an actual driver, but at some point those might be hard to get, and using one would arouse suspicion in and of itself.

On the point of a kill signal, I strongly believe this will be a requirement for driverless cars in the future.  I’m actually surprised a law enforcement kill switch hasn’t already been legislated by the US government, though maybe they’re waiting to see how the public accepts smart phone kill signals first.  Around the same time as the kill switch is being made mandatory, I expect to see laws passed to make rooting your car illegal.  Which, of course, means only criminals will root their cars.  Well, them and the thousands of gear heads who also like to hack the software and won’t know or care about the law.

The FBI hasn’t even scratched the surface of what they should be concerned with about driverless cars.  Back to my initial point about Ghost in the Shell: think about what someone could do if they hacked into the kill switch system that’s going to be required by law.   Want to cause massive chaos?  Shut down every car in Los Angeles or Tokyo.  Make the cars accelerate and shut down the brakes.  Or simply change the maps the car’s GPS is using.  There are a lot of these little chaos-producing tricks used throughout the GITS series, plus even more that could be adapted easily to the real world.

Many of these things will never happen.  The laws will almost definitely be passed and you’ll have a kill switch in your new driverless car, but there’s little chance we’ll ever see a hack of the system on a massive scale.  On the other hand, given the insecurity we’re just starting to identify in medical devices, the power grid and home networks, I’m not sure that any network that supports driverless cars will be much better secured. Which will make for a very interesting future.



Jul 16 2014

Patching my light bulb?

Published under Cloud, Hacking

You know things are getting a bit out of hand when you have to patch the light bulbs in your house.  But that’s exactly what the Internet of Things is going to mean in the future.  Everything in the household from the refrigerator to the chairs you sit in to the lights will eventually have an IP address (probably IPv6), will have functions that activate when you walk into the room and will communicate that back out to a database on the Internet.  And every single one of them will have vulnerabilities and problems with their software that will need to be patched.  So patching your lights will only be the start of the wonders of the Internet of Things.

We already know our televisions are tracking our viewing habits.  Not just what we watch from the cable boxes, but what shows we stream, what content we download and they’re enumerating all the shares on our networks to find what’s there as well.  For each new device we add to the home network, we’re also adding a new way for our networks to be compromised, to allow an outsider into our digital home.  How many home users are going to be able to set up a network that cuts these digital devices off from what’s important on the network?  How many security conscious individuals are going to bother?

It’s interesting to watch the ‘what we can do’ run amok with little or no regard for ‘what we should do’.  Ever since the first computers were built we’ve been fighting this battle.  But as it moves from the corporate environment as the battlefront to the home environment, it’ll be interesting to see how the average citizen reacts.  Will we start seeing pressure for companies to create stable, secure products or will we simply continue to see a race to be first to market, with the mentality that “we’ll fix it later”?



Jul 13 2014

Impostor syndrome

Published under General, Personal

What am I doing here?  When are they going to realize I don’t know what I’m doing?  How long until they fire me for faking it?  I don’t belong with these people, they’ve actually done something, while nothing I’ve done is remarkable or interesting.  I’m not worthy of this role, of being with these people, of even working in this environment.  I’m making it up as I go along and nothing I could do would ever put me on the same level as the people around me.  How did I end up here?

I know I’m not the only one who has these thoughts.  It seems to be common in the security community and not uncommon in any group of successful people.  It’s called ‘impostor syndrome’ and it’s often considered a subset of the Dunning-Kruger effect.  Basically it’s a form of cognitive dissonance where a successful person has a hard time acknowledging his or her success and overemphasizes the many mistakes everyone makes on a daily basis.  To put it simply, it’s the thought we all have from time to time that “I’m not good enough” writ large.

It’s not hard to feel this way sometimes.  In security, we create heroes and rock stars from within our community.  We look at the researchers who discover new vulnerabilities and put them on a stage to tell everyone how great their work is.  We venerate intelligence, we stand in awe of the technical brilliance of others and wish we could do what they do.  We all tend to wonder “Why can’t I be the one doing those things?”

It’s easy to feel like this, to feel you’re not worthy.  We know the mistakes we made getting to where we are.  We know how hard it was, how rocky the road has been, where the false starts and dead ends are and all the things we didn’t accomplish in getting to where we are.  When we look at other people we only see the end results and don’t see all the trials and tribulations they went through to get there.  So it’s all too common to believe they didn’t go through exactly the same road of mistakes and failure that we did.  As if they don’t feel just as out of their depth as we do.

I don’t think there’s a cure for impostor syndrome, nor do I think there should be.  We have a lot of big egos in the security community and sometimes these feelings are the only thing keeping them from running amok.  The flip side of impostor syndrome, illusory superiority, the feeling that your abilities far outstrip what you actually have, is almost worse than thinking you’re an impostor.  And I’d rather feel a little inadequate while working to be better than to feel I’m more skilled than I am and stop working to get better.

If you feel like an impostor in your role as a security professional, I can almost guarantee you’re not.  The feeling of inferiority is an indicator that you think you’re capable of more and want to be worthy of the faith and trust those around you have put into you.  You might be faking it on a daily basis, making things up as you go, but the secret is that almost all of us are doing the exact same thing.  It’s when you know exactly what you’re doing day in and day out that you have to be careful to fight complacency and beware of illusory superiority.  It’s better to think you’re not good enough and strive for more than to think you’ve made it and are the best you can be.



Jul 10 2014

Illustrating the problem with the CA’s

You’d think that if there was any SSL certificate out there that’d be carefully monitored, it’d be Google’s.  And you’d be right; between the number of users of Chrome and the Google team itself, the certs that correspond to Google properties are under a tremendous amount of scrutiny.  So when an impostor cert is issued anywhere in the world, it’s detected relatively quickly in most cases.  But the real question is, why are Certificate Authorities (CA’s) able to issue false certs in the first place?  Mostly because we have to trust someone in the process of cert issuance, and in theory the CA’s are the ones who are the most trustworthy and best protected.  Unfortunately, there are still a lot of holes in the process and the protection of even the best CA’s.

Last week Google detected an unauthorized digital certificate issued in India by the National Informatics Centre (NIC). This week it was revealed that not only were the certs Google knew about issued, but an indeterminate number of other certs had been issued by the NIC as well.  Their issuance process had been compromised in some way and they’re still investigating the full scope of the compromise.  Users of Chrome were protected due to certificate pinning, but users of IE and other browsers might not be so lucky. What was done with these certificates, no one knows.  What could be done with them is primarily man-in-the-middle attacks against users of any of the affected services, meaning the entity that now holds these certificates could intercept and decrypt email, files, etc.  There are plenty of reasons a government or criminal element would want control of a certificate that looks and feels like an authentic Google (or Microsoft or…) certificate.
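Certificate pinning, the mechanism that protected Chrome users here, boils down to a simple check: trust a specific certificate fingerprint for a host, not just any cert a CA happens to sign.  A rough sketch, with hypothetical host names and placeholder fingerprints:

```python
import hashlib

# Hypothetical pins: SHA-256 fingerprints of the DER-encoded certificates
# we expect for each host (placeholder values, not real fingerprints).
PINNED = {
    "mail.example.com": {"a" * 64},
}


def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()


def is_pinned(host: str, der_cert: bytes) -> bool:
    """Accept the cert only if it matches a recorded pin for the host.
    With no pin recorded, fall back to ordinary CA validation."""
    pins = PINNED.get(host)
    if pins is None:
        return True
    return fingerprint(der_cert) in pins
```

A forged cert for a pinned host fails this check even though it chains to a trusted CA, which is how bogus certs for well-known properties get caught quickly.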

There’s no clear, clean way to improve the CA process.  Extended Validation (EV) certs are one way, but they also make the whole process of getting an SSL cert much more complex.  But given the value of privacy and how certificates play a vital role in maintaining it, this may be the price the Internet has to pay.  Pinning certs helps, as will DANE and Sunlight (aka Certificate Transparency).  Neither DANE nor Sunlight is fully baked yet, but they should both help make up for the weaknesses of current processes.  Then it’ll just take a year or three to get them into all the browsers, and even longer for older browsers to be retired.  And that’s not even taking into account the fact that we don’t use SSL everywhere.



Jul 09 2014

Civil disobedience against surveillance

Published under Government, Privacy, Video

Last year I moved to the UK and now spend a considerable amount of time in London.  Therefore I’m often on 10, 12, 16 or more cameras at any one time.  I dislike it intensely, but it was something I knew I’d have to deal with when I moved.  There’s no evidence that cameras prevent any serious crimes or even less serious ones, and there’s little evidence they’re very useful in catching perpetrators after the fact.  They do, however, cause a lot of innocent people to modify their behaviors slightly since they know they’re on camera.  It’s a subtle societal shift that most people will never even notice.

But one group has noticed and they’re very actively doing something about it.  It’s an anti-surveillance group called Camover that started in Germany and is working its way onto the global scene.  I’d never heard of them before yesterday, when Salon wrote a story highlighting their growth into the US.  I have mixed feelings about this group and their growth; part of me wants to work to change society through lawful means, while another part wants to join in on pulling down the cameras and destroying them wherever they intrude on my ever-disappearing privacy.  No, I’m not of an anarchist bent at all, am I?

The part that bothers me is that while the members of this group probably see much of what they’re doing as a bit of relatively harmless vandalism, law enforcement probably paints them as felons and terrorists.  Yes, terrorists.  They’ll be painted as destroying the cameras that protect our freedoms and help catch terrorists.  And when they’re caught, they’ll be treated as if they are terrorists, with all the extra-legal, non-judicial treatment that surrounds that designation.  It won’t be a fun adventure for them, that much is sure.

I see a need for anarchists like this to rise up and show us that surveillance can be fought.  I think we need more people to be aware of exactly how our society is being rapidly turned into a state where our every move is watched and judged.  But I don’t think it’s worth risking disappearing into a detention center somewhere, with all of your rights suspended because an agent somewhere decided to label you as a terrorist.



Jul 08 2014

What to see at Security Summer Camp

Published under Hacking, Public Speaking

It’s coming, and there’s no avoiding it.  That week in Las Vegas when security practitioners from across the globe come together to attend Black Hat, Defcon and BSides LV.  We jokingly call it security summer camp, but if you set foot outside of the hotels and casinos in the heat of the day, chances are you’ll fry your brain and that lily white skin hackers, and people living in London, seem to cultivate so well.  It’s probably the biggest gathering of serious security professionals, less serious security practitioners and general troublemakers from nearly every country in the world and people come to see the talks, catch up with old friends, make new friends and party.  It should probably be called the security frat party, but that’d be even harder to get past bosses and accounting departments than it already is.

Personally, the social aspects of the event are why I go to conferences.  Not the parties, though I drink more at these events than I would normally, but instead the meetings with friends to find out what they’ve been up to, what they’re working on and what the tides of change have brought during the previous year or so.  I go to a few talks at each event, but the reality is between the podcasting and my social circles, if there’s a really good talk, I can probably arrange to talk to the speaker face to face.  And in most cases, you can too, if you’re willing to put yourself out there and treat the speaker with a modicum of respect while hunting them down.  Just don’t be too stalker-ish about it.   Most of the people who talk at these events are approachable, especially if you buy them a drink and treat them like people.

But I do try to make a few talks every event, simply because there are still some things that are better experienced watching a person present on stage.  I understand how a vulnerability works better if I can talk to the researcher, but seeing the narrative a storyteller develops, seeing the persona they project on stage, is a totally different experience than talking to them once their energy level has returned to its normal steady state.  And a few people in the security industry are such showmen that it’s worth seeing their talk even if you can talk to them in person later.  Or maybe because of it.

In any case, here’s my short list of the talks I’m going to try to see during the week:

Black Hat, August 6th, 09:00 – CyberSecurity as Realpolitik, Dan Geer

Black Hat, August 6th, 14:15 – Government as Malware Authors, Mikko Hypponen

Black Hat, August 6th, 15:30 – Pulling Back the Curtain at Airport Security, Billy Rios

Defcon, August 8th, 14:00 – Defcon Comedy Jam – aka The Fail Panel – I’ve been helping on this one for a few years.  Expect bad behavior

Defcon, August 9th, 10:00 – Mass Scanning the Internet, Graham, McMillan, Tentler

Defcon, August 9th, 12:00 – Don’t DDoS Me, Bro: Practical DDoS Defense,  Self, Berrell

And one I can’t see because I’ll be headed to the airport

Defcon, August 10th 15:00 – Elevator Hacking, Ollam and Payne

I haven’t seen the BSides talk tracks yet, but I’ll update the post once I do.




Jul 07 2014

Intrusive Healthcare

Published under Big Data, Privacy

Soon your doctor may be giving you a call to discuss your buying habits and what they mean for your health.  Carolinas HealthCare is starting a program that looks at your buying habits based on public records, store loyalty programs and credit card purchases.  Most of that is information we thought was supposed to be private and protected by law, but it turns out to be accessible by anyone with enough money and the big-data computing power to comb through it all.

On the surface, this effort is laudable.  Your doctor and your health care provider have a vested interest in helping you develop good habits such as exercise and taking your prescriptions regularly.  The better your health, the happier your life tends to be and the less money they have to spend on you overall.  It makes sense when you look at it as a long term trend to combat a nation that’s growing wider all the time and it’s an extension of trying to push for more proactive health care overall.  But the potential for abuse is simply staggering!

One of the examples used in the Business Week article describes an asthmatic who’s in the emergency room, so the doctor checks to see if he’s been buying cigarettes, the pollen count where he lives, etc.  Why would giving a hospital and the doctor this level of access into a patient’s life ever be thought of as a good idea?  The number of things that could go wrong with this boggles the mind.  Yes, most doctors are ethical and wouldn’t take advantage of the data.  But it doesn’t take much for the temptation offered by this level of access into a patient’s life to blossom into a form of cyber-voyeurism. It wouldn’t take much self-justification to turn the best of intentions into intrusiveness that’s inappropriate at the best of times.  I don’t want to get a call from my doctor when I pick up an extra tub of Ben & Jerry’s Chocolate Fudge Brownie at the store.  (It was for the Spawn, honest!)

The potential for abuse by doctors is just the first direct problem I have with my data being shared with health care providers.  If doctors have access to my non-healthcare data, who else is going to have access to it?  I’m sure the billing department would love to have a direct line to the information as well, so they could hunt me down if I was late making a payment or vet me before authorizing an expensive procedure.  There’s also all the administrators of the systems and everyone who has access to those systems when they’re left unlocked around the hospital.

The biggest worry I have, though, is actually the third parties who’d want the data.  Hospitals are already a tempting target for evildoers of all kinds because of the data they have.  If we add credit card and loyalty card data to that mix, it becomes the ultimate treasure trove of identity and financial data.  While hospitals try to keep their networks secure, when it comes down to it, the ability of a doctor to access data in order to save a life trumps security by an order of magnitude, so security comes in a distant second.  So why would we think it’s a good idea to pool even more of our data in these facilities?

Final thought:  why are the credit card companies and store loyalty programs even allowed to sell access to this data in the first place?  Inquiring minds would like to know.



Jul 06 2014

The dominoes of Internet Balkanization are falling

Published under Cloud, Government, Hacking, Privacy, Risk

We knew it was coming; it was inevitable.  The events put in motion last June played right into the hands of the people who wanted to cement their control, giving them every excuse to seize power and claim they were doing it in defense of their people and their nation.  Some might even say it was always destined to happen; it was just a matter of how soon and how completely.  What am I talking about?  The Balkanization of the Internet.  It’s happening now, and with Russia entering the competition to see who can control the largest chunk most completely, it’s only a matter of time before others follow the lead and make the same changes within their own countries.

Let’s make no mistake here: for as long as the Internet has existed, there have been countries and governments that wanted to circumscribe their boundaries in the virtual domain and create an area where they control the content, control what the people can and can’t see, and have the ability to see everything everyone is looking at.  But prior to the last year, very few countries other than China and a few in the Middle East had either the political impulse or the technical means to filter what came into and out of their borders.  China had this power because it recognized early on the threat the Internet posed; the countries in the Middle East have comparatively limited Internet access to begin with, so filtering and controlling that access is a relatively easy exercise.  In both cases though, the efforts have been coarse, with plentiful ways to circumvent them, including the use of Tor.  Though it now looks like Tor itself has long been subverted by the US government for spying as well.

But then Edward Snowden came forth with a huge cache of documents from inside the NSA.  And it turned out all the things that the US had long been shaking its finger at other governments about, things that the US considered to be immoral and foreign to individual freedoms, were the exact things that the NSA had been doing all along.  Sure, it was only foreigners.  Oh, and only ‘people of interest’.  And people with connections to people of interest.  Four or five degrees of connection, that is.  And foreign leaders.  And … the list goes on.  Basically, the logical justification was that anyone could be a terrorist, so rather than taking a chance that someone might slip through the cracks, everyone had become a suspect and their traffic on the Internet was to be collected, categorized and collated for future reference, just in case.  Any illusion of moral superiority or personal freedom from monitoring was blown to shreds. The carefully constructed arguments American politicians used to assume the high ground and tell other countries what they should and should not do were torn down, and America suddenly became the bad guy of the Internet.  Not that everyone who knew anything about the Internet hadn’t already suspected this had been going on all along, or that the US is far from the only country performing this sort of monitoring of the world.  Every government is monitoring its people to one degree or another; the USA and the NSA were simply the ones who got their hands caught in the cookie jar.

The cries to stop data from being sent to the USA have been rising and falling since June and Mr. Snowden’s revelations.  At first they were strident, chaotic and impassioned.  And unreasonable.  But as time went by, people gave it more thought and many realized that stopping data on the Internet from flowing to the USA, in the Internet’s current form, was near unto impossible.  The Internet’s most basic routing protocols make it nearly impossible to determine ahead of time what path a packet will take to its destination; traffic sometimes circumnavigates the globe in order to reach a destination a couple hundred miles away.  That didn’t stop Brazil from demanding that all traffic in their country stay on servers in their country, though they quickly realized that this was an impossible demand.  Governments and corporations across the European Union have been searching for ways to ensure that data in Europe stays in Europe, though the European Data Protection Directive has been hard pressed to keep up with the changing situation.

And now Russia has passed a law through both houses of its Parliament that would require companies serving traffic within Russia to store that data in Russia and log it for at least six months, starting in September 2016.  They’re also putting pressure on Twitter and others to limit and block content concerning actions in Ukraine, attempting to stop any voice of dissent from being heard inside Russia.  For most companies doing business there, this won’t be an easy law to comply with, either from a technical viewpoint or from an ethical one.  The infrastructure needed to retain six months of data in country is no small endeavor; Yandex, a popular search engine in Russia, says it will take more than two years to build the data centers required to fulfill the law’s mandates.  Then there’s the ethical part of the equation: how, and by whom, will these logs be accessed by the Russian government?  Will a court order be necessary, or will the FSB be able to simply knock at a company’s door and ask for everything?  Given the cost of building an infrastructure within Russian borders (and of the people to support it, an additional vulnerability) and the ethical questions the law raises, how does this change the equation of doing business in Russia for companies on the Internet?  Is it still possible to do business in Russia?  Is the business potential too great to pull out now?  Or do companies serve their traffic from outside Russia and hope they don’t get blocked by the Great Firewall of Russia, the next obvious step in this evolution?

Where Brazil had to bow to the pressure of international politics and didn’t have the business potential to force Internet companies to place servers within its borders, Russia does.  The affluent ruling class of Russia has money to burn; many of them make the US ’1%’ look poor.  There are enough start-ups and hungry corporations in Russia more than willing to take a chunk of what’s now being served by Twitter, Google, Facebook and all the other American mega-corporations of the Internet.  And if international pressure over what’s happening in Ukraine doesn’t even make Russia blink, there’s nothing the international community can do about Internet Balkanization.

Once Russia has proven that the Balkanization of the Internet is a possibility, and even a logical future for the Internet, it won’t take long for other countries to follow.  Smaller countries will follow quickly, the EU will create laws requiring many of the same features that Russia’s law does, and eventually even the US will require companies within its borders to retain information where the government will have easy access to it.  The price to companies ‘in the Cloud’ will skyrocket as the Cloud itself has to be instantiated within individual regions and the economy of scale it currently enjoys is brought down by the required fracturing.  And eventually much of the innovation and money created by the great social experiment of the Internet will grind to a halt, as only the largest companies will have the resources needed to be available on a global scale.

 



Jun 18 2014

Network Security Podcast, Episode 332


We’d suspected this day would come for quite some time, but it’s time to make it official: the Network Security Podcast will no longer be a regular, weekly podcast, and Rich Mogull and Zach Lanier will no longer be a consistent part of it.  The podcast will continue in some form, but it’ll be Martin doing any of the publishing.  Which isn’t really all that big of a change anyway.

Basically, all three of us have become incredibly busy in the last year.  Zach has a wedding to plan, a new job and has moved again.  Rich has more business and work than any time in living memory and has had to cut out anything not related to work or family.  And Martin moved to Europe and is on the road close to 50% of the time, further complicating everything.

There will still be microcasts and occasional interviews published through the podcast site, but for the most part we’re shutting down production.  It’s a sad day, as we’ve been doing this podcast in one form or another for nearly nine years.  We’ll miss talking to each other and to our audience, but the needs of life have intervened and require our attention elsewhere.  You can catch all three of us at various conferences, either presenting or attending, and know that we’ve always loved hearing feedback from you.

Keep an eye and ear open as there are already plans in process for what comes next.  You didn’t think Martin could stop talking, did you?

Network Security Podcast, Episode 332 – The End of an Era

Time: 50:58

 

Show Notes:


