Archive for the 'Risk' Category

Mar 07 2014

You have been identified as a latent criminal!

This afternoon, while I ate lunch, I watched a new-to-me anime called Psycho-Pass.  The TL;DR summary of the show: in the future, everyone is chipped and constantly monitored, and if their Crime Coefficient rises too high, they are arrested for the good of society.  It doesn’t matter whether they’ve actually committed a crime; if the potential that they will commit one exceeds the threshold set by the computer, they’re arrested, or killed if they resist arrest.  Like many anime, it sounds like a dystopian future that could never happen.  Except when I got back to my desk, I saw Bruce Schneier’s post, Surveillance by Algorithm.  And once again, what I thought was an impossible dystopian future seems like a probable dystopian present.

As Bruce points out, we already have Google and Amazon suggesting search results and purchases based on our prior behaviours online.  With every search I make, they build up a more detailed and accurate profile of what I like, what I’ll buy and, by extension, what sort of person I am.  They aren’t using people to do this; an extensive and thoroughly thought-out algorithm measures my every action to create a statistically accurate profile of my likes and dislikes, in order to offer up what I might want to buy next based on what I’ve purchased in the past.  Or it would, if I didn’t purposefully share an account with my wife in order to confuse Amazon’s profiling software.
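As a toy illustration of why the shared-account trick works, here is a recommender reduced to its dumbest form, counting purchases.  The items are invented and real systems are vastly more sophisticated, but the muddying effect is the same:

```python
# Toy sketch of purchase profiling, and why a shared account muddies it.
# The purchases are invented for illustration.
from collections import Counter

my_purchases    = ["sci-fi novel", "mechanical keyboard", "anime boxset"]
wifes_purchases = ["gardening gloves", "cookbook", "gardening gloves"]

profile = Counter(my_purchases + wifes_purchases)
print(profile.most_common(2))
# [('gardening gloves', 2), ('sci-fi novel', 1)] -- the recommender now
# believes this 'one person' is mostly a gardener, which is the point.
```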

Google is a lot harder to fool, and they have access to a lot more of the data that reveals the true nature of who I am, what I’ve done and what I’m planning to do.  They have every personal email, my calendar and my searches; in fact, about 90% of what I do online either goes directly through Google or is indexed by Google in some way.  Even my own family and friends probably don’t have as accurate a picture of who I really am behind the mask as Google would, if it chose to create a psychological profile of me.  You can cloud the judgement of people, since they apply their own filters that interfere with a valid assessment of others, but a well-written computer algorithm takes the biases of numerous coders and tries to even them out, creating an evaluation that’s closer to reality than that of most people.

It wouldn’t take much for a government, whether the US, the UK or any other, to start pushing for an algorithm that evaluates the mental health and criminal index of every user on the planet and alerts the authorities when something bad is being planned.  Another point Bruce makes is that this isn’t considered ‘collection’ by the NSA, since they wouldn’t necessarily have any of the data until an alert had been raised and a human began to review it.  It would begin as something seemingly innocuous, probably wrapped in the same logical fallacies that governments already use to justify ‘protection mechanisms’: “We just want to catch the paedophiles and terrorists; if you’re not a paedophile or terrorist, you have nothing to fear.”  After all, these are the exact phrases that have been used to create any number of organizations and mechanisms, including the TSA and the NSA itself.  And they’re all the more powerful because there is a strong core of truth to them.

But what that doesn’t address are a few fatal flaws in any such system based on a behavioural algorithm.  First of all, inclination, or even intent, doesn’t equal action.  Our society long ago established that the thought of doing something isn’t the same as doing it, whether the thought is well-intentioned or malign.  If I mean to call my mother back in the US every Sunday, the thought doesn’t count unless I actually follow through and do so.  And if I want to run over a cyclist who’s slowing down traffic, it really doesn’t matter unless I nudge the steering wheel to the left and hit them.  Intent to commit a crime is not the same as the crime itself, until I start taking the steps necessary to carry it out, such as purchasing explosives or writing a plan to blow something up.  If we ever start allowing algorithms to mark people as potential criminals, and treating them as such before they’ve committed a crime, we’ll have lost something essential to the human condition.

A second problem is that the algorithms are going to be created by people.  People who are fallible and biased.  Even if the individual biases are compensated for, the biases of the culture are going to be evident in any tool that’s used to detect thought crimes.  This might not seem like much of a problem if you’re an American who agrees with mainstream American values, but what if you’re not?  What if you’re GLBT?  What if you have an open relationship?  Or like pain?  What if there’s some aspect of your life that falls outside what the mainstream of our society considers acceptable?  Almost everyone has some aspect of their life they keep private because it doesn’t meet societal norms on some level; it’s a natural part of being human and fallible.  Additionally, actions and thoughts that are perfectly innocuous in the US can become serious crimes if you travel to the Middle East, Asia or Africa, and vice versa.  On the issue of sexual orientation, we only have to look at the recent Olympics and the laws passed in Russia effectively criminalising non-heterosexual orientations.  And we have numerous examples of laws passed in the US only to be judged unfair by more modern standards, Prohibition being one of the most prominent.  Using computer algorithms to uncover people’s hidden inclinations would have a disastrous effect on both individuals and society as a whole.

Finally, there are the twin ideas of false positives and false negatives.  If you’ve ever run an IDS, WAF or any other type of detection and blocking mechanism, you’re intimately familiar with these concepts.  A false positive is an alert that erroneously tags something as malicious when it’s not.  It might be a coder who happens to use a string that matches one of your detection signatures, so your IDS flags his work as an attack.  Or it might be a horror writer looking up some gruesome technique that the villain in his latest novel is going to use to kill his victims.  In either case, it’s relatively easy to identify a false positive, though a false positive from a behavioural algorithm has the potential to ruin a person’s life before all is said and done.
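To make the concept concrete, here is a toy sketch of substring-signature matching.  The signatures and payloads are invented, and real IDS rules are far more sophisticated, but the failure modes are the same:

```python
# Minimal sketch of substring-signature matching; signatures are made up.
SIGNATURES = ["DROP TABLE", "/etc/passwd", "<script>"]

def is_malicious(payload: str) -> bool:
    """Flag any payload containing a known-bad substring."""
    return any(sig in payload for sig in SIGNATURES)

# A developer chatting about SQL trips the detector:
print(is_malicious("remember: never run DROP TABLE in prod"))  # True  (false positive)
# A real attack that avoids the exact strings sails through:
print(is_malicious("SELECT passwd FROM users;--"))             # False (false negative)
```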

Much more pernicious are false negatives.  A false negative is when your detection mechanism has failed to catch an indicator and therefore hasn’t alerted you.  It’s much harder to find and understand false negatives, because you don’t know if you’re failing to detect a legitimate attack or if there are simply no malicious attacks to catch.  It’s hard enough to detect false negatives when dealing with network traffic, but when you’re dealing with people who are consciously trying to avoid displaying any of the triggers that would raise alerts, false negatives become much harder to detect and the consequences become much greater.  A large part of spycraft is avoiding any behaviour that would alert other spies to what you are; the same ideas apply to terrorists or criminals of any stripe with a certain level of intelligence.  The most successful criminals are the ones who make every attempt to blend into society and appear to be just like every other successful businessman around them.  The consequence of believing your computer algorithms have identified every potential terrorist is that you stop looking for the people who might be off the grid for whatever reason.  You learn to rely too heavily on the algorithm to the exclusion of everything else, a consequence we’ve already seen.
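There’s also a base-rate problem lurking under both flaws: when the behaviour you’re hunting is rare, even an impressively accurate algorithm buries its true alerts under false ones.  The numbers below are invented purely for illustration:

```python
# Base-rate sketch: even a very accurate detector drowns in false alarms
# when what it hunts is rare.  All numbers below are invented.
population     = 300_000_000    # people monitored
actual_bad     = 3_000          # genuinely dangerous (1 in 100,000)
detection_rate = 0.99           # fraction of real threats flagged
false_pos_rate = 0.001          # fraction of innocents flagged anyway

true_alerts  = actual_bad * detection_rate                  # ~2,970
false_alerts = (population - actual_bad) * false_pos_rate   # ~300,000

print(f"Odds a given alert is real: {true_alerts / (true_alerts + false_alerts):.1%}")
# Roughly 1%: about 99 of every 100 flagged 'latent criminals' are innocent,
# while 1% of the real threats still slip through as false negatives.
```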

So much of what goes on in society is a pendulum that swings back and forth as we adjust to changes in our reality.  Currently, we have a massive change in technologies that allow for surveillance far exceeding anything that’s ever been available in the past.  The thought that the pendulum might swing to the point of having chips in every person’s head that tell the authorities when we start thinking thoughts that are a little too nasty is a far-fetched scenario, I’ll admit.  But the thought that the NSA might have a secret data center in the desert that runs a complex algorithm against every packet and phone call made in the US and the world to detect potential terrorists or criminals isn’t.  However well-intentioned the idea might be, the failings of the technology, the failings of the people implementing it and its impact on basic human rights and freedoms aren’t just things that should be considered; they’re issues facing us right now that must be discussed.  I, for one, don’t want to live in a world of “thought police” and “Minority Report”, but that is where this slippery slope leads.  Rather than our Oracle being a group of psychics, it might be a computer program written by … wait for it … Oracle.  And if you’ve ever used Oracle software, that should scare you as much as anything else I’ve written.

 


Mar 05 2014

DDoS becoming a bigger pain in the …

Published under Cloud, General, Hacking, Risk

I’m in the middle of writing the DDoS section of the 2013 State of the Internet Report, which means I’m spending a lot of time thinking about how DDoS is affecting the Internet (it wouldn’t be all that valuable if I didn’t put some thought into it, now would it?).  Plus I just got back from RSA, where I interviewed DOSarrest’s Jag Bains and talked to our competitors at the show.  Akamai finally closed the deal on Prolexic about three weeks ago, so my new co-workers are starting to get more involved and becoming more available.  All of which means there’s a ton of DDoS information available at my fingertips right now, and the story it tells doesn’t look good.  From what I’m seeing, things are only going to get worse as 2014 progresses.

This Reuters story captures the majority of my concerns with DDoS.  As a tool, it’s becoming cheaper and easier to use almost daily.  The recent NTP reflection attacks show that the sheer volume of traffic is becoming a major issue.  And even if volumetric attacks weren’t growing, the attack surface for application layer attacks grows daily, since more applications come online every day and there’s no evidence anywhere I’ve looked that developers are getting better at securing them (yes, a small subset of developers are, but they’re the exception).  Meetup.com is only the latest victim of a DDoS extortion scam, and while they didn’t pay, I’m sure there are plenty of other companies who’ve paid simply to make the problem go away without a fuss.  After all, $300 is almost nothing compared to the cost of a sustained DDoS on your infrastructure, not to mention the reputational cost when you’re offline.
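To see why NTP reflection in particular is so nasty, it’s worth doing the back-of-the-envelope arithmetic.  The figures below are rough, commonly cited values for the monlist command, not measurements of mine:

```python
# Back-of-the-envelope arithmetic for an NTP 'monlist' reflection attack.
# Figures are rough, commonly cited values, not my own measurements.
request_bytes  = 234          # one small spoofed monlist query
response_bytes = 100 * 468    # up to 100 response packets of ~468 bytes

amplification = response_bytes / request_bytes
print(f"Amplification: ~{amplification:.0f}x")    # ~200x

# With that leverage, a modest uplink becomes a flood at the victim:
attacker_uplink_mbps = 100
print(f"{attacker_uplink_mbps} Mbps of spoofed queries -> "
      f"~{attacker_uplink_mbps * amplification / 1000:.0f} Gbps of reflected traffic")
```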

I’d hate to say anything like “2014 is the Year of DDoS!”  I’ll leave that sort of hyperbole to the marketing departments, whether mine or someone else’s.  But we’ve seen a definite trend: the number of attacks is growing year over year at an alarming rate.  And it’s not only the number of attacks that’s growing, it’s the size of the volumetric attacks and the complexity of the application layer attacks.  Sure, the majority of them are still relatively small and simple, but the outliers are getting better and better at attacking.  Those of us building out infrastructure to defend against these attacks are also getting better, but the majority of companies still have little or no defense, and these aren’t the sort of defenses you can put in place quickly or easily without a lot of help.

I need to get back to other writing, but I am concerned about this trend.  My data agrees with most of my competitors’: DDoS is going to continue to be a growing problem.  Yes, that’s good for business, but as a security professional, I don’t like to see trends like this.  I think the biggest reason it will continue to grow is that DDoS is an incredibly difficult crime to trace back to its source; law enforcement generally doesn’t have the time or skills needed to find the attackers, and no business I know of has the authority or inclination to do so.  Which means the attackers can continue to DDoS with impunity.  At least the ones who’re smart enough not to attack directly from their own home networks, that is.


Jan 05 2014

Much needed vacation

Published under General, Personal, Risk

I’m back after a two-week self-enforced hiatus from all things security and work related.  For the last 14 days, I haven’t checked email, I haven’t been on Twitter and I haven’t read the news sites.  I’ve simply spent time with my family, played Minecraft, watched anime and eaten my way through the Christmas holidays.  And there were gifts in there somewhere as well.  Vacation started as a weekend in Munich, but the vast majority of it was spent at home near London with no deadlines, except a couple of shopping trips with the wife and kids.  All in all, it was one of the most relaxing times I’ve had in years.  And it was sorely needed.

All jobs are stressful to one degree or another; it’s just a fact of life.  But security is a more stressful job than most.  I’ve done a few panels with other security professionals talking about the stress we face, and we’ve done (okay, mainly folks like Jack Daniel and K.C. Yerrid have done) some research into it and found that our high stress is an actual fact, not just something we say to make ourselves feel more important.  Our chosen career is difficult to be good at, we’re constantly under multiple conflicting demands and it almost never slows down.  Is it any wonder we feel stressed?

It’s almost a joke when you talk to security professionals about substance abuse in our industry.  It’s nearly expected of people to get stupid at conferences.  But it’s not a joke at all, something graphically illustrated by the loss of Barnaby Jack last year.  Substance abuse may not be an industry-wide problem, but it’s definitely something we need to be aware of.  I can think of at least half a dozen people I’ve jokingly made comments about in the last couple of years who might be in real danger.  Most of them know they can come to me if they need support, but I know that’s the best I can do if they don’t want to change.  How many people do you know in a similar position?  Have you expressed concern, or at least let them know you’ll help if they ask?

It’s not my place to get preachy or claim I’m any better than anyone else, but I do think we need to be aware and check our own stress levels from time to time.  Let your friends in the industry know you’ll support them if they need help, but more importantly, know when you need to take a break and get away from the whole scene once in a while.  We do important work, but we can’t do it if we’re too wrapped up in our own problems to function properly.

Now to get caught up on two weeks of work emails.  Luckily, most of my co-workers took the Christmas holidays off, at least in part, so it won’t be quite as bad as it could be.


Dec 01 2013

Security in popular culture

One of the shows I’ve started watching since coming to the UK is called “QI XL”.  It’s a quiz show/comedy hour hosted by Stephen Fry, where he asks trivia questions of people I assume are celebrities here in Britain; as often as not, I have no clue who they are.  It’s fun because rather than the questions simply being asked and answered one after another, the guests riff off one another and sound a little like my friends do when we get together for drinks.  I wouldn’t say it’s a show for kids though, since the topics and the conversation can get a little risqué, occasionally straying into territory you don’t want to explain to anyone under 18.

Last night I watched an episode with someone I definitely recognized: Jeremy Clarkson from Top Gear.  A question came up about passwords and securing them, which Clarkson was surprisingly adept at answering, reciting the whole “upper case, lower case, numbers and symbols” mantra that we so love in security.  He even knew he wasn’t supposed to write them down.  Except he was wrong on that last part.  As Stephen Fry pointed out, “No one can remember all those complex passwords!  At least no one you’d want to have a conversation with.”

Telling people not to write down their passwords is a disservice we as a community have been pushing for far too long.  Mr. Fry is absolutely correct that no one can remember all the passwords we need to get by in our daily lives.  I don’t know about anyone else, but I’ll probably have to enter at least a dozen passwords before the end of today, each one different, with different levels of security and confidentiality needed.  I can’t remember that many passwords, and luckily I don’t have to, since I use 1Password to record them for me.

But let’s think about the average user for a moment; even as easy as 1Password or LastPass are to use, they’re probably still too complex for many users.  I’m not trying to belittle users, but many people don’t have the time or interest to learn a new tool, no matter how easy.  So why can’t they use something they’re intimately familiar with, pen and paper?  The answer is they can; they just have to learn to keep those secrets safe, rather than taping a note with the password under their keyboard.

There’s a secret every one of us carries every day: our keys.  You can consider a key a physical token as well, but really it’s the shape of the key that is the secret.  If someone else knows the shape of your keys, they can create their own and open anything your keys will open.  This is a paradigm every user is familiar with, and they know how to secure their keys.  So why aren’t more of us teaching our users to write down their passwords in a small booklet and treat it with the same care and attention they give their keys?  Other than the fact that it’s not what we were taught by our mentors from the beginning, that is.

A user who can write down their passwords is more likely to choose a long, complex password, something they’d probably have a hard time remembering otherwise.  And as long as they treat that written password as what it is, a key to their accounts, then we’ll all end up with a little more security on the whole.  So next time you’re preparing to teach a security awareness class, go to the stationery store, pick up a stack of those little password notebooks we’ve all made fun of and hand them out to your users, but remind them they need to keep the booklet as safe as they do their other keys.  If you’re smart, you’ll also include a note with a link to LastPass or 1Password; might as well give them a chance at even better security.
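And if you want to show users what a long-but-writable password can look like, a randomly generated passphrase is an easy sell.  A minimal sketch using Python’s standard secrets module; the word list is a tiny stand-in, where a real one (diceware, for example) runs to thousands of words:

```python
# Minimal sketch of generating a long-but-writable passphrase.
# WORDS is a stand-in; use a real list (diceware has 7,776 words).
import secrets

WORDS = ["correct", "horse", "battery", "staple", "anchor", "velvet",
         "orbit", "plasma", "thistle", "granite", "lantern", "mosaic"]

def passphrase(n_words: int = 5) -> str:
    """Pick words at random with a cryptographically secure RNG."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. 'orbit-granite-horse-mosaic-velvet'
```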

3 responses so far

Nov 25 2013

Two more years of Snowden leaks

Published under Cloud, Government, Privacy, Risk

I’ve been trying to avoid NSA stories since this summer, really I have.  I get so worked up when I start reading and writing about them, and I assume no one wants to read my realistic/paranoid ranting when I get like that.  Or at least that’s what my cohosts on the podcast have told me.  But one of the things I’ve been pointing out to people since this started is that there were reportedly at least 2,000 documents in the systems Edward Snowden took to Hong Kong with him.  There could easily be many, many more, but the important point is that we’ve only seen stories concerning a very small number of those documents so far.

One of the points I’ve been making to friends and coworkers is that given how few of those documents we’ve seen released, we have at least a year more of revelations ahead of us, more likely two or more.  And apparently people who know agree with me: “Some Obama Administration officials have said privately that Snowden downloaded enough material to fuel two more years of news stories.”  This probably isn’t what US businesses trying to sell overseas, whether they’re Cloud-based or not, want to hear.

These revelations have done enormous damage to the reputation of the US and American companies; according to Forrester, the damage could be as much as $35 billion in lost revenue over the next three years.  You can blame Mr. Snowden and Mr. Greenwald for releasing the documents, but I prefer to blame our government (not just the current administration) for letting its need to provide safety to the populace, no matter what the cost, override everything else.  I don’t expect everyone to agree with me on this, and that’s fine.  It was a cost calculation that numerous people in power made, and I think they chose poorly.

Don’t expect this whole issue to blow over any time soon.  Greenwald has a cache of data that any reporter would love to make a career out of.  He’s doing what reporters are supposed to do: researching each piece of data and then exposing it to the world.  Don’t blame him for doing the sort of investigative reporting he was educated and trained to do.  This is part of what makes a great democracy, the ability of reporters (and bloggers) to expose secrets to the world.  Democracy thrives on transparency.

As always, these are my opinions and don’t reflect upon my employer.  So, if you don’t like them, come to me directly.


Nov 04 2013

Attacking the weakest link

Published under Cloud, Government, Hacking, Privacy, Risk

I spend far too much time reading about governments spying on citizens, both in the US and abroad.  It’s an occupational hazard, since it impacts my role at work, but it’s also what I’d be researching and reading about even if it didn’t.  The natural paranoia that makes me a good security professional also feeds the desire to know as much as possible about the people who really are spying on us.  You could almost say it’s a healthy paranoia, since even things I never would have guessed have come to pass.

But every time I hear about someone who’s come up with a ‘solution’ that protects businesses and consumers from spying, I have to take it with a grain of salt.  A really big grain of salt.  The latest scheme is from Swisscom, a telecommunications company in Switzerland that wants to build a datacenter there to offer cloud services in an environment safe from spying by the US and other countries.  The theory is that Swiss law offers many more protections than the laws of other countries in the EU and the rest of the world, and that these legal protections would be enough to stop data at rest (i.e. data stored on a hard drive in the cloud) from being captured by spies.  The only problem is that even the Swisscom representatives admit it’s only the data at rest that would be protected, not the data in transit.  In other words, the data would be safe while sitting still, but when it enters or leaves Swiss space, it would be open to interception.

It was recently revealed that the NSA doesn’t need to get to the data at rest, since they simply tap into the major fiber optic cables and capture the information as it traverses the Internet.  Their counterparts here in the UK do the same thing, and the two organizations constantly share information in order to ‘protect us from terrorists’.  Both spy organizations have been very careful to state that they don’t get information from cloud providers without court orders, but they haven’t addressed the issue of data in motion.

So while the idea of a Swiss datacenter built to protect your data is a bit appealing, the reality is that it wouldn’t do much to help anyone keep their data safe, unless you’re willing to move to Switzerland.  And even then, this solution wouldn’t help much; this is the Internet, and you never know exactly where your data is going to route through to reach its target.  If it leaves Swiss ‘airspace’ for even one hop, that might be enough for spy agencies to grab it.  And history has proven that GCHQ, at least, is willing to compromise the data centers of its allies if it’ll help them get the data they believe they need.


Oct 24 2013

LinkedIn Outro

“I know!  Let’s build a man-in-the-middle (MITM) attack into our iPhone app so that we can inject small bits of information into our users’ email that show how useful our site and service are.  At the same time, we’ll have access to every piece of email they send and receive, and even if we only get the metadata, well, that’s good enough for the NSA and other national spying agencies, isn’t it?  Let’s do it!”

I have to imagine the thinking was nothing like that when LinkedIn decided to create Intro, but that’s basically what they decided to do anyway.  If you read the LinkedIn blog post, you can see they knew that what they were doing was a MITM attack against your email, even if they call it a proxy.  They’ve broken the trusted, or semi-trusted, link between you and your IMAP provider in order to get access to your email, so they can insert a piece of HTML code into each and every email you receive.  Additionally, they’ve figured out how to make that code render directly in your email client.
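For the curious, this is roughly the position Intro occupies.  The sketch below is emphatically not LinkedIn’s code; it’s just an illustration of what any proxy sitting in the mail path is able to do to a simple message:

```python
# Conceptual sketch only: what any proxy sitting in the IMAP path can do.
# This is NOT LinkedIn's code; the banner and message are invented.
import email.parser

BANNER = '<div class="intro-bar">[injected profile widget]</div>'

def inject_banner(raw_message: str) -> str:
    """Rewrite a simple text/html message passing through the proxy."""
    msg = email.parser.Parser().parsestr(raw_message)
    if not msg.is_multipart() and msg.get_content_type() == "text/html":
        msg.set_payload(BANNER + msg.get_payload())
    return msg.as_string()

raw = ("From: alice@example.com\nTo: bob@example.com\n"
       "Content-Type: text/html\n\n<p>Lunch tomorrow?</p>")
print(inject_banner(raw))  # body now begins with the injected <div>
```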

Basically, what LinkedIn is asking you to do is install a new profile that makes them the proxy for all your email.  This is similar to what you do for your corporate email when setting it up on a new phone, but rather than being something finely tuned for that corporation, LinkedIn builds the new profile on the fly by probing your phone’s configuration and basing it on the settings it finds.

I have a hard time believing that no one at LinkedIn waved a red flag when this was brought up.  You’re asking users to install a new profile that makes you their new trusted source for all email, you’re asking them to trust you with their configuration and you’re capturing, or at least have access to, the stream of all authentication data for their email.  Didn’t anyone at LinkedIn see a problem with that?  I have to imagine there are plenty of corporate email administrators who will.

Given recent history and the revelations about just how much the metadata of a person’s communications can expose, LinkedIn’s move is audacious to say the least.  They know what they have, or at least what they want to have: information similar to what Google and Facebook have about your daily contacts and habits.  This is a huge data mining operation for them, aimed at learning everything they can about their users and applying that to advertising.  But I think they’ve overreached in their desire to have this information and are going to get shut down hard by Apple.  And that doesn’t even take into account the fact that they’ve already had data breaches and are being sued for reaching into consumers’ calendars and contact information.

I don’t think LinkedIn has been a good steward of the information they’ve had before now, and there’s no way I’d install Intro onto one of my iDevices even if I were a heavy user.  The fact is, I have an account that I mostly keep open out of habit, and this is nearly enough to make me shut it down for good.  If I wanted my every move tracked, I’d just keep a Facebook tab open in my browser.  And while they may not be much of an example when it comes to privacy, I guess Facebook is a great example when it comes to profitability.  Way to go, LI.

 


Oct 17 2013

What’s a micromort?

Published under Family, Humor, Risk

One of the cool things we’ve found on TV since moving to the UK is QI XL.  It’s a BBC show hosted by Stephen Fry in which they take a rather comedic romp through a bunch of facts that may or may not have anything to do with one another.  Last night’s show was about Killers, and a term that was completely new to me came up: a unit of measure called the ‘micromort’.  It’s basically a measurement equal to a one-in-a-million chance of dying from a specific event.  Really, it’s a scientifically valid measurement of risk.  And yes, our family has a strange idea of ‘cool’.

Why is the micromort important and relevant to security?  Because humans, and security professionals are included in that category, have a horrible sense of the risks involved in any action.  For example, at 0.22 micromorts, a one-mile bike ride is 11 times more likely to kill you than a shark attack, at 0.02 micromorts.  Yet the same people who greatly fear sharks are happy to go on a bike ride every day.  And many of those people smoke, which costs a full micromort for every 1.4 cigarettes smoked.  People suck at risk analysis.
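The arithmetic is trivial, which is rather the point; here it is using the figures cited in the show:

```python
# The micromort arithmetic from above, using the figures cited in the show.
bike_ride_per_mile = 0.22      # micromorts for a one-mile bike ride
shark_attack       = 0.02      # micromorts from a shark attack
per_cigarette      = 1 / 1.4   # one micromort per 1.4 cigarettes

print(f"Bike ride vs shark attack: {bike_ride_per_mile / shark_attack:.0f}x")  # 11x
print(f"A pack of 20 cigarettes: ~{20 * per_cigarette:.1f} micromorts")        # ~14.3
```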

So could we come up with a similar unit of measurement for a one-in-a-million chance of a single action leading to a breach?  Someone needs to find a better name for it, but for the sake of argument, let’s call it a microbreach.  Every day you go without patching a system inside your perimeter is worth a microbreach.  Deploying a SQL server directly into the DMZ is 1,000 microbreaches.  And deploying any Windows system directly onto the Internet is 10 million microbreaches, because you know it’ll be scanned and found by randomly scanning botnets within minutes, if not seconds.
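A toy scorer using those tongue-in-cheek weights might look like this (don’t mistake the precision for accuracy; that’s the whole problem with inventing units):

```python
# Toy 'microbreach' scorer using the tongue-in-cheek weights above.
MICROBREACHES = {
    "unpatched_internal_host_day": 1,
    "sql_server_in_dmz": 1_000,
    "windows_host_on_internet": 10_000_000,
}

def exposure(events: dict) -> int:
    """Sum microbreaches over counted events."""
    return sum(MICROBREACHES[name] * count for name, count in events.items())

print(exposure({"unpatched_internal_host_day": 30, "sql_server_in_dmz": 2}))  # 2030
```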

The problem is that the actuarial tables micromort measurements are drawn from are built on millions of daily events.  People die every day; it’s an inevitability, and we have a very black and white way of determining when a person is dead.  By contrast, we can’t even agree on what constitutes a security breach at this point in time, we don’t have millions of events to draw our data from (I hope) and even if we did, we’re not reporting them in a way that could be used to create statistical data about their causes.

Some day we might be able to define a microbreach and the cost of any action in scientific terms.  There are small sections of the security community that argue endlessly about the term ‘risk’, and I have to believe they’re inching slowly towards a more accurate way to measure said risks.  I don’t expect those arguments to be settled any time soon, perhaps not even in my lifetime.  So instead I’ll leave you with an entertaining video on the micromort.  Thanks to David Szpunar (@dszp on Twitter) for pointing me to it.


Oct 15 2013

Don’t ask for my password or PIN, United!

I’ve been a United Airlines customer for years.  I’ve been very loyal to United and the Star Alliance.  I’ve flown over 300k miles with them, and as of my next trip I’ll have flown over 100k miles this year alone.  I’m in the top tier of their frequent flyer program and they generally treat me very well, with the kinds of exceptions that plague every airline, like maintenance and weather delays.  But they do one thing that really, really bugs me, and they need to change it: when I call in to use my mileage or alter a ticket, their customer service representative asks for my PIN!

When you log into the United site, you have two choices: you can use your password or a four-digit PIN.  The same PIN or password can be used to log in to the mobile application as well.  This login allows access to all aspects of the account, letting the user change flights, get updates and spend frequent flyer miles.  In other words, total control of the account.  And the customer service reps need this PIN in order to make changes to my account.

This is why I’m extremely annoyed by the way United treats my PIN.  In effect, every time I call in, I have to give up total control of my account to a complete stranger.  I either have to trust that they’re well vetted by the airline, something I’m not entirely sure of, or go through the hoops of changing my PIN every time I call United’s customer care services.  Alternatively, I can ignore both of those options and simply hope nothing happens when I give up my password.  I’ve done all three at various times, but it still makes me angry that I have to choose one of them.

I’ve complained to United several times when calling in.  I’ve talked to the agent on the phone, I’ve asked to speak to a manager, but as recently as last week they showed no sign of understanding that this is a problem, let alone making any changes.  The requirement to give up my password seems to have coincided with the merger of United and Continental and the adoption of the Continental computer systems.  The impression I’ve received from sources inside United and out is that the Continental system was developed in the mid-1970s and has been largely unchanged since then.  Yes, they slapped some lipstick on the pig in the form of a web interface, but the back end is still a mainframe of some sort with a security model that hasn’t changed since its inception.

I have to appeal to United’s security teams: please, please, please find some way of changing your system so that I don’t get asked for a sensitive piece of information like my password or PIN every time I need to talk to your agents about a change to my flight!  I realize no credit card data is directly available from my account, but my flight information is, and the account opens up the ability to change my flights or spend my mileage.  This really is something that shouldn’t be allowed in the modern age, from a multi-national corporation that really should know something about security and securing customer data.  Between moving to the UK and your poor security, I’m seriously thinking it’s time for a different airline.
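For what it’s worth, the fix doesn’t require exotic technology.  One approach, and this is purely a sketch of my own, not anything United has announced: store only a salted hash of the PIN and let the phone system verify keypad entry against it, so no human agent ever hears or sees the secret:

```python
# Sketch of one possible fix (my speculation, not anything United has
# announced): store only a salted hash of the PIN and verify it via IVR
# keypad entry, so no human agent ever hears or sees the secret.
import hashlib, hmac, os

def hash_pin(pin: str, salt: bytes = None) -> tuple:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify_pin(pin: str, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hash_pin(pin, salt)[1], stored)

salt, stored = hash_pin("4921")          # enrolled once, by the customer
print(verify_pin("4921", salt, stored))  # True  -- checked by the IVR system
print(verify_pin("1234", salt, stored))  # False -- agent never saw either PIN
```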

One response so far

Oct 14 2013

Your email won’t be any safer over here

I’m not sure why anyone has the illusion that their data would be safer in Europe than in the US.  While some countries in Europe seem to have better laws for protecting email, it’s not a clear-cut thing and there are always trade-offs.  A country might have better protections for data at rest while data in transit is fair game, or vice versa.  Plus, if you’re an American, you’re the foreigner to those nations, so many of the protections you might think you’re getting are null and void for you.

Rather than simply speculate, as many of us do, Cyrus Farivar at Ars Technica has written an article, Europe Won’t Save You: Why Email is Probably Safer in the US.  If you examine the laws closely, you’ll find that while countries like Germany appear to have stronger privacy laws, some of the caveats and edge cases make a lie of that appearance.  In this particular example, German law puts a gag order in place by default that prevents your service provider from notifying you if they’re served with a subpoena or similar legal device.  Think on that for a moment: if your service provider is served, you’ll never hear about it, by default, not just in the cases where a large intelligence agency takes an interest in you.

Since I moved to the UK, I’ve been hip-deep in similar arguments with regard to cloud service providers.  Many folks in and around Europe seem to think their own laws will somehow protect them from having their data raided by the NSA or some other, even more shadowy US organization.  But the reality is that in many countries they have less protection from their own governments than they do from the US.  And that barely scratches the surface, given that the core internet routers in many, if not all, countries are compromised by multiple governments, who are getting feeds of every packet that flows across the infrastructure.

The other concern I hear quite often is about US businesses and information leaving the European Union.  I find this concern interesting, and believe it is likely to be a much more legitimate issue.  In the EU, the data protection laws appear to be much stronger than they are in the US, especially the Safe Harbor Principles.  But the reality is that businesses see the value of having as much personal information as they can get their hands on, so Safe Harbor is given lip service while businesses find ways around the requirements.  Or, in many cases, they ask users to opt out of some of the protections to get additional functionality out of a site.

Don’t think that hosting your email or any other service abroad is going to protect you if a government wants to get its digital fingers into it.  As Farivar points out, the closest thing you’ll have to privacy is to store your email on your own devices and encrypt it with your own encryption keys.  Storing it anywhere else leaves you open to all sorts of questionable privacy laws between you and your hosting provider.  And you can’t just consider the jurisdiction you’re in; you have to consider every route your data might take between point A and point Z.  This being the Internet, you’ll never know exactly what route that will be.
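If you want to do that today, the tooling already exists.  A minimal sketch using the third-party Python cryptography package; the message and the key handling are simplified for illustration:

```python
# Sketch of the 'your own devices, your own keys' approach: encrypt locally
# before anything touches a provider.  Requires the third-party package
# 'cryptography' (pip install cryptography); the message is invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # this key never leaves your own hardware
f = Fernet(key)

ciphertext = f.encrypt(b"Meet at the usual place, 8pm.")
# 'ciphertext' is all a hosting provider (or a fibre tap) ever sees...
print(f.decrypt(ciphertext))  # ...and only your key turns it back into text
```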

Personally, I’m not pulling the plug on my Gmail account any time soon.  Let’s be honest: no government is worse than Google when it comes to intrusive monitoring of your email.

