Archive for the 'Security Advisories' Category

Oct 14 2014

Wake up to a POODLE puddle

TL;DR – Disable SSLv3 immediately.

As of this morning, SSLv3 appears to be dead, or at least dying.  The POODLE vulnerability was released last night, revealing a flaw in the way SSLv3 uses CBC ciphers that allows an attacker to recover plaintext from the encrypted traffic.  This makes the third major vulnerability released on the Internet this year, and it's another warning that this level of vulnerability discovery may be the new shape of things to come.

I’m not going to try to explain POODLE in detail, or give you a nice logo for it.  Instead I’ll just point to the better articles on the subject, a couple of which just happen to be written by my teammates at Akamai.  I’ll add more as I find them, but this should tell you everything you need to know for now.
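That said, the heart of the flaw is small enough to sketch.  This is a toy illustration, not real SSL code: SSLv3 validates only the final pad-length byte of a CBC block, while TLS validates every padding byte, and that lone-byte check is the oracle POODLE exploits.

```python
# Toy comparison of SSLv3 vs TLS CBC padding checks (illustrative only).
# In SSLv3 the padding bytes themselves are arbitrary; only the final
# length byte is checked.  An attacker who moves ciphertext blocks
# around learns something every time that byte happens to decrypt to a
# valid length -- the POODLE oracle.

def sslv3_padding_ok(plaintext_block: bytes) -> bool:
    """SSLv3: only the last byte (the pad length) is checked."""
    pad_len = plaintext_block[-1]
    return pad_len < len(plaintext_block)

def tls_padding_ok(plaintext_block: bytes) -> bool:
    """TLS 1.x: every padding byte must equal the pad length."""
    pad_len = plaintext_block[-1]
    if pad_len >= len(plaintext_block):
        return False
    return all(b == pad_len for b in plaintext_block[-pad_len - 1:])
```

A block ending in garbage padding plus a plausible length byte passes the SSLv3 check but fails the TLS one, which is why the same cipher is exploitable in one protocol and not the other.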

Update: It’s estimated that SSLv3 accounts for between 1% and 3% of all Internet traffic.

And since there’s not an official logo for it yet, I present … The Rabid Poodle!

Rabid Poodle


One response so far

Aug 21 2014

“I’m proud of my ignorance”

It’s true, we don’t want little things like experience and a broad knowledge of the technology landscape getting in the way of our policy makers, now do we?  Or at least that seems to be the way US White House cybersecurity coordinator Michael Daniel thinks.  Why get lost in an understanding of the big picture when you can make decisions based on the information fed to you by consultants and advisers with their own agendas to push?

In a way, I understand Mr. Daniel’s point; it’s very important for someone in his position to understand the ins and outs of policy, perhaps at least as important as understanding the technology.  I wouldn’t want most of the people I see at Defcon or a BSides event making policy decisions; they don’t have an understanding of the long term consequences policy has on the wider world.  But by the same thought process, someone who doesn’t understand the deeper aspects of the underlying technologies he’s making decisions about can’t understand the long term consequences of his decisions either.  How can someone make informed decisions if they don’t understand the difference between a hashing algorithm and an encryption algorithm?
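For anyone wondering, the difference isn’t subtle.  A toy sketch: SHA-256 is a real one-way hash, while the XOR “cipher” below is a deliberately trivial stand-in for real encryption, used only to show that one is reversible with a key and the other isn’t reversible at all.

```python
# Hashing vs encryption in miniature.  hashlib's SHA-256 is genuine;
# the XOR routine is a toy cipher for illustration, not something to
# actually protect data with.
import hashlib

def digest(data: bytes) -> str:
    # Hashing: fixed-size output, no key, and no way back to the input.
    return hashlib.sha256(data).hexdigest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # "Encryption" (toy): applying the same key twice round-trips the
    # plaintext, which is the defining property hashing lacks.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

One protects integrity and passwords, the other protects confidentiality; mixing the two up in policy discussions leads to very different conclusions.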

The cybersecurity coordinator role is a management role and most of us have worked with senior managers and C-level execs responsible for security with little or no security experience.  And we know how well that’s worked out.  In rare cases, you find a manager who knows how to listen to people and, perhaps more importantly, knows how to tell the difference between a trustworthy adviser and someone pushing their agenda forward without regard to the outcome.  Those people can be successful as non-technical managers of technical people.  But more often you get non-technical managers who don’t understand the landscape they’re expected to be responsible for, who don’t understand the decisions they’re being asked to make and who are easily led astray by those around them.  And having a non-technical manager with the understanding to communicate with the management team above them is nearly unheard of.

Willful ignorance is never a feature to be lauded or boasted about.  Being proud of your ignorance is a red flag, one that should warn everyone around the individual that they are not currently mature enough for their position.  Better to say, “I’m ignorant, but I’m learning,” admitting that you know your limitations but are willing to overcome them, than to embrace your limitations and act like they’re really a strength.  Yes, your other experience can help you overcome the areas you’re lacking in, but you have to acknowledge the weakness and work to make yourself better.

As the Vox article points out, we’d never have a Surgeon General who didn’t have decades of experience in medicine, and we’d never allow an Attorney General who wasn’t a lawyer and hadn’t spent years in a courtroom.  So why are we allowing a person who couldn’t even qualify to take the CISSP exam to advise the leaders of the United States on how to deal with information security issues?  Think about that for a moment: the person who’s advising the White House doesn’t have the experience necessary to apply for one of the starting rungs on the information security career ladder.  Scary.

Update:  You might also want to listen to the interview with Michael Daniel and his subsequent defense of his statement about his own ignorance.


No responses yet

Jul 10 2014

Illustrating the problem with the CAs

You’d think that if there was any SSL certificate out there that’d be carefully monitored, it’d be Google’s.  And you’d be right; between the number of users of Chrome and the Google team itself, the certs that correspond to Google properties are under a tremendous amount of scrutiny.  So when an impostor cert is issued anywhere in the world, it’s detected relatively quickly in most cases.  But the real question is, why are Certificate Authorities (CAs) able to issue false certs in the first place?  Mostly because we have to trust someone in the process of cert issuance, and in theory the CAs are the ones who are the most trustworthy and best protected.  Unfortunately, there are still a lot of holes in the process and the protection of even the best CAs.

Last week Google detected an unauthorized digital certificate issued in India by the National Informatics Centre (NIC).  This week it was revealed that the certs Google knew about weren’t the only ones: an indeterminate number of other certs had also been issued by the NIC.  Their issuance process had been compromised in some way, and they’re still investigating the full scope of the compromise.  Users of Chrome were protected thanks to certificate pinning, but users of IE and other browsers might not be so lucky.  What was done with these certificates, no one knows.  What could be done with them is primarily man-in-the-middle attacks against users of any of the sites the certs impersonate, meaning the entity that now holds these certificates could intercept and decrypt email, files, etc.  There are plenty of reasons a government or criminal element would want control of a certificate that looks and feels like an authentic Google (or Microsoft or…) certificate.

There’s no clear, clean way to improve the CA process.  Extended Validation (EV) certs are one answer, but they also make the whole process of getting an SSL cert much more complex.  Given the value of privacy and the vital role certificates play in maintaining it, this may be the price the Internet has to pay.  Pinning certs helps, as will DANE and Sunlight (aka Certificate Transparency).  Neither DANE nor Sunlight is fully baked yet, but they should both help make up for the weaknesses of current processes.  Then it’ll just take a year or three to get them into all the browsers, and even longer for older browsers to be retired.  And that’s not even taking into account the fact that we don’t use SSL everywhere.
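As a sketch of why pinning saved Chrome users: the client refuses any key whose fingerprint isn’t on a built-in list, no matter which CA vouched for the certificate.  The key bytes below are placeholders, not real key material.

```python
# Minimal sketch of certificate pinning.  A pinned client compares a
# hash of the server's public key against a hard-coded set, so a cert
# minted by a compromised CA fails even though its signature chain is
# formally valid.
import hashlib

# Stand-in for the DER-encoded SubjectPublicKeyInfo the client trusts
# (placeholder bytes, not an actual key).
TRUSTED_KEY_DER = b"-- placeholder public key bytes --"

# Fingerprints the client ships with, baked into the binary.
PINNED_SPKI_HASHES = {hashlib.sha256(TRUSTED_KEY_DER).hexdigest()}

def pin_check(spki_der: bytes) -> bool:
    """Accept the handshake only if the server key matches a pinned
    hash, regardless of which CA signed the certificate."""
    return hashlib.sha256(spki_der).hexdigest() in PINNED_SPKI_HASHES
```

The trade-off is rigidity: rotate your keys without updating the pins and you lock out your own users, which is part of why pinning hasn’t simply replaced the CA system.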


No responses yet

Apr 05 2014

Hack my ride

Published under Hacking, Risk, Security Advisories

Important:  Read the stuff at the end of this post.  I got a lot of feedback and I’ve added it there.  Unlike some people, I actually want to be told when I’m wrong and learn from the experience.

I don’t own a Tesla S and probably never will.  They’re beautiful cars, they’re (sort of) ecologically friendly, and they show that you have more money than common sense.  I use a car to get my family from point A to point B, and showing off my wealth (or lack thereof) has never actually been part of the equation in buying a car.  And one more reason I don’t think I’ll ever buy a Tesla is that I’m beginning to think they’re as insecure as all get out, at least from the network perspective.

Last week hacker* Nitesh Dhanjani wrote about his experience exploring the remote control possibilities of the Tesla Model S P85+.  It starts with being able to unlock the doors, check the location, etc.  And it ends with a total lack of security for the site and tools needed to control the car.  The web site for controlling your new Tesla has minimal password complexity controls: six characters with at least one letter and one number.  I have no idea if it’ll even let you use symbols, but I’m guessing that’s either not supported or only a minimal subset of symbols is available.  Which means password complexity is very low by almost any standard.  Then there’s the fact that Tesla doesn’t have rudimentary controls around the web site, such as rate limits on password guesses or account lockout, which they’ve hopefully changed by now.  That gives you an easily guessed password combined with a site that allows unlimited guesses, making the possibility of brute forcing the password very real.  And that’s not even counting the fact that so many people reuse account names and passwords, so there’s a good chance you can find a compromised account database with the owner’s details if you search for a little while.
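To put numbers on that minimum, here’s a back-of-the-envelope calculation.  The 36-symbol, case-insensitive alphabet is an assumption on my part; the only stated rules are six characters, one letter, one number.

```python
# Size of the keyspace at the stated minimum: six characters with at
# least one letter and one number, assuming case-insensitive letters
# plus digits (36 symbols -- an assumption, since symbol and case
# handling aren't documented).
LETTERS, DIGITS = 26, 10
SYMBOLS = LETTERS + DIGITS

# Inclusion-exclusion: all six-char strings, minus letters-only,
# minus digits-only (each of those violates the "at least one" rules).
keyspace = SYMBOLS ** 6 - LETTERS ** 6 - DIGITS ** 6
print(f"{keyspace:,} candidate passwords at the minimum length")

# In practice attackers start from leaked-password lists, which cover
# the bulk of real accounts with a tiny fraction of these guesses.
```

Under two billion candidates is well within reach of a patient online attacker when there’s no rate limiting, and a dictionary of reused passwords shrinks the real search space enormously.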

That’s great so far.  Now let’s add to this the fact that your Tesla S has wifi/4G wireless access.  And there’s also a 4-pin connector in the dashboard that leads to the network inside your car.  It’s running all sorts of wonderful things on that network too, none of which could possibly be vulnerable to outside attacks, right?  SSH?  Check.  DNS?  Check.  Web server?  Check.  Telnet?  Check.  Wait, telnet?  Seriously?  Oh, and “1050/tcp open java or OTGfileshare”.  Yes, I really want either java or an open file share running in my car.  At least one person was able to get Firefox running on the console of their Tesla, [Correction: x-11 forwarding misconfiguration, not running on the Tesla] even if it was flipped on its head for some odd reason.  Any or all of these services running on the car’s internal network could have vulnerabilities that allow configuration changes, remote code execution or even full root access to the system.  Or maybe they just allow the systems to be rebooted, not something you really want when you’re driving on the winding coastal roads of California. [I've been told it's just the displays that would be affected, none of the handling characteristics would change. Still disconcerting]

So now we’ve got two fairly egregious methods of connecting to your Tesla with minimal security standards.  The first is remote and allows for control of doors, sunroofs, braking and suspension profiles.  The last two should concern everyone.  While there are probably physical controls in place to keep the brake and suspension profiles from getting too far outside the range of acceptable usage, I wouldn’t be willing to bet on it, given the otherwise lax security measures on the remote controls for the car.  The second method of connecting to the Tesla does require physical access, but it sounds like it’s built for the engineers and technicians who work on Teslas [Correction:  The connection only allows for access to the entertainment system and there is an airgap between that and the CANBUS systems.  However, I don't trust airgaps], and is likely to allow much greater control of the car and the various parameters of its design.  Even less technologically advanced cars allow fairly advanced modifications to the functioning of the car once you have access to the software, so Tesla probably has extremely advanced configuration capabilities.  Meaning everything from how the car charges when plugged in, to what shows up on the dash as you’re driving, to manipulating acceleration and braking is within the realm of possibility.

As the Internet of Things becomes our daily reality, this sort of lax security on something as potentially deadly as an automobile is inexcusable.  It wouldn’t take much of a tweak to the normal operation of a car to make it uncontrollable in the wrong situation.  We haven’t seen anyone killed by having their car hacked yet, but it’s only a matter of time if companies aren’t willing to take the time to properly secure the systems that make the car run.  While it’s important in the current marketing environment to make every device as configurable from your phone as possible, there have to be sufficient controls in place to make that configurability safe and secure as well.  Yes, it might mean that you, Tesla, have to make your users go through two or three more steps to set up their systems for control, but it’s worth the effort.  After all, who will be liable, and stuck in the courts for years, when the first person claims their car was hacked and that the hack caused the accident?  Even if a hacked car isn’t the actual cause of an accident, it can’t be too long before someone uses that as their defense, still costing the company millions in legal fees.

Let’s end this with a little thought experiment.  The four-pin connector in the Tesla has a full TCP stack and runs on a known set of IPs, 192.168.90.100-102.  Say I grabbed a Teensy 3.1, with built in wi-fi capabilities, and added an ethernet shield.  With the current Arduino libraries, I can create a wi-fi receiver that takes my traffic and routes it to the wired network, which just happens to be the accessible network inside the Tesla.  Now I have a device that’s a small portal directly into your car, one I can connect to from several hundred feet away, farther if I want to make myself a high-gain Pringles-can Yagi antenna.  We’re not talking high technology spy gear, we’re talking about a weekend project I could do with my kids that would result in a package no bigger than a pack of cards.  I could put this in the glove box with a single cable leading to the car’s ethernet port.  Anything a Tesla engineer could control on the car, I could control remotely.  Suddenly I have the biggest remote control car on the block, which just happens to be the Tesla you’re sitting in.
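The hardware build is the weekend project; the relay logic itself is only a few lines.  Here’s a software-only sketch of the same bridge idea in Python, shuttling bytes between an attacker-facing socket and a fixed internal address.  The target address echoes the IPs cited above but is purely a placeholder, and this is an illustration of the concept, not anyone’s actual firmware.

```python
# Sketch of a one-connection TCP relay: accept a connection on the
# "wireless" side and shuttle bytes both ways to a fixed internal
# target, the way the hypothetical glove-box bridge would.
import socket
import threading

TARGET = ("192.168.90.100", 23)  # placeholder for an internal service

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.close()
        except OSError:
            pass

def serve(listen_port: int, target=TARGET) -> None:
    """Accept one client and bridge it to the internal target."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection(target)
    # One direction in a background thread, the other in this one.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)
```

The point of the sketch is how little logic is needed: the hard part of the attack is physical access, not the software.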

This is why we have to secure the Internet of things.  If I can imagine it, you better believe there’s already someone else out there working on it.

* Hacker == someone who makes technology do things the engineers who designed it didn’t intend it to do.

Added, 9:15 GMT:  So I got some feedback very quickly after posting this.  And I admit a lot of what I’m saying here is based on guesswork, assumptions and third party statements.  It’s my blog, I get to do that.  Both Beau and the Kos have a lot more to say about why many of my assumptions are wrong.  And they probably know more about cars than I do.  So teach me.  There will be follow up.

Thanks to @chriseng for basic spell checking.  I do indeed know the difference between ‘break’ and ‘brake’, just not before breakfast.

@beauwoods: “Infotainment network and CANBUS are separate. The other issue is the equivalent threat model of a big rock to a window.” In other words, there’s a very real airgap between the two systems and it’d be impossible to control one from the other.

@theKos called my statement about the password limitations silly, stated that running Firefox on the system was an X-forwarding misconfiguration, that rebooting the displays won’t affect the running of the car, and that all products have vulnerabilities.  I really have to challenge the last statement as a fallacy: knowing that all products have vulnerabilities doesn’t make them any more acceptable.


3 responses so far

Mar 09 2014

Mt. Gox Doxed

I’ve never owned a bitcoin, I’ve never mined a bitcoin, and in fact I’ve never really talked to anyone who’s used them extensively.  I have kept half an eye on the larger bitcoin stories though, and the recent disclosures that bitcoin exchange Mt. Gox was the victim of hackers who stole the entire contents of its vault, worth hundreds of millions of dollars (or pounds), have kept my interest.  I know I’m not the only one who’s smelled something more than a little off about the whole story.  Apparently a hacker, or hackers, who also felt something wasn’t right on the mountain decided to do something about it: they doxed* Mt. Gox and its CEO, Mark Karpeles.

We don’t know yet if the files that hackers exposed to the internet were actually legitimate files from Mt. Gox and Mr. Karpeles, but this isn’t the only disclosure the company is potentially facing.  Another hacker has claimed to have about 20 gigs of information about the company, its users and plenty of interesting documents.  Between the two, if even a little of the data is valid, it’ll spell a lot of trouble for Mt. Gox and its users.  If I were a prosecutor with any remote possibility of being involved in this case, I’d be collecting every piece of information and disclosed file I could, with big plans for using them in court at a later date.

In any case, I occasionally read articles saying the Mt. Gox experience shows that bitcoins are an unusable and ultimately doomed form of currency because they’re a digital-only medium and will always be open to fraud and theft because of it.  I laugh at those people.  Have they looked at our modern banking system and realized that 99% of the money in the world now exists only in digital form somewhere, sometimes with hard copy, but generally not?  Yes, we’ve had more time to figure out how to secure the banking systems, but they’re still mostly digital.  And eventually someone will do the same to a bank as was done to Mt. Gox.

*Doxed:  to have your personal information discovered or stolen and published on the Internet.


3 responses so far

Mar 07 2014

You have been identified as a latent criminal!

This afternoon, while I ate lunch, I watched a new-to-me anime called Psycho-Pass.  The TL;DR summary of the show is a future where everyone is chipped and constantly monitored.  If their Criminal Coefficient becomes too high, they are arrested for the good of society.  It doesn’t matter whether they’ve committed a crime or not; if the potential that they will commit a crime exceeds the threshold set by the computer, they’re arrested, or killed if they resist arrest.  Like many anime, it sounds like a dystopian future that could never happen.  Except when I got back to my desk, I saw Bruce Schneier’s post, Surveillance by Algorithm.  And once again what I thought was an impossible dystopian future seems like a probable dystopian present.

As Bruce points out, we already have Google and Amazon suggesting search results and purchases based on our prior behaviours online.  With every search I make online, they build up a more detailed and accurate profile of what I like, what I’ll buy and, by extension, what sort of person I am.  They aren’t using people to do this; there’s an extensive and thoroughly thought out algorithm that measures my every action to create a statistically accurate profile of my likes and dislikes, in order to offer up what I might like to buy next based on what I’ve purchased in the past.  Or there would be if I didn’t purposefully share an account with my wife in order to confuse the profiling software Amazon uses.

Google is a lot harder to fool and they have access to a lot more of the data that reveals the true nature of who I am, what I’ve done and what I’m planning to do.  They have every personal email, my calendar, my searches; in fact, about 90% of what I do online is either directly through Google or indexed by Google in some way or shape.  Even my own family and friends probably don’t have as accurate an indicator of who I really am behind the mask as Google does, should it choose to create a psychological profile of me.  You can cloud the judgement of people, since they’re applying their own filters that interfere with a valid assessment of others, but a well written computer algorithm takes the biases of numerous coders and tries to even them out to create an evaluation that’s closer to reality than that of most people.

It wouldn’t take much for a government, the US, the UK or any other, to start pushing for an algorithm that evaluates the mental health and criminal index of every user on the planet and alerts the authorities when something bad is being planned.  Another point Bruce makes is that this isn’t considered ‘collection’ by the NSA, since they wouldn’t necessarily have any of the data until an alert had been raised and a human began to review the data.  It would begin as something seemingly innocuous, probably similar to the logical fallacies that governments already use to create ‘protection mechanisms’: “We just want to catch the paedophiles and terrorists; if you’re not a paedophile or terrorist, you have nothing to fear.”  After all, these are the exact phrases that have been used numerous times to create any number of organizations and mechanisms, including the TSA and the NSA itself.  And they’re all the more powerful because there is a strong core of truth to them.

But what they don’t address are a few of the fatal flaws in any such system based on a behavioural algorithm.  First of all, inclination, or even intent, doesn’t equal action.  Our society long ago established that the thought of doing something isn’t the same as doing it, whether it’s well-intentioned or malign.  If I mean to call my mother back in the US every Sunday, the thought doesn’t count unless I actually follow through and do so.  And if I want to run over a cyclist who’s slowing down traffic, it really doesn’t matter unless I nudge the steering wheel to the left and hit them.  Intent to commit a crime is not the same as the crime itself, until I start taking the steps necessary to perform the crime, such as purchasing explosives or writing a plan to blow something up.  If we ever start allowing the use of algorithms to denote who’s a potential criminal and treat them as such before they’ve committed a crime, we’ll have lost something essential to the human condition.

A second problem is that the algorithms are going to be created by people.  People who are fallible and biased.  Even if the individual biases are compensated for, the biases of the cultures are going to be evident in any tool that’s used to detect thought crimes.  This might not seem like much of a problem if you’re an American who agrees with mainstream American values, but what if you’re not?  What if you’re GLBT?  What if you have an open relationship?  Or like pain?  What if there’s some aspect of your life that falls outside what is considered acceptable by the mainstream of our society?  Almost everyone has some aspect of their life they keep private because it doesn’t meet societal norms on some level.  It’s a natural part of being human and fallible.  Additionally, actions and thoughts that are perfectly innocuous in the US can become serious crimes if you travel to the Middle East, Asia or Africa, and vice versa.  Back to the issue of sexual orientation, we only have to look at the recent Olympics and how several laws were passed in Russia to make non-heterosexual orientation a crime.  We have numerous examples of laws that were passed in the US only later to be thought unfair by more modern standards, with Prohibition being one of the most prominent examples.  Using computer algorithms to uncover people’s hidden inclinations would have a disastrous effect on both individuals and society as a whole.

Finally, there are the twin ideas of false positives and false negatives.  If you’ve ever run an IDS, WAF or any other type of detection and blocking mechanism, you’re intimately familiar with the concepts.  A false positive is an alert that erroneously tags something as malicious when it’s not.  It might be a coder who used a string you’ve written into your detection algorithms, caught by your IDS as an attack.  Or it might be a horror writer looking up some horrible technique that the bad guy in his latest novel is going to use to kill his victims.  In either case, it’s relatively easy to identify a false positive, though a false positive by a behavioural algorithm has the potential to ruin a person’s life before everything is said and done.

Much more pernicious are false negatives.  This is when your detection mechanism has failed to catch an indicator and therefore not alerted you.  It’s much harder to find and understand false negatives because you don’t know if you’re failing to detect a legitimate attack or if there are simply no malicious attacks to catch.  It’s hard enough when dealing with network traffic to understand and detect false negatives, but when you’re dealing with people who are consciously trying to avoid displaying any of the triggers that would raise alerts, false negatives become much harder to detect and the consequences become much greater.  A large part of spycraft is avoiding any behaviour that will alert other spies to what you are; the same ideas apply to terrorists, or criminals of any stripe with a certain level of intelligence.  The most successful criminals are the ones who make every attempt to blend into society and appear to be just like every other successful businessman around them.  The consequence of believing your computer algorithms have identified every potential terrorist is that you stop looking for the people who might be off the grid for whatever reason.  You learn to rely too heavily on the algorithm to the exclusion of everything else, a consequence we’ve already seen.
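The base-rate problem makes the false-positive side of this concrete.  A sketch with assumed numbers; the population and prevalence figures are illustrative, not drawn from any real program:

```python
# Why a "99% accurate" behavioural detector drowns in false alarms
# when the thing it hunts is rare.  All figures below are assumptions
# for illustration.
population = 300_000_000      # roughly the US
actual_bad = 3_000            # assume one real threat per 100,000 people
tpr = 0.99                    # chance a real threat is flagged
fpr = 0.01                    # chance an innocent person is flagged

true_alerts = actual_bad * tpr
false_alerts = (population - actual_bad) * fpr

# Of everyone flagged, what fraction is actually a threat?
precision = true_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:,.0f} innocent people flagged")
print(f"{precision:.3%} of alerts point at a real threat")
```

Roughly three million innocent people get flagged to catch a few thousand, and well under one alert in a hundred is real, before the smart adversaries who evade the triggers are even counted.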

So much of what goes on in society is a pendulum that swings back and forth as we adjust to changes in our reality.  Currently, we have a massive change in technologies that allow for surveillance far exceeding anything that’s ever been available in the past.  The thought that it might swing to the point of having chips in every person’s head that tell the authorities when we start thinking thoughts that are a little too nasty is a far-fetched scenario, I’ll admit.  But the thought that the NSA might have a secret data center in the desert that runs a complex algorithm on every packet and phone call made in the US and the world to detect potential terrorists or criminals isn’t.  However well intentioned the idea might be, the failings of the technology, the failings of the people implementing the technology and the impacts of this technology on basic human rights and freedoms are not only things that should be considered, they’re all issues facing us right now that must be discussed.  I, for one, don’t want to live in a world of “thought police” and “Minority Report“, but that is where this slippery slope leads.  Rather than our Oracle being a group of psychics, it might be a computer program written by … wait for it … Oracle.  And if you’ve ever used Oracle software, that should scare you as much as anything else I’ve written.



No responses yet

Jan 24 2014

Can’t get there from here

I’ve had an interesting problem for the last few days.  I can’t get to the Hack in the Box site, HITB.org, or the HITB NL site from my home near London.  Turns out I can’t get to the THC.org site or rokabear.com either.  That makes four hacking conferences whose sites I can’t get to.  And I’m not the only one, since apparently a number of people using Virgin Media in the UK as their ISP can’t get to these sites, while people on other ISPs in Britain can get to all four.  I can even get to them if I log into my corporate VPN, just not while the traffic is flowing out through my home network.  I’m not going to accuse Virgin Media of blocking these sites, but I’m also not ruling out chicanery on their part as a cause either.  I also make no claims that I possess the network kung-fu to verify that any of my testing is more than scratching the surface of this problem.

So here’s how this all started:  Yesterday morning I saw a tweet that the early bird sign-up for Hack in the Box Amsterdam was going to end soon.  I know some of the organizers of the event, I’ve wanted to go for a long time, so I decided to get my ticket early and save the company a few bucks.  I opened up a new tab in Chrome, typed in haxpo.nl and … nothing, the request timed out.  Hmm.  Ping gave me an IP, so the DNS records were resolving, but the site itself was timing out.  I switched to the work computer, to find the same thing happening.  Then I logged into the corporate VPN and tried again, and suddenly everything worked.  Curious.

At first I thought this might be a stupid DNS trick played at the ISP, so I changed my DNS resolvers to a pair of servers I have relative certainty aren’t going to play tricks, Google’s 8.8.8.8 and the DNS server from my old ISP back in the US, Sonic.net (who I highly recommend, BTW).  This didn’t change anything, I still couldn’t get to HITB.  I had to get working, so I did what any smart security professional does, I threw up a couple of tweets to see if anyone else was experiencing similar issues.  And it turns out there were a number of people, all using Virgin Media, who had the identical problem.  This is how I found out that THC and Rokabear are also not accessible for us.

As yesterday went by, I got more and more confirmations that none of these hacking sites are available for those of us on Virgin Media.  At first I thought it might simply be VM blackholing the sites, but VM’s social media person sent me a link to review who was being blocked by court order by Virgin Media.  I didn’t find any of the hacking sites listed in this, besides which Virgin Media actually throws up a warning banner page when they block a page, they don’t simply blackhole the traffic.  They will limit your internet access if they feel you’re downloading too many big files during peak usage hours, but that’s a discussion for another day.

The next step was tracert.  I’m a little chagrined to admit I didn’t think of tracert earlier in the process, but to be honest, I haven’t really needed to use it in a while.  What I found was a bit interesting (and no, you don’t get the first two hops in my network chain, you have no need to know what my router’s IP is).

C:\Users\Martin>tracert www.hitb.org

Tracing route to www.hitb.org [199.58.210.36]

  3     9 ms     7 ms     7 ms  glfd-core-2b-ae3-2352.network.virginmedia.net [8.4.31.225]
  4    11 ms     7 ms     7 ms  popl-bb-1b-ae3-0.network.virginmedia.net [213.10.159.245]
  5    10 ms    11 ms    10 ms  nrth-bb-1b-et-700-0.network.virginmedia.net [62.53.175.53]
  6    11 ms    15 ms    14 ms  tele-ic-4-ae0-0.network.virginmedia.net [62.253.74.18]
  7    13 ms    16 ms    14 ms  be3000.ccr21.lon02.atlas.cogentco.com [130.117.1.141]
  8    16 ms    14 ms    16 ms  be2328.ccr21.lon01.atlas.cogentco.com [130.117.4.85]
  9    17 ms    15 ms    16 ms  be2317.mpd22.lon13.atlas.cogentco.com [154.54.73.177]
 10    88 ms   102 ms   103 ms  be2350.mpd22.jfk02.atlas.cogentco.com [154.54.30.185]
 11    99 ms   100 ms    91 ms  be2150.mpd21.dca01.atlas.cogentco.com [154.54.31.129]
 12    97 ms    94 ms    96 ms  be2177.ccr41.iad02.atlas.cogentco.com [154.54.41.205]
 13   102 ms   100 ms   105 ms  te2-1.ccr01.iad01.atlas.cogentco.com [154.54.31.62]
 14   101 ms   210 ms   211 ms  te4-1.ccr01.iad06.atlas.cogentco.com [154.54.85.8]
 15    90 ms    91 ms    99 ms  edge03-iad-ge0.lionlink.net [38.122.66.186]
 16    90 ms    94 ms    98 ms  23.29.62.12
 17  nlayer.lionlink.net [67.208.163.153]  reports: Destination net unreachable.

Rather than doing what I thought would be the logical thing and simply hopping across the Channel to hit Amsterdam fairly directly, my traffic leaves the VM network through Cogent Networks, hits a few systems in the US owned by a company called Lionlink Networks LLC and dies.  So my traffic leaves the UK, travels to Switzerland, then to the US, over to Washington DC, and then dies.  And this happens with four separate hacker conference sites, but doesn't appear to happen anywhere else.  Oh, and all four hacking sites take the same basic route and all die shortly after hitting LionLink.  Hmmmm.
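Comparing where several traces die is easy to script.  Here's a minimal sketch that parses tracert-style output and pulls out the last hop that actually answered; the regexes assume the Windows tracert format shown above, and the sample traces would be pasted in by hand:

```python
import re

HOP_RE = re.compile(r'^\s*\d+\s')                      # a numbered hop line
END_RE = re.compile(r'(\S+ \[\d{1,3}(?:\.\d{1,3}){3}\]'
                    r'|\d{1,3}(?:\.\d{1,3}){3})\s*$')  # "host [ip]" or bare IP

def last_hop(trace_output):
    """Return the last hop that actually answered in a tracert-style
    trace, or None.  Timed-out hops ('*') are skipped."""
    last = None
    for line in trace_output.splitlines():
        if HOP_RE.match(line):
            m = END_RE.search(line)
            if m:
                last = m.group(1)
    return last

trace = (" 15    90 ms    91 ms    99 ms  edge03-iad-ge0.lionlink.net [38.122.66.186]\n"
         " 16    90 ms    94 ms    98 ms  23.29.62.12\n"
         " 17     *        *        *     Request timed out.")
print(last_hop(trace))  # 23.29.62.12
```

Run `last_hop` over traces to each of the four conference sites and you can see at a glance whether they all die in the same network.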

I know I’m a professional paranoid.  I know how BGP works and that it’s not unusual for traffic to bounce around the internet, going way, way out of what a human would consider a direct route, but the fact that all four EU hacking sites route back to the US and all die when they hit Lionlink is more than a little suspicious to me.  It’s almost like someone is routing the traffic through Switzerland and the US so it can be monitored for hacker activity, since both countries have laws that allow for the capture of traffic crossing their borders.  But of course, that would just be paranoid.  Or it would have been in a pre-Snowden world.  In a post-Snowden world, I have to assume most of my traffic is being monitored for anomalous behavior and that the only reason I noticed is that someone at Lionlink screwed up a routing table, exposing the subterfuge.  But that would just be my paranoia speaking, wouldn’t it?

I’m hoping someone with a deeper understanding of the dark magiks of the Internets can dig into this and share their findings with me.  It’s interesting that this routing problem only happens to people on Virgin Media, and it’s interesting that the traffic is being routed through Switzerland and the US.  What I have isn’t conclusive proof of anything; it’s just an interesting traffic pattern at this point.  I’m hoping there’s a less sinister explanation for what’s going on than the one I’m positing.  If you look into this, please share your findings with me.  I might just be looking at things all wrong, but I want to learn from this experience whether I’m right or not.

Thanks to @gsuberland, @clappymonkey, @sawaba, @tomaszmiklas, @module0x90 and others who helped verify some of my testing on Twitter last night.  And special thanks to @l33tdawg for snooping and making sure I got signed up for HITB.

Update – And here it is, a much more believable explanation than spying: route leakage.  So much for my pre-dawn ramblings.

From Hacker News on Y Combinator:

This is a route leak, plain and simple. Don’t forget to apply Occam’s Razor. All of those sites which are “coincidentally” misbehaving are located in the same /24.

This is what is actually happening. Virgin Media peers with Cogent. Virgin prefers routes from peers over transit. Cogent is turrible at provisioning and filtering, and is a large international transit provider.

Let’s look at the route from Cogent’s perspective:


  BGP routing table entry for 199.58.210.0/24, version 2031309347
  Paths: (1 available, best #1, table Default-IP-Routing-Table)
    54098 11557 4436 40015 54876
      38.122.66.186 (metric 10105011) from 154.54.66.76 (154.54.66.76)
        Origin incomplete, metric 0, localpref 130, valid, internal, best
        Community: 174:3092 174:10031 174:20999 174:21001 174:22013

If Cogent was competent at filtering, they’d never learn a route transiting 4436 via a customer port in the first place, but most likely someone at Lionlink (54098) is leaking from one of their transit providers (Sidera, 11557) to another (Cogent, 174).

Also, traffic passing through Switzerland is a red herring — the poster is using a geoip database to look up where a Cogent router is. GeoIP databases are typically populated by user activity, e.g., mobile devices phoning home to get wifi-based location, credit card txns, etc. None of this traffic comes from a ptp interface address on a core router. GeoIP databases tend to have a resolution of about a /24, whereas infrastructure netblocks tend to be chopped up into /30s or /31s for ptp links and /32s for loopbacks, so two adjacent /32s could physically be located in wildly different parts of the world. More than likely, that IP address was previously assigned to a customer. The more accurate source of information would be the router’s hostname, which clearly indicates that it is in London. The handoff between Virgin and Cogent almost certainly happens at Telehouse in the Docklands.

If someone were, in fact, trying to intercept your traffic, they could almost certainly do so without you noticing (at least at layer 3.)
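The two mechanics in that quoted comment can be sketched in a few lines.  This is purely illustrative: the AS path comes from the quoted BGP entry, the ASN roles (Cogent, Sidera, nLayer, Lionlink) are taken on the commenter's word, and the leak check is a crude heuristic, not how real BGP monitoring works:

```python
import ipaddress

# AS path from the quoted BGP entry (leftmost = nearest to Cogent).
as_path = [54098, 11557, 4436, 40015, 54876]
transit_asns = {174, 11557, 4436}   # Cogent, Sidera, nLayer, per the comment
stub_asn = 54098                    # Lionlink

def looks_like_leak(path, stub, transit):
    """A stub AS re-announcing one transit provider's routes to another
    shows up at the head of a path that then traverses other transit ASNs."""
    return path[0] == stub and any(asn in transit for asn in path[1:])

print(looks_like_leak(as_path, stub_asn, transit_asns))   # True

# The GeoIP point: a /24 infrastructure block is typically carved into /31
# point-to-point links, so a database with /24 resolution lumps 128 links,
# potentially on different continents, into one "location".
infra = ipaddress.ip_network("130.117.1.0/24")
print(len(list(infra.subnets(new_prefix=31))))            # 128
```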


Dec 15 2013

Twitter spam filters overloaded

I believe the Twitter spam filters are currently overloaded, or at least someone’s figured out a way around them.  In the last 72 hours, I’ve gotten more Twitter followers than I normally get in three weeks.  At first it was hard to tell whether they were real people, but as they’ve accumulated, I’ve become certain that the vast majority of them are not.  It’s gotten to the point that I’m reporting all new followers as spam unless there is sufficient reason to believe they might be a real person.

So what characteristics do the spam followers share in common?

  1. Non-English speakers.  Russian, Spanish, Arabic, and any number of other languages I don’t recognize.  I’m assuming some are gibberish even in their own language.
  2. Very low tweet counts.  Almost all of these accounts have fewer than 200 tweets, and a significant number have fewer than 50.  There doesn’t seem to be a commonality of links in these tweets, but I’ve given up on looking at their tweets.
  3. High following count, low follower count.  In an organic growth pattern, Twitter users don’t tend to have a 10-to-1 following/follower ratio, since close to 10% of Twitter is bots anyway.
  4. No listed count.  It doesn’t look like the bots have figured out how to get themselves added to lists quite yet.  Maybe a botnet will autolist bots in the future, but for now this is a big giveaway.
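Traits 2 through 4 could be sketched as a simple scoring heuristic (trait 1 would need language detection, which I'll skip).  The field names and thresholds here are my own invention for illustration, not anything from Twitter's actual API:

```python
def spam_score(account):
    """Count how many of the bot traits an account exhibits.
    Thresholds are guesses; field names are hypothetical."""
    score = 0
    if account.get("tweets", 0) < 200:           # very low tweet count
        score += 1
    followers = max(account.get("followers", 0), 1)
    if account.get("following", 0) / followers >= 10:  # 10-to-1 ratio
        score += 1
    if account.get("listed", 0) == 0:            # never added to a list
        score += 1
    return score

bot = {"tweets": 40, "following": 900, "followers": 12, "listed": 0}
print(spam_score(bot))  # 3 -- likely a bot
```

A real account with years of organic activity would typically score 0 or 1, which is roughly the mental arithmetic I'm doing before hitting "report as spam".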

I’m confident the folks at Twitter will figure out a way to stem the tide of the current bot invasion, but in the meantime I’ll continue to report these accounts as spam.  I apologize ahead of time if I block any real people by accident.


Dec 12 2013

Annual Predictions: Stop, think, don’t!

One of my pet peeves ever since I started blogging has been the annual ritual of vendor security predictions.  Marketing teams must think these are a great idea, because we see them again and again … ad nauseam.  Why not?  Reporters and bloggers like them because they make for an easy story that can simply be cut and pasted from the vendor’s press release, a fair number of people will read them, and everyone gets more page views.  And there’s absolutely no downside to them, except for angry bloggers like me ranting in obscure corners of the internet about how stupid these lists are.  No one actually holds any of the authors to a standard and measures how accurate they were, in any case.

Really, the amazingly stupid part of these annual lists is that they’re not predictive in the least.  With rare exceptions, the authors are looking at what they’ve seen happening in the last three months of the year and trying to draw some sort of causal line to what will happen next year.  The exceptions are either simply repeating the same drivel they reported the year before or writing wildly outrageous fantasies just to see if anyone is actually reading.  Actually, it’s the last category, the outrageous fantasy, that I find the most useful, and probably contains the predictions most likely to come true in any meaningful way.

These predictions serve absolutely no purpose other than getting page views.  As my friend and coworker Dave Lewis pointed out, most of the predictions from the year 2000 could be reprinted today and no one would notice the difference.  We have a hard enough time dealing with the vulnerabilities and system issues we know for a fact are happening; many of the controls needed to combat the issues in predictions are either beyond our capabilities or controls we should already have in place but don’t.  So what does a prediction get the reader?  Nothing.  What does it get a vendor?  A few more page views … and a little less respect.

So, please, please, please, if your marketing or PR departments are asking you to write a Top 10 Security Predictions for 2014, say NO.  Sure, it’s easy to sit down for thirty minutes and BS your way through some predictions, but why?  Let someone else embarrass themselves with a list everyone knows is meaningless.  Spend the time focusing on one issue you’ve seen in the last year and how to overcome it.  Concentrate on one basic, core concept every security department should be working on and talk about that.  Write about almost anything other than security predictions for the coming year.  Because they’re utterly and completely worthless.

Remember: Stop, Think, Don’t!


Oct 15 2013

Don’t ask for my password or PIN, United!

I’ve been a United Airlines customer for years.  I’ve been very loyal to United and the Star Alliance.  I’ve flown over 300k miles with them, and I’ll have flown over 100k miles this year alone as of my next trip.  I’m in the top tier of their frequent flyer program and they generally treat me very well, with the kinds of exceptions that plague every airline, like maintenance and weather delays.  But they do one thing that really, really bugs me, and they need to change it: when I call in to use my mileage or alter a ticket, their customer service representative asks for my PIN!

When you log into the United site, you have two choices: you can use your password or a four-digit PIN.  The same PIN or password can be used to log in to the mobile application as well.  This login allows access to all of the account’s capabilities, letting the user change flights, get updates and spend frequent flier miles.  In other words, total control of the account.  And the customer service reps need this PIN in order to make changes to my account.

This is why I’m extremely annoyed by the way United treats my PIN.  In effect, every time I call in to United, I have to give up total control of my account to a complete stranger.  I have to either trust that they are well vetted by the airline, something I’m not entirely sure is true, or go through the hoops of changing my PIN every time I call in to United’s customer care services.  Alternatively, I can ignore both of those options and simply hope that nothing happens when I give up my password.  I’ve done all three at various times, but it still makes me angry that I have to choose one of these options.

I’ve complained to United several times when calling in.  I’ve talked to the agent on the phone and I’ve asked to speak to a manager, but as recently as last week they showed no sign of understanding that this is a problem or of making any changes.  The requirement to give up my password seemed to coincide with the merger of United and Continental and the adoption of the Continental computer systems.  The impression I’ve received from sources inside United and out is that the Continental system was developed in the mid-70s and has been largely unchanged since then.  Yes, they slapped some lipstick on the pig in the form of a web interface, but the back end is still a mainframe of some sort with a security model that hasn’t changed since its inception.

I have to appeal to United’s security teams: please, please, please find some way of changing your system so that I don’t get asked for a sensitive piece of information like my password or PIN every time I need to talk to your agents about a change to my flight!  I realize there is no credit card data directly available from my account, but my flight information is, and the account opens up the ability to change my flights or spend my mileage.  This really is something that shouldn’t be allowed in the modern age, from a multi-national corporation that really should know something about securing customer data.  Between moving to the UK and your poor security, I’m seriously thinking it’s time for a different airline.
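For what it's worth, the fix isn't exotic.  Here's a minimal sketch of the standard approach: store only a salted hash of the PIN, and have the rep-facing tool report match/no-match so no human ever sees or hears the secret.  Everything here, the PIN value included, is made up for illustration; I have no idea what United's back end actually looks like:

```python
import hashlib
import hmac
import os

def store_pin(pin: str):
    """Store only a salted PBKDF2 hash; the plaintext PIN is never kept."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify_pin(pin: str, salt: bytes, digest: bytes) -> bool:
    """What a rep's tool would call: returns only match or no match."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_pin("4821")   # hypothetical PIN
print(verify_pin("4821", salt, digest))  # True
print(verify_pin("0000", salt, digest))  # False
```

Better still would be a one-time code sent to the account holder for each call, but even the hash-compare above beats reading a reusable credential aloud to a stranger.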
