Archive for the 'Risk' Category

Jul 29 2014

You’ve been reported … by an ad

Published by under Government,Malware,Risk

This looks like an interesting experiment; the City of London police have started placing ads on pirate music sites warning visitors that their visit to the site has been recorded and reported.  Called “Operation Creative”, this is an effort by the Police Intellectual Property Crime Unit (PIPCU) to educate people visiting sites that offer pirated music and videos that what they’re doing is illegal and could result in prosecution.  As if anyone who visits a pirate site didn’t already know exactly what they were doing and what the potential consequences were.  The City of London police call it education, though intimidation might be a better word for what they’re actually doing.

The folks over at TorrentFreak are concerned that they couldn’t get the actual banners to show up.  They built their story out of what they could get to: ads for music sites that have reached agreements with the RIAA and the music labels.  While this is interesting, I’m more concerned with what the results of this type of ‘education’ will be.

Let’s be honest: anyone who’s using a pirate site has a pretty good idea of what they’re doing.  So the police banners aren’t going to be educational; they’re attempts to make users believe that their IP addresses have been logged for future prosecution.  While the police don’t come out directly with the threat, it’s implied by the word “reported”.  And who’s to say the ad network they’re using to supply the ads isn’t also using a cookie to gather IP addresses and various other information?  This definitely sounds more like a threat than most forms of education I’m familiar with.

The problem I have with this PIPCU exercise isn’t the intimidation, but rather its unintended consequences.  Scary warnings that the user is doing something illegal aren’t new; in fact they’ve been used by malware authors for a long, long time.  Scareware saying the FBI is going to come knocking at your door for visiting illegal websites is a common tactic; the only thing that changes is whether they tell you you’ve been to porn sites with underage models or to pirate sites to download music.  I’m certain the same groups who send these notifications already have fake ads telling users to “pay a fine of $500 or we’re coming to your house”.  If they aren’t in the ad networks, they definitely send out spam to users with the same messages, often using the exact same graphics and wording as official police web sites.

Rather than discouraging the average pirate site user from visiting the site, this police effort is likely to make such scareware ads look legitimate in the eyes of the user.  In other words, while there might be some impact on the number of people using pirate sites, it’s more likely this will increase the amount of fraud perpetrated against those same users, since it’ll be hard to tell whether a warning is really from the police or not.  The music companies are probably perfectly happy with this as an outcome, but I doubt the police will enjoy being used as a method for increasing fraud against anyone.

My second concern is less about the fraud and more about the futility of the exercise.  Brian Krebs recently wrote about services that let an organization click on banner ads in order to drain the money spent on those ads.  In other words, you pay a service to click on your competitor’s ads without giving them anything of value, using up the money they paid for those ads as quickly as possible, with little or no return.  I see no reason some of the more technically savvy users of pirate sites wouldn’t create scripts to do exactly the same to the police.  How hard would it be to use VPNs or Tor to disguise IP addresses and hit the same ads again and again?  In theory there are likely to be defenses in place to stop this type of targeted ad attack, but a sufficiently motivated attacker can overcome almost any defense.

I’m purposefully not addressing the ethics of pirating music, nor am I addressing the efficacy of an outdated business model such as the music industry’s.  I’ll leave it to someone else to argue both sides of that argument.  What I’m concerned with is how effective these efforts are going to be and what their consequences will be.  Does the PIPCU expect their ad campaign to have a direct effect on piracy, or do they realize this is a futile effort?  Have they thought of the negative consequences their efforts will have with regard to fraud?  Or is this simply an effort to be seen as doing *something* by the recording companies and the public, no matter how negligible the positive outcomes might be?

I’m not sure what would constitute an effective measure to stop piracy.  For the most part I think the ads we’ve seen in the past, both in movie theaters and online, have been heavy-handed, annoying most of the people they targeted rather than dissuading anyone.  This effort doesn’t seem much different, but it has the added disadvantage of making it easier for the authors of scareware to intimidate the public into giving up money for no good reason.  And that’s something that should be avoided whenever possible.

No responses yet

Jul 21 2014

Can I use Dropbox?

Published by under Encryption,Family,Privacy,Risk

I know security is coming to the public awareness when I start getting contacted by relatives and friends about the security of products beyond anti-virus.  I think it’s doubly telling when the questions are not about how to secure their home systems but about the security of a product for their business.  Which is exactly what happened this week; I was contacted by a family member who wanted to know if it was safe to use Dropbox for business.  Is it safe, is it secure and will my business files be okay if I use Dropbox to share them between team members?

Let’s be honest: the biggest variable in the ‘is it secure?’ equation is what you’re sharing using this type of service.  I’d argue that anything with the capability of substantially impacting your business financially or reputationally shouldn’t be shared using any third-party service provider (aka The Cloud).  If it’s something valuable enough to your business that you’d be panicking if you left it on a USB memory stick in your local coffee shop, you shouldn’t be sharing it via a cloud provider in the first place.  In many cases the security concerns of leaving your data with a service provider are similar to the dropped USB stick, since many of these providers have experienced security breaches at one point or another.

What raised this concern to the level of general public awareness?  It turns out it was a story in the Guardian about an interview with Edward Snowden in which he suggests that Dropbox is insecure and that users should switch to Spideroak instead.  Why?  The basic reason is that Spideroak is a ‘zero-knowledge’ product, whereas Dropbox maintains the keys to all the files that users place on its systems and could use those keys to decrypt any of them.  This fundamental difference means that Dropbox could be compelled by law to provide access to an end user’s files, while Spideroak couldn’t, because they don’t have that capability.  From Snowden’s perspective, this is the single most important feature difference between the two platforms, and who can blame him for suggesting users move?

Snowden has several excellent points in his interview, at least from the viewpoint of a security and privacy expert, but there’s one I don’t think quite holds up.  He states that Condoleezza Rice has been appointed to the board of directors of Dropbox and that she’s a huge enemy of privacy.  This argument seems more emotional than factual to me, since I don’t have much historical evidence on which to base an assessment of Rice’s views on privacy.  It feels a little odd to be arguing that a Bush era official might not be an enemy of privacy, but I’d rather give her the benefit of the doubt than cast aspersions on Dropbox for using her experience and connections.  Besides, I’m not sure how much influence a single member of the board of directors actually has on the direction of the product and the efficacy of its privacy controls.

On the technical front, I believe Snowden is right to be concerned.  We know for a fact that Dropbox has access to the keys that decrypt users’ files; they use those keys as part of a process that reduces the number of identical files stored on their systems, a process called deduplication.  The fact that Dropbox has access to these keys means a few things: they can decrypt the data if they’re served with a lawful order, a Dropbox employee could conceivably use the keys to get at the data, and Dropbox could potentially be feeding into PRISM or one of the many other governmental programs that want to suck up everyone’s data.  It also means that Dropbox could make a mistake that accidentally exposes the data to the outside world, which has happened before.  Of course, vulnerabilities and misconfigurations that result in a lapse of security are a risk you face when using any cloud service; they’re not unique to Dropbox.
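To make the deduplication point concrete, here’s a toy sketch of content-based dedup; this is purely my own illustration, not Dropbox’s actual pipeline.  The key observation is that identical plaintexts have to map to a single stored copy, which only works if the provider can see the plaintext or holds the keys that unlock it.

```python
# Toy content-addressed deduplication; purely illustrative,
# not Dropbox's actual implementation.
import hashlib

store = {}  # content hash -> single stored copy

def upload(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    # This lookup only matches because the service sees the plaintext.
    # If each client encrypted with its own key first, identical files
    # would produce different ciphertexts and nothing would dedupe.
    if digest not in store:
        store[digest] = data
    return digest

a = upload(b"Q3 sales projections")
b = upload(b"Q3 sales projections")  # a second user, same file
assert a == b and len(store) == 1    # one copy stored, not two
```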

I’ve never seen how Dropbox handles and secures the keys used to encrypt data, and they haven’t done a lot to publicize their processes.  It could be that there are considerable safeguards in place to protect the keys from internal employees and federal agencies.  I simply don’t know.  But they do have the keys.  Spideroak doesn’t, so they don’t have access to the data end users are storing on their systems; it’s that simple.  The keys that unlock the data are stored with the user, not the company, so neither employees nor governmental organizations can access the data through Spideroak.  Which is Snowden’s whole point: we should be exploring service providers who couldn’t share our data if they wanted to.  From an end-user perspective, a zero-knowledge product is vastly preferable, at least if privacy is one of your primary concerns.

But is privacy a primary concern for a business?  I’d say no, at least in 90% of the businesses I’ve dealt with.  It’s an afterthought in some cases, and in many cases it’s not even thought of until there’s been a breach of that privacy.  What’s important to most businesses is functionality and just getting the job done.  If that’s the case, it’s likely that Dropbox is good enough for them.  Most businesses have bigger concerns when dealing with the government than whether their files can be read or not: taxes, regulations, taxes, oversight, taxes, audits, taxes… the list goes on.  They’re probably going to be more concerned with whether a hacker or rival business can get to their data than whether the government can.  To which the answer is probably not.

I personally use Dropbox all the time.  But I’m using it to sync pictures between my phone and my computer, to share podcast files with co-conspirators (also known as ‘co-hosts’) and to make sure I have access to non-sensitive documents wherever I am.  If it’s sensitive, I don’t place it in Dropbox; it’s that simple.  Businesses need to be making the same risk evaluation about what they put in Dropbox or any other cloud provider: if having the file exposed would have a significant impact on your business, it probably doesn’t belong in the cloud encrypted with someone else’s keys.

If it absolutely, positively has to be shared with someone elsewhere, there’s always the option of encrypting the file yourself before putting it on Dropbox.  While the tools still need to be made simpler and easier, it is possible to use tools like TrueCrypt (or its successor) to encrypt sensitive files separately from Dropbox’s encryption.  Would you still be as worried about a lost USB key if the data on it had been encrypted?
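For the technically inclined, here’s a minimal sketch of what ‘encrypt it yourself first’ can look like, using the Python cryptography package’s Fernet recipe.  The file names are hypothetical, and this just stands in for whichever tool (TrueCrypt, GPG, whatever) you actually trust.

```python
# Minimal client-side encryption before syncing; a sketch, not a
# vetted tool.  Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this key anywhere EXCEPT Dropbox

with open("contract.pdf", "rb") as f:            # hypothetical file
    ciphertext = Fernet(key).encrypt(f.read())

# Only the ciphertext ever touches the synced folder.
with open("Dropbox/contract.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Later, anyone holding the key can recover the plaintext:
# plaintext = Fernet(key).decrypt(ciphertext)
```

Even if Dropbox’s copy leaks, an attacker gets ciphertext they can’t read without a key Dropbox never had.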


One response so far

Jul 17 2014

Root my ride

Published by under Government,Hacking,Risk

If you’ve never watched the anime Ghost in the Shell (GITS) and you’re in security, you’re doing yourself a great disservice.  If nothing else, watch the Stand Alone Complex series as a primer on what we might expect from Anonymous in the future.  I know my friend Josh Corman tries to sit down and watch it every year or two to refresh his memory and help him understand what might be coming down the pipeline from chaotic actors.  And the authors of the manga/anime have an impressive understanding of what the future of hacking might bring in the long term.  Probably a better idea than the FBI has, at least.

Earlier this week the Guardian got a copy of an unclassified document the FBI had written up exploring driverless vehicles and the dangers they pose.  Their big revelation is that driverless cars could let hackers do things they couldn’t do while driving a normal car.  In other words, since they wouldn’t have to actually be driving, they could hack while the car drove itself.  Which ignores the fact that it’s already pretty easy to get someone else to drive a car for you, presumably much better than a driverless car will be able to do for many years.  If I’m going to commit a crime, I’d rather have someone I can trust at the wheel than take my chances that the police might have a back door (pun intended) into my car’s operating system.

The Guardian story also hints that the FBI is concerned about driverless cars being hacked for use as weapons.  I have to admit that this is a concern; hacking a target’s car to accelerate at the wrong time, or mucking with the car’s GPS so that it thinks the road goes straight where it should follow the curve of a cliff, wouldn’t be a massive logical stretch.  Using a hacked car to plow into a crowd or run over an individual is a possibility as well.  However, both of these are things an unskilled operator could do with a real car by cutting the brake lines or driving the car themselves, then running from the scene of the crime.

I think it’ll be much more interesting when driverless cars start becoming commonplace and young hackers decide they don’t like the feature set and/or controls that are present in the car.  It’s a logical extension to think that the same people who root phones and routers and televisions will eventually figure out how to re-image a car so that it has the software they want, to give the vehicle the capabilities they want.  I know the Ford Focus has a whole community built around customizing the software in the vehicle, so why would it be any different for driverless cars in the future?

The difference with the driverless car will be that I could strip out many if not all of the safety protocols that will be in place, as well as the limiters on the engine and braking systems.  I want to pull off a robbery and use a driverless car for the getaway?  Okay: ignore all stoplights, step on the gas and don’t brake for anything.  You’d probably be able to rely on the safety features of other driverless cars to avoid you, and you wouldn’t have to worry about the police issuing a kill signal to your car once they’d read your license plate and other identifying codes.  I’d still rather have an old-fashioned car with an actual driver, but at some point those might be hard to get, and using one would arouse suspicion in and of itself.

On the point of a kill signal, I strongly believe this will be a requirement for driverless cars in the future.  I’m actually surprised a law enforcement kill switch hasn’t already been legislated by the US government, though maybe they’re waiting to see how the public accepts smart phone kill signals first.  Around the same time as the kill switch is being made mandatory, I expect to see laws passed to make rooting your car illegal.  Which, of course, means only criminals will root their cars.  Well, them and the thousands of gear heads who also like to hack the software and won’t know or care about the law.

The FBI hasn’t even scratched the surface of what they should be concerned with about driverless cars.  Back to my initial point about Ghost in the Shell: think about what someone could do if they hacked into the kill switch system that’s going to be required by law.  Want to cause massive chaos?  Shut down every car in Los Angeles or Tokyo.  Make the cars accelerate and shut down the brakes.  Or simply change the maps the cars’ GPS is using.  There are a lot of these little chaos-producing tricks used throughout the GITS series, plus even more that could be adapted easily to the real world.

Many of these things will never happen.  The laws will almost definitely be passed and you’ll have a kill switch in your new driverless car, but there’s little chance we’ll ever see a hack of the system on a massive scale.  On the other hand, given the insecurity we’re just starting to identify in medical devices, the power grid and home networks, I’m not sure that any network that supports driverless cars will be much better secured. Which will make for a very interesting future.

No responses yet

Jul 10 2014

Illustrating the problem with the CA’s

You’d think that if there was any SSL certificate out there that’d be carefully monitored, it’d be Google’s.  And you’d be right; between the number of users of Chrome and the Google team itself, the certs that correspond to Google properties are under a tremendous amount of scrutiny.  So when an impostor cert is issued anywhere in the world, it’s detected relatively quickly in most cases.  But the real question is, why are Certificate Authorities (CA’s) able to issue false certs in the first place?  Mostly because we have to trust someone in the process of cert issuance, and in theory the CA’s are the ones who are most trustworthy and best protected.  Unfortunately, there are still a lot of holes in the process and in the protection of even the best CA’s.

Last week Google detected an unauthorized digital certificate issued in India by the National Informatics Centre (NIC).  This week it was revealed that not only were the certs Google knew about issued, but an indeterminate number of other certs had been issued by the NIC as well.  Their issuance process had been compromised in some way and they’re still investigating the full scope of the compromise.  Users of Chrome were protected by certificate pinning, but users of IE and other browsers might not be so lucky.  What was done with these certificates, no one knows.  What could be done with them is primarily man-in-the-middle attacks against users of the affected domains, meaning the entity that now holds these certificates could intercept and decrypt email, files, etc.  There are plenty of reasons a government or criminal element would want control of a certificate that looks and feels like an authentic Google (or Microsoft or…) certificate.

There’s no clear, clean way to improve the CA process.  Extended Validation (EV) certs are one way, but they also make the whole process of getting an SSL cert much more complex.  Given the value of privacy and the vital role certificates play in maintaining it, though, this may be the price the Internet has to pay.  Pinning certs helps, as will DANE and Sunlight (aka Certificate Transparency).  Neither DANE nor Sunlight is fully baked yet, but they should both help make up for the weaknesses of current processes.  Then it’ll just take a year or three to get them into all the browsers, and even longer for older browsers to be retired.  And that’s not even taking into account the fact that we don’t use SSL everywhere.
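To show what pinning buys you, here’s a minimal sketch of checking a server’s certificate against a fingerprint captured out of band.  The pinned value below is a placeholder, and real-world pinning (Chrome’s included) is done against keys or CAs rather than a single leaf certificate, so treat this as an illustration of the idea only.

```python
# Minimal leaf-certificate pinning check.  The pinned fingerprint is a
# placeholder you'd capture out of band, not Google's real fingerprint.
import hashlib
import socket
import ssl

PINNED_SHA256 = "00" * 32   # placeholder fingerprint

def cert_fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)   # leaf cert, DER-encoded
    return hashlib.sha256(der).hexdigest()

if cert_fingerprint("www.google.com") != PINNED_SHA256:
    # A mismatch means a MITM... or a routine key rotation, which is
    # exactly why leaf pinning is brittle and key/CA pinning is preferred.
    raise RuntimeError("Certificate does not match pin")
```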

No responses yet

Jul 06 2014

The dominoes of Internet Balkanization are falling

Published by under Cloud,Government,Hacking,Privacy,Risk

We knew it was coming; it was inevitable.  The events put in motion last June played right into the hands of the people who wanted to cement their control, giving them every excuse to seize the power and claim they were doing it in defense of their people and their nation.  Some might even say it was always destined to happen, it was just a matter of how soon and how completely.  What am I talking about?  The Balkanization of the Internet.  It’s happening now and with Russia entering the competition to see who can control the largest chunk most completely, it’s only a matter of time before others follow the lead and make the same changes within their own country.

Let’s make no mistake here: for as long as the Internet has existed, there have been countries and governments that wanted to circumscribe their boundaries in the virtual domain and create an area where they control the content, control what the people can and can’t see, and have the ability to see everything everyone is looking at.  But prior to the last year, very few countries beyond China and a handful in the Middle East had either the political will or the technical means to filter what came into and out of their borders.  China had this power because it recognized early on the threat the Internet posed; the countries of the Middle East have comparatively limited Internet access to begin with, so filtering and controlling that access is a relatively easy exercise.  In both cases, though, the efforts have been coarse, with plentiful ways to circumvent them, including the use of Tor.  Though it now looks like Tor itself has long been subverted by the US government for spying as well.

But then Edward Snowden came forth with a huge cache of documents from inside the NSA.  And it turned out all the things the US had long been shaking its finger at other governments about, things the US considered immoral and foreign to individual freedoms, were exactly the things the NSA had been doing all along.  Sure, it was only foreigners.  Oh, and only ‘people of interest’.  And people with connections to people of interest.  Four or five degrees of connection, that is.  And foreign leaders.  And … the list goes on.  Basically, the logic was that anyone could be a terrorist, so rather than take a chance that someone might slip through the cracks, everyone became a suspect whose traffic on the Internet was to be collected, categorized and collated for future reference, just in case.  Any illusion of moral superiority, or of personal freedom from monitoring, was blown to shreds.  The arguments American politicians had carefully constructed to claim the high ground and tell other countries what they should and should not do were torn down, and America suddenly became the bad guy of the Internet.  Not that anyone who knew anything about the Internet hadn’t already suspected this was going on, or that the US is far from the only country performing this sort of monitoring of the world.  Every government is monitoring its people to one degree or another; the USA and the NSA were simply the ones who got their hands caught in the cookie jar.

The cries to stop data from being sent to the USA have been rising and falling since June and Mr. Snowden’s revelations.  At first they were strident, chaotic and impassioned.  And unreasonable.  But as time went by, people started giving it more thought, and many realized that stopping data from being exfiltrated to the USA in the Internet’s current form was near unto impossible.  The Internet’s most basic routing protocols make it nearly impossible to determine ahead of time what path a packet will take to its destination; traffic sometimes circumnavigates the globe in order to get to a destination a couple hundred miles away.  That didn’t stop Brazil from demanding that all traffic in their country stay on servers in their country, though they quickly realized this was an impossible demand.  Governments and corporations across the European Union have been searching for ways to ensure that data in Europe stays in Europe, though the European Data Protection Directive has been hard pressed to keep up with the changing situation.

And now Russia has passed a law through both houses of its Parliament that will require traffic served to users within Russia to be stored on servers inside Russia and logged for at least six months, starting by September of 2016.  They’re also putting pressure on Twitter and others to limit and block content concerning actions in the Ukraine, attempting to stop any voice of dissent from being heard inside Russia.  For most companies doing business there, this won’t be an easy law to comply with, either from a technical viewpoint or from an ethical one.  The infrastructure needed to retain six months of data in country is no small endeavor; Yandex, a popular search engine in Russia, says it will take more than two years to build the data centers required to fulfill the mandates of the law.  Then there’s the ethical part of the equation: who will access these logs, and how?  Will a court order be necessary, or will the FSB be able to simply knock at a company’s door and ask for everything?  Given the cost of building an infrastructure within Russian borders (and the people to support it, an additional vulnerability) and the ethical questions of the law, how does this change the equation of doing business in Russia for companies on the Internet?  Is it still possible to do business in Russia, is the business potential too great to pull out now, or do companies serve their traffic from outside Russia and hope they don’t get blocked by the Great Firewall of Russia, the next obvious step in this evolution?

Where Brazil had to bow to the pressure of international politics and didn’t have the business potential to force Internet companies to allocate servers within its borders, Russia does.  The ruling affluent population of Russia has money to burn; many of them make the US ‘1%’ look poor.  There are enough startups and hungry corporations in Russia who are more than willing to take a chunk of what’s now being served by Twitter, Google, Facebook and all the other American mega-corporations of the Internet.  And if international pressure concerning what’s happening in the Ukraine doesn’t even make Russia blink, there’s nothing the international community can do about Internet Balkanization.

Once Russia has proven that the Balkanization of the Internet is a possibility, and even a logical future for the Internet, it won’t take long for other countries to follow.  Smaller countries will follow quickly, the EU will create laws requiring many of the same features that Russia’s laws do, and eventually even the US will require companies within its borders to retain information where the government will have easy access to it.  The price to companies ‘in the Cloud’ will skyrocket as the Cloud itself has to be instantiated within individual regions and the economy of scale it currently enjoys is brought down by the required fracturing.  And eventually much of the innovation and money created by the great social experiment of the Internet will grind to a halt, as only the largest companies will have the resources needed to be available on a global scale.


One response so far

Apr 05 2014

Hack my ride

Published by under Hacking,Risk,Security Advisories

Important:  Read the stuff at the end of this post.  I got a lot of feedback and I’ve added it there.  Unlike some people, I actually want to be told when I’m wrong and learn from the experience.

I don’t own a Tesla S and probably never will.  They’re beautiful cars, they’re (sort of) ecologically friendly, and they show that you have more money than common sense.  I use a car to get my family from point A to point B, and showing off my wealth (or lack thereof) has never actually been part of the equation in buying a car.  And one more reason I don’t think I’ll ever buy a Tesla is that I’m beginning to think they’re as insecure as all get out, at least from the network perspective.

Last week hacker* Nitesh Dhanjani wrote up his experience exploring the remote control possibilities of the Tesla Model S P85+.  It starts with being able to unlock the doors, check the location, etc.  And it ends with a total lack of security for the site and tools needed to control the car.  The web site for controlling your new Tesla has minimal password complexity requirements: six characters with at least one letter and one number.  I have no idea if it’ll even let you use symbols, but I’m guessing that’s either not supported or only a minimal subset of symbols is available.  Which means password complexity is very low by almost any standard.  Then there’s the fact that Tesla doesn’t have rudimentary controls around the web site, such as rate limits on password guesses or account lockout, which they’ve hopefully changed by now.  That gives you an easily guessed password combined with a site that allows unlimited guesses, making the possibility of brute forcing the password very real.  And that’s not even counting the fact that so many people reuse account names and passwords, so there’s a good chance you can find a compromised account database with the owner’s details if you search for a little while.
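Some back-of-the-envelope math shows why the combination of a small keyspace and unlimited guesses matters.  The guess rates below are pure assumptions on my part, but with no rate limiting or lockout the only real limit is how many connections the site will accept, and a dictionary of reused passwords shrinks the search dramatically.

```python
# Keyspace for a six-character password containing at least one letter
# and one digit, and the time to exhaust it at assumed guess rates.
total = 62 ** 6              # all 6-char strings of upper/lower/digits
letters_only = 52 ** 6       # strings with no digit
digits_only = 10 ** 6        # strings with no letter
valid = total - letters_only - digits_only   # ~37 billion passwords

for rate in (10, 1_000, 100_000):   # guesses/sec; pure assumptions
    days = valid / rate / 86_400
    print(f"{rate:>7} guesses/sec -> {days:>9,.1f} days to exhaust")
```

At a distributed hundred thousand guesses a second, the whole keyspace falls in a few days, and a real attacker trying leaked passwords first would likely need far fewer guesses than that.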

That’s great so far.  Now let’s add to this the fact that your Tesla S has wifi/4G wireless access.  And there’s also a 4-pin connector in the dashboard that leads to the network inside your car.  It’s running all sorts of wonderful things on that network too, none of which could possibly be vulnerable to outside attacks, right?  SSH?  Check.  DNS?  Check.  Web server?  Check.  Telnet?  Check.  Wait, telnet?  Seriously?  Oh, and “1050/tcp open java or OTGfileshare”.  Yes, I really want either java or an open file share running in my car.  At least one person was able to get Firefox running on the console of their Tesla, [Correction: x-11 forwarding misconfiguration, not running on the Tesla] even if it was flipped on its head for some odd reason.  Any or all of these services running on the car’s internal network could have vulnerabilities that allow configuration changes, remote code execution or even full root access to the system.  Or maybe they just allow the systems to be rebooted, not something you really want when you’re driving on the winding coastal roads of California. [I've been told it's just the displays that would be affected, none of the handling characteristics would change. Still disconcerting]
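For the curious (and only on a car you own), confirming which of those services answer takes nothing more exotic than a TCP connect probe.  The hosts below are the internal IPs reported for the car; the port numbers are the standard ones for the named services plus the 1050/tcp oddity, so treat the list as an assumption.

```python
# Simple TCP connect probe of the services reported on the car's
# internal network.  Run this only against hardware you own.
import socket

HOSTS = ["192.168.90.100", "192.168.90.101", "192.168.90.102"]
PORTS = {22: "ssh", 23: "telnet", 53: "dns", 80: "http",
         1050: "java/OTGfileshare"}

for host in HOSTS:
    for port, name in PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            state = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
        print(f"{host}:{port:<5} {name:<20} {state}")
```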

So now we’ve got two fairly egregious methods of connecting to your Tesla with minimal security standards.  The first is remote and allows for control of doors, sunroofs, and braking and suspension profiles.  The last two should concern everyone.  While there are probably physical controls in place to keep the brake and suspension profiles from getting too far outside the range of acceptable usage, I wouldn’t be willing to bet on it, given the otherwise lax security measures on the remote controls for the car.  The second method of connecting to the Tesla does require physical access, but it sounds like it’s built for the engineers and technicians who work on Teslas [Correction:  The connection only allows for access to the entertainment system and there is an airgap between that and the CANBUS systems.  However, I don't trust airgaps], and is likely to allow much greater control of the car and the various parameters of its design.  Even less technologically advanced cars allow fairly advanced modification of a car’s functioning once you have access to the software, so Tesla probably has extremely advanced configuration capabilities.  Meaning everything from how the car charges when plugged in, to what shows up on the dash as you’re driving, to manipulating acceleration and braking, is within the realm of possibility.

As the Internet of Things becomes our daily reality, this sort of lax security on something as potentially deadly as an automobile is inexcusable.  It wouldn’t take much of a tweak to the normal operation of a car to make it uncontrollable in the wrong situation.  We haven’t seen anyone killed by having their car hacked yet, but it’s only a matter of time if companies aren’t willing to take the time to properly secure the systems that make the car run.  While it’s important in the current marketing environment to make every device as configurable from your phone as possible, there have to be sufficient controls in place to make that configurability safe and secure as well.  Yes, it might mean that you, Tesla, have to make your users go through two or three more steps to set up their systems for remote control, but it’s worth the effort.  After all, who will be liable, and who will be in the courts for years, when the first person claims their car was hacked and that’s what caused the accident?  Even if a hacked car isn’t the cause of an accident, it can’t be long before someone uses that as their defense and still costs the company millions in legal fees.

Let’s end this with a little thought experiment.  The four-pin connector in the Tesla has a full TCP stack and runs on a known set of IPs, 192.168.90.100-102.  Say I grabbed a Teensy 3.1 and added a wi-fi module and an ethernet shield.  With the current arduino libraries, I can create a wi-fi receiver that takes my traffic and routes it to the wired network, which just happens to be the accessible network inside the Tesla.  Now I have a small portal directly into your car that I can connect to from several hundred feet away, farther if I want to make myself a high-gain Pringles-can antenna.  We’re not talking high-technology spy gear, we’re talking about a weekend project I could do with my kids that would result in a package no bigger than a pack of cards.  I could put this in the glove box with a single cable leading to the car’s ethernet port.  Anything a Tesla engineer could control on the car, I could control remotely.  Suddenly I have the biggest remote control car on the block, which just happens to be the Tesla you’re sitting in.

This is why we have to secure the Internet of things.  If I can imagine it, you better believe there’s already someone else out there working on it.

* Hacker == someone who makes technology do things the engineers who designed it didn’t intend it to do.

Added, 9:15 GMT:  So I got some feedback very quickly after posting this.  And I admit a lot of what I’m saying here is based on guesswork, assumptions and third party statements.  It’s my blog, I get to do that.  Both Beau and the Kos have a lot more to say about why many of my assumptions are wrong.  And they probably know more about cars than I do.  So teach me.  There will be follow up.

Thanks to @chriseng for basic spell checking.  I do indeed know the difference between ‘break’ and ‘brake’, just not before breakfast.

@beauwoods: “Infotainment network and CANBUS are separate. The other issue is the equivalent threat model of a big rock to a window.” In other words, there’s a very real airgap between the two systems and it’d be impossible to control one from the other.

@theKos called my statement about the password limitations silly, stated that running Firefox on the system was an X11 forwarding misconfiguration, that rebooting the displays won’t affect the running of the car, and that all products have vulnerabilities.  I really have to challenge the last statement as a fallacy: knowing that all products have vulnerabilities doesn’t make them any more acceptable.

3 responses so far

Mar 20 2014

NSP Microcast – RSAC2014 – Denim Group

Published by under Podcast,Risk

I caught up with John Dickson and Dan Cornell from the Denim Group to talk about creating secure coding environments within companies, the importance of having trainers who are themselves coders and, of course, a little bit about spying.  Which turned into a lot of bit about spying.  I should have asked them where the name ‘Denim Group’ comes from.

NSP Microcast – RSAC2014 – Denim Group

No responses yet

Mar 09 2014

Mt. Gox Doxed

I’ve never owned a bitcoin, I’ve never mined a bitcoin, and in fact I’ve never really talked to anyone who’s used them extensively.  I have kept half an eye on the larger bitcoin stories though, and the recent disclosures that bitcoin exchange Mt. Gox was the victim of hackers who stole the entire contents of its vault, worth hundreds of millions of dollars (or pounds), have kept my interest.  I know I’m not the only one who’s smelled something more than a little off about the whole story.  Apparently a hacker, or hackers, who also felt something wasn’t right on the mountain decided to do something about it: they doxed* Mt. Gox and its CEO, Mark Karpeles.

We don’t know yet whether the files the hackers exposed to the Internet were actually legitimate files from Mt. Gox and Mr. Karpeles, but this isn’t the only disclosure the company is potentially facing.  Another hacker has claimed to have about 20 gigs of information about the company, its users and plenty of interesting documents.  Between the two, if even a little of the data is valid, it’ll spell a lot of trouble for Mt. Gox and its users.  If I were a prosecutor with any remote possibility of being involved in this case, I’d be collecting every piece of information and every disclosed file I could, with big plans for using them in court at a later date.

In any case, I occasionally read articles that say the Mt. Gox experience shows that bitcoins are an unusable and ultimately doomed form of currency because they’re a digital-only medium and will always be open to fraud and theft because of it.  I laugh at those people.  Have they looked at our modern banking system and realized that 99% of the money in the world now exists only in digital format somewhere, sometimes with hard copy, but generally not?  Yes, we’ve had more time to figure out how to secure the banking systems, but they’re still mostly digital.  And eventually someone will do the same to a bank as was done to Mt. Gox.

*Doxed:  to have your personal information discovered or stolen and published on the Internet.

3 responses so far

Mar 07 2014

You have been identified as a latent criminal!

This afternoon, while I ate lunch, I watched a new-to-me anime called Psycho-Pass.  The TL;DR summary of the show is a future where everyone is chipped and constantly monitored.  If their Criminal Coefficient becomes too high, they are arrested for the good of society.  It doesn’t matter whether they’ve committed a crime or not; if the potential that they will commit a crime exceeds the threshold set by the computer, they’re arrested, or killed if they resist arrest.  Like many anime, it sounds like a dystopian future that could never happen.  Except when I got back to my desk, I saw Bruce Schneier’s post, Surveillance by Algorithm.  And once again what I thought was an impossible dystopian future seems like a probable dystopian present.

As Bruce points out, we already have Google and Amazon suggesting search results and purchases based on our prior behaviours online.  With every search I make online, they build up a more detailed and accurate profile of what I like, what I’ll buy and, by extension, what sort of person I am.  They aren’t using people to do this; there’s an extensive and thoroughly thought-out algorithm that measures my every action to create a statistically accurate profile of my likes and dislikes, in order to offer up what I might like to buy next based on their experience of what I’ve purchased in the past.  Or there would be, if I didn’t purposefully share an account with my wife in order to confuse the profiling software Amazon uses.

Google is a lot harder to fool, and they have access to a lot more of the data that reveals the true nature of who I am, what I’ve done and what I’m planning to do.  They have every personal email, my calendar, my searches; in fact, about 90% of what I do online is either directly through Google or indexed by Google in some way or shape.  Even my own family and friends probably don’t have as accurate an indicator of who I really am behind the mask as Google would, if it chose to create a psychological profile of me.  You can cloud the judgement of people, since they’re applying their own filters that interfere with a valid assessment of others, but a well-written computer algorithm takes the biases of numerous coders and tries to even them out, creating an evaluation that’s closer to reality than that of most people.

It wouldn’t take much for a government, whether the US, the UK or any other, to start pushing for an algorithm that evaluates the mental health and criminal index of every user on the planet and alerts the authorities when something bad is being planned.  Another point Bruce makes is that this isn’t considered ‘collection’ by the NSA, since they wouldn’t necessarily have any of the data until an alert had been raised and a human began to review it.  It would begin as something seemingly innocuous, probably similar to the logical fallacies that governments already use to create ‘protection mechanisms’: “We just want to catch the paedophiles and terrorists; if you’re not a paedophile or terrorist, you have nothing to fear.”  After all, these are the exact phrases that have been used numerous times to create any number of organizations and mechanisms, including the TSA and the NSA itself.  And they’re all that much more powerful because there is a strong core of truth to them.

But what they don’t address are a few of the fatal flaws in any such system based on a behavioural algorithm.  First of all, inclination, or even intent, doesn’t equal action.  Our society established long ago that the thought of doing something isn’t the same as doing it, whether the thought is well-intentioned or malign.  If I mean to call my mother back in the US every Sunday, the thought doesn’t count unless I actually follow through and do so.  And if I want to run over a cyclist who’s slowing down traffic, it really doesn’t matter unless I nudge the steering wheel to the left and hit them.  Intent to commit a crime is not the same as the crime itself, until I start taking the steps necessary to perform the crime, such as purchasing explosives or writing a plan to blow something up.  If we ever start allowing the use of algorithms to denote who’s a potential criminal and treat them as such before they’ve committed a crime, we’ll have lost something essential to the human condition.

A second problem is that the algorithms are going to be created by people.  People who are fallible and biased.  Even if the individual biases are compensated for, the biases of the culture are going to be evident in any tool that’s used to detect thought crimes.  This might not seem like much of a problem if you’re an American who agrees with mainstream American values, but what if you’re not?  What if you’re GLBT?  What if you have an open relationship?  Or like pain?  What if there’s some aspect of your life that falls outside what’s considered acceptable by the mainstream of our society?  Almost everyone has some aspect of their life they keep private because it doesn’t meet societal norms on some level.  It’s a natural part of being human and fallible.  Additionally, actions and thoughts that are perfectly innocuous in the US can become serious crimes if you travel to the Middle East, Asia or Africa, and the other way around as well.  Back to the issue of sexual orientation, we only have to look at the recent Olympics and the laws passed in Russia criminalizing non-heterosexual orientation.  We have numerous examples of laws that were passed in the US only later to be thought unfair by more modern standards, with Prohibition being one of the most prominent examples.  Using computer algorithms to uncover people’s hidden inclinations would have a disastrous effect on both individuals and society as a whole.

Finally, there are the twin ideas of false positives and false negatives.  If you’ve ever run an IDS, WAF or any other type of detection and blocking mechanism, you’re intimately familiar with the concepts.  A false positive is an alert that erroneously tags something as malicious when it’s not.  It might be a coder who used a string you’ve written into your detection algorithms, caught by your IDS as an attack.  Or it might be a horror writer looking up some horrible technique that the bad guy in his latest novel is going to use to kill his victims.  In either case, it’s relatively easy to identify a false positive, though a false positive from a behavioural algorithm has the potential to ruin a person’s life before everything is said and done.

Much more pernicious are false negatives.  This is when your detection mechanism has failed to catch an indicator and therefore hasn’t alerted you.  It’s much harder to find and understand false negatives, because you don’t know if you’re failing to detect a legitimate attack or if there are simply no malicious attacks to catch.  It’s hard enough to understand and detect false negatives when dealing with network traffic, but when you’re dealing with people who are consciously trying to avoid displaying any of the triggers that would raise alerts, false negatives become much harder to detect and the consequences become much greater.  A large part of spycraft is avoiding any behaviour that will alert other spies to what you are; the same ideas apply to terrorists, or criminals of any stripe with a certain level of intelligence.  The most successful criminals are the ones who make every attempt to blend into society and appear to be just like every other successful businessman around them.  The consequence of believing your computer algorithms have identified every potential terrorist is that you stop looking for the people who might be off the grid for whatever reason.  You learn to rely too heavily on the algorithm to the exclusion of everything else, a consequence we’ve already seen.
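A toy base-rate calculation, with every number invented for illustration, shows how brutal those twin problems get at population scale: even a detector that’s wrong only a tenth of a percent of the time buries the real signal under false alarms.

```python
# Invented numbers, purely to illustrate the base-rate problem of
# screening a huge population for a rare behaviour.
population = 300_000_000   # people being scored
real_threats = 1_000       # actual offenders hidden among them
tpr = 0.99                 # assumed true-positive rate
fpr = 0.001                # assumed false-positive rate (0.1%)

true_alerts = real_threats * tpr
false_alerts = (population - real_threats) * fpr
precision = true_alerts / (true_alerts + false_alerts)

print(f"{true_alerts + false_alerts:,.0f} people flagged")   # ~301,000
print(f"of whom only {precision:.2%} are real threats")      # ~0.33%
# And ~1% of the real threats (the careful ones) raise no alert at all.
```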

So much of what goes on in society is a pendulum that swings back and forth as we adjust to changes in our reality.  Currently, we have a massive change in technologies that allow for surveillance far exceeding anything that’s ever been available in the past.  The thought that it might swing to the point of having chips in every person’s head that tell the authorities when we start thinking thoughts that are a little too nasty is a far-fetched scenario, I’ll admit.  But the thought that the NSA might have a secret data center in the desert that runs a complex algorithm on every packet and phone call made in the US and the world to detect potential terrorists or criminals isn’t.  However well intentioned the idea might be, the failings of the technology, the failings of the people implementing it and its impacts on basic human rights and freedoms are not just things that should be considered; they’re issues facing us right now that must be discussed.  I, for one, don’t want to live in a world of “thought police” and “Minority Report“, but that is where this slippery slope leads.  Rather than our Oracle being a group of psychics, it might be a computer program written by … wait for it … Oracle.  And if you’ve ever used Oracle software, that should scare you as much as anything else I’ve written.


No responses yet

Mar 05 2014

DDoS becoming a bigger pain in the …

Published by under Cloud,General,Hacking,Risk

I’m in the middle of writing the DDoS section of the 2013 State of the Internet Report, which means I’m spending a lot of time thinking about how DDoS is affecting the Internet (wouldn’t be all that valuable if I didn’t put some thought into it, now would it?).  Plus I just got back from RSA, where I interviewed DOSarrest’s Jag Bains and talked to our competitors at the show.  Akamai finally closed the deal on Prolexic about three weeks ago, so my new co-workers are starting to get more involved and becoming more available.  All of which means there’s a ton of DDoS information available at my fingertips right now, and the story it tells doesn’t look good.  From what I’m seeing, things are only going to get worse as 2014 progresses.

This Reuters story captures the majority of my concerns with DDoS.  As a tool, it’s becoming cheaper and easier to use almost daily.  The recent NTP reflection attacks show that the sheer volume of traffic is becoming a major issue.  And even if volumetric attacks weren’t growing, the attack surface for application layer attacks grows daily, since more applications come online every day and there’s no evidence anywhere I’ve looked that developers are getting better at securing them (yes, a small subset of developers are, but they’re the exception).  Meetup.com is only the latest victim of a DDoS extortion scam, and while they didn’t pay, I’m sure there are plenty of other companies who’ve paid simply to make the problem go away without a fuss.  After all, $300 is almost nothing compared to the cost of a sustained DDoS on your infrastructure, not to mention the reputational cost when you’re offline.
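The arithmetic behind NTP reflection explains the alarm.  The packet sizes below are rough figures drawn from public advisories about the ‘monlist’ command, so treat them as approximations rather than measurements of mine.

```python
# Rough NTP 'monlist' reflection math; sizes are approximations from
# public advisories, not my own measurements.
request_bytes = 234              # one spoofed monlist query
response_bytes = 100 * 468       # up to ~100 packets of ~468 bytes each

amplification = response_bytes / request_bytes
print(f"~{amplification:.0f}x amplification per query")   # roughly 200x

attacker_mbps = 100              # assumed attacker uplink
victim_gbps = attacker_mbps * amplification / 1000
print(f"{attacker_mbps} Mbps of spoofed queries -> ~{victim_gbps:.0f} Gbps at the victim")
```

Spoof the source address so the replies land on the victim, point the queries at a few thousand open NTP servers, and a modest uplink turns into tens of gigabits of attack traffic; that’s the whole trick.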

I’d hate to say anything like “2014 is the Year of DDoS!”  I’ll leave that sort of hyperbole to the marketing departments, whether mine or someone else’s.  But we’ve seen a definite trend: the number of attacks is growing year over year at an alarming rate.  And it’s not only the number of attacks that’s growing, it’s the size of the volumetric attacks and the complexity of the application layer attacks.  Sure, the majority of them are still relatively small and simple, but the outliers are getting better and better at attacking.  Those of us building out infrastructure to defend against these attacks are also getting better, but the majority of companies still have little or no defense, and these aren’t the sort of defenses you can put in quickly or easily without a lot of help.

I need to get back to other writing, but I am concerned about this trend.  My data agrees with most of my competitors’: DDoS is going to continue to be a growing problem.  Yes, that’s good for business, but as a security professional, I don’t like seeing trends like this.  I think the biggest reason it will continue to grow is that DDoS is an incredibly difficult crime to track back to its source; law enforcement generally doesn’t have the time or skills needed to find the attackers, and no business I know of has the authority or inclination to do the same.  Which means the attackers can continue to DDoS with impunity.  At least the ones who’re smart enough not to attack directly from their own home networks, that is.

No responses yet
