Archive for the 'Hacking' Category

Jun 19 2013

Microsoft Bounty Program: Katie Moussouris at FIRST

When Katie Moussouris is so excited about something that she’s almost vibrating, you know it has to be big.  So when she came to Chris John Riley and me earlier this week and said she had news from Microsoft, we had to make time to talk to her.  Katie’s been working on the Microsoft bounty program for over three years, and it’s no wonder it took that long; getting a company like Microsoft to recognize the importance of working with the researcher community as early in the process as possible, and getting the funding to make it happen, is no small feat.

The basics of the three programs are as follows:  Researchers who develop novel exploitation techniques (not just bugs, but new techniques) can receive $100,000 for the technique.  If they also come up with a mitigation for the exploitation technique, they can receive another $50,000.  The third program is specific to IE 11 and gives researchers the opportunity to earn $11,000 per bug during the first month of the IE 11 beta.

It’ll be interesting to see how other companies respond to Microsoft’s move.  Rather than simply waiting for the vulnerability resellers to bring exploitation techniques and vulnerabilities to Microsoft once the product has been released, this encourages researchers to do the same during the development lifecycle, and at a healthy rate.  Will other vendors be able to do the same, will they cast aspersions on Microsoft’s efforts, or will they simply pretend it never happened?  Only time will tell.

FIRST 2013 – Katie Moussouris of Microsoft on the Bug Bounty Program

No responses yet

Feb 19 2013

This week’s ‘must read’: Mandiant APT report

Published under Government, Hacking, Malware, Risk

If you haven’t already read it, your homework for this week is the Mandiant APT1 Report.  Don’t read someone else’s interpretation until you’ve read the report yourself.  Don’t read a reporter’s analysis and consider that good enough.  Read the entire report yourself and draw your own conclusions, then read what other people have to say.  But in any case, read it.

No responses yet

Jan 10 2013

Morning Reading 011013

Published under Hacking, Linux, Malware, Risk

It’s been an interesting week and start to the year.  Between the Ruby on Rails vulnerability and the Java zero-day released today, we have some serious patching issues on our plates.  And if history is any indicator of future performance, the security technorati are already in the process of patching, which only leaves the other 98% of the population to get patched.  I’ve also had some interesting talks with folks about the idea of honey tokens, honeynets and other detective measures for the network.  On to the stories …

  • I’ve been saying for a couple years now that we need to change the way we think about security from the foundations up.  Apparently Art Coviello agrees and says we need to move to an intelligence-driven security model.  A lot of other professionals believe we need to rethink security architecture as well, according to Tim Wilson over at Dark Reading.  Always challenge the assumptions the leaders of the last generation made, especially in a profession as young as security.
  • The topic of honey tokens and all other things ‘honey’ started in part due to a lot of discussion around ‘offensive security models’.  The Washington Post has an article on salting databases with fake data, which, if done right, is exactly what a honey token is (a minimal sketch of the idea follows this list).  CSO Online says that deception is better than a counterattack; I don’t know if it’s ‘better’, but it’s something you should be doing whether you’re considering offensive tactics or not.  And a fun new little tool to do some of this has been released, called HoneyDrive.  It’s a collection of tools on a VM, which is always a good toy to play with.
  • Continuing on the theme of Monday’s post, Computerworld has an article on how to talk about security to everyone else.  I’m sure we’ll be talking about this again, since it’s one of the basics we seem to have a hard time with.
  • And finally, cyber attack timelines from the second half of December.  There are a few errors in the dates here, but I only know that because of my day job.  Let’s just say that there have only been two waves of QCF attacks so far, and that they started a little earlier than is being represented.  But overall, this is good data to keep aware of, especially with the recent rise in attacks.
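As promised above, here’s a minimal sketch of the honey-token idea: seed a table with fake records that no legitimate user or query should ever touch, then treat any access to them as a high-confidence alert.  The schema, the seeded values and the alerting hook are all hypothetical, just to show the shape of the thing.

```python
# Minimal honey-token sketch (hypothetical schema and alerting hook).
import sqlite3
import logging

logging.basicConfig(level=logging.INFO)

# Fake identities that no real customer or application should ever look up.
HONEY_EMAILS = {"j.doe.4921@example.invalid", "acct-payable-test@example.invalid"}

def seed_honey_tokens(conn):
    """Insert fake customer rows alongside real data."""
    for email in HONEY_EMAILS:
        conn.execute(
            "INSERT OR IGNORE INTO customers (email, name) VALUES (?, ?)",
            (email, "DO NOT USE - seeded record"),
        )
    conn.commit()

def audit_query_results(rows):
    """Call this on result sets before handing them back to the application."""
    for row in rows:
        if row["email"] in HONEY_EMAILS:
            # In a real deployment this would hit a SIEM or page someone.
            logging.warning("Honey token touched: %s", row["email"])

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.row_factory = sqlite3.Row
    conn.execute("CREATE TABLE customers (email TEXT PRIMARY KEY, name TEXT)")
    seed_honey_tokens(conn)
    rows = conn.execute("SELECT * FROM customers").fetchall()
    audit_query_results(rows)   # fires a warning for the seeded rows
```

The value is in the signal-to-noise ratio: since nothing legitimate ever reads those records, any hit is worth investigating, whether or not you’re doing anything ‘offensive’.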

And finally, for something completely different, a Linux-powered sniper rifle.  I’m sorry, ‘hunting rifle’. 

No responses yet

Dec 13 2012

Offensive security for dummies

Published under Government, Hacking, Risk

If there were an “Offensive Security for Dummies” book, it’d be very short.  Chapter 1 would simply be the word “Don’t”.  Chapter 2 would be slightly more expansive and would say “No, really, we mean it: don’t practice offensive security.  You’re not worthy.”  Then it would go on to enumerate ways to incorporate offensive security measures into the enterprise, because IT and security people are well known for skipping the first few chapters of any book and going straight for the meat of the matter.  And then ignoring a lot of that as well.

Seriously though, every couple of years the idea of ‘attack back’ technologies or retaliatory techniques comes up in the security sphere.  The basic thought pattern goes somewhere along the lines of “I’m getting attacked, I can’t do anything about it other than take the beating, the government isn’t doing anything and I’m tired of feeling like a punching bag.  Since the authorities can’t do anything, maybe I should take matters into my own hands.”  The idea of vigilante justice, even in the digital sphere, is appealing.  The visceral thrill of getting a little justice of your own is understandable, and even a little desirable in the person protecting your network.  But it’s morally and legally indefensible.

The biggest problem with retaliation, to my mind, is attribution; even with some of the best minds in the business working on the problem, it’s nearly impossible to say with any confidence who’s behind many of today’s attacks.  Sure, we can say ‘this is the origin IP of the attack’ and follow the command and control structure up a level or two, but it’s nearly impossible to tell which of those systems is owned and operated by the attacker and which are compromised systems used as throwaway stepping stones.  Given the amount of time it takes to get even that level of information, I can’t see most administrators taking the time to really find the source of an attack.  I can see them simply attacking the end node and crowing when they bring down Grandma’s Win98 machine in Wisconsin, though.

The other big problem with retaliation is time and resources.  Seriously, how many security professionals do you know who have the time to properly secure their own enterprises?  If you don’t have time to review firewall configurations, get developers to stop shipping SQLi vulnerabilities in the web site, and generally be a pain in the ass about corporate policies, what makes you think you have the time to do proper attribution before you attack?  Quite frankly, after having been a QSA for four years and reviewing a couple of hundred firewall configurations, I don’t trust 75% of companies to properly lock down their own networks, let alone start targeting other people’s networks with retribution tools.  Would you trust your own senior security architect to run invasive scans against your own site, let alone someone else’s?

I’m betting this whole conversation will reach a peak somewhere around March of 2013, then go back into its cave to hibernate for another couple of years.  It’s a bad idea that sounds good until it’s put into practice.  There might be 1% (probably less) of organizations that have the technical skills and understanding to make retaliation feasible and effective.  But feasible doesn’t mean right, either in the eyes of the law or morally.  If you’re seriously considering retaliatory security, do us all a favor and go review your firewall configuration and logs instead.  I can guarantee you’ll find flaws in the configuration that your time would be better spent fixing.

4 responses so far

Oct 02 2012

Network Security Podcast, Episode 291

This week’s show went a little long, as all three of us had a lot to say about the stories we covered.  We also spent more than a few minutes at the beginning of the show talking about some of the resources people can use to get mentorship when entering the security field.  We also ramble a little bit, and Rich gives us an assessment of one of his co-workers’ technical skills.

(All three of us made the show this week, and to be honest it was a little wittier than usual, if we do say so ourselves).

Network Security Podcast, Episode 291, October 2, 2012

Time:  38:30

Show notes:

No responses yet

Aug 01 2012

Network Security Podcast, Episode 283

Published under Hacking, Podcast

The yearly pilgrimage to Las Vegas for BlackHat/DEFCON/B-Sides is over, and recovery mode is in full effect – and none of us got arrested/detained/married in Vegas (at least we don’t think so…).  It’s completely Martin’s fault that this week’s podcast was released late.  Sometimes a nap turns into a full night’s sleep after a week in Vegas.

Network Security Podcast, Episode 283, July 31, 2012

Time: 41:01

Show notes:

No responses yet

Jun 06 2012

Dumping LinkedIn passwords

*** Dire Warning ***
If you’re in the habit of reusing passwords AT ALL, 1) stop it! 2) if you have a LinkedIn account, change that password immediately, on LinkedIn and on as many other sites where you reused it as you can remember.  Then get yourself a password management program (like 1Password or LastPass) with a random password generator and learn to use it for all sites.
*** Dire Warning ***

Now that the dire warnings are out of the way, let’s look at what happened.  This morning it was disclosed that 6.5 million LinkedIn password hashes were posted online.  LinkedIn was not using a salted hash for storing passwords, which means that while the passwords can’t be directly reversed from the hashes, attacking the password file with dictionary attacks and other similar methods is very effective.  Additionally, the 6.5 million hashes are each unique, meaning they represent a much larger portion of the LinkedIn passwords, possibly even the entire database.  One of the best analyses of the password hashes and what they mean was done over at Hacker News and covers a lot of what the disclosed hashes mean in really geeky terms.  Another great resource, thrown up by Robert Graham this morning, lets you check whether your password is among those stolen.  If you don’t find your password in the database, try replacing the first 5-6 characters of the hash with zeros and look again.
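To make the salting point concrete, here’s a minimal sketch (assuming, as was reported at the time, that the leaked hashes were unsalted SHA-1) contrasting an unsalted hash with a per-user salted one.  The function names are mine, not LinkedIn’s.

```python
# Sketch: why unsalted hashes fall quickly to dictionary attacks.
import hashlib
import os

def unsalted_hash(password):
    # The same password always produces the same hash, so one precomputed
    # dictionary of common passwords cracks every matching account at once.
    return hashlib.sha1(password.encode()).hexdigest()

def salted_hash(password, salt=None):
    # A random per-user salt forces the attacker to work on each hash
    # individually instead of reusing one precomputed table.
    salt = salt or os.urandom(16)
    digest = hashlib.sha1(salt + password.encode()).hexdigest()
    return salt.hex(), digest

if __name__ == "__main__":
    print(unsalted_hash("linkedin"))   # identical for every user with this password
    print(salted_hash("linkedin"))     # different every time it is stored
    print(salted_hash("linkedin"))
```

Note that a salt only removes the shortcut of precomputed dictionaries; a deliberately slow, iterated scheme (bcrypt, PBKDF2 and the like) is what actually makes brute-forcing individual hashes expensive.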

The other point I wanted to make is that while LinkedIn’s response (1, 2) to this compromise hasn’t been atrocious, it’s been far from a good example of how to handle compromise disclosure.  If you want a good example, look at the recent post-mortem write-up by CloudFlare, laying out in great detail how they’d been compromised so others could learn from their problems.  I’m willing to give the LinkedIn team and Vicente Silveira the benefit of the doubt and assume they learned about the password file at the same time as everyone else, but their initial reaction was to say they were looking into it, even though a number of security professionals had already stated their passwords were definitely in the file.  When they did admit it was their database a few hours later, they stated they had ‘enhanced’ their security to include hashing and salting of the database.  I can only assume the enhanced security measures were put in place this morning, and I’d give them more credit if they’d admitted that instead of making it seem like something they’d already planned to do.  I do have to give them kudos for reacting quickly and giving users concrete steps to take in response to the compromise, but they lose at least as many points for not being up front about what’s really happening.  Of course, that may be the fault of the Marketing and PR departments more than anything, but I’m not willing to cut either of those departments any slack for a security incident.

Of course, this is all injury added to the insult that was disclosed yesterday: the LinkedIn mobile application collects all of your calendar notes.  And since they had your calendar data and there’s a possibility your account was compromised, if you’re using the LinkedIn iPhone app, you’d better assume all of your calendar data is also compromised.  I hope you didn’t have any important or sensitive information in your calendar!

4 responses so far

Apr 02 2012

Global Payment Systems delisted by Visa

Last Friday Brian Krebs broke the story that MasterCard and Visa were warning of a major processor breach.  Later in the day it was announced that the payment processor was Global Payments Inc. and that approximately 50,000 card numbers had been compromised, a number that was later revised to 1.5 million card numbers.  Global Payments took such a pummeling in the stock market that trading in their shares had to be halted in the middle of the day on Friday, and it appears not to have resumed as I write this post.  They held a press conference this morning, but the initial reporting shows that Global Payments isn’t saying anything that’s not already in a press release.  And to add insult to injury, Global Payments has had their listing as a compliant service provider yanked as of Friday, pending a security review of the compromise and a new assessment, a process that could take months.

The relationship between customer, merchant, banks, card processors and the card brands is complex and not at all clear to the average consumer.  When a customer swipes their credit card or places an order online, the merchant passes that information on to their processor.  The processor is a company, such as Global Payments, that has been designated by the merchant’s bank to process payments on their behalf.  The processor sends the request to the card brands, who check the balance with the bank that issued the credit card and forward an approval or denial based on credit availability and fraud checks.  That approval is forwarded back to the merchant and the customer, and the whole process takes only 2-3 seconds on an average day.  At the end of the day the merchant bundles the credit card requests and sends them to their bank, appropriately designated the merchant bank, which forwards the information through the card brands to the banks of the people who charged their cards that day.  The relationship is complex and my explanation doesn’t cover the many variations that can crop up, but it covers the basic idea.  For more information, there is a wiki page.
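If it helps to see the authorization path as moving parts, here’s a toy model of the flow just described.  All the class and method names are illustrative; real processing involves far more parties, messages and fraud checks than this.

```python
# Toy model of the card authorization path: merchant -> processor -> card brand -> issuing bank.

class IssuingBank:
    def __init__(self, credit_limits):
        self.credit_limits = credit_limits            # card number -> available credit

    def authorize(self, card_number, amount):
        available = self.credit_limits.get(card_number, 0)
        if amount <= available:
            self.credit_limits[card_number] -= amount
            return "APPROVED"
        return "DECLINED"

class CardBrand:
    """Routes the request to whichever bank issued the card."""
    def __init__(self, issuers):
        self.issuers = issuers                        # card prefix -> IssuingBank

    def route(self, card_number, amount):
        return self.issuers[card_number[:6]].authorize(card_number, amount)

class Processor:
    """e.g. Global Payments: processes on behalf of the merchant's bank."""
    def __init__(self, brand):
        self.brand = brand

    def process(self, card_number, amount):
        return self.brand.route(card_number, amount)

class Merchant:
    def __init__(self, processor):
        self.processor = processor

    def charge(self, card_number, amount):
        return self.processor.process(card_number, amount)

if __name__ == "__main__":
    bank = IssuingBank({"4111111111111111": 500})
    merchant = Merchant(Processor(CardBrand({"411111": bank})))
    print(merchant.charge("4111111111111111", 120.00))   # APPROVED
    print(merchant.charge("4111111111111111", 900.00))   # DECLINED
```

The point of the toy is the aggregation: every merchant’s traffic funnels through the processor, which is exactly why a processor breach matters so much more than any single merchant breach.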

One of the most interesting aspects of this is that Visa has removed Global Payments from the list of compliant processors, a step that I don’t think has been taken for any breach since that of CardSystems in 2005.  CardSystems was the first major breach of the credit card flow to catch the public’s attention, and it was very clear that the de-listing was done to buoy consumer confidence.  But since then very few service providers of any stripe have had their listing pulled, which indicates there may be more going on behind the scenes than is being reported publicly.  Global Payments’ relative silence and the updates to the number of records compromised add to this impression.  Of course, no one expects any company to come clean immediately when faced with a compromise, but the degree to which this incident is causing lips to be sealed is interesting by itself.  Will Global Payments have to go through a process similar to CardSystems’, basically selling themselves to prevent total collapse?

We’ve gotten to the point where we almost expect daily or weekly notifications from merchants stating they’ve been compromised.  But while merchants are not in the business of securely taking in credit card numbers, that’s exactly what processors and banks are supposed to be focusing on.  A merchant makes their money by selling products to consumers, whereas a payment processor is selling the security of the transaction, and any breach of that trust is a major issue.  The processors are also aggregation points for multiple merchants, and many processors have millions of card transactions flowing through their systems on a daily basis.  As such, they know, beyond a shadow of a doubt, that they are being targeted by attackers and that their security is paramount to staying in business.

I strongly suspect that what’s been disclosed so far is simply the tip of the iceberg.  If Global Payments was compromised for a month and a half, as currently stated, then a much higher number of card numbers than 1.5 million were most likely processed during that time.  Which means the compromise was either contained in some way with or without the awareness of Global Payments, or there is another shoe waiting to drop.  My money is on the latter.

 

Update:  I forgot to add that there was a brief outage of the Visa network on Saturday morning when they updated systems inside VisaNet.  Yeah, that can’t be at all related to the Global Payments breach, can it?

6 responses so far

Mar 05 2012

RSAC 2012 Microcast: SecureWorks

Published under Hacking, Podcast

Dell SecureWorks Chief Technology Officer Jon Ramsey took a few minutes out of his day at the RSA Conference to talk to me about a new study his team had recently written on a series of attacks they dubbed the Sin Digoo Affair.  In addition to being a detailed analysis of the tools and actions of the attackers, the paper also contains specific steps defenders can take to detect and respond to similar attacks.  This is part of an ongoing series that the folks at SecureWorks have been publishing.

RSAC2012 Microcast:  Jon Ramsey from Dell SecureWorks

No responses yet

Jan 25 2012

Kill pcAnywhere right now!

If you haven’t already heard, the code base for Symantec’s pcAnywhere was stolen in 2006, and bad guys are now using that code against the installed base of users in the wild.  This sort of compromise really isn’t anything new or different.  But what is different is that Symantec is now telling users to flat-out disable pcAnywhere until a fix is released.  Which is a good, smart move, but a better move would be to remove pcAnywhere and never, ever start it up again!

I remember the first time I used pcAnywhere; I was working my first helpdesk job and they let me finish part of my shift from home when I was doing mail server work: I could start the scripts on the server, drive home and finish my work from there.  Being pcAnywhere, every couple of times I’d also have to drive back to work because the program would crash, but hey, an 80% success rate wasn’t too bad at the time.

Fast forward a decade (and more) to when I’m a QSA and pcAnywhere is still out there, and in all too many cases it’s actually the same version I was using, or nearly the same vintage.  But it’s not me using it to manage an OS/2 Warp mail server (yes, OS/2 Warp); it’s being used to manage Point of Sale (POS) systems all across the US.  You see, mom and pop stores with POS systems don’t have a clue how to set up a computer, so they find a nice, local service provider who will set up the POS for them, troubleshoot it when they have problems and just generally manage the system for a price.

Herein lies the problem.  If you’re a small, local service provider who makes their living servicing these folks, you have to be able to work quickly and cheaply with clients across a large area if you’re going to make a living.  You need to be able to get on their systems quickly to troubleshoot problems and get them back online.  So of course you use a remote desktop client like pcAnywhere, and you’re going to leave it directly exposed to the Internet, since that’s the easiest way to make sure it’s always available and you don’t have to do a lot of troubleshooting of network equipment.  And you probably use the same password on all your clients, since you don’t want to have to rely on having the right password written down somewhere when the client calls screaming that their system is down.  After all, no one would scan for open pcAnywhere servers, nor would they guess the user name is ‘admin’ and the passphrase is “Let me in!” (at least it has complexity).  And you don’t worry about changing passwords when an employee leaves or updating to the latest patch levels.  In other words, a security nightmare.

In 2009, when I worked for Trustwave, one of the things the annual security report dug into was some of the repercussions of this type of remote management of POS systems.  And no surprise, one of the things they discovered was that remote desktop applications like pcAnywhere were one of the leading causes of small business compromises, especially compromises that involved either small chains or a group of geographically close stores.  An attacker would scan for the remote desktop client, brute force the password, and then spread out to the other clients of the service provider.  Soon you’d have a whole segment of the local merchant community who’d been compromised and didn’t know how or why it had happened.  And things have not gotten better since then.
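If you’re a service provider (or a merchant who uses one), you can do a cruder version of the same discovery defensively: check your own address ranges for hosts answering on pcAnywhere’s default data port, TCP 5631.  Here’s a minimal sketch, assuming a hypothetical internal range; only ever point it at networks you own or are authorized to scan.

```python
# Sketch: find reachable pcAnywhere listeners on your own address space.
# pcAnywhere's default data port is TCP 5631.
import socket
from ipaddress import ip_network

PCANYWHERE_PORT = 5631

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(cidr):
    """Return the hosts in the CIDR range with the pcAnywhere port reachable."""
    return [str(addr) for addr in ip_network(cidr).hosts()
            if port_open(str(addr), PCANYWHERE_PORT)]

if __name__ == "__main__":
    # Hypothetical internal range; substitute your own.
    for host in scan("192.168.1.0/28"):
        print("pcAnywhere reachable on", host)
```

Anything that shows up should be behind a VPN or removed outright, and certainly shouldn’t share a password with every other client you manage.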

I doubt things will change; I doubt most of the people who actually use pcAnywhere as a tool will even notice or read Symantec’s posting.  It’s the only way the current business model works, not just in the merchant community but in many other small business communities as well.  The service provider model requires remote tools; otherwise the travel time to and from locations kills any chance of making a profit.  Which means the folks who want to compromise systems and steal credit cards are going to continue to have access to these remote desktop solutions.

One response so far
