Aug 19 2007

More data points in the disclosure argument

Published by at 9:18 pm under Simple Security

I’ve always been a proponent of ‘responsible disclosure’; that is, researchers give companies a reasonable amount of time to investigate and fix vulnerabilities, and in return companies give researchers credit and treat them with respect.  This is a workable system, but it requires everyone involved to act like adults, and it offers no hard-and-fast rule for how long a reasonable amount of time is.  It’s not easy, but it is workable.

There are extremes at both ends.  Some companies would prefer that researchers stop mucking around in their products and get real jobs.  The problem with that position is that the bad guys are going to keep finding vulnerabilities in products, because that’s where the money is.  So obviously, non-disclosure isn’t going to work.  At the other end of the spectrum, full disclosure gives the bad guys too much information and doesn’t give the affected companies the time they need to come up with a defense.  Another problem with full disclosure is that you can sometimes anger much of the security community, which is apparently what happened in the case of Whitedust; they shut down operations last week in response to heavy criticism and backlash from the security community.

Most situations sit somewhere between the two extremes.  Security researchers are trying harder to work with the companies that produce the vulnerable software, and in many cases companies are returning the favor and treating researchers with more respect.  This has yet to become the rule, as David Maynor’s nearly legendary relationship with Apple shows.  Apple would rather deny that the issues exist and let their PR department deal with any naysayers.  Companies like Google take a slightly different tack and say that the vulnerabilities reported to them are ‘expected behaviour’, as happened to RSnake.

It’s hard for a security researcher to continue working with companies when the researcher is attacked or ignored.  I also understand why companies sometimes react so badly; after all, no one likes having their errors pointed out to them continuously.  But we’re at the point in the game where it’s up to companies to take the high ground, admit their mistakes, fix them, and credit the people who find the vulnerabilities.  Most researchers don’t have much to lose if a company denies that a vulnerability exists but then patches it a couple of months later.  On the other hand, every time a company does exactly that, it makes it less likely that the public will take the next denial at face value.

I’ve sometimes been hard on Microsoft over the security of their products.  But this is one area where I’ll give them the credit they deserve and say that Microsoft has made great strides over the last few years.  They still stumble once in a while, but that’s a lot better than attacking the people who are researching your products, or constantly threatening to sue anyone who exposes a vulnerability, like a certain database company whose name starts with “O”.
