Oct 31 2008

Tell me your Security or IT horror story and win a pass to CSI

Published by at 9:10 am under General

Come on, we all have them; horror stories of IT and security disasters we’d rather forget.  But rather than forget them, I’d like you to share them and tell everyone what you learned from the experience.  And in return, I have a free three-day pass to CSI 2008 in Maryland, November 15-21, that I’ll give out next Friday, November 7th.  If you’re in the same boat I am and don’t have a budget for training, this can go a long way towards getting management approval for the event. 

The rules are going to be pretty simple:

  1. Post a comment on this post telling us your horror story and, more importantly, what you learned from it.  If you’ve already written your story on a blog, you can leave a short description and a link to the post. 
  2. You must leave a valid email address.
  3. The story must be original, no plagiarism please.
  4. Stories will be judged on originality, entertainment value and what was learned from the incident.  I’m the sole judge.  

If you’re not the lucky winner, there’s still the CSI 2008 discount code you can use.  A lot of the Security Twits are already planning on attending, and I’ve even heard rumblings of a blogger meetup or Twitter meetup. 

Good luck!



5 Responses to “Tell me your Security or IT horror story and win a pass to CSI”

  1. Jason on 31 Oct 2008 at 11:02 am

    I won’t be able to make it to CSI one way or another, but I figured I’d pass along my horror story.

    This happened when I was first learning to admin UNIX boxes. Another SysAdmin and I were working on a shell script to lowercase the file names of 30-40 million image files. They were on an NFS mount that was used by several servers. These images were part of the detail listings of a relatively busy web site, and it was right in the middle of the day.

    Now that the background of the mess is fully explained, the story gets going. We went through several revisions and were testing against a directory on a desktop system. Nothing destructive happened during testing and we were getting fairly comfortable with the “safety” of the script.

    We finally thought we had a working script, so we moved it to the prod server. Then we noticed a “minor” change that needed to be made to it. We made the change, then decided that since this was such a small little tweak, we could run it on the live NFS mount without any further testing. Fire in the hole!

    The script took off and we watched it run. All was well. Then my phone rang from the NOC. A panicked operator was on the phone saying, “Hey, what’s happening with the listing images from xyz.com? They are all coming up as 404s!” I killed the script while thinking something like “oh crap, oh crap, oh crap!” Sure enough, the script had wiped out about 50% of the images. Amazing how fast a shell script can delete when it goes haywire.

    We pointed the web servers to a backup copy of the images, then started to recover to the production mount. The backup was a couple days old, so our image processing guys had to re-upload the missing work. I was lucky that the online backup was there. I had taken it for reasons unrelated to this event. The next day I got to explain to the CIO what had happened.

    The moral of the story: back up first, and test your script until it is golden before going live. Then test it again and again and again. Make sure you are running it at the proper time, then go to production. We didn’t have change control, so I’d add: get all the approvals now too. Cover your butt.

    It was a good lesson. I’ve never done anything like that again in the last 7 years.
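
    For what it’s worth, the pattern I follow now is a dry-run-by-default rename script. This is only a rough sketch of the idea, not the original script, and the paths and file name are made up:

    #!/bin/bash
    # Lowercase file names under a target directory.
    # Prints what it WOULD rename unless APPLY=1 is set, so it can be
    # tested safely before being pointed at a live NFS mount.
    set -eu

    TARGET_DIR="${1:?usage: $0 /path/to/images}"

    find "$TARGET_DIR" -depth -type f | while IFS= read -r src; do
        dir=$(dirname "$src")
        base=$(basename "$src")
        lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
        [ "$base" = "$lower" ] && continue        # already lowercase
        dst="$dir/$lower"
        if [ -e "$dst" ]; then
            echo "SKIP (name collision): $src" >&2
            continue
        fi
        if [ "${APPLY:-0}" = "1" ]; then
            mv -- "$src" "$dst"
        else
            echo "WOULD RENAME: $src -> $dst"
        fi
    done

    Running it plain only prints the renames; APPLY=1 ./lowercase.sh /path/to/images actually performs them, and only after the dry-run output looks sane.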

  2. bob El Bob on 31 Oct 2008 at 11:33 am

    Hi there,

    Five years ago, I was new to the Unix world, maybe 6 months in :)
    I had a Debian machine…
    That machine was used to back up the data of about 30 users… about 500 gigs of data…

    I had a web site on that machine to monitor and kick off the backups… we were also able to restore them from the web interface… A great piece of code…

    The backups ran during the night… and sent a report by mail…
    I was on my computer that night… and I decided to connect to the office and correct a little setting… One share was misspelled in the configuration…
    I then played around, checking the logs, checking file sizes, directory sizes, etc…

    Then I noticed we had a www/ directory with some stupid content we had put there for testing, in /usr or /usr/local (I do not remember all the details)… It wasn’t our Apache root directory anymore… and we didn’t even need that stuff anymore…

    I decided to delete the contents of www/… but to be safe I didn’t want to delete the directory www/ itself… (in case a service somewhere still needed to refer to that directory)

    So I ran, without even thinking:
    rm -rf *

    Hrm… too much time before I got my prompt back… like 2 or 3 seconds… and it still wasn’t over… wtf?

    OHHHHH!!!! NO!!! ctrl-c ctrl-c damn damn!

    That’s when I noticed I was in /usr and my stupid rm -rf * was deleting the whole of /usr…

    DOH!
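
    The habit I picked up from that night (paths here are just illustrative): never point a destructive command at “wherever I happen to be standing”. Name the target explicitly and look first:

    # risky: depends entirely on the current directory being what you think it is
    rm -rf *

    # safer: confirm what is there, then delete by explicit path,
    # keeping the www/ directory itself
    ls /usr/local/www/
    rm -rf /usr/local/www/*

    Echoing the glob first (echo /usr/local/www/*) or using GNU rm’s -I prompt are cheap extra guards too.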

  3. ax0n on 31 Oct 2008 at 6:36 pm

    Working as a part-time sysadmin at a community college, I was put in charge of pretty much all things security related for my group, mostly due to my background as a pen-tester for a consulting company. One of those duties that fell into “security” was maintaining the backups. I can already see plenty of you putting your face in your palm.

    The director had our microcomputer techs inventory everything, and when they were done scanning the bar-codes of every piece of technology in the labs, classrooms and offices, they went to… the data center…

    Most of the rack-mounted hardware had conspicuously-placed inventory labels. One piece of hardware, a Dell PowerVault 220S, did not. Eventually, the techs decided to see if the sticker was on the bottom, and withdrew it from the rack. This pulled one of the SCSI cables off and destroyed the backplane in the 220S. Oh, yeah. The RAID freaked out, and about 800GB of student data (assignments, term papers, and porn) was lost.

    CA ArcServe could not read the catalogs on the backup tapes I’d been making. Attempts to re-build them failed. I wasn’t too popular that day.

    What I learned:
    1) Never trust an entire team of desktop techs in your data center
    2) Never trust that backup software is working until you test bare-metal restores from those archives regularly (a rough sketch of the idea follows this list).
    3) A huge failure like this is one way (but not the best) to get your boss to splurge on a test environment for patching, disaster recovery exercises, and the like.
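
    To make #2 concrete, here is a generic illustration of a scheduled restore test. It is deliberately not ArcServe (I won’t pretend to reproduce its commands) and the paths are invented; the point is just that something should regularly pull files back out of the newest backup and compare them against the originals:

    #!/bin/bash
    # Restore a sample from the newest archive and compare it to live data.
    # A failure here tells you the backups are unreadable while you can
    # still do something about it.
    # Assumes the archive stores paths relative to SOURCE_DIR.
    set -eu

    BACKUP_DIR=/backups            # invented paths
    SOURCE_DIR=/srv/studentdata
    SCRATCH=$(mktemp -d)

    latest=$(ls -1t "$BACKUP_DIR"/*.tar.gz | head -n 1)
    tar -xzf "$latest" -C "$SCRATCH"

    # Spot-check a handful of files against the live copies.
    find "$SOURCE_DIR" -type f | head -n 20 | while IFS= read -r f; do
        rel=${f#"$SOURCE_DIR"/}
        cmp -s "$f" "$SCRATCH/$rel" || echo "MISMATCH: $rel"
    done

    rm -rf "$SCRATCH"

    Even a monthly run of something like this against a spare box would have caught the unreadable catalogs long before the PowerVault got yanked out of the rack.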

  4. Matt on 03 Nov 2008 at 1:31 am

    I work as a pen tester. So far in my 8 years of testing…

    1) Taken down a large water utilities company by DoS’ing 2/3 of their Solaris-hosted Oracle servers just by Nmap’ing them. Lessons learned: ask the customer about potentially flaky servers. Total cost to customer: £2 million in call centre downtime and lost debit and credit payments.

    2) Taken down an insurance company and their call centres during a VoIP test. Took out 5 international call centres by attempting to sniff SIP traffic off a spanning port and then re-injecting it on a trunk port. Lessons learned: ask the customer if they really want their live international VoIP network tested during business hours. £5 million plus in call centre downtime.

    3) Locked a UK bank out of their entire domain thanks to a “3 tries and you’re locked out” policy. Lessons learned: review the customer’s lockout policy first and make sure they have backup accounts. Several million pounds in wasted time trying to restore the server and user IDs (domain accounts ran multiple databases – norty!!)

    4) Ran Oscanner with the wrong config file and locked a large council out of their own database. Lessons learned: don’t run Oscanner without configuring it properly.

    5) Transferred $5k by accident to my own credit card as part of a bank’s web application test and then tried to send the money back. The bank wouldn’t accept it, as they didn’t believe it was possible; then the police got involved and I was charged with fraud, though the charges were later dropped. Lessons learned: we now have a company credit card.

    Oh man.. I could go on…

    ; )

  5. Room362.com on 06 Nov 2008 at 7:38 pm

    Free Pass to CSI 2008…

    What is CSI? This is what CSI says about it:

    Security is in transition. There is general agreement that security does not work, but not on how to fix it. CSI 2008 is the only event today that faces the challenge to reconsider security. This year at …


