Archive for the 'Simple Security' Category

Dec 15 2013

Twitter spam filters overloaded

I believe the Twitter spam filters are currently overloaded, or at least someone’s figured out a way around them.  In the last 72 hours, I’ve gotten more Twitter followers than I normally get in three weeks.  At first it was hard to tell if they were real people or not, but as they’ve accumulated, I’m certain that the vast majority of them are not.  It’s gotten to the point that I’m reporting all new followers as spam unless there is sufficient reason to believe they might be a real person.

So what characteristics do these spam followers have in common?

  1. Non-English speakers.  Russian, Spanish, Arabic, and any number of other languages I don’t recognize.  I’m assuming some are gibberish even in their own language.
  2. Very low number of tweets.  Almost all of these accounts have fewer than 200 tweets, and a significant number have fewer than 50.  There doesn’t seem to be a common pattern of links in those tweets, but I’ve mostly given up on reading them.
  3. High following count/low follower count.  In an organic growth pattern, Twitter users don’t tend to have a 10-to-1 following-to-follower ratio, since close to 10% of Twitter is bots anyway.
  4. No listed count.  It doesn’t look like the bots have figured out how to get themselves added to lists quite yet.  Maybe a botnet will autolist its bots in the future, but for now this is a big giveaway.  (A rough scoring sketch combining these signals follows this list.)
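Here’s the rough scoring sketch I mentioned.  It’s only meant to show how the four signals might be combined; the Follower record, its field names and the thresholds are my own guesses for the sake of the example, not anything Twitter actually exposes this way.

from dataclasses import dataclass

# A hypothetical snapshot of a follower's public profile; the field names
# are mine, not Twitter's API schema.
@dataclass
class Follower:
    language: str      # profile language, e.g. "en", "ru", "ar"
    tweet_count: int
    following: int
    followers: int
    listed_count: int

def looks_like_spam(f: Follower) -> bool:
    """Count how many of the four signals fire; the thresholds are guesses."""
    signals = 0
    if f.language != "en":                                   # 1. non-English profile
        signals += 1
    if f.tweet_count < 200:                                  # 2. very few tweets
        signals += 1
    if f.followers == 0 or f.following / f.followers >= 10:  # 3. lopsided ratio
        signals += 1
    if f.listed_count == 0:                                  # 4. never added to a list
        signals += 1
    return signals >= 3                                       # report when most signals fire

# A typical bot profile trips every signal:
print(looks_like_spam(Follower("ru", 37, 1500, 12, 0)))      # True

In practice I’m still eyeballing each account, but writing it out this way makes it obvious how mechanical the pattern is.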

I’m confident the folks at Twitter will figure out a way to stem the tide of the current bot invasion, but in the meantime I’ll continue to report these accounts for spam.  I apologize ahead of time if I block any real people by accident.

One response so far

Dec 01 2013

Security in popular culture

One of the shows I’ve started watching since coming to the UK is called “QI XL”.  It’s a quiz show/comedy hour hosted by Stephen Fry where he asks trivia questions of people who I assume are celebrities here in Britain.  As often as not I have no clue who these people are.  It’s fun because rather than simply asking his questions one after another, the group of them riff off one another and sound a little bit like my friends do when we get together for drinks.  I wouldn’t say it’s a show for kids though, since the topics and the conversation can get a little risque, occasionally straying into territory you don’t want to explain to anyone under 18.

Last night I watched a show with someone I definitely recognized: Jeremy Clarkson from Top Gear.  A question came up about passwords and securing them, which Clarkson was surprisingly adept at answering, with the whole “upper case, lower case, numbers and symbols” mantra that we do so love in security.  He even knew he wasn’t supposed to write them down.  Except he was wrong on that last part.  As Stephen Fry pointed out, “No one can remember all those complex passwords!  At least no one you’d want to have a conversation with.”

Telling people not to write down their passwords is a disservice we as a community have been doing them for far too long.  Mr. Fry is absolutely correct that no one can remember all the passwords we need to get by in our daily lives.  I don’t know about anyone else, but I’ll probably have to enter at least a dozen passwords before the end of today, each one different, with different levels of security and confidentiality needed.  I can’t remember that many passwords, and luckily I don’t have to, since I use 1Password to record them for me.

But let’s think about the average user for a moment; even as easy as 1Password or LastPass are to use, they’re probably still too complex for many users.  I’m not trying to belittle users, but many people don’t have the time or interest to learn how to use a new tool, no matter how easy.  So why can’t they use something they’re intimately familiar with: pen and paper?  The answer is, they can; they just have to learn to keep those secrets safe, rather than taping the password on a note under their keyboard.

There’s a secret every one of us carries every day: our keys.  You can consider a key a physical token as well, but really it’s the shape of your keys in particular that is the secret.  If someone else knows the shape of your keys, they can create their own and open anything your keys will open.  This is a paradigm every user is familiar with, and they already know how to secure their keys.  So why aren’t more of us teaching our users to write down their passwords in a small booklet and treat it with the same care and attention they give their keys?  Other than the fact it’s not what we were taught by our mentors from the beginning, that is.

A user who can write down their passwords is more likely to choose a long, complex password, something they’d probably have a hard time remembering otherwise.  And as long as they treat that written password as what it is, a key to their accounts, then we’ll all end up with a little more security on the whole.  So next time you’re preparing to teach a security awareness class, go back to the stationery store, pick up some of those little password notebooks we’ve all made fun of and hand them out to your users, but remind them they need to keep the booklet as safe as they do their other keys.  If you’re smart, you’ll also include a note with a link to LastPass or 1Password; might as well give them a chance at even better security.
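And if you want to show the class what a password worth writing down actually looks like, here’s a minimal sketch using Python’s standard secrets module.  The length and the symbol set are arbitrary choices on my part; the point is just that a written-down password can afford to be long and random.

from string import ascii_lowercase, ascii_uppercase, digits
import secrets

SYMBOLS = "!@#$%^&*-_"
ALPHABET = ascii_lowercase + ascii_uppercase + digits + SYMBOLS

def make_password(length: int = 20) -> str:
    """Generate a long random password covering upper case, lower case,
    numbers and symbols -- the mantra from the show."""
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
        # Regenerate until every character class is represented.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in SYMBOLS for c in candidate)):
            return candidate

print(make_password())   # copy this into the notebook, not onto a sticky note

This is essentially what 1Password and LastPass do for you automatically; the notebook just replaces the vault for people who won’t use one.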

3 responses so far

Nov 17 2013

Using the Secret Weapon

Published under Cloud, Personal, Simple Security

I’m not the most organized person in the world; I never have been and I never will be.  But I’ve usually been able to keep a modicum of organization in my life by using pen, paper and a notebook.  Sometimes things would fall through the cracks, as happens to everyone, but I can normally keep up.  Lately, though, that hasn’t been true.  Since moving to the UK and expanding my role, I have so much on my plate that just keeping up with tasks has been a major issue.  So I did what any good security geek does: I asked on Twitter what tools others are using to track their to-do lists.  By some margin, the biggest response I got was Evernote and The Secret Weapon.

Evernote is a free (with a premium upgrade) note-taking/scrapbooking/catch-all program that’s been around for a few years.  I’d signed up when it first came out, but never really understood how to use it for myself.  The Secret Weapon isn’t a piece of software, but a way to use Evernote with your email and the Getting Things Done (GTD) system.  Basically, there is a set of tutorials on the Secret Weapon site that walks you through how to set up Evernote and your email and how to use the system going forward.  In all, you can watch the videos in about an hour, though I’d suggest you watch the first few, let them percolate for a little while, watch one or two more, and so on until you’ve watched them all over a few days.  It gives you a very good starting point for using this system.

Like many people, I’ve had to modify the GTD/TSW methodology to meet my own needs and work style.  I’ve been using a number of the GTD principles for some time without realizing it.  I’m using Mail.app on OS X, which allows me to use Smart Mailboxes to tag and flag emails, but I leave them in my inbox, which acts as my archive folder.  And since I’m using Mail, I don’t have the easy integration that would be available if I were using Outlook.  But then I’d have to use Outlook, so I consider manually cutting and pasting into tasks in Evernote to be the lesser of two evils.

Once you’ve set up the system, it doesn’t take long to get hooked on the organization it gives you.  I love that I can tag my to-do list by priority, project, people involved and any number of other aspects.  I love being able to tell at a glance exactly which projects I should be working on today and knowing that I haven’t forgotten anything major (unless I’ve forgotten to enter it into Evernote).  And I’ve started to take more and more of my meeting notes in Evernote as well, though using a keyboard instead of pen and paper can be a bit distracting for me as well as those around me.

And then there are the downsides.  The biggest concern I have by far is the security of Evernote: you can’t encrypt your notes except individually, which is unrealistic once you have dozens or hundreds of notes, as is bound to be the case after you’ve been using it for a while.  Evernote does have a two-factor authentication capability, but I have yet to try it and I’m not sure I can use it given the amount of travel I do; I never know how much connectivity I’m going to have on any given day.  Evernote has both iOS and Android applications available and I’m starting to dip my toes into them, but quite frankly they both seem to be pretty hard to use, other than for checking the status of your projects.  I’m not very satisfied with the user interface on either operating system and don’t know if I have the patience to deal with them.
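For what it’s worth, the workaround I’ve been toying with is encrypting the genuinely sensitive bits of a note locally before they ever reach Evernote.  This is just a sketch using the third-party cryptography package’s Fernet recipe, not anything Evernote itself offers, and you’d still need to keep the key somewhere safe, like your password manager.

# pip install cryptography
from cryptography.fernet import Fernet

# Generate the key once and store it in your password manager --
# not in Evernote alongside the notes it protects.
key = Fernet.generate_key()
box = Fernet(key)

secret_note = "VPN host, username and passphrase would go here"
ciphertext = box.encrypt(secret_note.encode())

# Paste the ciphertext into the Evernote note; decrypt locally when needed.
print(ciphertext.decode())
print(box.decrypt(ciphertext).decode())

It’s clunky, which is exactly the problem: if I find it clunky, the average user certainly will.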

The other piece of software several people suggested I try is OmniFocus.  It also offers integration with iOS devices, but both the desktop and phone/tablet versions are paid apps.  And there’s no Android support for the program, which is a pain for me, as I have an Android phone and I’m shifting to using my Nexus 7 more than my iPad as time goes by.

The bottom line for me is that TSW and Evernote work well, but I’m very concerned about having my organizational matrix on the Internet in a way that is much less secure than it could be.  I’d upgrade to a premium account if that’s what it took to get that encryption, and I may end up upgrading anyway since I’m using it so much.  I’m not sending my email to Evernote wholesale as the TSW tactics suggest, so I feel less uncomfortable than I could be, but I’m still not happy with this security gap.

Let me know what your experience has been using Evernote and The Secret Weapon.

 

2 responses so far

Oct 12 2013

Can DevOps become SecOps?

Published under Simple Security

This is an incomplete thought.

This week I saw Gene Kim give his talk on DevOps and The Phoenix Project for the first time.  I’d read the book and loved it, but I’d never seen Gene put life into the concepts himself.  I was mesmerized by his animation and energy in the presentation.

What I couldn’t help thinking was: how can this be translated into security?  DevOps has a security component, but it’s the collaboration between development and operations that makes it work.  So how can that collaboration be expanded to cover the whole business?  I’m probably expressing this poorly, but I think we need to work towards a model where the whole business treats security as part of the fabric of what we do, rather than the bolt-on it is now.

I’m going to have to give this a lot more thought, but I’m glad I got to see him talk rather than simply reading his book.

5 responses so far

Oct 02 2012

Network Security Podcast, Episode 291

This week’s show went a little long, as all three of us had a lot to say on the stories we covered.  We spent more than a few minutes at the beginning of the show talking about some of the resources people can use to get mentorship when entering the security field.  We also ramble a little bit, and Rich gives us an assessment of one of his co-workers’ technical skills.

(All three of us made the show this week, and to be honest it was a little wittier than usual, if we do say so ourselves).

Network Security Podcast, Episode 291, October 2, 2012

Time:  38:30

Show notes:

No responses yet

Sep 21 2012

Notes from SOURCE Seattle

Published under General, Simple Security

I got to attend my first SOURCE event last week, thanks to a lucky confluence of events which freed up my time.  Mainly, I didn’t have to go to the PCI Council’s Community Meeting and was able to take advantage of SOURCE Seattle instead.  I know many of the people involved in SOURCE and I’d been wanting to go for a long time.  This was the 10th SOURCE event, and I walked away very happy I’d finally been able to attend.

The Seattle conference is very different than any other event I’ve been a part of; with under 100 people in attendance, it’s small and personal.  I had the opportunity to talk to almost every person there, which is something you rarely get to say at any event these days.  During lunch on both days the team running the event led interesting discussions and helped encourage people to talk to other security professionals they’d never met before.

My favorite talk was by Tony Rucci, giving a detailed account of what it was like to be part of the White House staff on 9/11/2001.  It was interesting to hear the first-hand account of someone who’d been on the ground at the time.  I liked getting to go to talks by friends like Adam Shostack and Zach Lanier, even if Zach did lose me about 15 minutes into his talk (I’m not an Android debugger by trade, so shoot me).  Robert M. Lee’s talk on the maturity of security was good to hear, but I feel he may be a bit optimistic.  The Base Rate Fallacy talk by Florer & Lowder made my brain hurt; my wife is currently taking a statistics class, so maybe I should ask her for help.
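For anyone else whose brain hurt, the base rate fallacy mostly comes down to one Bayes calculation: when the thing you’re hunting for is rare, even a very accurate detector produces mostly false alarms.  The numbers below are mine, purely for illustration, not anything from the talk itself.

# Base rate fallacy, security-alert edition (illustrative numbers only).
base_rate = 0.001   # 0.1% of events are actually malicious
detection = 0.99    # the detector catches 99% of the bad events
false_pos = 0.01    # and misfires on 1% of the benign ones

# Bayes' theorem: P(malicious | alert)
p_alert = detection * base_rate + false_pos * (1 - base_rate)
print(f"{detection * base_rate / p_alert:.1%}")   # about 9% -- most alerts are noise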

I haven’t been to the larger SOURCE Boston, but if you’re in the Seattle area, look at coming to the con when it happens next year.  Hopefully it stays small and intimate for a few more years, and hopefully it can stay in the Maritime Museum as well.

2 responses so far

Feb 15 2012

Why are we talking philosophy instead of technology?

Published under General, Risk, Simple Security

A friend of mine recently complained on Twitter that, according to his count, nearly 80% of all talks given at the security conferences he’d looked at recently were now non-technical.  That count might be partly because he’s @ramblinpeck on Twitter, aka Daniel Peck, a research scientist or something like that at Barracuda Networks.  Which is my way of saying his idea of a technical talk might be a little more technical than most people’s.  But whether you’re at his level of technical expertise or mine, I think he’s got a valid point in saying that at most security conferences, the majority of the talks are less about the technical aspects of security and more about the philosophy or generalities of security.  And that’s probably the way it should be.

Why should most talks be more about principles of security and less about the technical aspects?  The first reason is that, with a few exceptions, the whole reason conferences exist is to get butts in seats and into a place where vendors can get at them.  Even community-led events like the BSides movement are about getting people to attend and mingle; the goal is still to create an atmosphere that draws people into the event and puts them around other like-minded individuals.  And many technical talks run counter to that goal, not in their content, but in who they pull in.  For example, a talk about a bug in a compiler on an OS X box is great for the few individuals in the crowd who a) work on Apple platforms, b) are worried about bugs in compilers and c) have enough technical knowledge and interest to travel the distance to attend an event.  But the other 98% of the people interested in security who might be willing to travel to an event will take a look at the subject matter and decide it’s not for them.  Finding the right audience for any deeply technical talk is an art form at best, and in most cases is more closely akin to guesswork than anything resembling a science.

A second reason it’s hard to have technical talks at security conferences is the wide variety of skill levels among security professionals.  I’m fairly smart, I’ve been in security for a long time and I understand at least the basics behind most of the technologies that make the Internet tick.  There are even one or two aspects of security where I can do the deep geek dive with almost anyone.  But when a talk assumes a level of expertise that may not exist in more than a dozen people worldwide, I’m going to be left out and leave the talk annoyed and confused.  Or worse, if a talk was advertised as technical but turns out to be a primer and I already know most of what’s being presented, I’m going to be annoyed, probably vocally so, and tell people the talk was mislabeled.  It’s very hard, if not impossible, to create a presentation that works for multiple levels of technical background, and it’s even harder to look at an abstract for a talk and decide what level of technical expertise it’s appropriate for.  Which, again, makes it less likely that the talk will be selected for a conference.

The third, and possibly most important, reason we’re talking about the philosophy behind security more than the technology is that so many of the assumptions that have gone into building the technology are wrong!  Security isn’t something that was designed into the Internet and corporate networks from the start; it was bolted on after, the cracks were spackled over, huge loads of duct tape were wrapped around the whole thing and it was called ‘secure’.  Or, more often, security has simply been ignored as a cost center until a compromise happens and data is lost.  Instead of building a cohesive, multilayered approach, we’ve built a collection of point solutions, few of which actually deliver on their promises and even fewer of which are properly configured to fully deliver what they’re capable of.  Given some of the compromises we’ve seen over the last year, we have every reason to believe what we’re doing isn’t working.

We’re at a point where we need to re-examine the fundamental thinking that underlies how security works.  It’s not an issue of flipping the evil bit off in a packet, it’s an issue of engineering a new set of solutions from the ground up.  The technical aspects of these solutions will be vitally important, but unless we can understand the underlying assumptions we’ve made, we’re going to make the same mistakes again on an even larger scale.

Security professionals come in all levels of technical expertise, but all of us benefit from a better understanding of the philosophy that underlies our decision-making processes.  I think that understanding where your decisions are coming from is even more important than the technical details of how those decisions are implemented.  I’ve seen many technical decisions that looked good in the short term, but led to dead ends both in the technology itself and in the opportunities those decisions cut off.

This is all my way of saying that I believe an 80/20 split of non-technical to technical talks is probably appropriate for most security conferences.  The majority of people aren’t going to care about a specific technology because it simply doesn’t affect them directly, but so many of us want to understand the underlying foundations of our chosen field.  It’s great to dig into the deeply geeky details of a protocol, but the vast majority of professionals will never need to do that for fun or for profit.  Every person who works in the security field, though, needs to understand the philosophy that goes into making security decisions at all levels.

PS.  I’ll be giving a related talk, ‘Fundamental Flaws in Security Thinking’ at BSidesSF on Tuesday, February 28th at 1pm.  Come tell me how I’m wrong.

No responses yet

Nov 03 2011

Open Tabs 11/03/11

This week’s podcast conversation with HD Moore and Josh Corman was a good thing: getting the ideas of “HD Moore’s Law”, the security poverty line and security debt out there so other people can beat on them, examine them for flaws and hopefully incorporate portions of the concepts into their own thinking.  This is, after all, the whole reason I started blogging and podcasting in the first place.

Open Tabs 11/03/11:

No responses yet

Sep 13 2011

Hoping to effect change at the ISC2

Published under CISSP/ISC2, Simple Security

It might just be a pipe dream to hope that these folks can make any significant change at the ISC2, but the fact that they’re trying is more than I’ve ever done.  Which is why I’m hoping you’ll throw a little support behind the five people Jack Daniel is highlighting who want to run for the Board.  Endorsing them simply puts them on the ballot; it doesn’t mean you have to vote for them, and it doesn’t mean any of them will actually get elected.  But it will hopefully send a message that whatever direction the ISC2 is currently headed in, and I certainly don’t know what direction that is, isn’t helping the general CISSP at all.

From Jack’s site:

Below are the five candidates I am aware of, in alphabetical order:

No responses yet

Jun 08 2011

Fundamental flaw in thinking: We’re responsible

Published under General, Simple Security

Over the last few months I’ve come to the conclusion that we’re doing security wrong.  Not the day-to-day details, though we’ve gotten a lot of those wrong as well.  I mean we’ve gotten the big-picture issues wrong; we’ve made a number of false assumptions about how we should be protecting our enterprises.  We’re building the very concepts we rely upon to develop products, services and systems on shaky ground.  If you don’t agree, just look around at the ease with which hackers are tearing through the defenses of even the largest merchants (Sony) and you have to admit that something isn’t working like it should be.  You can blame businesses for not giving us the resources we need, you can blame a shortage of decent security professionals, or you can do some self-examination and realize that maybe security best practices and compliance efforts just aren’t working.

When I say we’re doing it wrong, I’m thinking at a more basic level than some of the common fallacies we run into every day.  We all know that ‘firewalls are a security device’ is wrong; they’re just complex traffic management devices and in most cases don’t do much more than filter traffic at the grossest level.  And that’s assuming they’ve been set up correctly, which too many aren’t; when was the last time you saw good egress rules?  Or take the fact that a number of studies have shown that antivirus commonly doesn’t catch more than 70% of all viruses, and that number is falling.  These are both assumptions that executives and non-security professionals make, but most of us in the community know that firewalls and AV are just things we put in because the business has come to think of them as the expected minimums.

But the flaws I’m looking for go deeper than the fallacies of firewall and antivirus effectiveness.  I’m not looking at the nuts-and-bolts assumptions that we make to work on a day-in, day-out basis.  I’m trying to examine the deeper assumptions, the ones we’ve built our entire philosophy of security upon.  In a different context we might call this our morality or religion, which might not be a horrible comparison.  I’m looking to see what are some of the most basic truths we’ve decided for ourselves, and what are the errors we’ve made because we’ve built these up from lessons taught to us by others.  Were these assumptions once valid, did they once have a grain of truth, or were they merely the most basic and easy rules to put in place because they hadn’t been tested before?  And just as with religious or moral beliefs, too few of us ever take them out of the backs of our minds to re-examine them and see if they still hold up as well in our adulthood as they did in our childhood.  The security assumptions that might have served you well when you were an IDS or firewall administrator may not translate well to a later point in your career, and in fact may cause damage to your reputation.

It’s never easy to change the core of your belief system.  I only know a few people who consciously make a habit of doing it on an annual basis and even fewer who live their lives in a constant state of re-examination.  It’s a powerful tool to be able to look at your worldview, understand that you’ve made some mistakes and adjust to the new realities of how that affects the way you interact with the world.  But it’s painful sometimes, and the change can be difficult.

So enough of the philosophical BS: what are the fundamental flaws in security reasoning that I’ve identified?  I’ll be honest, there’s only one I’ve identified and mulled over to the point that I’m ready to share.  We security professionals have taken it upon ourselves to be responsible for all risk in the corporate environment.  We started by placing the firewalls around the outside of the network, and as more and more complexity was added into the IT infrastructure, we took more and more of the risk onto ourselves, without really stopping to consider whether we are the ones responsible for the vulnerabilities and misconfigurations that spawn much of the risk in our environments.  We’ve only rarely been given, or fought for, the authority to make changes in the products and systems that introduce risk; we are all too often nothing more than a speed bump in the corporate culture and a scapegoat for compromises when they happen.  “Why didn’t you protect us?  It’s your fault this happened!”  But if we had little or no ability to change the underlying systems that led to the compromise, why are we considered responsible?  Responsibility without the authority to effect change is the surest route to being a scapegoat in the best of situations.

So why have we accepted this risk responsibility without having any authority?  Because that’s how most of us have been taught to do security.  It’s not only our duty to identify risks and explain them to the business, it’s our duty as security professionals to shoulder that risk and do what needs to be done.  Despite the fact that we can’t change the underlying problems that introduce the risk.  Despite the fact that all too often we don’t have the manpower to deal with the problems we already have.  Despite the fact that we’re not given the budget we need to reduce the risks that existed in the enterprise before some new project introduced even more risk into our overstressed environment.

So if we’re not responsible for the risk in the enterprise, who is?  In a perfect world, the people who introduce the risks should also be the ones responsible for them.  Is the marketing department requiring a new feature on the company web site that also opens up the corporation to a partner?  Then they should be the ones whose finances bear the burden of paying for the additional monitoring costs.  The development department is doing the programming for the corporate web site, so why is the security department being held responsible when a SQL injection attack not only takes down the site but also discloses a million customer records?  If a proper SDLC had been implemented, if tools for testing the software had been used, if internal training had taken place, the SQL injection should never have happened.  Yes, we can be responsible for adding a layer of protection beyond that, but it’s the development team that should be taking the responsibility, since they’re the team that actually had the authority to make changes and prevent the risk from being placed in the environment in the first place.  We need to stop being the sin-eaters of the corporate world, absolving all other departments of their responsibility for the risk to the corporation they introduce on a daily basis.  We need to push back and put the onus of dealing with risks and vulnerabilities on the shoulders of the people who are closest to the problem.
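To make the SQL injection example concrete, the fix really does belong to the people writing the code: a parameterized query is a one-line change for the developer, and no amount of monitoring by the security team substitutes for it.  Here’s a minimal sketch using Python’s built-in sqlite3 module, with a made-up table and a classic injection payload:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO customers (email) VALUES ('alice@example.com')")

user_input = "' OR '1'='1"   # classic injection payload from a web form

# Vulnerable: building the query by string concatenation lets the input
# rewrite the SQL and return every row.
# conn.execute("SELECT * FROM customers WHERE email = '" + user_input + "'")

# Safe: the driver binds the value as data, never as SQL.
rows = conn.execute("SELECT * FROM customers WHERE email = ?", (user_input,))
print(rows.fetchall())   # [] -- the payload matches nothing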

The fundamental flaw in security thinking is that we can effectively combat the risk for the entire company.  We can’t.  We have to advise and point out where new or existing risks are, but it’s impossible for the security team within an organization to deal with every single potential vulnerability, and we shouldn’t even be trying.  We need to change the way we think about security and start pushing that responsibility back on the people who can actually effect change.  It’s amazing how many requirements turn into ‘nice to have’ or ‘we don’t really need that’ when the department asking has to shoulder the responsibility.

There’s no quick fix; I think this needs to be a ‘generational’ change in security.  One of the first things that was brought up when I floated this idea amongst my peers is that we can’t just barge in and force a new way of thinking on the corporation.  And that’s true: we will never be able to make an overnight change to the way other business units perceive us, and we can’t be militant in pushing other parts of the organization to take responsibility for their actions.  It will be an unpopular path to take, since no one wants to take back responsibility once it’s been offloaded.  But it’s imperative we start down this path, because this isn’t a problem that’s going to go away, and as more and more compromises happen, we’re only going to be blamed more for issues we had no authority to change.  We have to change the way we approach risk in the enterprise and slowly educate our businesses about where the responsibility for risk really sits.

There are a number of people who I think are already aware of this fundamental flaw in security thinking.  Andy Ellis over at Akamai, Rafal Los at HP and a number of senior security professionals understand that we can’t take the responsibility for all risk and are pushing it back to the proper departments.  This isn’t to say they’re blocking progress, but that they’re telling the departments, “If this is what you need, we will show you the risks involved.  But you will sign off on those risks and accept that if something goes wrong, it’s not the security department who will take the blame.”  Rafal gave a great talk on this recently at BSides Detroit, and my conversations with him subsequently were a large part of the impetus for this post.

Start by changing your own way of thinking about acceptance of risk.  Push back gently at first, but push back.  Even if you’re unable to get a written statement saying that others take responsibility for the risk they’re creating, bring it up in meetings and stop just accepting it for them.  Talk to your legal department and make sure the corporate counsel knows when there’s a risk you think will put the company in danger.  Start cultivating relationships higher in the organization and changing the way other people think about security.  Because as long as we continue to take responsibility for all risk in the corporation, we will be the scapegoats for any compromise and will be unable to be effective.  Not only will we continue to suffer, but the business will continue to be compromised with frightening regularity.

—————-
This marks blog post 2000.  It’s taken 7.5 years.  But it’s been worth it.


18 responses so far
