
IT Risk, Security and Money

June 19, 2008

I read an extraordinarily good post this week by Bruce Schneier on how to sell security.  Or perhaps more accurately: what thought process IT security buyers go through when deciding to purchase (or not).

I’ve had the luxury, over the years, of having ultimate responsibility for selling content security (anti-virus/anti-spam) products, database security products and now network access security products.

I have observed firsthand the “cost of insurance vs. probability of negative outcome” calculus. No one questions money invested in anti-virus solutions because everyone knows that the probability of a negative outcome without AV is a virtual certainty. Same with spam. They might wish for more effective insurance for the money. They might question whether they need yet another layer to really solve the problem – I remember arguing that gateway scanning was important and getting almost no uptake until the Melissa virus came out and demonstrated that the next generation of viruses was going to be transmitted by email, not by floppy disk. But at least a basic level of insurance is a given.
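To make that calculus concrete, here is a minimal back-of-the-envelope sketch in Python of the arithmetic a buyer runs through. The probabilities and dollar amounts are entirely made up for illustration; they are assumptions, not figures from any real deal.

```python
# A minimal sketch of the "cost of insurance vs. probability of negative outcome"
# calculus. All figures below are invented for illustration only.

def expected_annual_loss(probability_per_year: float, loss_per_incident: float) -> float:
    """Classic annualized loss expectancy: likelihood x impact."""
    return probability_per_year * loss_per_incident

# Hypothetical scenario: malware outbreak without gateway AV vs. the cost of the control.
p_outbreak = 0.9             # near-certainty of at least one incident per year (assumed)
loss_per_outbreak = 250_000  # assumed cleanup + downtime cost per outbreak
control_cost = 40_000        # assumed annual cost of the AV/anti-spam layer

ale = expected_annual_loss(p_outbreak, loss_per_outbreak)
print(f"Expected annual loss without the control: ${ale:,.0f}")
print(f"Cost of the control:                      ${control_cost:,.0f}")
print("Buy it." if ale > control_cost else "Hard sell.")
```

When the expected loss dwarfs the premium, as it does for AV and anti-spam, nobody argues. The interesting sales conversations happen when the two numbers are close.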

When selling database security solutions in the earliest days of that technology I saw the opposite calculus: IT’s almost idealistic belief in the impenetrability of the applications it had developed to front-end its databases. In those cases our best sales tactic was to ask if it was OK if we tried a SQL injection or cross-site scripting attack in a lab environment, just to “test our tools”. We could routinely demonstrate that those applications were easily penetrated. Suddenly database security solutions jumped a few notches up the priority list in organizations with a lot to lose.
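For readers who haven’t seen the trick up close, here is a toy sketch of the kind of hole those lab tests exposed: a query built by string concatenation versus a parameterized one. The table, values and payload are invented for this illustration and are not from any customer engagement described above.

```python
# Toy SQL injection demo: concatenated query vs. parameterized query.
# Everything here is a made-up example, not production code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("alice", 0), ("bob", 1)])

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: the payload rewrites the WHERE clause and returns every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the payload as a literal value, so nothing matches.
parameterized = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print("concatenated query returned:  ", vulnerable)     # [('alice',), ('bob',)]
print("parameterized query returned: ", parameterized)  # []
```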

The Société Générale “situation” vaulted insider security into the collective security consciousness. We’re still working out the risk vs. cost-of-insurance calculation.

But for the most part, as a life-long security solution purveyor, I have found that every discussion becomes a risk vs. cost discussion. And when the risk we’re addressing becomes the next most painful one on the list, we will get a serious hearing. That’s why good salespeople learn very quickly to look for “compelling events” or to simply ask, “where does solving this problem rank on your current priority list?” If your prospect cannot demonstrate that the problem is under broader organizational consideration (beyond just themselves) somewhere in the top 5 (or perhaps top 10 in a larger organization), prepare yourself for a long sales cycle.

Now security is starting to become somewhat synonymous with compliance. And that has given us the idea that if we just say our security product solves a SarbOx problem, the budget will be instantly available. But go to RSA, walk the floor, and you will very quickly realize that when 1,000 vendors proclaim they are solving the compliance problem in subtly different ways, a prospective customer cannot be blamed for putting the clutch in for a bit while sorting out what they really need, no matter how dire a picture we paint of the consequences of inaction.

I have no end-world-hunger solutions here, but I will say that I’m gravitating toward at least one small one. Let’s call a spade a spade. Tag this: “Security is not Compliance”. Trying to solve compliance problems with a security solution is likely to be a bit like trying to reduce the cost of oil by invading Venezuela (now this post will show up on the NSA radar screen) – there has to be a more cost-effective way. I’m leaning toward the idea of Compliance or Auditing as a Service (CaaS or AaaS). And in developing a go-to-market model around it, I’m starting to think that many things in IT could benefit from at least someone thinking about the problem from an “as a Service” perspective. The business model might not be there in all cases. And politics within IT might present too great a barrier in others. But when you start thinking about all IT problems the way Google and Amazon are likely thinking about them, perhaps we might find ways to offer more security capabilities as a utility.

Which just might make the cost of insurance negligible enough to make good security a no-brainer deal.


Human Error or Human Misbehavior

February 12, 2008

Many minds seem to be wondering something like this: “Is an organization’s data more at risk from an insider (employee, contractor, etc.) purposely doing damage, or from a well-intentioned employee?”

It seems to be a relative certainty that one of the two represents the largest risk to an organization’s data. I read this article about a Deloitte survey in which 91% of those surveyed said they were worried about the risk of employee misconduct related to information technology. I’d call 91% “many minds”.

When I was at Trend Micro we used to say that there would always be a virus threat as long as there were humans using computers. It has become trite to point out that virus writers rely on the thoughtless-but-innocent behavior of users.

But is that also true when it comes to damage done by insiders? I would hypothesize that, in absolute dollar terms, the highest risk of loss due to insider behavior also comes from the well-intentioned person trying to do their job. I won’t elaborate on that topic here because Matt Flynn has already done so very well in a recent discussion with IT Business Edge.
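To show the shape of that hypothesis, here is a rough sketch comparing a few high-impact malicious incidents against many lower-impact well-intentioned mistakes. The frequencies and per-incident losses are purely hypothetical assumptions for illustration, not survey data.

```python
# Back-of-the-envelope comparison of aggregate expected loss from insiders.
# All numbers are invented for illustration only.

def aggregate_expected_loss(incidents_per_year: float, loss_per_incident: float) -> float:
    """Aggregate expected loss: how often it happens x what each occurrence costs."""
    return incidents_per_year * loss_per_incident

malicious = aggregate_expected_loss(incidents_per_year=0.5, loss_per_incident=1_000_000)
inadvertent = aggregate_expected_loss(incidents_per_year=40, loss_per_incident=25_000)

print(f"Malicious insiders:      ${malicious:,.0f} / year")
print(f"Well-intentioned errors: ${inadvertent:,.0f} / year")
```

With numbers like these, the many small mistakes add up to more expected loss per year than the rare malicious act, which is the point of the hypothesis; real figures would of course vary by organization.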

Does the distinction matter? When talking about insider security solutions with IT professionals, the conversation often gravitates to concerns about a few malicious people, frequently concluding that the real need for insider security solutions is confined to a handful of insiders so malicious that they cannot be effectively stopped anyway.

I suspect that if the real economic damage to organizational data from all sources could be accurately charted, we would find the most compelling justification for securing against inadvertent harm from insiders.