Archive for the ‘Data Security’ Category

IT Risk, Security and Money

June 19, 2008

I read an extraordinarily good post this week by Bruce Schneier on how to sell security.  Or perhaps more accurately: what thought process IT security buyers go through when deciding to purchase (or not).

I’ve had the luxury in my time of having ultimate responsibility for selling content security (antivirus/anti-spam) products, database security products, and now network access security products.

I have observed firsthand the “cost of insurance vs. probability of negative outcome” calculus.  No one questions money invested in antivirus solutions because everyone knows that the probability of a negative outcome without AV is a virtual certainty.  Same with spam.  They might wish for more effective insurance for the money.  They might question whether they need yet another layer to really solve the problem – I remember arguing that gateway scanning was important and getting almost no uptake until the Melissa virus came out and demonstrated that the next generation of viruses was going to be transmitted by email, not by floppy disk.  But at least a basic level of insurance is a given.

When selling database security solutions in the earliest days of that technology, I saw the opposite calculus: IT’s almost idealistic belief in the impenetrability of the applications they had developed to front-end their databases.  In those cases our best sales tactic was to ask if it was OK if we tried to perform a SQL injection or cross-site scripting attack in a lab environment just to “test our tools”.  We could routinely demonstrate that applications were easily penetrated.  Suddenly database security solutions jumped up the priority list a few notches in organizations with a lot to lose.
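
For readers who haven’t seen the technique, here is a minimal sketch (my own toy example, not one of our actual lab tests) of the classic SQL injection pattern those demonstrations relied on – user input concatenated straight into the query text lets an attacker rewrite the WHERE clause:

```python
import sqlite3

# Toy in-memory database standing in for the application's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Vulnerable: input is spliced directly into the SQL text.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Safe: parameterized query; input is bound as data, never parsed as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

# The classic payload comments out the password check entirely.
payload = "alice' --"
print(login_vulnerable(payload, "wrong"))  # True: authentication bypassed
print(login_safe(payload, "wrong"))        # False: treated as a literal name
```

The parameterized version is the standard fix, which is exactly why demonstrating the vulnerable version in a lab was so persuasive.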

The Société Générale “situation” vaulted insider security into the collective security consciousness.  We’re still working out the risk vs. cost-of-insurance calculation.

But for the most part, as a life-long security solution purveyor I have found that every discussion becomes a risk vs. cost discussion.  And when the risk we’re addressing becomes the next most painful one on the list, we will get a serious hearing.  That’s why good salespeople learn very quickly to look for “compelling events” or to simply ask, “where does solving this problem rank on your current priority list?”  If your prospect cannot demonstrate that it’s under broader (than just themselves) organizational consideration somewhere in the top 5 (or perhaps 10 if it’s a larger organization), prepare yourself for a long sales cycle.

Now security is starting to become somewhat synonymous with compliance.  And that has given us the idea that if we just say our security product solves a SarbOx problem, the budget will be instantly available.  But go to RSA and walk the floor and you will very quickly realize that when 1,000 vendors proclaim they are solving the compliance problem in subtly different ways, a prospective customer could not be blamed for putting the clutch in for a bit while sorting out what they really need, no matter how dire we paint the consequences of inaction.

I have no end-world-hunger solutions here but I will say that I’m gravitating toward at least one small solution.  Let’s call a spade a spade.  Tag this: “Security is not Compliance”.  And trying to solve compliance problems with a security solution is likely to be kind of like trying to reduce the cost of oil by invading Venezuela (now this post will show up on the NSA radar screen) – there has to be a more cost-effective way.  I’m leaning toward Compliance or Auditing as a Service (CaaS or AaaS).  And in developing a go-to-market model around this, I’m starting to think that many things in IT could benefit from at least someone thinking about the problem from an “as a Service” perspective.  The business model might not be there in all cases.  And politics within IT might present too great a barrier in others.  But when you start thinking about all IT problems the way Google and Amazon are likely thinking about them, perhaps we might find ways to offer more security capabilities as a utility.

Which just might make the cost of insurance negligible enough to make good security a no-brainer deal.


IT Audit Rule #2: IT Audits don’t prevent loss

March 19, 2008

Sorry to leave this topic open for so long since the last post. Financing activities take top priority for a private company. Enough said.

Let’s talk about IT Audit Rule #2. Remember, these are my rules, not an official set of commandments – but they are based on some experience in auditing.

IT Audit Rule #2: IT Audits don’t prevent loss.

Security’s intent is to stop loss. Audit’s intent is to verify the accuracy of something; typically by checking a sample of outcomes but also by making sure that critical controls are functioning. The theory of an audit is that if the right controls are consistently working then the thing being asserted (in this case your data’s accuracy or the state of your data security or privacy) is probably accurate.

Does an IT Audit help security? Absolutely. It will help to point out where weaknesses exist; where controls might be needed to prevent loss or inaccuracy.

And here’s one of the beauties of technology and automation: you can literally audit every event as it happens. So instead of sampling a few transactions and seeing if the outcome was right, you can audit every transaction to see if the outcome was right. Not only that it occurred (see the last post for the flaws of IdA products for auditing) but that it was right.
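
As a sketch of the idea (the names here are invented, not any real product’s API), auditing every transaction rather than a sample is just a matter of running each event through a policy check and keeping the exceptions:

```python
# Audit every transaction against a policy that defines the *right* outcome,
# rather than sampling a few and hoping they are representative.
def audit_all(transactions, expected_outcome):
    """Yield every transaction whose actual outcome differs from policy."""
    for txn in transactions:
        if txn["outcome"] != expected_outcome(txn):
            yield txn

# Example policy: a rights grant is correct only if it was approved.
transactions = [
    {"id": 1, "approved": True,  "outcome": "granted"},
    {"id": 2, "approved": False, "outcome": "granted"},  # out of policy
    {"id": 3, "approved": False, "outcome": "denied"},
]
policy = lambda t: "granted" if t["approved"] else "denied"

exceptions = list(audit_all(transactions, policy))
print([t["id"] for t in exceptions])  # [2]
```

The point of the sketch is the shape of the check: it verifies the outcome was right, not merely that an event occurred.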

But here is the corollary to Rule #2: don’t ask an IT Audit product to provide security, for the simple reason that if an IT Audit product is reversing or stopping inappropriate events, it can no longer be trusted to audit (see Rule #1). It is tampering with the evidence of whether the processes and controls it is auditing are actually working.

Two separate processes need to exist in IT: the security process, which tries to create an outcome, and the IT Audit process, which verifies that the security process is working. If you have the option, don’t trust the vendor providing one solution to provide the other (back to Rule #1).

Human Error or Human Misbehavior

February 12, 2008

Many minds seem to be wondering something like this: “is an organization’s data more at risk from an insider (employee, contractor, etc) purposely doing damage or from a well intentioned employee?”

It seems to be a relative certainty that one of the two represents the largest risk to an organization’s data.  I read this article about a Deloitte survey in which 91% of those surveyed said they were worried about the risk of employee misconduct related to information technology.  I’d call 91% many minds.

When I was at Trend Micro we used to say that there would always be a virus threat as long as there were humans using computers.  It has become trite to suggest that virus writers relied on the thoughtless-but-innocent behavior of users.

But is that also true when it comes to damage done by insiders? I would hypothesize that in absolute dollar terms the highest risk of loss due to insider behavior is probably also from the well-intentioned person trying to do their job.  I won’t elaborate on that topic here because Matt Flynn has recently done so very well in a discussion with IT Business Edge.

Does the distinction matter?  When talking about insider security solutions with IT professionals, the conversation often gravitates to concerns about a few malicious people, frequently concluding that the real need for insider security solutions is confined to a few people so malicious that they cannot be effectively stopped.

I suspect that if the real economic damage to organizational data from all sources could be accurately charted we would find the most compelling justification for securing against inadvertent harm from insiders.

Is Your IT Policy Working? IT Quality Assurance

January 17, 2008

This statement grabbed my attention this week. Mostly because it seemed to be concisely obvious (a good thing).

“You can generate all the policies that you want, but unless you have some kind of monitoring and enforcement mechanism, you don’t know if a policy is working or not,” says Bob Gorrie, information security project manager at USEC, a supplier of enriched uranium fuel for commercial nuclear power plants based in Bethesda, Md.

Source: Data loss start-ups sell out – Network World

Having spent more than a few years at Intel (read: manufacturing) I often view IT processes through the lens of manufacturing practice and see parallels. In a manufacturing world you would never think of establishing a process without establishing control limits and tests to tell whether you were operating within those limits. It would be a disaster for manufacturing to “run off the rails” without you noticing until large quantities of defective goods had been produced.

I think the same methodology (design the process, design the tests) is hugely beneficial to IT. Hooray for you (truly) if you are a practitioner of the “process determines results” school of thought in IT, and you have great policies, broadly communicated, understood and practiced. But frankly, if you don’t have a way to routinely tell whether your policies and processes are working the way you intend, you’re missing IT Quality Control.
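
To make the manufacturing analogy concrete, here is a toy sketch (all numbers invented) of classic control limits applied to an IT policy metric – say, the daily count of out-of-policy access grants. Days outside mean ± 3 sigma are the ones that warrant investigation:

```python
import statistics

def control_limits(baseline, sigmas=3):
    """Compute lower/upper control limits from a historical baseline."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - sigmas * sd, mean + sigmas * sd

baseline = [4, 5, 3, 6, 4, 5, 5, 4]   # historical daily counts (invented)
lo, hi = control_limits(baseline)

new_observations = [5, 4, 12, 3]
out_of_control = [x for x in new_observations if not lo <= x <= hi]
print(out_of_control)  # [12] -- the one day outside the control limits
```

Nothing about this is specific to security; it is the same test a fab engineer would run on a process metric, which is exactly the parallel I’m drawing.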

In the context of NetVision: if you have great role definition, automation and processes for giving out rights and managing identities, but you don’t have an arms-length (read: independent, third-party) check on the state of the system and the effectiveness of your controls, you are probably missing the element you need to stay on track toward your desired goals and to improve.

Identity Provides Security Context

January 3, 2008

We were talking to Eric Norlin about 2008 trendspotting. Given NetVision’s core raison d’être, we see a growing groundswell toward what we are calling “context” (see Eric’s post). Eric expanded our definition – which is good. But let me clarify what we mean in our narrower definition for a second.

What we’re seeing is identity management monitoring (at least in a corporate context) being used as a stalking horse for achieving proof of compliance and risk management regarding the insider threat. As in: “I am required to demonstrate that I have control over admin rights so I need to monitor this group membership for all changes”, and other similar examples.
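
A minimal sketch of that monitoring requirement (my own illustration, not NetVision’s actual mechanism): detect every change to an admin group by diffing successive membership snapshots, so each addition or removal becomes an auditable event.

```python
# Diff two membership snapshots of a sensitive group (e.g. admins)
# so every addition or removal surfaces as an auditable change.
def membership_changes(before, after):
    before, after = set(before), set(after)
    return {
        "added": sorted(after - before),
        "removed": sorted(before - after),
    }

yesterday = ["alice", "bob"]
today = ["alice", "carol"]
print(membership_changes(yesterday, today))
# {'added': ['carol'], 'removed': ['bob']}
```

In practice the snapshots would come from the directory itself, but the shape of the check – every change, not a sample – is the point.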

What we’re also observing is that providing such security (or evidence thereof) requires trolling through a lot of event data – often after the fact. As a result we’re seeing more and more customers asking us to link our risk assessment product with our change auditing product so that the search for risky behavior isn’t unguided. That’s what we mean by context. Instead of looking at the universe of data after the fact in order to document a conclusion you instead target your data gathering to areas of risk and obvious policy violation in the first place.

I am no expert on the subject of listening in on the phone calls of the world to find evidence of a threat to national security. But I believe what we’re talking about is metaphorically equivalent to whatever the national government does in order to decide what to listen to.

The question we hope to answer is: “Can this be done without creating a false negative?” – that is, without overlooking a breach of security or policy that doesn’t rise to the theoretical definition of risk. More on that later, but weigh in if you have an opinion.

Policing the Power of Identity – Security by and for Identity

December 3, 2007

I recently published a whitepaper entitled Policing the Power of Identity. It’s a vision (mine anyway) for the future use and success of identity in corporate computing. Use of identity gives us a “handle” to use in consistently assessing, analyzing, monitoring, etc. insiders. We developed multiple, fairly mature disciplines for dealing with “outsider” threats (firewall, IPS, anti-SPAM, anti-virus). We should have the same goal with protecting ourselves from insider threats – which are prevalent.

A reader of this whitepaper could accuse me of giving the impression that I think identity is the problem. That’s not the case. But as corporate IT uses identity more exhaustively for all its good purposes, identity becomes a handy mechanism for identifying insider threat – both potential and realized. This process could most accurately be described as “Policing Computing Power BY (using) Identity”. But identity, casually used, can also create a false sense of security. And in such an imperfect-use scenario identity itself can be a problem (or more accurately, poor identity management can be a problem). In that case the process we prescribe is accurately described as “Policing the Power of Identity”. And such cases are exceedingly common if our IT customers and contacts are any indication.

Either way, our goal is never to attempt to cast identity itself as bad. But instead, to identify practices, tools and standards that use identity to provide better security and to improve identity management (aka security) practice. Along the way we believe that proof of compliance with regulations, policies or best practices will be a natural by-product of our efforts; at least in the area where identity is implicated.

If this sounds like an interesting line of discussion to follow, join the conversation or let me join yours. We’ve had a number of offline comments back on the premises in the whitepaper. I’ll add those to this blog in upcoming posts.