IT Audit Rule #2: IT Audits don’t prevent loss

March 19, 2008

Sorry to leave this topic open for so long since the last post. Financing activities take top priority for a private company… enough said.

Let’s talk about IT Audit Rule #2. Remember, these are my rules, not an official set of commandments, but they are based on some experience in auditing.

IT Audit Rule #2: IT Audits don’t prevent loss.

Security’s intent is to stop loss. Audit’s intent is to verify the accuracy of something, typically by checking a sample of outcomes but also by making sure that critical controls are functioning. The theory of an audit is that if the right controls are consistently working, then the thing being asserted (in this case your data’s accuracy, or the state of your data security or privacy) is probably accurate.

Does an IT Audit help security? Absolutely. It will help to point out where weaknesses exist; where controls might be needed to prevent loss or inaccuracy.

And here’s one of the beauties of technology and automation: you can literally audit every event as it happens. So instead of sampling a few transactions and checking whether the outcome was right, you can audit every transaction to see if the outcome was right. Not only that it occurred (see the last post for the flaws of IdA products for auditing) but that it was right.
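The difference between sample-based and full-population auditing can be sketched in a few lines. This is a hypothetical illustration, not any real audit product: the transaction records and the correctness rule are invented for the example.

```python
import random

# Hypothetical transaction log: each record carries its inputs and the
# recorded outcome, so an auditor can recompute what the outcome should be.
transactions = [{"qty": q, "price": p, "total": q * p}
                for q, p in [(2, 5.0), (1, 9.5), (3, 4.0), (4, 2.5)]]
transactions[2]["total"] = 99.0  # one bad record slips in

def outcome_is_right(txn):
    """Re-derive the expected outcome and compare it to what was recorded."""
    return txn["total"] == txn["qty"] * txn["price"]

# Traditional audit: check a small sample and hope it is representative.
# Depending on which records are drawn, the bad record may never be seen.
sample = random.sample(transactions, 2)
sample_looks_clean = all(outcome_is_right(t) for t in sample)

# Automated audit: check every transaction, so the bad record is always found.
exceptions = [t for t in transactions if not outcome_is_right(t)]
print(f"{len(exceptions)} exception(s) found")
```

The point of the sketch is the last two lines: the full-population check is just as cheap to express as the sample, which is what automation buys you.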

But here is the corollary to Rule #2: don’t ask an IT Audit product to provide security, for the simple reason that if an IT Audit product is now reversing or stopping inappropriate events, it can no longer be trusted to audit (see Rule #1). It is tampering with the evidence of whether the processes and controls it is auditing are actually working.

Two separate processes need to exist in IT: the security process, which tries to create an outcome, and the IT Audit process, which verifies that the security process is working. If you have the option, don’t trust the vendor providing one solution to provide the other (back to Rule #1).

Human Error or Human Misbehavior

February 12, 2008

Many minds seem to be wondering something like this: is an organization’s data more at risk from an insider (employee, contractor, etc.) purposely doing damage, or from a well-intentioned employee?

It seems to be a relative certainty that one of the two represents the largest risk to an organization’s data. I read this article about a Deloitte survey, in which 91% of those surveyed said they were worried about the risk of employee misconduct related to information technology. I’d call 91% many minds.

When I was at Trend Micro we used to say that there would always be a virus threat as long as there were humans using computers.  It has become trite to suggest that virus writers relied on the thoughtless-but-innocent behavior of users.

But is that also true when it comes to damage done by insiders? I would hypothesize that, in absolute dollar terms, the highest risk of loss due to insider behavior is probably also from the well-intentioned person trying to do their job. I won’t elaborate on that topic here because Matt Flynn has done so very well in a recent discussion with IT Business Edge.

Does the distinction matter? When talking about insider security solutions with IT professionals, the conversation often gravitates to concerns about a few malicious people, frequently concluding that the real need for insider security solutions is confined to a few people so malicious that they cannot be effectively stopped.

I suspect that if the real economic damage to organizational data from all sources could be accurately charted, we would find the most compelling justification for securing against inadvertent harm from insiders.

Identity Provides Security Context

January 3, 2008

We were talking to Eric Norlin about 2008 trendspotting. Given NetVision’s core raison d’être, we see a growing groundswell toward what we are calling “context” (see Eric’s post). Eric expanded our definition, which is good. But let me clarify what we mean in our narrower definition for a second.

What we’re seeing is identity management monitoring (at least in a corporate context) being used as a stalking horse for achieving proof of compliance and risk management regarding the insider threat. As in: “I am required to demonstrate that I have control over admin rights, so I need to monitor this group membership for all changes,” and other similar examples.
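In practice, “monitor this group membership for all changes” boils down to diffing the group’s current members against a trusted baseline and turning every addition or removal into an auditable event. A minimal sketch of that idea follows; the group name and member lists are hypothetical, and a real deployment would pull snapshots from a directory service rather than hard-coded sets:

```python
# Hypothetical membership snapshots for a privileged group; in practice these
# would come from a directory service (eDirectory, Active Directory, etc.).
baseline = {"alice", "bob"}          # last known-good membership
current = {"alice", "carol"}         # membership observed now

added = current - baseline           # accounts granted admin rights
removed = baseline - current         # accounts whose rights were revoked

# Each difference becomes an auditable event: who changed, in which direction.
events = [("added", who) for who in sorted(added)] + \
         [("removed", who) for who in sorted(removed)]
for action, who in events:
    print(f"Domain Admins: {who} {action}")
```

The diff itself is trivial; the hard parts in a real product are capturing snapshots (or change events) reliably and attributing each change to the identity that made it.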

What we’re also observing is that providing such security (or evidence thereof) requires trawling through a lot of event data, often after the fact. As a result, we’re seeing more and more customers asking us to link our risk assessment product with our change auditing product so that the search for risky behavior isn’t unguided. That’s what we mean by context: instead of looking at the universe of data after the fact in order to document a conclusion, you target your data gathering to areas of risk and obvious policy violation in the first place.
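A minimal sketch of that “context” idea, with event shapes and risk rules invented purely for illustration: rather than storing everything and searching afterwards, the event stream is filtered against a risk policy up front, so only events touching flagged areas are kept for review.

```python
# Hypothetical audit events and risk policy; a real system would pull these
# from directory change logs and a risk-assessment product respectively.
events = [
    {"actor": "alice", "action": "read",   "target": "public-share"},
    {"actor": "bob",   "action": "modify", "target": "Domain Admins"},
    {"actor": "carol", "action": "read",   "target": "payroll-db"},
]

# Risk policy: targets the risk assessment flagged as sensitive.
high_risk_targets = {"Domain Admins", "payroll-db"}

def is_risky(event):
    """Keep only events that touch areas the risk assessment flagged."""
    return event["target"] in high_risk_targets

# Targeted gathering: only risky events are retained, so the after-the-fact
# search is guided rather than a trawl through everything.
for_review = [e for e in events if is_risky(e)]
print(f"{len(for_review)} event(s) retained for review")
```

The design choice is where the filter runs: applying it at collection time shrinks what must be stored and searched, at the cost of the false-negative question raised below.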

I am no expert on the subject of listening in on the phone calls of the world to find evidence of a threat to national security. But I believe what we’re talking about is metaphorically equivalent to whatever the national government does in order to decide what to listen to.

The question we hope to answer is: “Can this be done without creating a false negative?” That is, without overlooking a breach of security or policy that doesn’t rise to the theoretical definition of risk. More on that later, but opine if you have one.

Policing the Power of Identity – Security by and for Identity

December 3, 2007

I recently published a whitepaper entitled Policing the Power of Identity. It’s a vision (mine, anyway) for the future use and success of identity in corporate computing. The use of identity gives us a “handle” for consistently assessing, analyzing, and monitoring insiders. We have developed multiple, fairly mature disciplines for dealing with “outsider” threats (firewalls, IPS, anti-spam, anti-virus). We should have the same goal in protecting ourselves from insider threats, which are prevalent.

A reader of this whitepaper could accuse me of giving the impression that I think identity is the problem. That’s not the case. But as corporate IT uses identity more exhaustively for all its good purposes, identity becomes a handy mechanism for identifying insider threat, both potential and realized. This process could most accurately be described as “Policing Computing Power BY (using) Identity.” But identity, casually used, can also create a false sense of security. In such an imperfect-use scenario, identity itself can be a problem (or, more accurately, poor identity management can be a problem). In that case, the process we prescribe is accurately described as “Policing the Power of Identity.” And such cases are exceedingly common, if our IT customers and contacts are any indication.

Either way, our goal is never to cast identity itself as bad, but instead to identify practices, tools, and standards that use identity to provide better security and to improve identity management (aka security) practice. Along the way, we believe that proof of compliance with regulations, policies, or best practices will be a natural by-product of our efforts, at least in the areas where identity is implicated.

If this sounds like an interesting line of discussion to follow, join the conversation or let me join yours. We’ve had a number of offline comments on the premises in the whitepaper. I’ll add those to this blog in upcoming posts.