Tuesday, 31 January 2012
SECURITY MEMO: IT CAN'T HAPPEN HERE, CAN IT?

I have always been a big believer in background checks for new employees.  While many companies do this prior to hiring someone, some still do not, and most rely on outside firms to do the checking.  Yesterday, January 30, 2012, the NY Times reported the case of a church worker within the Archdiocese of New York, accused of embezzling more than $1 million over seven years.  This type of story appears periodically in different contexts.  In this matter the Times reports that the woman was hired in 2003 without a criminal background check.  The archdiocese has since discovered that she had previously been convicted of grand larceny in one similar matter and had pleaded guilty to a misdemeanor in yet another.

In the current case, she is accused of writing checks to herself and then changing the internal records to indicate that the check was issued to a vendor.  In the previous case, she had issued duplicate checks to company employees and then cashed them herself using check-cashing cards she had issued to herself.

Action items:

  1. If you are not doing background checks on employees, start a program
  2. If you are doing background checks, review the process with HR and your vendor

As William Shakespeare wrote in Much Ado About Nothing:  “Let every eye negotiate for itself and trust no agent; for beauty is a witch against whose charms faith melteth into blood.”

Posted on 01/31/2012 11:43 AM by Frederick Scholl
Friday, 27 January 2012
How Not To Be a Cyber Janitor

A recent blog post by Jeff Bardin ("The Proliferation of Cyber Janitors") really resonated with me.  He points out how much of the security industry is focused on incident response and breach notification.  This started with California's SB 1386, effective in 2003, and more recently has become a requirement for breaches of health information (HIPAA/HITECH).  While I don't have a problem with these privacy requirements, too many security programs are focused on reactive solutions: detect the incident, then respond.  Bardin calls this the rise of the Cyber Janitors, those responsible for cleaning up digital messes.  If we don't figure out how to implement proactive security, we will be stuck in clean-up mode.

I totally agree with his comments.  In fact I will go further and argue that the whole "Prevent-Detect-Respond" security mentality is broken.  It originates from the old castle security model, where the "good guys" (us) are protected from the "bad guys" (them) by an impenetrable wall and moat.  The wall prevented the bad guys from entering.  Sentries detected if a breach was made.  Soldiers were awakened if needed to repel the breach.  This model worked well for several thousand years but does not work today.  Cyber security problems are systems problems and there is no clear dividing line between good guys and bad guys.  

I believe we need to put more emphasis on security management and systems design, and less on purely technical solutions to what are often non-technical problems.  Adding more layers to the castle wall just does not work, as many of the security breaches of 2011 clearly showed.  Most security professionals would agree, but then put this approach at the bottom of the priority list, hoping that one more new security appliance will solve our problems.


Virtually all security breaches involve a bad actor taking advantage of internal errors or communications problems.  We cannot eliminate the bad actors.  We can't anticipate their next attack vector.  But we can improve our internal defenses.  Continuous improvement models based on Capability Maturity Models have been successful in many software and systems engineering programs.  These models can be used to focus on security processes and to measure and track operational excellence, or the lack thereof.  I believe adopting them will help us go beyond annual compliance checks and keep us out of clean-up mode.


I will be hosting a panel discussion on this topic ("Achieving Operational Excellence in Security") at RSA 2012 on Thursday, March 1, for those planning to attend the conference.  It would be great to have your comments and ideas at that session.

Posted on 01/27/2012 5:00 PM by Frederick Scholl
Tuesday, 3 January 2012
Don't Forget Cloud Availability


Most assessments of cloud security risks highlight data integrity and confidentiality issues.  But the business bottom line is service availability.  With many of today's cloud services offered without warranty, users should be cautious before relying on them.  It is too easy to ignore the digital supply chain behind the convenient service or API.  I am reading more and more about service outages from Verizon, RIM and other vendors such as LinkedIn and Twitter.  A recent email reminded me of this issue again: Google's Personal Health Records service was closing by the end of 2011.  Here is a major cloud provider discontinuing its service.  What will happen to my healthcare data stored there?  Or what about the class action lawsuit against Dropbox?  Could that affect its viability?  Or what about DigiNotar, the Certificate Authority bankrupted by a security breach?  These days, any cloud vendor storing personally identifiable information is subject to legal action in the event of a breach.

Cloud customers need to exercise extreme caution in selecting vendors and in ensuring backup solutions in case the vendor suffers an outage or simply goes out of business.  First, we are in the "consumerization of IT" era, and without specific guarantees to the contrary we should expect cloud vendors to use the lowest-cost approaches to providing their services.  Second, each cloud vendor is part of a digital supply chain that includes, at a minimum, a network vendor and a data center provider.

If the vendor actually relies on a chain of N services to deliver its own, where each component service has uptime U (in percent), the net availability is approximately A = 100 - N x (100 - U); the exact value is (U/100)^N x 100.  As an example, with five links and U = 99.9% for each link ("three nines"), the net availability drops to only about 99.5%.  The same type of calculation applies when a business process uses several separate cloud services side by side.  These can accumulate over time without much planning, and a process that depends on more than one of them is vulnerable to a failure in any one.
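
To make the arithmetic concrete, here is a minimal Python sketch of the chain-availability calculation described above.  The five-link example and its uptime figures are illustrative assumptions only, not measurements of any particular vendor.

    # Minimal sketch: availability of a chain of cloud services that must
    # all be up for the business service to work (series dependency).

    def chain_availability(uptimes_pct):
        """Exact net availability (percent) when every link must be up."""
        total = 1.0
        for u in uptimes_pct:
            total *= u / 100.0
        return total * 100.0

    def chain_availability_approx(uptimes_pct):
        """Linear approximation: 100 minus the sum of each link's downtime."""
        return 100.0 - sum(100.0 - u for u in uptimes_pct)

    links = [99.9] * 5   # five links, each at "three nines" (assumed figures)
    print(round(chain_availability(links), 3))         # 99.501
    print(round(chain_availability_approx(links), 3))  # 99.5

    # At roughly 99.5% availability, expected downtime is about 44 hours a year:
    downtime_hours = (1 - chain_availability(links) / 100.0) * 365 * 24
    print(round(downtime_hours, 1))                    # ~43.7

The exact and approximate figures agree closely here because each link's downtime is small; the gap widens as links are added or as individual uptimes drop.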

CIOs have spent years developing reliable data center operations.  Now is the time to move carefully into cloud services, with a watchful eye on both short-term availability issues and long-term strategic vendor viability.

Posted on 01/03/2012 12:10 PM by Frederick Scholl