Connecting the Dots

The NY Times reported on 2/15/2012 the amazing story of Edward Maher, the suspect in a $1.5M 1993 armored car heist in the UK. Recently apprehended, he had been on the run in the US for almost 20 years, holding a number of regular jobs, including eight years at Nielsen, the television ratings company. This says to me that background checks today are more important than ever. Not only that, we must be sufficiently skeptical even of a clean background check: since Mr. Maher had never been convicted, he would have no criminal record. Kevin Mitnick, in his recent book, documents his own past employment at a prominent Denver law firm. Human judgment must be added to any security information, including the background check. You don’t want to hire the next Mitnick or Maher, at least not unknowingly.


Mark Russinovich’s recent book Zero Day: A Novel tells an action-packed tale of international hackers; the action passes through a NYC law firm and brings the entire firm down. Great story, but it seemed a little farfetched when I read it: in the book, the entire fictional law firm grinds to a halt as a result of a malware attack. Now we read about the major attacks on the law firm Puckett & Faraj (ABA Journal, February 6, 2012). A web site taken down and emails posted on YouTube is not what any firm wants; the firm was effectively shut down, although its site is now back up. Mr. Russinovich’s book does not seem so farfetched at this point. CFO Magazine just ran a story on cyber thieves targeting small and midsized businesses, “Where the Money Is, and the Security Isn’t”. All this is a good reminder for small and midsized law firms, at least those involved with litigation, to take steps to secure their data and business processes.


I have always been a big believer in background checks for new employees. While many companies do this prior to hiring someone, some still do not, and pretty much everyone relies on outside firms to perform the check. Yesterday, January 30, 2012, the NY Times reported the case of a church worker within the Archdiocese of New York accused of embezzling more than $1 million over seven years. This type of story appears periodically in different contexts. In this matter the Times reports that the woman was hired in 2003 without a criminal background check. The archdiocese has since discovered that she had previously been convicted of grand larceny in one similar case and had pleaded guilty to a misdemeanor in another.
In the current case, she is accused of writing checks to herself and then changing the internal records to indicate that the check was issued to a vendor. In the previous case, she had issued duplicate checks to company employees and then cashed them herself using check-cashing cards she had issued to herself.
Action items:
- If you are not doing background checks on employees, start a program
- If you are doing background checks, review the process with HR and your vendor
As William Shakespeare said in Much Ado About Nothing: “Let every eye negotiate for itself and trust no agent; for beauty is a witch against whose charms faith melteth into blood.”


A recent blog post by Jeff Bardin ("The Proliferation of Cyber Janitors") really resonated with me. He points out how much of the security industry is focused on incident response and breach notification. This started with California's SB 1386 in 2003 and more recently has become a requirement for breaches of health information (HIPAA/HITECH). While I don't have a problem with these privacy requirements, too many security programs are focused on reactive solutions to detect incidents and respond. Bardin calls this the rise of the Cyber Janitors, those responsible for cleaning up digital messes. If we don't figure out how to implement proactive security, we will be stuck in clean-up mode.
I totally agree with his comments. In fact I will go further and argue that the whole "Prevent-Detect-Respond" security mentality is broken. It originates from the old castle security model, where the "good guys" (us) are protected from the "bad guys" (them) by an impenetrable wall and moat. The wall prevented the bad guys from entering. Sentries detected if a breach was made. Soldiers were awakened if needed to repel the breach. This model worked well for several thousand years but does not work today. Cyber security problems are systems problems and there is no clear dividing line between good guys and bad guys.
I believe we need to put more emphasis on security management and systems design and less emphasis on exclusively technical solutions to what are non-technical problems. Adding more layers to the castle wall just does not work. This was clearly shown in many of the security breaches in 2011. Most security professionals would agree with this, but then put this approach at the bottom of the priority list. The thought is that maybe one more new security appliance will solve our problems.
Virtually all security breaches involve a bad actor taking advantage of internal errors or communications problems. We cannot eliminate the bad actors, and we can't anticipate their next attack vector. But we can improve our internal defenses. Continuous improvement models based on Capability Maturity Models have been successful in many software and systems engineering programs. These models can be applied to security processes to help measure and track operational excellence, or the lack thereof. I believe the adoption of these models will help us move beyond annual compliance checks and stay out of clean-up mode.
I will be hosting a panel discussion on this topic ("Achieving Operational Excellence in Security") at RSA 2012 on Thursday, March 1, for those planning to attend the conference. It would be great to have your comments and ideas at that session.


Most assessments of cloud security risks highlight data integrity and confidentiality issues. But the business bottom line is service availability. With many of today's cloud services offered without warranty, users should be cautious before relying on any one service. It is too easy to ignore the digital supply line behind a convenient service or API. I am reading more and more about service outages from Verizon, RIM and other vendors like LinkedIn, Twitter and those in the screenshots. A recent email reminded me of this issue again: a notice that Google's Personal Health Records service was closing by the end of 2011. Here is a major cloud provider discontinuing its service. What will happen to my healthcare data stored there? Or what about the class action lawsuit against Dropbox? Could that affect its viability? Or what about DigiNotar, the now-bankrupt Certificate Authority, leveled by a security breach? These days, any cloud vendor storing personally identifiable information is subject to legal action in the event of a breach.
Cloud customers need to exercise extreme caution in selecting vendors and in ensuring backup solutions are in place in case a vendor suffers an outage or simply goes out of business. First, we are in the "consumerization of IT" era, and without specific guarantees to the contrary we should expect cloud vendors to use the lowest-cost approaches to providing their services. Second, each cloud vendor is part of a digital supply chain that will at a minimum include a network vendor and a data center provider.
If the vendor is actually using a chain of N services to supply its service, where each component service has uptime U (in percent), then the net availability is approximately A = 100 - N x (100 - U); the exact figure is the product of the individual availabilities, which is nearly the same when U is close to 100. As an example, with five links and U = 99.9% for each link ("three nines"), the net availability will be only about 99.5% ("two nines"). The same type of calculation applies where several cloud services are used in parallel. These may accumulate over time without much planning, and each business process that uses more than one cloud service is thereby vulnerable to failures in any one of them.
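To make the arithmetic concrete, here is a minimal sketch in Python; the function name and figures are illustrative, not drawn from any vendor's SLA:

```python
# Net availability of a chain of services that must ALL be up is the
# product of the individual availabilities; A = 100 - N*(100 - U) is the
# first-order approximation used above.
def chain_availability(uptimes_pct):
    """Exact net availability (%) of serially dependent services."""
    net = 1.0
    for u in uptimes_pct:
        net *= u / 100.0
    return net * 100.0

five_links = [99.9] * 5                                   # five links at "three nines"
print(f"exact:  {chain_availability(five_links):.3f}%")   # 99.501%
print(f"approx: {100 - 5 * (100 - 99.9):.3f}%")           # 99.500%
```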
CIOs have spent years developing reliable data center operations. Now is the time to move carefully into cloud services, with a watchful eye on both short term availability issues and long term strategic vendor viability issues.


An essay in a recent Wall Street Journal (December 3, 2011) on the subject of compliance vs. security caught my attention. The article, “Starting Over With Regulation” by Philip K. Howard (also available at www.commongood.org), makes the case that government regulation in general is too complex to work. Recent failures by Congress to simplify Sarbox 404 illustrate where we are. According to Howard, the current approach is “deliberately designed to avoid human discretion”; but is this not the approach of many information security regulations? They are great at specifying detailed auditable controls, but short on helping to make sure the enterprise is meeting an overall security goal. This is the compliance approach to security, which assumes that if all the individual controls are OK, then the organization is OK. The many security breaches reported over the past 24 months cast some doubt on this assumption.
Recent regulatory efforts have begun to require monitoring of the overall security health of the organization. For example, the language in Massachusetts’ 201 CMR 17 requires: “Regular monitoring to ensure that the comprehensive information security program is operating in a manner reasonably calculated to prevent unauthorized access to or unauthorized use of personal information”. The word comprehensive is key to me; it says that metrics must be set up to keep track of the overall security program, not just individual controls. But how to do this? One way is by use of maturity levels. The maturity level approach acknowledges that continuous improvement of security is critical. Continuous improvement must be managed on a monthly basis, not annually via audit. If your security process maturity level is not improving, then your program is broken somewhere, no matter what the individual control metrics say. Maturity levels can be developed around your control framework of choice. I prefer ISO 27001. Another choice is COBIT; see the new Process Assessment Framework for COBIT. Yet another is the Open Group’s O-ISM3 process maturity framework for security. Whatever method you use, be sure to watch the security forest as well as the individual trees.


Do you think your information is secure within the federal government? You can make your own decision by reading the recent Information Security assessment by the Government Accountability Office (GAO). Some observations by GAO are expected, others are disturbing. Here are some statements that caught my attention:
1. Reported incidents grew by 650% from 2006 to 2010. Will the number of incidents simply overwhelm security staff?
2. The GAO noted that the IRS has not yet fully implemented required components of its security program: "...financial and taxpayer information remain unnecessarily vulnerable to insider threats and at increased risk of unauthorized disclosure, modification, or destruction; financial data are at increased risk of errors that result in misstatement; and the agency's management decisions may be based on unreliable or inaccurate financial information." A private business operating under SOX compliance, for example, would not be able to survive this type of report.
3. Regarding the FDIC, "...the Federal Deposit Insurance Corporation did not have policies, procedures, and controls in place to ensure the appropriate segregation of incompatible duties, adequately manage the configuration of its financial information systems and update contingency plans."
4. Regarding the National Archives and Records Administration, "...the agency did not always protect the boundaries of its networks by... a firewall, enforce... use of complex passwords, limit users' access to systems to what was required for them to perform their official duties."
The most disturbing observation by the GAO was that no agency has fully implemented an agencywide security program. In this case the GAO is referring to a security management program, including framework and activities for assessing risk, developing security procedures and monitoring effectiveness. This is a basic security management gap, which should be addressed and without which security technology will not be effective.
To summarize the report, in the GAO's words: "Persistent governmentwide weaknesses in information security controls threaten the confidentiality, integrity, and availability of the information and information systems supporting the operations and assets of federal agencies."
It seems like we need better security leadership to address these problems, not better technology.


Earlier this year I published an article in the ISSA Journal (May 2011) advocating the use of lean management techniques to manage security. The underlying observation is that security needs business management methods to tie together people, process, technology and partners.
A good article on the subject of lean appeared recently in the October 2011 Harvard Business Review: "Lean Knowledge Work", by Professors Bradley Staats and David Upton. They analyzed Wipro's adoption of lean in its software development process. Here are their main points as applied to lean security:
1. Eliminate Waste. In manufacturing we are all familiar with waste: overproduction; unnecessary transportation, inventory and worker motion; defects; overprocessing; waiting. The same issues come up in knowledge work and can be applied to security processes. For example, errors in implementing software changes will cause production outages. Errors in implementing firewall rules can add security holes. Poorly documented access management procedures or lack of automation will cause users to wait for application access. Review your security processes to see what ideas you can come up with to reduce wasted time or efforts.
2. Specify the Work. This translates to well-documented security policies, procedures and standards. Most companies have some type of security policy. Fewer have working procedures or standards that are adhered to. Absence of good procedures means more time training new employees and a higher probability of security gaps being introduced whenever changes are made.
3. Structure Communications. Much of security is built on good communications involving everyone in the organization. This includes awareness training for employees; security event reporting from employees, contractors and vendors; and reporting security risks in business terms to management. With a standard way of communicating with each of these stakeholders, results will be more predictable and security will be seen as a business function, not as an ad hoc technical accessory.
4. Address Problems Quickly. Security breaches do get addressed quickly. But too often security events are not analyzed for root cause, and the cause is never eliminated. Since we are stuck with highly flawed software and systems, it is critical to continually improve those systems through effective problem resolution.
5. Plan for an Incremental Journey. Too often security is driven by compliance and compliance is seen as the end goal. Real security requires a cultural change and must be put in place over time. The best way to make this happen is to set up a simple metric that tracks the effectiveness of the security program and set up a plan to improve on that metric quarter over quarter, just as with other business functions.
In summary, manage security as a business process, not a disconnected set of technical controls. Lean is a set of management tools to help do this.


Security managers spend significant amounts of time analyzing software vulnerabilities and patching them. I just looked at the Common Vulnerabilities and Exposures (CVE) database and see that it now lists 47,555 vulnerabilities. But how many security managers have analyzed or cataloged the social engineering vulnerabilities faced by their organizations? I suspect few. Virtually all security managers have a technical background, and social engineering skills (for good or evil) do not come naturally to most. Now, however, we have Kevin Mitnick's new book, Ghost in the Wires, the practitioner's handbook of social engineering. I don't normally choose to purchase or recommend books written by convicted felons, but in this case I am making an exception. Mitnick's story is full of specific examples of social engineering tricks. This is such a common attack vector today that his book is valuable reading for everyone involved with protecting information. From Ghost you can identify the attack vectors that apply to your organization and make sure that mitigating controls are in place.
Some examples from Mitnick's experience:
1. Reconnaissance--Mitnick was a master at researching his targets, learning their language and culture before calling anyone. Today this is much easier with web sites and social networks. You can't eliminate the web, but you need to periodically monitor information that is on your web site and on social networks. Do you really need the help desk number and process for resetting passwords published on your public facing web site? I have seen this at more than one site.
2. Tailgating--This is entering a building behind others. Not a problem in small firms or large firms with professional security guards. I have seen this in campus settings where the organization is distributed enough that people do not know each other, but the culture is relaxed. If this is an issue at your site, make it part of regular awareness training.
3. Impersonating Insiders--One of Mitnick's favorite hacks. In most of his calls to "marks" he posed as a tech support person, help desk person or other insider. Training is needed to remind employees that they must verify the identity of anyone asking them for sensitive information. Phone numbers can be spoofed as can IP addresses and email addresses. Trust but verify must be the mantra.
4. Dumpster diving--Another of Mitnick's tricks. Many businesses still have tons of paper with sensitive information. Do you have a process for disposing of it? Usually it will be outsourced. This hack is so common that it is worthwhile going over the process in detail. Do the same for disposing of electronic data on PCs, servers and other devices.
In summary, if you pay as much attention to social engineering vulnerabilities as to software and technical vulnerabilities, you stand a much better chance of staying out of the sequel to Mitnick's book.


The recent breach of 20,000 medical records at Stanford Hospital has me concerned. The institution is part of Stanford University Medical Center and is a top-rated health care provider. Are we making progress on HIPAA security? Are things getting better? If this institution cannot effectively protect patient data, who can? I analyzed the data on the HHS breach site (www.hhs.gov), which reports medical records breaches of more than 500 records (starting in 2009), to see whether a positive trend emerged. The results are shown in the graph.
The data does not show a clear trend in either direction. The vertical axis is pretty staggering in any case; the Stanford breach is only a blip on the chart. Our Federal government has been putting great emphasis on medical records privacy. Audits of HIPAA security will begin in 2012 and could affect healthcare providers and their business associates. Unfortunately, in perhaps typical government fashion, the detailed audit requirements have not been published.
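For anyone who wants to repeat the exercise, here is a minimal sketch of the aggregation, assuming the HHS breach list has been exported to a CSV; the file name and column names are my assumptions, not the portal's actual schema:

```python
import pandas as pd

# Hypothetical export of the HHS "500+ individuals" breach list.
df = pd.read_csv("hhs_breaches.csv", parse_dates=["Breach Submission Date"])

# Individuals affected per quarter; a flat series means no clear trend.
quarterly = (
    df.groupby(df["Breach Submission Date"].dt.to_period("Q"))
      ["Individuals Affected"]
      .sum()
)
print(quarterly)
```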
One of my concerns around HIPAA is the emphasis on compliance. Good compliance does not result, necessarily, in good security. Security managers need to develop effective security programs with compliance as only one "deliverable".
Should you be concerned about potential HIPAA security audits in 2012? Statistically, probably not. With 150 announced audits and a large population of covered entities (700,000) and business associates (1,500,000), your chance of an audit is about 0.000068 (150 out of 2.2 million). This is about the same probability as you or a family member being attacked by a shark. But the audits will not be random, and larger organizations will have a much greater chance of an onsite audit.
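For the record, the arithmetic behind that estimate:

```python
audits = 150
population = 700_000 + 1_500_000  # covered entities + business associates
print(f"chance of audit: {audits / population:.6f}")  # 0.000068
```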
Should you be concerned about HIPAA security breaches? Yes. Depending on the nature of the breached records, the consequences could be material. Consider the $1M fine levied by the Office for Civil Rights against Mass General for losing 200 records of AIDS patients.
In my next blog post, I will consider practical ways to secure HIPAA records and stay out of the news.


For more recent security blog posts that I have written, please check out:


One of the key challenges in building a security program is getting active participation from across the organization, from line workers to top management. All of these people have “day jobs” and security is too easily put out of mind.
“Why Every Project Needs a Brand (and How to Create One)”, appearing in the Summer issue of MIT Sloan Management Review, has ideas to address these challenges. Professors Karen Brown, Richard Ettenson and Nancy Hyer base the article on their research on project success and failure in a variety of industries. I believe that security projects and security programs can benefit from their ideas, perhaps even more than projects with tangible business goals. Opening a new factory has built-in branding and awareness, with clear and concrete goals and visible milestones along the way. But security programs often operate in the background, and branding must be more deliberate.
The five P’s of project branding, according to the authors, are: Pitch, Plan, Platform, Performance and Payoff. These efforts go hand in hand with the actual project work, from start to finish. The five P’s focus on the overall communication between the security team and the rest of the organization. The Pitch is the initial presentation to management, answering the question “why should we do this?” For security it needs to address issues such as reducing risk, satisfying customer demands, meeting compliance requirements and improving efficiency. The Plan is the time to bring a broader group into the planning process. This helps ensure that the security initiative will be successful and that participants will be on board after rollout. The Platform is the vehicle by which the Plan is communicated within the organization or to affected third parties. Performance includes communication of project results during the rollout phase. Obviously this assumes success, but even success is not enough if it is not appropriately communicated to the enterprise. Finally, Payoff marks the completion of the initiative. For example, this could be a celebration marking a successful audit result or the final implementation of an automated identity management system. Without a clear “end point” to the project, participants may feel that their efforts have not been worthwhile. This, in turn, will affect their continued participation in this or other security efforts.
Branding is all about communication with stakeholders. Its importance in security programs results from the fact that everyone in the organization is a stakeholder in the organization’s security.


I have to admit that I have never really understood the PDCA concept as it applies to information security. I do know that PDCA stands for Plan-Do-Check-Act, but I have never understood the difference between Do and Act, other than there is a Check step in between. Also, I can never remember which way the wheel goes, clockwise or counter-clockwise!
It seems like PDCA is from another era, when security improvements could be regularly scheduled on an annual basis; time moved more slowly then. Now we have new types of threats on a monthly basis, or even weekly basis. We need to respond to these quickly, no matter what phase of PDCA we happen to be in at that time.
I am a big believer in continuous improvement, which I call pdca. Continuous improvement in this case refers to progress made on a daily basis and documented monthly or quarterly. This pdca is part of lean management, kaizen, etc. Unless security is managed on this basis, it will slowly degrade, until the next annual audit.
A recent article in the May 2011 Harvard Business Review discusses these concepts in the context of motivating people. The connection to security is that, fundamentally, security is about people and motivating them to do the right thing. The article is “The Power of Small Wins”, by Teresa Amabile and Steven Kramer. It presents their research, illustrating that employees are best motivated when they are able to make continuous progress in their work. This might seem obvious, but contrast it with what managers think is the number one motivator: “recognition for good work”.
In short, pdca is critical to information security for two reasons: keeping up with changing threats, and helping to motivate security staff, IT staff and end users to do the right thing.


I tend to read any legal cases about information security, because they are one source where accurate root cause information on breaches can be found. Two very interesting decisions on security at banks were recently published. One is the May 27 US District Court decision in Patco v. People’s United Bank. An even more recent decision is Experi-Metal v. Comerica Bank, US District Court, June 13. Both cases involve online transactions in which large sums were fraudulently transferred out of business accounts to Eastern Europe. Unfortunately, these cases are not uncommon. The first one I became aware of was Joe Lopez v. Bank of America; Lopez lost $90,000 back in 2004 in a wire transfer to Eastern Europe. Last year my neighbor’s business lost over $300,000! In that case, the funds were recovered through the timely efforts of the FBI and their counterparts in Latvia.
These and many of the recently reported security breaches are caused by basic operational security problems, not super-high-tech exploits. The vast majority of breaches are also perpetrated on firms that have had multiple breaches; in other words, those businesses are not learning from past simple mistakes. In this post, I look at these two recent legal decisions and what we can learn from the mistakes of the banks. In another post, I will look at what we can learn from the mistakes of the banks’ clients.
In reviewing the courts’ decisions, the common factor is that neither bank had suitable fraud detection monitoring controls in place. Both violated one of the basic principles of security: monitoring. Today, all security controls can be hacked; the only way to control breaches is constant vigilance and the ability to react quickly.
In the Patco v. People’s case, $588,851 was transferred out of Patco’s business account to unknown entities. Someone had captured Patco’s bank login information, whether through a keystroke logger, a man-in-the-middle attack or other means was not identified in the proceedings. Patco sued the bank to recover the funds.
In the Patco case, the bank prevailed on summary judgment. The decision turned on what the bank had promised Patco and the legal interpretation of that contract under the Uniform Commercial Code.
But even more interesting is how the bank might have protected itself, its customer and its reputation. The bank did have fraud detection software in place. The criminals had used a computer not owned by Patco to log in and transfer funds. The risk score recorded by the bank was 790 for these transactions, which the risk scoring system flagged in real time as coming from a “very high risk non-authenticated device”. Log files of transaction risk scores subsequently showed that the highest previously recorded score was 214. In other words, the bank detected fraudulent transactions with a risk score nearly four times its previous maximum, but no one was watching!
This is one of the most common contributing factors to security breaches: no monitoring. Putting in place complex technical or administrative security controls does not work, unless those controls are monitored! At this point, we have to assume that all controls can be breached given high enough incentives. Therefore monitoring and incident response processes become essential to minimize potential losses.
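As a thought experiment, the missing control could be as simple as the following sketch; the names, threshold and amounts are illustrative, not taken from any bank's actual system:

```python
# Flag any transaction whose fraud risk score far exceeds the highest
# score previously seen, and hold it for human review.
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    risk_score: int

def needs_review(txn: Transaction, historical_max: int, factor: float = 2.0) -> bool:
    """Alert when a score is anomalously high relative to history."""
    if txn.risk_score > historical_max * factor:
        print(f"ALERT: {txn.account}: score {txn.risk_score} vs "
              f"historical max {historical_max}; hold for review")
        return True
    return False

# Patco's fraudulent transfers scored 790 against a prior maximum of 214,
# so even this crude rule would have fired (the amount here is made up).
needs_review(Transaction("patco-business", 100_000.00, 790), historical_max=214)
```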
Postscript on this case: after the breach, People’s United Bank reports that it now monitors the risk scores on ACH transactions!
The facts in the Comerica case were similar. In this case $1.9M was transferred out of Experi-Metal accounts after the criminals used a phishing attack to obtain the logon credentials. The 93 transfers were done in a matter of hours out of Experi-Metal’s Employee Savings Account, which regularly had a zero balance! The judge also noted in his decision that Experi-Metal had limited prior wire activity and that Comerica was aware of phishing attacks just prior to the fraudulent activity. His conclusion was that Comerica had not acted in “good faith” as required under the UCC and was therefore responsible for the outstanding $560,000 that was not recovered.
Unfortunately, security monitoring is often left to the last in implementing controls. The best approach is to establish first what information executive management needs to see, and then work backward to establish how that information will be collected.


A recent article by Michael Porter and Mark Kramer, "Creating Shared Value" (Harvard Business Review, January-February 2011), makes the point that a focus on "shared value" can help give birth to a new capitalism and move business beyond its short-term profit focus. Shared value, as defined by Professor Porter, is not giving money away, but rather a focus on goals that will both increase profits and improve the business and social environment in which a firm operates.
The practice of information security can contribute to creating shared value, if business is able to move away from the regulatory-response approach that most firms use now. Many businesses operate with a strategy of doing as little as possible to protect information unless regulators or compliance dictate otherwise. As Porter discusses, this response to regulation is not just the fault of business: if government prescribes detailed regulations, then business sees them as a cost to be avoided. Porter's suggestion regarding regulation is guidelines created by government, around which companies then create performance standards. Government would be responsible for "efficient and timely" reporting of results, which could then be audited.
To me this is a good approach that could help ensure better information security becomes a shared-value benefit. Keeping consumer information safe is clearly a social benefit, as one example. What we have now is the opposite: for health-related information we have detailed standards on how to secure the information, but almost no reporting and monitoring of results. The result is a reported 1M+ medical records breached in any given six-month period. What happened to the Hippocratic Oath?


Last week, while at the RSA show, I made a point of seeking out the HBGary booth; I was previously aware of their good technical reputation through webinars. Booth #556 was there, but not HBGary. Searching online, I learned that they had pulled out after being hacked by Anonymous. This event was possibly the most significant at the entire show. After all, the company states that its Razor product is the "Most Powerful Weapon Against Today's Targeted Attacks". At first glance, it might seem strange that a security technology company gets hacked. But, then again, this is just confirmation that technology does not secure data. People and process secure data, along with partners where needed. Technology automation can help.
The best review of the HBGary attack I have seen is at Ars Technica (www.arstechnica.com). What lessons can be learned from this?
1. Test your applications. The HBGary hack originated with a SQL injection attack, and guess what the #1 OWASP vulnerability is: SQL injection (a sketch of the standard defense follows this list). Go back to 2005, when the FTC filed suit against computer forensics company Guidance Software. The complaint: loss of confidential information through a SQL injection attack. Maybe coincidence, but three HBGary executives came from Guidance. Again: test your applications.
2. Use complex passwords; use different passwords for each application. The next steps in the HBGary exploit included cracking password hashes retrieved through the injection attack, and then using those passwords to access other systems where they had been reused. Since the 8-character passwords used by some HBGary executives included only lowercase letters and numbers, it was relatively easy for Anonymous to recover those passwords using pre-computed rainbow tables (see the keyspace arithmetic after this list). Depending on the information being accessed, we need to make it easy for users to employ complex passwords using all 95 printable characters on the keyboard. Even worse, HBGary executives used the same 8-character password for other sensitive systems, thus exposing company emails and other sensitive data. While some people argue for short, simple passwords, claiming that users will just write complex passwords on Post-it notes, the HBGary hack shows clearly the risks of using passwords that can be guessed. It also shows that security rules need to apply to executives as well as to the rest of the firm's employees.
3. Patch systems in a timely way. Another critical step in the hack was moving from user to administrator on a Linux support server. This was accomplished through a published vulnerability that had not been patched. Maybe this step in the hack was an exception: according to the 2010 Verizon Data Breach Report, none of the intrusions they investigated resulted from a patchable vulnerability.
4. Monitor intrusion detection systems. Although there is no mention of this in the Ars Technica analysis, we have to ask: who was monitoring the web server, email server and other platforms that were hacked? Each layer of HBGary's defenses had vulnerabilities. The only way to keep intruders out in this situation is monitoring and rapid reporting of incidents. This may be a difficult lesson to learn, since we all tend to treat technical defenses as impermeable.
5. Train and retrain users about social engineering. One of the most fascinating parts of the hack was the email exchange between the Anonymous hacker and an HBGary user, requesting Greg Hoglund's user ID AND password. This was willingly sent over the Internet. The moral here: never send this information without speaking directly to the recipient.
6. Carefully monitor your business partners. This incident spilled over to other firms, in particular the law firm Hunton & Williams. A number of the emails hacked at HBGary were to and from Hunton & Williams, discussing the use of HBGary Federal security services. These were from H&W partners and never intended to be aired in public. Ironically, H&W advertises on its website that it has "an internationally known, superb team of privacy professionals at the firm who understand the maze of privacy and data security issues facing global companies." A lesson learned: know who your business partners are, know what data they have access to, and work through risks to that data with them face to face.
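Two small illustrations of lessons 1 and 2 above. First, the standard defense against SQL injection is the parameterized query; this sketch uses Python's sqlite3 purely for illustration, and the same principle applies to any database driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '5f4dcc3b...')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query.
# conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# SAFE: the parameter is treated as literal data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload matched nothing and altered nothing
```

Second, the back-of-the-envelope keyspace arithmetic showing why 8-character, lowercase-plus-digits passwords fall so easily to precomputed rainbow tables:

```python
import math

weak = 36 ** 8    # lowercase letters + digits
strong = 95 ** 8  # all printable ASCII characters
print(f"36^8 = {weak:.1e} combinations (~{8 * math.log2(36):.0f} bits)")
print(f"95^8 = {strong:.1e} combinations (~{8 * math.log2(95):.0f} bits)")
print(f"ratio: ~{strong / weak:,.0f}x")
```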
In summary, good security is not about technical controls or architecture. It is about execution and monitoring of execution.
As Emmylou Harris sang: "C'est La Vie, You Never Can Tell"


A very good tutorial on DDOS attacks, much in the news in the past few months, was posted by the Berkman Center at Harvard University in December. The research is entitled: "Distributed Denial of Service Attacks Against Independent Media and Human Rights Sites", December 2010. The first part of this report outlines DDOS attacks in general, while the last half presents research on attacks against human rights sites around the globe.
DDOS statistics in the report, quoted from Arbor Networks, include: 1,300+ DDOS attacks per day across the global Internet; 49 Gbps maximum aggregate attack traffic; botnets with up to 1 million nodes. According to Arbor's February 1st report, DDOS attacks have now exceeded 100 Gbps. Mid-sized firms connected through Tier 3 ISPs are the most vulnerable. Those connected to Tier 1 or Tier 2 providers can take advantage of those providers' expertise in mitigating DDOS attacks.
I believe we will see more of this type of attack against commercial businesses. As more enterprises move into the cloud, are they more at risk from DDOS attacks against a fellow tenant in that cloud? Or will the superior skills of the cloud provider mitigate that risk?


We live in a time when information technology is turning everything inside out. This presents challenges and opportunities for information security professionals. I had the pleasure this week of listening to a presentation by Michael Rogers at LegalTech in NYC. The subject of his talk was information technology in 2020. Mr. Rogers calls himself a "practical futurist" and can be found at www.practicalfuturist.com. Here are my security-related takeaways from his comments:
1. Everything will be more mobile. Although the small screens of smart phones and portable computers might seem a limitation, new input and output devices, such as picoprojectors that project a screen on the wall and heads-up goggles, will facilitate the concept of working anywhere. These devices will continue to make securing the enterprise and the home more difficult.
2. More and more relationships and business will be conducted virtually. While traditional business has been done through face-to-face handshakes, the millennial generation and succeeding generations are more comfortable with virtual relationships. We need to come up with something to facilitate online trust. Can we create a federal standard for a secure legal identity?
3. Mr. Rogers talks about the "Internet of things", where everything has an IP address. More IP addresses means more entry points for hackers, whether it be through Internet connected cars or even Internet connected dumpsters. The Internet connected car could facilitate pay as you go insurance, but could also be a target for fraudsters. I'm not sure about risks associated with Internet connected dumpsters!
The convergence of social media, mobility and cloud is going to challenge security professionals in these areas and many others!


I recently had a scary experience with Amazon. I regularly order items on the site and have not had significant problems. However, yesterday was different. I was ordering an emergency flashlight and a four-way travel power strip and was about to complete my order when I noticed that the shipping charges totaled $1,055.44. See the screen shot for what I saw. Fortunately I caught this and didn't click "Place your order". Amazon explained to me that the seller, not Amazon, had incorrectly provided this shipping cost. Was it fraud, a computer error or a simple human error? I don't know. Did anyone else order the flashlight before me? Here is an example of a data governance problem: Amazon is importing erroneous data without checking the information. It is Amazon's reputation that will suffer if someone really does order the $1,000 flashlight. I believe there is an opportunity for security professionals to get more involved with these types of problems. It doesn't really matter whether the data "glitch" is fraud, a data import error or human error; the effect on the customer relationship is the same. As an entry point into this field, check out www.infogovcommunity.com.
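A minimal sketch of the kind of import-time sanity check that could catch this; the field names, thresholds and item price are hypothetical, purely for illustration:

```python
def validate_listing(item_price, shipping_cost, max_shipping=100.0, max_ratio=2.0):
    """Return data-quality flags for a seller-supplied listing."""
    flags = []
    if shipping_cost > max_shipping:
        flags.append(f"shipping ${shipping_cost:,.2f} exceeds cap ${max_shipping:,.2f}")
    if item_price > 0 and shipping_cost > item_price * max_ratio:
        flags.append(f"shipping is {shipping_cost / item_price:.0f}x the item price")
    return flags

# An (assumed) $25 flashlight with the $1,055.44 shipping charge trips both rules.
print(validate_listing(item_price=24.99, shipping_cost=1055.44))
```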


I believe that information security professionals can learn from disasters reported in other areas. After all, the basic security mission of prevent, detect and respond is the same whether the assets being protected are bytes of data or barrels of oil.
Yesterday the National Oil Spill Commission released its final report on the Deepwater disaster of April 20, 2010. The section on root causes was especially interesting; it is not often that we get a real analysis of the root cause of a security incident. In this case the identified root causes were failures in management and communications, both of which directly apply to information security management.
Here are the causes identified by the Commission and the corresponding actions that should be taken by security managers to help avoid a security disaster:
1. There was no process to evaluate the risks associated with last minute changes in well design or procedures. This highlights the necessity of security representation on the Change Advisory Board as well as a strong change management process overall.
2. Inadequate testing of well processes before utilization. Again, this highlights the need for a strong change management process and QA function.
3. Inadequate communications between BP, Transocean and Halliburton. Most security operations today are at least partly outsourced to one or more vendors. In the case of Deepwater there were numerous communications failures between vendors and between well management and operational personnel. This highlights the need for including vendors in the security incident process and for expanding this process to include security events that may be leading to a larger incident.
4. Inadequate communication of a previous near-disaster. Another rig operated by Transocean had experienced a similar blowout four months prior to the Deepwater disaster. Transocean had prepared an advisory regarding the event, but it was not communicated to the Deepwater team. This highlights the need for regular review of security incidents by security management and for continuous improvement of security controls.
The Commission report states that this accident was the result of mistakes and was avoidable. Implementing the above procedures will help eliminate similar types of avoidable information security disasters.

