In reality, large-scale software failures have so far caused few truly catastrophic consequences, and their effect on critical infrastructure has been limited. Faulty code is ubiquitous, but specification errors and management errors are far more consequential for most systems. In short, human error is the most common cause of network and software security failures.

Do we ever learn? The question is justified when you look at the size and scope of the damage that network and software security failures can cause. Imagine that a company hit by such a failure trades globally; multiply that by countless other supposedly small incidents, and the full picture begins to emerge.

All it takes is a small human error in a piece of software installed on nearly every modern PC to open a fatal backdoor for hackers, one that can remain exploitable for many years.

Still, we should be aware of the long history of software blunders, and try to ensure that what we build now is better than what we had before.

The Java Bug

A particular Java vulnerability had existed since the release of version 7 of the Java plug-in, and was also present in versions 5 and 6. That's every Java release of the previous eight years. The vulnerability allowed an attacker to break out of the sandbox created by the plug-in and run malicious code on a remote computer fairly easily. Roughly one billion users were exposed, and counting.

Security consultants agree that patching this vulnerability may prove more complicated than its maker, Oracle, will admit. CERT experts at CMU/SEI suggest disabling the Java plug-in even after installing a security patch, and many experts claim it could take Oracle as long as two years to produce a truly secure version of the plug-in.

What have we learned? Even software that has been used and scrutinized for many years may still contain surprising human errors. Nothing we use is 100% hack-proof, so we should reduce the number of active runtimes and code execution environments on our systems to the bare minimum.

The XP Zero-Day Experience

The most recent zero-day vulnerability in Windows XP was found in 2010; it allowed hackers to remotely install applications and malware on the system through a gap in the Help and Support Center module. Shortly after the bug became public, Microsoft reported as many as 10,000 attacks, conducted from all over the world, on fresh or non-updated copies of Windows XP.

What have we learned? Before you install any system or piece of software, check for already-known vulnerabilities, download offline patches, and apply them before you plug the Ethernet cord into the socket.

An 8-year-old Web form leads to a disaster

To this day it is regarded as the single largest data breach in the history of IT.

And it all happened thanks to an eight-year-old Web form that languished, covered in cobwebs, somewhere on the corporate website. Maybe because it was created in a different day and age, or maybe due to a simple oversight by the programmer, this form was vulnerable to SQL injection. In plain English: if, instead of ordinary text, you pasted a special SQL script into the text box, the server would run it, giving you access to things you shouldn't have access to.
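The pattern behind such a flaw can be sketched in a few lines. This is a hypothetical illustration (the `find_user` function and the table are invented for the example, not Heartland's actual code), using Python's built-in `sqlite3` module:

```python
import sqlite3

# Toy database standing in for a back-end the form talks to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def find_user(name):
    # VULNERABLE: the user's input is pasted directly into the SQL text,
    # so the input can rewrite the query itself.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

print(find_user("alice"))        # [('alice',)] -- behaves as intended

# Malicious input turns the WHERE clause into a tautology and
# dumps every row instead of matching a single name.
print(find_user("' OR '1'='1"))
```

The injected string changes the query to `... WHERE name = '' OR '1'='1'`, which is true for every row. Anything the database account can see, the attacker can now see too.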

During those eight years Heartland Systems conducted several security audits, but most of them focused on the payment processing modules. No one ever bothered to look at the small Web forms buried deep in the byzantine structure of Heartland's front-end.

Now, there are many automated tools that can scan your Web app for potential SQL injection targets. Heartland's staff never used them; some enterprising hackers did. After gaining access to the corporate network, the attackers cracked the payment processing systems from the inside and stole an enormous amount of data.
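The standard defense, then and now, is to never mix user input into the SQL text at all. A minimal sketch of the fix, again using Python's `sqlite3` (the `find_user` name is illustrative), passes the value through a parameterized placeholder instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def find_user(name):
    # SAFE: the '?' placeholder sends the value to the driver separately
    # from the SQL statement, so the input can never change the query.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # [] -- treated as a literal string, no match
```

The injection payload that dumped the whole table before now simply fails to match any row, because it is compared as data rather than executed as code.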

That’s how one piece of obsolete code led to 130 million lost records and $3.5 billion in damages.

What have we learned? Be aware of all the legacy architecture in your system. A long-forgotten search window, an unused protocol, a legacy API: under the right circumstances, any of them can become an entry point for unwanted guests. Stay on top of the tools hackers use. When testing your systems for security, test all of them, not only the priority ones; they may be connected in ways you wouldn't imagine.

What is most striking about the three cases above is that all of them could have been easily avoided if someone, somewhere, at some time had taken extra care to guard against human error. We also need to educate ourselves about current and past software security issues.

That's something all software developers should keep in mind, unless they want to end up on a list like the one above and become another statistic in the catalogue of "surprising backdoors" and "avoidable flaws."


This entry was posted by Staff Writer on Tuesday, July 29, 2014 at 11:15:11 PM and is filed under Computer Security & Data Protection, Small-Medium Business.
