Published Nov 26, 2019
WRITTEN BY EVAN SCHUMAN
Evan Schuman has been a security writer for far longer than he’ll ever admit (OK, since 1988), having penned security stories for Computerworld, SCMagazine, VentureBeat, American Banker, CBSNews.com, HealthcareITNews, StorefrontBacktalk, Pymnts.com and many other sites and corporate blogs. He can be reached at email@example.com.
It’s essential that CISOs hunt out security holes, whether unintentionally created by a careless coder or deliberately created by a cyberthief. The word “hunt” is critical because today’s enterprise—with its hybrid cloud platforms, mobile apps, shadow IT, legacy code, and homegrown software (created by a programmer who left the company ten years ago)—gives these holes plenty of places to hide.
Even worse, holes can be created by reusing code for popular functions (why reinvent the wheel?) without bothering to run security checks on that outside code, regardless of how trusted the source is. In short, code from a highly trusted third party may not contain malware (and I stress “may”), but accidental holes from sloppy programming can come from Mother Teresa’s cyber workshop.
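One lightweight check that costs almost nothing is verifying that vendored third-party code is byte-for-byte what you reviewed. The sketch below pins a SHA-256 hash for each vendored file and refuses anything that drifts; the file path and manifest shown are hypothetical, and a real setup would use a signed lockfile or package manager rather than a hardcoded dict.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned hashes for vendored third-party files. In practice
# these would come from a signed manifest or lockfile, not a literal dict.
PINNED_SHA256 = {
    "vendor/util.js": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_vendored_file(path: str) -> bool:
    """Return True only if the file's SHA-256 matches the pinned value."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return PINNED_SHA256.get(path) == digest
```

A check like this does not find sloppy programming, but it does guarantee that the code you audited is the code you are actually running.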
Where to Look for Security Holes
This is a classic area where highly trusted sources might be trusted for the wrong things and the wrong reasons. Apple/iOS and Google/Android are certainly honorable sources for apps, but those vendors carefully check apps for policy violations, copyright problems and competitive issues. Neither Apple nor Google does any meaningful security checks (if you think either company is pen-testing apps before approving them for public download, you are not nearly suspicious enough), so all kinds of nasty bugs and holes can slip through.
And once those apps are downloaded by your IT team for corporate posting or, far more frightening, by any employee onto a shared BYOD mobile device, they can provide a perfect backdoor into your network. Even though neither Apple nor Google does meaningful security checks of app code, cyberthieves examine that code rigorously.
Legacy and homegrown code
Legacy and homegrown code present different challenges, but the one they share is key: They tend to be old code that no one bothers to recheck. Legacy mainframe code that seemed absolutely secure years ago—and very well might have been—could present a hole in 2019 due to platform changes that were never envisioned when it was created.
Similarly, custom homegrown code that delivers very specific functionality is rarely rechecked. There are two reasons for that. First, no one bothers because it’s not demanded in IT/Security policy. Second, the original programmer has often left the company, and no one wants to try to figure out the original code strategy. That is the perfect breeding environment for a security hole.
Beyond the headaches that a hybrid cloud platform (where some data stays on-prem and the rest sits on an IT-rented cloud environment) creates for compliance and security in general (who knows what settings the cloud staff changed and never bothered to tell anyone), it’s particularly problematic for code security holes.
First, cloud staff tend to add their own apps for various functions that work well within their environment. But have those apps been checked against their customers’ environments? Let us not forget that the Capital One breach, which impacted more than 100 million customers, reportedly leveraged firewall configuration settings that Amazon staff set in its cloud environment. And the person charged with launching the attack is a former Amazon AWS cloud employee who knew all about Amazon cloud settings.
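Misconfigured firewall rules of the kind implicated in that breach are exactly the sort of thing a periodic automated audit can catch. Below is a minimal sketch that flags ingress rules open to the entire internet on sensitive ports; the plain-dict rule format and port list are illustrative assumptions, not the real AWS API, which a production audit would query directly.

```python
# Illustrative audit of firewall-style ingress rules. Rule format is a
# simplified assumption: {"cidr": source range, "port": destination port}.
SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def find_open_ingress(rules):
    """Return rules that allow any source (0.0.0.0/0) on a sensitive port."""
    return [
        rule for rule in rules
        if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") in SENSITIVE_PORTS
    ]
```

Running a check like this on a schedule, against the cloud provider’s actual configuration API, surfaces the settings “the cloud staff changed and never bothered to tell anyone.”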
Security holes can sometimes exist on their own, but holes—especially the accidental ones—are often the result of code interactions. And the cloud is full of surprising code interactions.
Shadow IT is what happens when individual employees or even workgroups get tired of waiting for IT to deliver and go out on their own to purchase cloud environments to manage projects. As long as those efforts are hidden from Security and IT, the enterprise has zero chance of doing any security checks on, well, anything.
It’s pointless to write policy rules regulating shadow environments because if employees felt like abiding by IT or Security rules, they wouldn’t have gone shadow in the first place. But making sure that managers understand the dangers is still a good idea. The real problem materializes when apps created in those shadow environments eventually get absorbed into the enterprise’s networks. Hello, security holes.
App stores from Google and Apple are not the only way holes can sneak into a system via mobile. Employees will often conduct entire conversations on mobile devices, including accepting attachments from customers and prospects. When those unchecked files eventually get introduced into corporate networks, accidental holes and deliberate malware can come with them.
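A cheap first line of defense before such attachments reach the network is verifying that a file’s contents actually match its claimed type. The sketch below compares an attachment’s leading “magic bytes” against a small signature table; the table is a tiny illustrative subset I chose for the example, and a real intake gate would use a full file-type library plus malware scanning.

```python
# A sketch of rejecting attachments whose contents don't match their claimed
# extension. The signature table is a small illustrative subset, not exhaustive.
MAGIC = {
    ".pdf": b"%PDF",
    ".png": b"\x89PNG",
    ".zip": b"PK\x03\x04",
}

def extension_matches_content(filename: str, data: bytes) -> bool:
    """True only if a known extension's magic bytes lead the file contents."""
    for ext, signature in MAGIC.items():
        if filename.lower().endswith(ext):
            return data.startswith(signature)
    return False  # unknown extensions are rejected outright
```

This will not catch a malicious-but-valid PDF, but it does stop the common trick of an executable masquerading as a document.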
Another popular way that bad code can sneak into your otherwise pristine programming environments is through mergers and acquisitions. Even if your team has diligently addressed all of the issues referenced above, when your enterprise systems are merged with a different company’s systems, you not only inherit all of their holes (you did do security sweeps of all of their systems during the due diligence phase of the acquisition process, right?), but the interactions between the systems can create new ones.
What to Do with Security Holes
No matter the source, code holes are a critical issue, and they must be aggressively checked every day on every system. The problem with that, though, is that few enterprises have visibility into all of their data. And with growing use of cloud (especially shadow IT clouds), mobile devices and home machines, and a lack of strict compliance with backing up all corporate data through corporate IT, gaining that level of data visibility (a complete and comprehensive global data map) is only going to get more challenging.
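Building that data map starts with discovering where sensitive data actually lives. The toy pass below walks a directory tree and flags text files containing strings shaped like U.S. Social Security numbers; the single pattern and `.txt`-only scope are simplifying assumptions, and real data-discovery tools cover far more formats and data stores.

```python
import re
from pathlib import Path

# A toy data-discovery pass: flag files containing SSN-shaped strings.
# One regex and one file type only; real tooling handles many more.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_sensitive_files(root: str) -> list:
    """Return sorted paths of .txt files under root containing SSN-like data."""
    hits = []
    for path in Path(root).rglob("*.txt"):
        if SSN_RE.search(path.read_text(errors="ignore")):
            hits.append(str(path))
    return sorted(hits)
```

Even a crude scan like this, run against file shares and cloud buckets, tells you which of your hiding places hold data worth stealing.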
To state the obvious, what you need to protect is data. Cyberthieves can attack your systems to their hearts’ content and do minimal damage, as long as they can’t touch your data. Note: an attack on your OS that plants a virus or a worm that later copies, deletes or blocks access to your data—as a cyberterrorist or a ransomware attacker would do—is still an attack on your data, albeit one step removed.
The only viable option then is to make a complete list of the triaged risks of data access—limited, of course, to data that you know about—and craft a data protection plan. Until then, though, just beware of these hiding places.
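Triaging that list can be as simple as ranking each known data store by a sensitivity-times-exposure score so the protection plan starts with the riskiest assets. The field names and 1-to-5 weights below are illustrative assumptions I chose for the sketch, not a standard scoring scheme.

```python
# A sketch of triaging data-access risk. Each asset carries assumed
# 1-5 ratings for sensitivity (how damaging a leak would be) and
# exposure (how reachable the data is); higher product = fix first.
def triage(assets):
    """Return assets sorted from highest to lowest risk score."""
    return sorted(
        assets,
        key=lambda a: a["sensitivity"] * a["exposure"],
        reverse=True,
    )
```

A simple multiplicative score is easy to argue about in a risk review; the point is having a defensible ordering, not a precise number.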