Sunday, July 26, 2015

Can We Drive the Market to Force Secure Software Development?



Much of the information security profession discusses how software development must include security at each phase of the development process. But that is not something that is obviously being implemented in software development today. So the question arises of whether there needs to be some level of legal regulation that governs the development of certain software systems and ensures compliance with security practices. 

Government regulation is not the answer. The answer just may be the advent of an independent certification capability that can audit software development practices and the resultant products and certify them as compliant with some kind of security standard. Software developers would be able to demonstrate the security of the product through the certification audit, and the organizations that use the software could demonstrate it as well.  This gives the market something that is missing today: information for consumers on the degree of security that has gone into the software that organizations use to protect their personal and private information.

The concept is not novel: the auto industry thrives on the receipt of awards from Motor Trend, J.D. Power and Associates recognizes other industries for excellence in certain areas of performance, and, on a smaller scale, this kind of certification already exists in the form of SSAE-16 attestations.  

Would this kind of a capability work?  Would it drive improvement?  Would consumers react to certifications and drive business to those organizations that have such certifications?  Is this a better solution than government regulation?

Tuesday, July 21, 2015

The Challenges to Improve Authentication Assurance

Introduction

            Authentication protection is a concern in industry and in research, as authentication is relied upon to be a strong protection measure ensuring that communication exchanges between authenticated entities are secure and private and that data maintains its integrity.  Vulnerabilities in multiple types of authentication and subsequent communications have been reported for interactive user access (Perkovic, Cagalj, & Saxena, 2010; Volker, Richter, & Friedinger, 2004) and automated system access (Xu & Zhu, 2013; Alomari, Benamer, Muhammad Rafie Mohd, & Mohamed, 2014).  This leads researchers to consider new approaches to address the vulnerabilities and develop mechanisms that provide greater assurance of protection for authentication credentials.
Improvement Approaches

            There are two basic approaches that the literature seems to be taking to improve authentication assurance.  The first approach, primarily adopted for interactive user access and authentication, is to adopt a measure where the user enters authentication information in a non-traditional way.  The non-traditional way requires data input and exchange with the authentication service, but in a manner that, if intercepted, would not have meaning to an attacker.  In Perkovic et al. (2010) and Volker et al. (2004), the researchers not only measured their approaches to demonstrate their security, but also assessed them using the System Usability Scale.  Predictably, the new approaches did not rate well on ease-of-use scales when compared to more traditional approaches.  The Technology Acceptance Model (TAM) measures the extent to which the ease of use and usefulness of a technology can predict the degree to which a user will intentionally adopt it (Reynolds & Woods, 2006).  The decrease in ease of use could arguably be a factor in explaining why such capabilities have yet to be adopted.  It is possible that not only ease of use but also usefulness has inhibited adoption.  In the research on interactive user authentication, the authors made minimal effort to demonstrate a need for the technologies they developed.  If the threat that a technology addresses is not a concern to users, then the technology provided to address that threat will not be readily received.  This may be an area where more contributions to the body of knowledge would be helpful: the degree to which users find different security threat scenarios significant.  Such information could guide the focus on what capabilities need to be developed and whether such capabilities would have an improved chance at adoption.

            The second approach, which seemed common in the literature for addressing automated system authentication, is to enhance current capabilities so that the systems exchange an additional piece of information and perform a mathematical function on it.  This approach is analogous to a keyed hash or encryption: a shared secret is required to produce or verify the exchanged value, so an eavesdropper who lacks the secret learns nothing useful.  Alomari et al. (2014) present a type of shared key exchange in RFID authentication to validate communications and assure data integrity in an enhanced communication protocol.  In a similar derivative of this approach, Xu and Zhu (2013) introduce a smart card that provides a proxy-type/anonymizing capability between two systems in order to mask information about the remote system.  The proxy/anonymizing capability comes from a successful exchange of additional information between the proxy and the remote system.  In contrast to the ease-of-use challenges found in the interactive user authentication solutions, the authors demonstrate the ability to integrate these solutions with minimal user impact and cost.  This makes it difficult to ascertain what factors could be impeding the adoption of these solutions, and more research examining why these types of technologies are not being seen in the market would perhaps be useful.
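The general idea behind these keyed exchanges can be sketched as a single challenge-response round using an HMAC. This is an illustrative simplification of the shared-secret concept, not the specific protocols from the cited papers; the key value and function names below are hypothetical:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"pre-shared-secret"  # provisioned out of band between the two systems


def make_challenge() -> bytes:
    """Verifier sends a fresh random nonce so responses cannot be replayed."""
    return secrets.token_bytes(16)


def respond(key: bytes, challenge: bytes) -> bytes:
    """Prover returns a keyed hash of the challenge; without the shared key,
    an eavesdropper who intercepts the exchange cannot forge a valid response."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the keyed hash and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


# One round: verifier challenges, prover responds, verifier checks.
challenge = make_challenge()
response = respond(SHARED_KEY, challenge)
assert verify(SHARED_KEY, challenge, response)
```

The fresh nonce per round is what distinguishes this from simply transmitting a password: the intercepted response is useless for any future session.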

The Importance of the Problem

            Valid authentication is a concern across all sectors and represents both a human and a technology problem.  The use of compromised credentials is the primary attack vector leading to successful breaches in organizations (Rapid7 empowers organizations, 2014). Credentials are compromised through human issues (social engineering attacks), technology issues (compromised systems), and policy issues (weak passwords that are easily cracked).  Management attempts to address these issues with controls and capabilities appropriate to the threat and risk in the organization.  A common layer of protection is a strong detective capability, which should identify the use of compromised accounts regardless of how the attacker obtained the credentials.  

References

Alomari, S. A., Benamer, S. H., Muhammad Rafie Mohd, A., & Mohamed, H. H. (2014). APSEC+: An enhanced simple mutual authentication protocol for RFID security. International Journal of Academic Research, 6(5), 278–291. doi:10.7813/2075-4124.2014/6-5/A.39

Perkovic, T., Cagalj, M., & Saxena, N. (2010). Shoulder-surfing safe login in a partially observable attacker model. Financial Cryptography and Data Security Lecture Notes in Computer Science, 6052, 351-358. Retrieved from https://ifca.ai/pub/fc10/29_80.pdf

Rapid7 empowers organizations to easily simulate, detect and investigate compromised user credentials, today’s most common attacker methodology (2014). Business Wire.  Retrieved from http://ezproxy.library.capella.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bwh&AN=bizwire.c56777674&site=ehost-live&scope=site

Reynolds, R. A., & Woods, R. (Eds.). (2006). Handbook of Research on Electronic Surveys and Measurements. Hershey, PA, USA: IGI Global. Retrieved from http://www.ebrary.com

Volker, R., Richter, K., & Friedinger, R. (2004). A PIN-entry method resilient against shoulder surfing. Proceedings of the 11th ACM Conference on Computer and Communications Security, 236-245. doi:10.1145/1030083.1030116

Xu, J., & Zhu, W. T. (2013). A generic framework for anonymous authentication in mobile networks. Journal of Computer Science and Technology, 28(4), 732-742. doi:10.1007/s11390-013-1371

Monday, July 13, 2015

A New Security Capability is Needed

Over the past several years, the security industry has watched the threat vectors that attack information security systems change and adapt to overcome the increasing security controls that individuals and organizations have established. In response to improved technologies for identifying and addressing vulnerabilities, such as malware, web-application code weaknesses, and direct system exploits (Dey, Lahiri, & Guoying, 2014), attackers have taken back channels to gain access into systems. Instead of frontal attacks on systems, attackers utilize authorized individuals, and their approved credentials and roles within systems, to gain access to targeted systems (Dey et al., 2014). This generally comes in the form of some type of social engineering attack that, when successful, contaminates a user's endpoint and then uses the access the user creates through his or her credentials to gain a stronger foothold into backend systems and data (Purkait, 2012).

Industry’s Historical Response
The industry has focused on three approaches to mitigate this attack vector: technological improvement in detecting and preventing phishing-type attacks; technological improvement in detecting and preventing malware infection from such attacks; and improved training and awareness of end users to recognize such attacks and avoid becoming victims of them (Purkait, 2012). Absent from the approach is a strong effort to improve application security so that it detects anomalous behavior from an authorized user (or his or her credentials) and alerts on or mitigates the anomaly.
Take, for instance, a user with an infected system who is accessing (with appropriate credentials) an organization's customer information systems. Role-based access controls, which are increasingly being offered in systems and applied by organizations, are intended to limit the type of data the user has access to. However, application security controls typically are not developed to identify when the user is accessing records that are not associated with an authorized work task, accessing records at a higher-than-usual rate, or taking other actions that are authorized via credentials but anomalous for the user. 
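The kind of application-layer check described above could, at its simplest, compare a session against a learned per-user baseline. The following is a minimal sketch under my own assumptions; the class name, thresholds, and z-score approach are illustrative, not an existing product's design:

```python
import statistics


class AccessRateMonitor:
    """Learns each user's typical records-accessed-per-session and flags
    sessions far above that baseline. Thresholds here are illustrative."""

    def __init__(self, min_history: int = 5, z_threshold: float = 3.0):
        self.history: dict[str, list[int]] = {}   # per-user past session counts
        self.min_history = min_history            # sessions to "learn" before flagging
        self.z_threshold = z_threshold            # how many std devs counts as anomalous

    def observe(self, user: str, records_accessed: int) -> bool:
        """Record a session; return True if it looks anomalous for this user."""
        past = self.history.setdefault(user, [])
        anomalous = False
        if len(past) >= self.min_history:         # only judge after the learning period
            mean = statistics.mean(past)
            stdev = statistics.pstdev(past) or 1.0  # guard against zero variance
            if (records_accessed - mean) / stdev > self.z_threshold:
                anomalous = True
        past.append(records_accessed)
        return anomalous


monitor = AccessRateMonitor()
for n in [12, 9, 11, 10, 13, 11]:                 # normal daily workload
    monitor.observe("clerk01", n)
assert monitor.observe("clerk01", 500)            # bulk extraction is flagged
```

Note the learning period built into the sketch: the monitor cannot flag anything until it has seen enough normal sessions, which is exactly the deployment consideration raised later in this post.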
 
Limitations of Industry’s Response
This is a significant weakness in software security, considering that 40% of the breaches organizations have experienced in the past two years have their root in a compromised endpoint using a compromised user's valid credentials to extract valuable data from the system (The eight most common causes, 2013). In essence, information security technologies and practices have been increasingly effective at preventing a breach of a door to the organization's data. Security has not, however, been effective at ensuring that the activities through a properly opened door are not breach-related. 
 
The Challenge of Addressing the Limitations
This gap exists in software security for two reasons. First and foremost, developing security capability to “block the door” has been the mainstream focus of security efforts for the past two decades (Drinkwater, 2014). A shift in momentum and approach will undoubtedly take notable time to manifest. Secondly, identifying malicious activity within connections, transactions, and sessions that are otherwise authorized presents notable technical challenges. Embedding this kind of analytical capability into a software system is doable, but comes with additional cost. Generally, budget and business considerations look to drive down the cost of software and to bring it to market more quickly (Pass & Ronen, 2014). In addition, considerations would have to be added to the implementation approach for the software. In current technologies that attempt to identify anomalous behavior, there is a period where the software must “learn” the normal behavior patterns of users in order to identify anomalies in the future (Purkait, 2012).
 
Summary
A current challenge in the area of software security is to move past door blocking and role-based access controls, which were relevant controls in the past, and increase the capability to detect and prevent anomalous behavior initiated by the authorized user through the open door. Attackers are effectively using social engineering to plant malware on endpoints that rides on the authorized sessions of end users to perform data extraction on systems. Often this data extraction activity is atypical in one or more characteristics of how the end user engages with the system. Data shows that the traditional preventive controls implemented in organizations are ineffective against this attack type; a shift in focus in the industry, and improvements in technology, are needed to be effective against this breach type.
References
 
Dey, D., Lahiri, A., & Guoying, Z. (2014). Quality competition and market segmentation in the security software market. MIS Quarterly, 38(2), 589-A7. Retrieved from http://ezproxy.library.capella.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=iih&AN=95754036&site=ehost-live&scope=site
 
Drinkwater, D. (2014). Data breach discovery takes ‘weeks or months’. SC Magazine. Retrieved from http://www.scmagazineuk.com/data-breach-discovery-takes-weeks-or-months/article/343638/
 
Pass, S., & Ronen, B. (2014). Reducing the Software Value Gap. Communications Of The ACM, 57(5), 80-87. doi:10.1145/2594413.2594422
 
Purkait, S. (2012). Phishing counter measures and their effectiveness - literature review. Information Management & Computer Security, 20(5), 382-420. doi:http://dx.doi.org/10.1108/09685221211286548
 
The eight most common causes of data breaches (2013). Information Week. May 22, 2013. Retrieved from http://www.darkreading.com/attacks-breaches/the-eight-most-common-causes-of-data-breaches/d/d-id/1139795?

Monday, July 6, 2015

Time to Shift Focus to Detection and Response



One of the more well-known triads of information security is the role security has in assuring confidentiality, integrity, and availability of data [2].  A secondary triad is generated from these responsibilities: to prevent, detect and respond to malicious information security events (which we will call, for this paper, “breaches”).

But where should the focus be?  Historically, information security has invested much of its effort into the area of breach prevention [4].  I would offer that, in the future, this is not where security should invest.  Let’s look at the three primary drivers of a shift away from prevention.

Driver 1: It is not a matter of “if”, but of “when” a breach will occur

This type of language is becoming more and more common in the security discussion [1], [3], [5], [7]. In other words, a breach is almost certainly going to occur in an organization.  In fact, many discussions suggest that all organizations have already been breached, if you count pervasive malware such as bots that have already infected systems in the organization.  So, if this is the new landscape for information security – and prevention systems have lost and will continue to lose the battle – should the profession (as risk managers) keep pushing for preventative technology implementations?  I agree with the professionals who sounded off in [4] that there is too much focus on prevention.  Therefore, I would offer that the focus, in such a landscape, should be on detection and response.  If a breach is inevitable, given the advances in the threats and the continual proliferation of information and services in the environment, then let’s improve our detection and response capability.

Driver 2: The average breach takes seven months to detect

Articles and news releases from some of the more recent and prominent data breaches have exposed that, following investigation into the breach, organizations are finding that attackers had been in the network and systems for extended periods.  Mandiant reports that the current average time to detect a breach is seven months [6]. While the time to detection seems to be decreasing (the average in 2012 was 13 months), that length of time to discovery and response exacerbates the entire impact of the breach.  The longer breaches go unnoticed, the more the number and sensitivity of compromised records climb.  In addition, Mandiant reports that the number of organizations discovering their own breaches has declined in the past few years, meaning that more organizations are finding out from third parties.

Driver 3: Greater financial and reputational harm comes from larger breaches

Assuming that full breach prevention is not possible, organizations need to invest in their ability to minimize the scope of a breach, whether in the number of records or the type of data.  It is commonly known that the larger the breach, the more significant the financial and reputational impact [4].  Even under some of the most stringent regulatory conditions, a breach that is well contained does not require the organization to make public notification.  Investment in rapid identification and containment of a breach could provide financial benefits to the organization that prevention cannot (again, assuming breaches have occurred and will occur, and that detection will be slow under the current state of investment).

Wrapping it up

This document should not be interpreted to say that any and all investment in preventive capabilities should be abandoned.  That would be not only irresponsible and impractical, but probably impossible under most regulatory environments.  Instead, this paper challenges the continual push for prevention through newer technologies that, undoubtedly, will not achieve full prevention, and accepts the fact that breaches will occur.  With that landscape in mind, the industry should focus on detection and response capabilities that improve the ability to identify and remediate a breach so that organizational harm is mitigated.  The industry has historically demanded better delivery of preventive capabilities from security vendors.  It is time the industry shifted to demand improved delivery of detective capabilities so that we can be ready for what appears to be the inevitable breach.

References

[1] Be prepared for a data breach – it’s not a matter of ‘if’ but ‘when’. (2014). Information Security Buzz. Retrieved from http://www.informationsecuritybuzz.com/prepared-data-breach-matter/

[2] CIA Triad. (2015). WhatIs. Retrieved from http://whatis.techtarget.com/definition/Confidentiality-integrity-and-availability-CIA

[3] Cyber security breach: It’s not a matter of if, but when. (2014). Professional Risk. Retrieved from http://profrisk.com/2014/05/14/cyber-security-breach-its-not-a-matter-of-if-but-when/

[4] Drinkwater, D. (2014). Data breach discovery takes ‘weeks or months’. SC Magazine. Retrieved from http://www.scmagazineuk.com/data-breach-discovery-takes-weeks-or-months/article/343638/

[5] Lord, N. (2014). Data breach experts share the most important next step you should take after a data breach in 2014 – 2015 & beyond. Digital Guardian. Retrieved from https://digitalguardian.com/blog/data-breach-experts-share-most-important-next-step-you-should-take-after-data-breach-2014-2015

[6] Mandiant 2014 Threat Report (2014). Mandiant.  Retrieved from https://www.mandiant.com/resources/mandiant-reports/

[7] Vries, A. (2015). A data breach…It’s not a matter of if, but when. Association of Food Industries. Retrieved from http://www.afius.org/page-767462

Friday, July 3, 2015

Research for the Profession

Most people in the security industry are aware that they can find relevant research and whitepapers on the state of security from Verizon, IBM, Gartner, and Forrester, to name a few.  Through LinkedIn, I came across an offer of some other research, and I am posting it here for others to consider:

http://newenterprisetechnologies.org/proactivedatasecurity/

Tuesday, June 30, 2015

Reviewing Our Decades-Old Approaches

I sat in a professional exchange the other day here in Seattle, listening to the challenges that security professionals are encountering in their attempt to protect an organization that, if typical, is experiencing massive data growth and exchange, without crippling the business.

I felt like I was listening to the same conversation from a decade ago.  Granted, the extent of data growth and information exchange was nowhere near the level that organizations are experiencing today, but, at the time, it seemed significant.

In other words, our challenges haven't subsided.  They are the same, but with an increased magnitude.  And the "solutions" and "considerations" exchanged amongst professionals haven't changed:
Integrate security into business processes
Ensure visibility of risk matters at executive levels
Apply security controls to the most critical risk areas

My questions are many, but they simmer down to two:

1.  Why are we embracing the same approaches today that we embraced a decade ago, despite their lack of effectiveness?
2.  If one is of the opinion that these approaches are effective, then why are we seeing breaches of the frequency and magnitude that we are today?