CWE/SANS Top 25

With the release of the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors came a push to hold software developers liable for any insecure code they write.


Alan Paller, director of research for the SANS Institute, commented that "wherever a commercial entity or government agency asks someone to write software for them, there is now a way they can begin to make the suppliers of that software accountable for [security] problems."

Put in place to protect the buyer from liability, these efforts are also aimed at producing secure code from the very beginning, since vendors will no longer be able to charge for fixing vulnerabilities found in their own software.

Compiling the list

The Most Dangerous Programming Errors is a list compiled yearly by the Common Weakness Enumeration (CWE), a community initiative sponsored by the US Department of Homeland Security and the MITRE Corporation, together with the SANS Institute. Drawing on an international pool of roughly 40 software security experts from business, government, and academia, the Top 25 is built from the previous year's list through a private discussion board. The threats raised there were evaluated by the research team and narrowed down to 41 entries. Each entry was then rated on two metrics, prevalence and importance, and the 25 with the highest combined ratings were selected for the list.

The Top 25

In addition to ranking the Top 25 weaknesses, the report broke them down into three categories: Insecure Interaction Between Components, Risky Resource Management, and Porous Defenses. The following table lists each weakness with its rank, score, and category.

Rank | Score | Weakness | Category
1 | 346 | Failure to preserve web page structure (Cross-site scripting) | Insecure interaction between components
2 | 330 | Improper sanitization of special elements used in a SQL command (SQL injection) | Insecure interaction between components
3 | 273 | Buffer copy without checking the size of input (Classic buffer overflow) | Risky resource management
4 | 261 | Cross-site request forgery | Insecure interaction between components
5 | 219 | Improper access control (Authorization) | Porous defenses
6 | 202 | Reliance on untrusted inputs in a security decision | Porous defenses
7 | 197 | Improper limitation of a pathname to a restricted directory (Path traversal) | Risky resource management
8 | 194 | Unrestricted upload of a file with dangerous type | Insecure interaction between components
9 | 188 | Improper sanitization of special elements used in an OS command (OS command injection) | Insecure interaction between components
10 | 188 | Missing encryption of sensitive data | Porous defenses
11 | 176 | Use of hard-coded credentials | Porous defenses
12 | 158 | Buffer access with incorrect length value | Risky resource management
13 | 157 | Improper control of filename for include/require statement in PHP program (PHP file inclusion) | Risky resource management
14 | 156 | Improper validation of array index | Risky resource management
15 | 155 | Improper check for unusual or exceptional conditions | Risky resource management
16 | 154 | Information exposure through an error message | Risky resource management
17 | 154 | Integer overflow or wraparound | Insecure interaction between components
18 | 153 | Incorrect calculation of buffer size | Risky resource management
19 | 147 | Missing authentication for critical function | Porous defenses
20 | 146 | Download of code without integrity check | Risky resource management
21 | 145 | Incorrect permission assignment for critical resource | Porous defenses
22 | 145 | Allocation of resources without limits or throttling | Risky resource management
23 | 142 | URL redirection to untrusted site (Open redirect) | Insecure interaction between components
24 | 141 | Use of a broken or risky cryptographic algorithm | Porous defenses
25 | 138 | Race condition | Insecure interaction between components
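Several of the highest-ranked weaknesses can be avoided with well-known coding patterns. As an illustrative sketch (not taken from the report), a parameterized query keeps untrusted input out of the SQL text and so avoids the SQL injection weakness ranked number 2. The table name and data here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # Vulnerable pattern (CWE-89): building the query by concatenation
    # lets input such as "' OR '1'='1" rewrite the query's structure:
    #   "SELECT role FROM users WHERE name = '" + name + "'"
    # Safe pattern: a placeholder binds the input strictly as data.
    row = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

print(find_user(conn, "alice"))        # -> admin
print(find_user(conn, "' OR '1'='1"))  # -> None: the payload matches no name
```

The same principle, separating code from data, applies to the OS command injection weakness at number 9.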

Dealing with Weaknesses

If developers are expected to ensure that their code is free of every weakness in the Top 25, they need to a) know how to identify each weakness and b) know how to prevent it. The report therefore provides a detailed description of every entry on the list. Each description opens with a summary giving the weakness's prevalence rating, its remediation cost, the attack frequency, the consequences of the weakness being exploited, how easy the weakness is to detect, and how widely attackers are aware that it exists. The summary is followed by a discussion that briefly explains how an attack against the weakness is carried out and lists prevention techniques to help developers avoid making such mistakes in their code.
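A typical prevention technique of the kind the report describes is output escaping, which addresses the cross-site scripting weakness at the top of the list. A minimal sketch, assuming untrusted comment text is being written into an HTML page (the function name is illustrative):

```python
import html

def render_comment(comment):
    # Escaping special characters before inserting untrusted input into
    # a page stops it from being interpreted as markup (CWE-79, XSS).
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```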

What about OWASP?

In developing their Top 25 list, CWE/SANS included a comparison to the OWASP Top Ten, acknowledging the importance of OWASP's list while also recognizing distinct differences between the two. The clearest difference is scope: the OWASP Top Ten deals strictly with vulnerabilities found in web applications, whereas the Top 25 also covers weaknesses found in desktop and server applications. A further contrast lies in how each list is compiled: OWASP gives more weight to the risk each vulnerability presents, while the CWE/SANS Top 25 also factors in each weakness's prevalence. This is what gives cross-site scripting the edge in the Top 25, where it is ranked number 1 while OWASP ranks it number 2.

Working with the Top 25

To those who support adopting such standards, making the Top 25 a checklist that developers must follow to avoid lawsuits seems like a panacea for all programming vulnerabilities. Opponents claim it will simply result in costlier software. Neither side addresses what happens in the event of a zero-day attack. With an estimated 78% of all vulnerabilities being found in web applications, the likelihood of falling victim to an unknown threat is high.

So what happens if XYZ Software abides by the Top 25, but weeks after deploying a new application it is hit by an unknown threat? Is XYZ liable? Or is the buyer? And what if the buyer fails to protect the application? If the code is attacked because the buyer failed to secure other resources, where does the blame lie?

In my opinion, this has already been answered and is put into practice daily. Merchants who want to accept credit cards must comply with PCI standards that require either a code review or the deployment of a web application firewall. Taking either route ensures compliance, but the recommendation is to make use of both. Code review, which would help address Top 25 weaknesses, helps the developer identify weaknesses and fix them before releasing the application to the marketplace; building secure code from the beginning is always a best practice. Combining this with a web application firewall adds a line of defense against unknown vulnerabilities, helping stave off potential zero-day threats.
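The kind of defect a code review is meant to catch can be concrete and mechanical. As a hypothetical sketch of one such check, confining file access to a restricted directory guards against the path traversal weakness ranked number 7; `BASE_DIR` and `safe_path` are illustrative names, not from the report:

```python
import os

BASE_DIR = "/var/www/uploads"  # hypothetical restricted directory

def safe_path(filename):
    # Resolve both the base and the requested path, then confirm the
    # result stays inside the base. This blocks traversal sequences
    # such as "../../etc/passwd" (CWE-22) even through symlinks.
    base = os.path.realpath(BASE_DIR)
    candidate = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not candidate.startswith(base + os.sep):
        raise ValueError("path escapes the restricted directory")
    return candidate
```

A review that verifies every filesystem access goes through a gatekeeper like this is exactly the sort of discipline the Top 25 asks of vendors.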


