
Tons of JavaScript Bugs in Node.js Ecosystem – Automated Discovery


Here's an interesting paper from the recent 2022 USENIX conference: Mining Node.js Vulnerabilities with Object Dependency Graphs and Queries.

We're going to cheat a bit here by not delving into the core research presented in the paper (reading it calls for some mathematics, plus a working knowledge of operational semantics notation), which describes a source code analysis tool the authors call ODGEN, short for Object Dependency Graph Generator.

Instead, we'd like to focus on what the ODGEN tool was able to discover almost automatically when it was turned loose on the Node Package Manager (NPM) JavaScript ecosystem.

One important fact here is that their tool performs what's known as static analysis.

That's where you aim to review code for possible (or actual) programming blunders and security holes without actually running it.

Testing by execution is a much more time-consuming process that typically takes longer both to set up and to run.

But as you can imagine, so-called dynamic analysis – actually building the software so that it runs in a controlled way and can be exposed to real-world data, rather than just "eyeballing the code carefully and intuiting how it works" – generally gives much more thorough results, and is much more likely to expose mysterious and dangerous bugs.

However, dynamic analysis isn't just time-consuming; it's also difficult to do well.

That makes it very easy to do dynamic software testing badly, because even if you spend ages on the task, you can easily end up with an impressive-sounding number of tests that aren't as varied as you think, and that your software will almost certainly pass no matter what. Dynamic software testing can end up like a teacher who sets the same exam questions year after year, so that students who concentrate entirely on practising "past papers" end up doing just as well as students who have genuinely mastered the subject.

A tangled web of supply chain dependencies

As you'll know from today's enormous open source ecosystem, with its global repositories such as NPM, PyPI, PHP Packagist and RubyGems, many software products rely on extensive collections of other people's packages, forming a complex and hard-to-untangle web of supply chain dependencies.

Implicit in those dependencies, as you can imagine, is a dependency on the dynamic test suites provided by each underlying package, and those individual tests usually don't (indeed, can't) take into account how all the packages will interact when they're combined into your own unique application.

So, although static analysis on its own isn't enough, it's nevertheless a great starting point for scanning software repositories for glaring holes, not least because it can be done "offline".

In particular, you can regularly and routinely scan all the source code packages you use, without needing to build runnable programs out of them, and without needing to devise reliable test scripts that exercise those programs in a realistically wide range of ways.

You can even scan entire software repositories, including packages you don't currently use, which lets you weed out code (or identify authors) you've decided not to trust before you ever try it out.

Even better, you can use some kinds of static analysis to work through all your software looking for bugs caused by programming mistakes similar to one that's just been found – whether via dynamic analysis or a bug bounty report – in one particular product.

For example, imagine a real-world bug report that pinpoints a use-after-free memory error at a specific location in your code.

A use-after-free is where you're certain you've finished with a particular block of memory, so you hand it back for use elsewhere, but then forget that it's no longer yours and carry on using it anyway – a bit like absent-mindedly driving "home" from work to the house you moved out of months ago, and wondering why there's a strange car in your driveway.

If someone copied and pasted the buggy code into other software components in your company's repository, you might be able to find those cases with a text search, provided the code hasn't changed much.

But if other programmers merely followed the same flawed coding idiom, they probably ended up with code that looks quite different as text (in technical terms, code that is lexically different)…

…and at that point, text-based searching becomes almost useless.
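To see why, here's a contrived sketch of our own (not an example from the paper): two lexically different Node.js fragments that share the same underlying flaw, namely letting caller-supplied data reach a shell command:

```javascript
// Contrived sketch (ours, not from the paper): two lexically different
// fragments with one identical underlying flaw - caller-supplied data
// flows into a shell command.
const { exec } = require("child_process");

// Fragment A: string concatenation, one set of names
function listFile(file) {
  exec("ls -l " + file, (err, out) => console.log(out));
}

// Fragment B: template literal, different names and layout
function showEntry(entryName) {
  const cmd = `ls -l ${entryName}`;
  exec(cmd, (err, out) => console.log(out));
}

// A text search for 'exec("ls -l " +' finds A but not B; a
// data-dependency search ("does a function parameter flow into
// exec()?") flags both.
```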

Wouldn't it be handy, then, if you could statically search your entire codebase for latent programming blunders based on features such as code flow and data dependencies, instead of relying on text strings?

Well, in the USENIX paper we're looking at here, the authors set out to build a static analysis tool that combines a wide range of code characteristics, notably including how one piece of code affects the results of another.

That process relies on the object dependency graphs mentioned above.

Very greatly simplified, the idea is to label your source code statically so that you know which combinations of code and data (objects) used at one point in the program affect the objects used later on.

You can then search for known-bad code behaviours – code smells, in the jargon – without actually having to test the software in a live run, and without relying solely on text matching in the source code.
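To give a feel for what gets recorded (our own much-simplified sketch, not the paper's actual graph notation), consider how attacker-controlled data can propagate from object to object until it reaches a dangerous sink:

```javascript
// Much-simplified sketch (ours, not the paper's notation) of the data
// dependencies an object dependency graph captures.
function handler(request) {
  const name = request.query.name;        // object A: attacker-controlled
  const greeting = "Hello, " + name;      // object B: depends on A
  const banner = greeting.toUpperCase();  // object C: depends on B
  eval('console.log("' + banner + '")');  // sink: depends on C <- B <- A
}
// A graph query such as "does attacker-controlled data ever reach
// eval()?" walks the chain A -> B -> C -> sink and flags handler(),
// no matter what the variables are called or how the code is laid out.
```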

In other words, you may be able to detect when coder A has produced the same sort of bug as coder B, regardless of whether A literally copied B's code, followed B's flawed advice, or simply picked up the same bad workplace habits as B.

Broadly speaking, good static analysis of your code – despite never watching the software actually running – can help you pinpoint poor programming right at the start, before your projects acquire the kind of subtle (or rare) bugs that might never show up in practice, even under extensive and rigorous field testing.

And that brings us to the story we set out to tell you in the first place.

300,000 packages processed

The paper's authors applied their ODGEN system to 300,000 JavaScript packages from the NPM repository, to filter out those that the system suggested might contain vulnerabilities.

Of those, they kept the packages with more than 1000 downloads a week (they apparently didn't have time to process all the results), and investigated further to identify the packages in which they believed they'd found exploitable bugs.

In those packages, they found 180 harmful security bugs, including 80 command injection vulnerabilities (where untrusted data can be sneaked into system commands, typically opening the door to remote code execution) and 14 further code execution bugs.
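For the curious, here's a minimal sketch of our own (not one of the bugs reported in the paper) showing the typical shape of a Node.js command injection, along with a safer alternative:

```javascript
// Minimal sketch (ours, not a bug reported in the paper) of a typical
// Node.js command injection, plus one common remediation.
const { exec, execFile } = require("child_process");

// Vulnerable: a filename like "x; rm -rf ~" smuggles in a second command.
function countLinesUnsafe(filename, done) {
  exec(`wc -l ${filename}`, (err, stdout) => done(err, stdout));
}

// Safer: execFile() passes arguments directly, with no shell to trick.
function countLinesSafer(filename, done) {
  execFile("wc", ["-l", filename], (err, stdout) => done(err, stdout));
}
```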

Of those 180 bugs, 27 were ultimately given CVE numbers, recognising them as "official" security holes.

Unfortunately, those CVEs are all dated 2019 and 2020, because the relevant part of the work behind this paper was done more than two years before the paper itself was published.

Nonetheless, even if you don't work at the comparatively unhurried pace that academic researchers can enjoy (for most front-line cybersecurity responders, fighting today's cybercriminals means keeping any research short enough that its results can be used right away)…

…this paper is a reminder that if you're looking for research topics to help fight supply chain attacks on today's giant software repositories, static code analysis shouldn't be overlooked.

Life in the old dog yet

Static analysis has fallen out of favour somewhat in recent years, not least because popular dynamic languages such as JavaScript make static processing frustratingly difficult.

For example, a JavaScript variable can be an integer at one point, then have a text string "added" to it – a mistake, perhaps, but perfectly legal JavaScript – thus turning it into a text string, and later still be converted into a completely different type of object.
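A quick illustration of our own:

```javascript
// Quick illustration (ours) of how freely JavaScript retypes values.
let x = 42;            // x starts out as a number
x = x + " apples";     // legal: x silently becomes the string "42 apples"
x = { count: x };      // later still, x is a completely different object
console.log(typeof x); // "object" - tricky for a static tool to pin down
```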

Also, dynamically generated text strings can magically turn into new JavaScript programs that are compiled and executed at runtime, introducing behaviour (and bugs) that didn't even exist when the static analysis was done.
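For example (again, a minimal sketch of our own):

```javascript
// Minimal sketch (ours) of a program that exists only at runtime.
const op = "*";  // imagine this string arrived over the network
const combine = new Function("a", "b", "return a " + op + " b;");
console.log(combine(6, 7)); // 42 - yet no multiplication appears
                            // anywhere in the source that was analysed
```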

Nevertheless, the paper suggests that even for dynamic languages, regular static analysis of the repositories you depend on can still be extremely useful.

Static tools can not only help you find potential bugs in code you're already using, even if it's written in JavaScript, but also help you assess the underlying quality of the code in packages you're thinking of adopting.


Learn more about preventing supply chain attacks

This podcast features Chester Wisniewski, Principal Research Scientist at Sophos, and is packed with useful and actionable advice on dealing with supply chain attacks, based on lessons learned from past mega-attacks such as Kaseya and SolarWinds.

If you can't see the audio player above, you can listen directly on Soundcloud.
You can also read the whole episode as a full transcript.

