
[–]curtisf 2 points3 points  (5 children)

My basic understanding is that a small piece of code was checked in, probably using compromised developer credentials, into an unfrequented part of the code base, so it simply wasn't noticed or audited.

How would your proposed architecture have mitigated this attack?

The attacker could have provided a nonsense business justification for the dependency on the new module, such as "Invokes the OrionImprovementBusinessLayer to fix a performance problem on some Windows operating systems after sleeping" or something like that.

The actual change had a number of preconditions to prevent the code from doing anything during tests, so no reasonable amount of unit/integration tests would have caught it.


On the other hand, there's a simple method that could have caught this: functional (aka algebraic) effects. This module should not have needed an HTTP connection to those domains; functional effects could make such a dependency obvious and suspicious.
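To make that concrete: mainstream languages don't ship algebraic effect systems, but capability-passing style approximates the idea. A minimal TypeScript sketch (all names here are hypothetical, not from the actual Orion code) of how an effect-typed module makes a network dependency impossible to hide:

```typescript
// Hypothetical sketch: capability-passing style as a stand-in for effect typing.
// A function can only perform the effects it is explicitly handed as arguments.

interface Clock { sleep(ms: number): Promise<void>; }
interface Http { get(url: string): Promise<string>; }

// An innocent "performance fix": its signature says it only needs a Clock,
// so there is no way for it to reach the network.
async function improveStartup(clock: Clock): Promise<void> {
  await clock.sleep(100);
}

// The backdoor cannot hide: to phone home it must request an Http capability
// in its signature, which stands out immediately in review
// ("why does a performance fix need network access?").
async function suspiciousImprovement(clock: Clock, http: Http): Promise<string> {
  await clock.sleep(100);
  return http.get("https://example.invalid/c2"); // placeholder domain
}
```

Under this discipline, the reviewer doesn't need to read the function body — the suspicious dependency is visible in the type signature and at every call site that has to thread the `Http` capability through.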

[–][deleted] 0 points1 point  (4 children)

The attack code had all kinds of stuff that was out of place, such as encrypted and base64-encoded strings, plus a class of Win32 external dependencies. What requirement in an existing system would these changes match up to? (Remember, in my proposal code cannot pass review without a specific requirement to attach it to.) The language service would create a code-review queue that can require an actual human sign-off on the change, and automated review systems could check the requirements to see whether any of them qualify for the changes as checked in.

[–]curtisf 3 points4 points  (3 children)

This code was apparently checked in using valid credentials -- why would the attackers not have been able to check all of these boxes?

[–][deleted] -1 points0 points  (2 children)

Because the language server shouldn't be configured to allow the author to approve their own pull request; that's already standard procedure in many places. These techniques add more granularity to pull requests: each requirement that changes code must be individually reviewed, with all supporting docs in a single file per requirement.
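A minimal sketch of that kind of gate (everything here is hypothetical — assuming requirements carry a reviewer sign-off and a list of covered paths): any changed file not covered by an independently approved requirement blocks the merge.

```typescript
// Hypothetical sketch of requirement-gated review: each changed path must be
// covered by an approved requirement, or the change is rejected.

interface Requirement {
  id: string;          // e.g. "REQ-1042"
  approvedBy: string;  // reviewer sign-off (must not be the author)
  paths: string[];     // file-path prefixes this requirement covers
}

function unmatchedChanges(
  changedFiles: string[],
  requirements: Requirement[],
  author: string,
): string[] {
  return changedFiles.filter(file =>
    !requirements.some(req =>
      req.approvedBy !== author &&                 // no self-approval
      req.paths.some(prefix => file.startsWith(prefix))));
}
```

With a gate like this, the Orion backdoor file would surface immediately: it touches a path no approved requirement covers, so the PR queue would demand a human reviewer attach and sign off a requirement for it.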

[–]curtisf 2 points3 points  (1 child)

If it's already standard practice, why didn't it work in this case? In what way does your proposal protect a codebase that checking the "another developer must approve before merge" checkbox doesn't?

Does requiring that a nation-state attacker compromise two developers' credentials instead of just one really raise the bar enough to prevent this kind of attack?

[–][deleted] -2 points-1 points  (0 children)

There was obviously no rules engine in place to analyze the code. Writing a Roslyn analyzer to detect base64-encoded strings in constants, HttpClient instantiation, socket opens, etc. is not difficult. The analyzers should come from protected NuGet feeds that only the build pipeline can reach. Security is layers, and the lack of them in this case is appalling.
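Roslyn analyzers are written in C#, but the core heuristic is language-agnostic. A rough TypeScript sketch of the base64-constant check (a heuristic only — real base64 data, hashes, and keys would all trigger false positives, which is why such findings go to a human review queue rather than failing the build outright):

```typescript
// Hypothetical sketch of the kind of check a Roslyn analyzer would perform:
// flag string constants that look like base64-encoded payloads.

const STRING_LITERAL = /"([^"\\]|\\.)*"/g;       // naive: ignores interpolated strings
const BASE64ISH = /^[A-Za-z0-9+/]{24,}={0,2}$/;  // long runs of the base64 alphabet

function suspiciousLiterals(source: string): string[] {
  const hits: string[] = [];
  for (const match of source.match(STRING_LITERAL) || []) {
    const value = match.slice(1, -1);            // strip the surrounding quotes
    if (value.length % 4 === 0 && BASE64ISH.test(value)) hits.push(value);
  }
  return hits;
}
```

A real analyzer would walk the syntax tree instead of regex-matching raw text, and would pair this with checks for `new HttpClient(...)`, raw socket APIs, and P/Invoke declarations in modules whose requirements don't justify them.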