Saturday, October 14, 2017


In preparing to do some testing in a Windows 7 VM, I decided to beef up PowerShell to ensure that artifacts are, in fact, created.  I wanted to make sure anything hinky that was done in PowerShell was recorded in some way.

The first step was to upgrade PowerShell to version 5.  I also found a couple of sites that recommended Registry settings to ensure the Module Logging and Script Block Logging were enabled, as well.
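
For reference, the settings I ended up with follow the standard Group Policy-backed Registry paths (a hedge here: these are the commonly documented values, so verify them against your OS and PowerShell version rather than taking this fragment as gospel):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging]
"EnableModuleLogging"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging\ModuleNames]
"*"="*"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging]
"EnableScriptBlockLogging"=dword:00000001
```

Once these are in place, module logging and script block logging records land in the Microsoft-Windows-PowerShell/Operational Event Log (events 4103 and 4104, respectively), which is exactly the sort of artifact trail I was after.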

The idea behind this is that there have been a number of cases I've worked that have involved some sort of obfuscated PowerShell...Meterpreter, stuff loaded from the Registry, stuff that's come in via LNK files attached to emails (or embedded in email attachments), etc.  Heck, not just cases I've worked...look at social media on any given day and you're likely to see references to this sort of thing.  So, in an effort to help clients, one of the things I want to do is to go beyond just recommending "update your PowerShell" or "block PowerShell all together", and be able to show what the effect of updating PowerShell will likely be.

There's been a good bit of info floating around on Twitter this past week regarding the use of DDE in Office documents to launch malicious activity.  I first saw this mentioned via this NViso blog post, then I saw this NViso update (includes Yara rules), and anyone looking into this will usually find this SensePost blog article pretty quickly.  And don't think for a second that this is all there is...there's a great deal of discussion going on, and all you have to do is search for "dde" on Twitter to see most of it.

David Longenecker also posted an article on the DDE topic, as well.  Besides the technical component of his post, there's another aspect of David's write-up that may go unnoticed...look at the "Updated 11 October" section.  David could have quietly updated the information in the post, but instead went ahead and highlighted the fact that he'd made a mistake and then corrected it.
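
As a quick triage sketch (my own, not from any of the posts above; the loose regex and the synthetic document are assumptions for illustration), here's one way to sweep a .docx for DDE field codes, taking advantage of the fact that a .docx is just a zip archive of XML parts:

```python
import io
import re
import zipfile

# Field codes associated with the DDE abuse described in the NViso and
# SensePost posts; the regex is deliberately loose, since attackers pad
# field instructions with whitespace and QUOTE tricks.
DDE_RE = re.compile(r"\bDDE(AUTO)?\b", re.IGNORECASE)

def docx_has_dde(data):
    """Return True if any XML part of the .docx contains a DDE field code."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        for name in z.namelist():
            if name.endswith(".xml"):
                if DDE_RE.search(z.read(name).decode("utf-8", "replace")):
                    return True
    return False

# Synthetic example: a minimal zip standing in for a .docx, with a
# DDEAUTO field instruction embedded in word/document.xml.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml",
               '<w:instrText> DDEAUTO c:\\windows\\system32\\cmd.exe "/k calc.exe" </w:instrText>')

print(docx_has_dde(buf.getvalue()))  # True
```

The same loop works over a directory full of attachments pulled from a mail store, which is where this sort of check earns its keep.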

USB Devices
Matt Graeber recently tweeted about data he observed in the Microsoft-Windows-Partition/Diagnostic Windows Event Log, specifically events with ID 1006; per his tweet, these events give "you a raw dump of the partition table, MBR, and VBR upon drive insertion."  Looking at records from that log, in the Details view of Event Viewer, there are data items such as Capacity, Manufacturer, Model, SerialNumber, etc.  And yes, there's also raw data from the partition table, MBR, and VBR, as well.

So, if you need to know something about devices connected to a Windows 10 system, try parsing the data from this *.evtx file.  What you'll end up with is not only which devices were connected, but when, and how often.
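
I haven't seen a published schema for these records, so the XML below is a simplified, hand-built stand-in based on the fields visible in the Details view (a real export's schema will differ); a sketch of pulling the device-description fields might look like:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical stand-in for an event record exported from the
# Microsoft-Windows-Partition/Diagnostic log (Event ID 1006); the field
# names mirror what's shown in Event Viewer's Details view.
SAMPLE = """
<Event>
  <System><EventID>1006</EventID></System>
  <EventData>
    <Data Name="Capacity">15631122432</Data>
    <Data Name="Manufacturer">SanDisk</Data>
    <Data Name="Model">Cruzer Glide</Data>
    <Data Name="SerialNumber">4C530001230207116282</Data>
  </EventData>
</Event>
"""

def device_info(xml_text):
    """Pull the device-description fields out of a 1006 event record."""
    root = ET.fromstring(xml_text)
    if root.findtext("./System/EventID") != "1006":
        return None
    return {d.get("Name"): d.text for d in root.iter("Data")}

info = device_info(SAMPLE)
print(info["Model"], info["SerialNumber"])  # Cruzer Glide 4C530001230207116282
```

Run across every 1006 record in the log, and you have the device inventory, with one record per insertion.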

Eric Zimmerman recently tweeted about the RecentApps key in the NTUSER.DAT hive; once I took a look at the key contents, I was pretty sure I was looking at something not too different from the "old" UserAssist data...pretty cool stuff.  I also found via Twitter that Jason Hale had blogged about the key, as well.

So, I wrote a RegRipper plugin and a corresponding TLN plugin, and uploaded them to the repository.  I only had one hive for testing, so YMMV.  I do like the TLN plugin...pushing that data into a timeline can be illuminating, I'm sure, for any case involving a Windows 10 system where someone interacted with the Explorer shell.  In fact, creating a timeline using just the UserAssist and RecentApps information is pretty illuminating...using information from my own NTUSER.DAT hive file (extracted via FTK Imager), I see things like:

{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE (3)
[Program Execution] UserAssist - {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE (0)
{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE RecentItem: F:\ch5\notes.txt


{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe RecentItem: C:\Users\harlan\Desktop\speak.vbs
{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe (7)
[Program Execution] UserAssist - {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe (0)

The three entries in each of the two listings above occurred within the same second, and provide a good bit of insight into the activities of the user.  For example, this augments the information provided by the RecentDocs key by providing the date and time at which files were accessed, rather than just that of the most recently accessed file.  Add to this timeline entries from the DestList streams of JumpLists, as well as entries from AmCache.hve, etc., and you have a wealth of data regarding program execution and file access artifacts for a user, particularly where detailed process tracking (or some similar mechanism, such as Sysmon) is not enabled.
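
To illustrate the mechanics (the epoch values, host name, and descriptions below are invented for the example; TLN's five pipe-delimited fields are time|source|host|user|description):

```python
# Toy sketch of pushing UserAssist and RecentApps entries into a single
# TLN timeline; in practice these tuples would come from plugin output.
events = [
    (1507989600, "REG", "WIN10-01", "harlan",
     "UserAssist - {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\\NOTEPAD.EXE (0)"),
    (1507989600, "REG", "WIN10-01", "harlan",
     "RecentApps - NOTEPAD.EXE RecentItem: F:\\ch5\\notes.txt"),
    (1507985000, "REG", "WIN10-01", "harlan",
     "RecentApps - WScript.exe RecentItem: C:\\Users\\harlan\\Desktop\\speak.vbs"),
]

def to_tln(evts):
    # Sort most-recent-first and emit pipe-delimited TLN lines.
    lines = []
    for t, src, host, user, desc in sorted(evts, reverse=True):
        lines.append("%d|%s|%s|%s|%s" % (t, src, host, user, desc))
    return lines

for line in to_tln(events):
    print(line)
```

Merging multiple sources is then just a matter of concatenating event lists before the sort, which is what makes the pivot from "program executed" to "and here's the file it touched" so easy to see.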

Eric also posted recently about changes to the contents of the AmCache.hve file...that file has, so far, been a great source of artifacts, so I'll have to go dig into Eric's post and update my own parser.  From reading Eric's findings, it appears that there's been some information added regarding devices and device drivers, which can be very valuable.  So, a good bit of the original data is still there (Eric points out that some of the MFT data is no longer in the hive file), and some new information has been added.

Friday, October 06, 2017


Eric over at Carbon Black recently posted regarding the Kangaroo ransomware.  Here are some cool things that Eric points out about the ransomware:

1. It's GUI-based, and the folks using it are infecting un-/under-protected RDP servers.

2. The ransomware time-stomps itself.  While on the surface this may seem to make the ransomware difficult to find during DFIR, that's not really the case at all, and to be honest, I'm not at all sure why this step was taken.

3. The ransomware clears the System and Security Event Logs, and removes VSCs.  As with the time stomping, I'm sure that clearing the Event Logs is intended to make things difficult, but to be honest, most folks who've done this kind of work know (a) where to look for other artifacts, and (b) how to recover cleared Windows Event Logs.

Eric's technical analysis doesn't mention a couple of things that are specific to ransomware.  For example, while Eric does state that the ransomware is deployed manually, there's no discussion of the time frame after accessing the RDP server in which the ransomware is deployed, nor of whether there are any attempts at network mapping or privilege escalation.  I'm sure this is the result of the analysis being based on samples of the ransomware, rather than on responding to engagements involving this ransomware.  Earlier this spring, I saw two different ransomware engagements that were markedly different.  While both involved compromised RDP servers, for one, the bad guy got in, mucked about for a week (7 days total, albeit not continuously) installing Opera, Firefox, and a GIF image viewer, and then launched ransomware without ever attempting to escalate privileges.  As such, only the files in the compromised profile were affected.  On the other hand, in the second instance, the adversary accessed the RDP server and within 10 minutes escalated their privileges and launched the ransomware.  In this case, the entire RDP server was affected, as were other systems within the infrastructure.

Types of Ransomware
Speaking of ransomware, I ran across this article from CommVault recently, which discusses "5 major types" of ransomware.  Okay, that sparked my interest...that there are "5 major types". 

Once I started reading the article, I became even more interested, particularly in the fourth type, identified as "Samsam".  Okay, this is the name of a variant or family, not so much what I'd refer to as a "type" of ransomware...but okay.  Then I read this statement:

Once inside the network, the ransomware looks for other systems to attack.

I've worked with Samsam, or "Samas", ransomware for a while.  For example, I authored this blog post (note: prior employment) based on analysis of about half a dozen different ransomware engagements where Samas was deployed.  In all of those cases, a JBoss server was exploited (using JexBoss), and an adversary mapped the network (in several instances, using Hyena) before choosing the systems to which Samas was then deployed.  More recently (i.e., this spring), the engagements I was aware of involved RDP servers being compromised (credentials guessed), and a much shorter timeframe between initial access and the ransomware being deployed.

My point is, from what I've seen, the Samas ransomware doesn't do all the things that some folks say it does.  For example, I haven't yet seen where the ransomware looks for other systems.  Further, going back to Microsoft's own description of the ransomware modus operandi, I saw no evidence that the Samas ransomware "scans the network"...I did, however, find very clear evidence that the adversary did so.  So, a lot of what is attributed to the ransomware itself is, in reality, and based on looking at data, the actions of a person, at a keyboard.

If you want to see some really excellent information about the Samas ransomware, check out Kevin Strickland's blog post on the topic.  Kevin did some really great work, and I really can't say enough great things about the work he did, and what he shared.

Windows Registry
Over on the Follow the White Rabbit blog, @_N4rr34n6_ has an interesting article discussing the Windows Registry.  The article addresses setting up and using RegRipper and its various components, as well as other tools such as Corey Harrell's auto_rip and Phill Moore's RegRipper GUI, both of which clearly provide a different workflow placed over the basic code. 

I've had the honor and privilege to be asked to be involved on a couple of podcasts recently, and I thought I'd share the links to all of them in one place, for those who are interested in listening:

Doug Brush's CyberSecurity Interviews - I've followed Doug's CyberSecurity Interviews from the beginning, and greatly appreciated his invitation and opportunity to engage

Down the Security Rabbithole with Rafal and James; thanks to both of these fine gentlemen for offering me the opportunity to be part of the work they're doing

Nuix Unscripted - Corey did a really great job moderating Chris and me, which brought things full circle; not only did Chris and I once work together, but Chris was one of the very first folks interviewed by Doug Brush...

Chris Woods over at Nuix (transparency: this is my employer) posted an excellent article regarding three best practices for increasing the efficiency of examinations.  Interestingly enough, these are all things that I've endorsed over the years...defining clear analysis goals, collaboration, and using what has been learned from previous investigations.  I want to say something about "great minds", but the simple fact is that these are all "best practices" that simply make sense.  It's as simple as that.

I ran across something really fascinating today..."wait," you ask, "more fascinating than making your computer recite lines from the Deadpool movie??"  Almost!  Here is a fascinating article that not only illustrates the steps for revealing WiFi passwords on a Win7+ computer, but provides a batch file for doing so!  How cool is that?

LNK Metadata
A bit ago, I'd taken a look at a Windows shortcut/LNK file from a campaign someone had blogged about, and then submitted a Yara rule to detect submissions to VirusTotal, based on the MAC address, volume serial number, and SID embedded in the LNK file.  This was based on an LNK file that had been sent to victims as an attachment.

The Yara rule I submitted a while back looks like this:

rule ShellPhish
{
    strings:
        $birth_node = { 08 D4 0C 47 F8 73 C2 }
        $vol_id     = { 7E E4 BC 9C }
        $sid        = "2287413414-4262531481-1086768478" wide ascii

    condition:
        all of them
}

So, pretty straightforward.  The thing is, over the past few days, I've seen a pretty significant up-tick in responses from the retro hunt, indicating a corresponding up-tick in submissions to VT.  Up to this point, I'd been seeing maybe one or two detections (again, based on submissions) a week; I've received a few dozen or so in the past two days alone.  This up-tick in responses is an interesting change, particularly because I'm not seeing a corresponding mention of campaigns utilizing LNK files as attachments (to emails, or embedded in documents, etc.).

A couple of things I haven't yet done are to note the first submission dates for the items, as well as the countries from which they were submitted, and then download the LNK files themselves to parse out the command lines and note the differences.

So, why am I even mentioning this?  Well, this goes back to Jesse Kornblum's premise of using every part of the buffalo, albeit not directly associated with memory analysis in this case.  The metadata in file formats such as documents and LNK files can be used to develop insight based on relationships, which can lead to attribution based on further developing the threat intelligence you already have available.

Thursday, September 28, 2017

Something on the fun/irreverent side

A while back, I read about some ransomware that, instead of leaving a ransom note, accessed the speech functionality of Windows systems to tell the user that the files on their system had been encrypted.  Hearing that, I started doing some research and put together a file that can play selected speech through the speakers of my laptop.  I thought it might be fun to take a different approach with this blog post and share the file.

Copy-paste the below file into an editor window, and save the file as 'speak.vbs' or (as I did) 'deadpool.vbs' on your desktop.  Then simply double-click the file.

dim sapi
set sapi=createobject("sapi.spvoice")
Set sapi.Voice = sapi.GetVoices.Item(0)
sapi.Rate = 2
sapi.Volume = 100
sapi.speak "This shit's gonna have NUTS in it!"
sapi.speak "It's time to make the chimichangas!"
sapi.speak "hashtag drive by"

Windows 7 has just one 'voice', so there's no real need for line 3; Windows 10 has two voices by default, so change the '0' to a '1' to switch things up a bit.

The cool thing is that you can attach a file like this to different actions on your system, or you can have fun with your friends (a la the days of SubSeven) and put a file like this in one of the autorun locations on their system.  Ah, good times!

Tuesday, September 26, 2017


It's been some time since I've had an opportunity to talk about NTFS alternate data streams (ADSs), but the folks at Red Canary recently published an article where ADSs take center stage.  NTFS alternate data streams go back a long way, all the way to the first versions of NTFS, and were a 'feature' included to support resource forks in the HFS file system.  I'm sure that with all of the other possible artifacts on Windows systems today, ADSs are not something that's talked about at great length, but it is interesting how applications on Windows systems make use of ADSs.  What this means to examiners is that they really need to understand the context of those ADSs...for example, what happens if you find an ADS named "Zone.Identifier" attached to an MS Word document or to a PNG file, and it is much larger than 26 bytes?
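
For context on the "26 bytes": the typical Internet-zone marker stream is exactly "[ZoneTransfer]", a CRLF, "ZoneId=3", and a CRLF (later Windows 10 builds may legitimately append ReferrerUrl/HostUrl lines, which make the stream longer).  A small parsing sketch:

```python
# A typical Zone.Identifier ADS marking a file as downloaded from the
# Internet zone (ZoneId=3) is exactly 26 bytes; anything much larger
# attached to a document or image deserves a closer look.
TYPICAL = b"[ZoneTransfer]\r\nZoneId=3\r\n"

def zone_id(ads_bytes):
    """Parse the ZoneId out of a Zone.Identifier stream's contents."""
    for line in ads_bytes.decode("ascii", "replace").splitlines():
        if line.lower().startswith("zoneid="):
            return int(line.split("=", 1)[1])
    return None

print(len(TYPICAL), zone_id(TYPICAL))  # 26 3
```

The point isn't the parsing, of course...it's that a "Zone.Identifier" stream that's kilobytes in size isn't a zone marker at all, and that's worth pulling apart.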

Some thoughts on Equifax...

According to the Equifax announcement, the breach was discovered on 29 July 2017.  Having performed incident response activities for close to 20 years, it's no surprise to me at all that it took until 7 Sept for the announcement to be made.  Seriously.  This stuff takes time to work out.  Something that does concern me is the following statement:

The company has found no evidence of unauthorized activity on Equifax's core consumer or commercial credit reporting databases.

Like I said, I've been responding to incidents for some time, and I've used that very same language when reporting findings to clients.  However, most often that's followed by a statement along the lines of, "...due to a lack of instrumentation and visibility."  And that's the troubling part of this incident, to me...it's an organization that collects vast amounts of extremely sensitive data in one place, and they have a breach that went undetected for 3 months.

Unfortunately, I'm afraid that this incident won't serve as an object lesson to other organizations, simply because of the breaches we've seen over the past couple of years...and more importantly, just the past couple of months...that similarly haven't served that purpose.  For a while now, I've used the analogy of a boxing ring, with a line of guys mounting the stairs one at a time to step into the ring.  As you're standing in line, you see that these guys are all getting into the ring, and they apparently have no conditioning or training, nor have they practiced...and each one that steps into the ring gets pounded by the professional opponent.  And yet, even seeing this, no one thinks about defending themselves, through conditioning, training, or practice, to survive beyond the first punch.  You can see it happening in front of you, with 10 or 20 guys in line ahead of you, and yet no one does anything but stand there in the line with their arms at their sides, apparently oblivious to their fate.

Threat Intelligence
Sergio/@cnoanalysis recently tweeted something that struck me as profound...that threat intelligence needs to be treated as a consumer product.

He's right...take this bit of threat intelligence, for example.  This is solely an example, and not at all intended to say that anyone's doing anything wrong, but it is a good example of what Sergio was referring to in his short but elegant tweet.  While some valuable and useful/usable threat intelligence can be extracted from the article, as is the case with articles from other organizations that are shared as "threat intelligence", this comes across more as a research project than a consumer product.  After all, how does someone who owns and manages an IT infrastructure make use of the information in the various figures?  How do illustrations of assembly language code help someone determine if this group has compromised their network?

Web Shells
Bart Blaze created a nice repository of PHP backdoors, which also includes links to other web shell resources.  This is a great resource for DFIR folks who have encountered such things.

Be sure to update your Yara rules!

Sharing is Caring
Speaking of Yara rules, the folks at NViso posted a Yara rule for detecting CCleaner 5.33, which is the version of the popular anti-forensics tool that was compromised to include a backdoor.

Going a step beyond the Yara rule, the folks at Talos indicate in their analysis of the compromised CCleaner that the malware payload is maintained in the Registry, in the path:

HKLM\Software\Microsoft\Windows NT\CurrentVersion\WbemPerf\001 - 004

Unfortunately, the Talos write-up doesn't specify if 001 is a key or value...yes, I know that for many this seems pedantic, but it makes a difference.  A pretty big difference.  With respect to automated tools for live examination of systems (Powershell, etc.), as well as post-mortem examinations (RegRipper, etc.), the differences in coding the applications to look for a key vs. a value could mean the difference between detection and not.

The Carbon Black folks had a couple of interesting blog posts on the topic of ransomware recently, one about earning quick money,  and the other about predictions regarding the evolution of ransomware.  From the second Cb post, prediction #3 was interesting to me, in part because this is a question I saw clients ask starting in 2016.  More recently, just a couple of months ago, I was on a client call set up by corporate counsel, when one of the IT staff interrupted the kick off of the call and wanted to know if sensitive data had been exfiltrated; rather than seeing this as a disruption of the call, this illustrated to me the paramount concern behind the question.  However, the simple fact is that even in 2017, organizations that are hit with these breaches (evidently some regulatory bodies are considering a ransomware infection to be a "breach") are neither prepared for a ransomware infection, nor are they instrumented to answer the question themselves. 

I suspect that a great many organizations are relying on their consulting staffs to tell them if the variant of ransomware has demonstrated an ability to exfiltrate data during testing, but that assumption is fraught with issues, as well.  For example, it assumes that someone else has seen and tested that variant of ransomware, particularly when you (as the "victim") are unable to provide a copy of the ransomware executable.  Further, what if the testing environment did not include any data or files that the variant would have wanted to, or was programmed to, exfil from the environment?

Looking at the Cb predictions, I'm not concerned with tracking them to see if they come true or not...my concern is, how will I, as an incident responder, address questions from clients who are not at all instrumented to detect the predicted evolution of ransomware?

On the subject of ransomware, Kaspersky identified a variant dubbed "nRansom", named as such because instead of demanding bitcoin, the bad guys demand nude photographs of the victim.

Attack of the Features
It turns out that MS Word has another feature that the bad guys have found and exploited, once again leaving the good folks using the application to catch up.

From the blog post:
The experts highlighted that Microsoft's Office documentation provides basically no description of the INCLUDEPICTURE field.


Tuesday, September 05, 2017


The quote of the day comes from Corey Tomlinson, content manager at Nuix.  In a recent blog post, Corey included the statement:

The best way to avoid mistakes or become more effective is to learn from collective experience, not just your own.

You'll need to read the entire post to get the context of the statement, but the point is that this is something that applies to SO much within the DFIR and threat hunting communit(y|ies).  Whether you're sharing experiences solely within your team, or you're engaging with others outside of your team and cross-pollinating, this is one of the best ways to extend and expand your effectiveness, not only as a DFIR analyst, but as a threat hunter, as well as an intel analyst.  None of us knows nor has seen everything, but together we can get a much wider aperture and insight.

Ryan released an update to hindsight recently...if you do any system analysis and encounter Chrome, you should really check it out.  I've used hindsight several times quite's easy to use, and the returned data is easy to interpret and incorporate into a timeline.  In one case, I used it to demonstrate that a user had bypassed the infrastructure protections put in place by going around the Exchange server and using Chrome to access their AOL email...launching an attachment infected their system with ransomware.

Thanks, Ryan, for an extremely useful and valuable tool!

It's About Time
I ran across this blog post recently about time stamps and Outlook email attachments, and that got me thinking about how many sources and formats for 'time' there are on Windows systems.

Microsoft has a wonderful page available that discusses various times, such as File Times.  From that same page, you can get more information about MS-DOS Date and Time, which we find embedded in shell items (yes, as in Shellbags).

If nothing else, this really reminds me of the various aspects of time that we have to consider and deal with when conducting DFIR analysis.  We have to consider the source, and how mutable that source may be.  We have to consider the context of the time stamp (the AppCompatCache data being a great example).
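
As an example of how low-resolution some of these sources are, the packed MS-DOS date/time format found in shell items (and hence shellbags) stuffs everything into two 16-bit bit fields and can only represent even-numbered seconds; a decoding sketch:

```python
from datetime import datetime

def dosdate_to_datetime(dos_date, dos_time):
    """Decode the 16-bit packed MS-DOS date and time values found in
    shell items (e.g., shellbags); resolution is only two seconds."""
    year   = ((dos_date >> 9) & 0x7F) + 1980
    month  = (dos_date >> 5) & 0x0F
    day    = dos_date & 0x1F
    hour   = (dos_time >> 11) & 0x1F
    minute = (dos_time >> 5) & 0x3F
    second = (dos_time & 0x1F) * 2
    return datetime(year, month, day, hour, minute, second)

# 2017-10-14 13:30:45 packs as below...and note the seconds round down.
d = ((2017 - 1980) << 9) | (10 << 5) | 14
t = (13 << 11) | (30 << 5) | (45 // 2)
print(dosdate_to_datetime(d, t))  # 2017-10-14 13:30:44
```

That two-second granularity is exactly the sort of thing to keep in mind when lining shell item timestamps up against FILETIME-based sources in a timeline.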

Using Every Part of The Buffalo
Okay, so I stole that section title from a paper that Jesse Kornblum wrote a while back; however, I'm not going to be referring to memory, in this case.  Rather, I'm going to be looking at document metadata.  Not long ago, the folks at ProofPoint posted a blog entry that discussed a campaign they were seeing that seemed very similar to something they'd seen three years ago.  Specifically, they looked at the metadata in Windows shortcut (LNK) files and noted something that was identical between the 2014 and 2017 campaigns.  Reading this, I thought I'd take a closer look at some of the artifacts, as the authors included hashes for the .docx ("need help.docx") file, as well as for a LNK file in their write-up.  I was able to locate copies of both online, and begin my analysis.

Once I downloaded the .docx file, I opened it in 7Zip and exported all of the files and folders, and quickly found the OLE object they referred to in the "word\embeddings\oleObject.bin" file.  Parsing this file, I found a couple of things...first, the OLE date embedded in the file is "10.08.2017, 15:46:51", giving us a reference time stamp.  At this point, we don't know whether or not the time stamp has been modified, so let's just put that aside for the moment.
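
The timestamps in an OLE compound file's directory entries (including the Root Entry date a parser reports) are stored as 64-bit FILETIME values, i.e., 100-nanosecond ticks since 1601-01-01 UTC; converting one is straightforward:

```python
from datetime import datetime, timedelta

def filetime_to_datetime(ft):
    """Convert a 64-bit FILETIME (100ns ticks since 1601-01-01 UTC),
    the format used for an OLE compound file's directory timestamps."""
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

# Round-trip check against the timestamp quoted above (10 Aug 2017):
ref = datetime(2017, 8, 10, 15, 46, 51)
ft = int((ref - datetime(1601, 1, 1)).total_seconds() * 10_000_000)
print(filetime_to_datetime(ft))  # 2017-08-10 15:46:51
```

This is also a handy sanity check when a tool's reported OLE date looks off...convert the raw value yourself and compare.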

Next, I looked at the available streams in the OLE file:

Root Entry  Date: 10.08.2017, 15:46:51  CLSID: 0003000C-0000-0000-C000-000000000046
    1 F..      6  \ ObjInfo
    2 F..  44511  \ Ole10Native

Hhhmmm...that looks interesting.

Excerpt of oleObject.bin file

Okay, so we see what they were talking about in the ProofPoint post...right there at offset 0x9c is "4C", the beginning of the embedded LNK file.  Very cool.
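
A carving sketch based on that observation (the 0x9c offset and wrapper bytes below are synthetic; only the 20-byte signature itself, the HeaderSize value 0x4C followed by the LinkCLSID, comes from the LNK format):

```python
# The shell link header starts with HeaderSize (0x0000004C) followed by
# the LinkCLSID {00021401-0000-0000-C000-000000000046}, stored
# little-endian; scanning an Ole10Native stream for this signature is a
# quick way to find an embedded LNK file.
LNK_SIG = bytes.fromhex("4c000000 01140200 00000000 c0000000 00000046")

def carve_lnk(stream):
    """Return the stream contents from the LNK header onward, or None."""
    off = stream.find(LNK_SIG)
    return None if off == -1 else stream[off:]

# Synthetic stand-in for an Ole10Native stream: 0x9c bytes of wrapper
# data, then an embedded LNK.
fake = b"\x00" * 0x9C + LNK_SIG + b"...rest of shortcut..."
lnk = carve_lnk(fake)
print(fake.find(LNK_SIG) == 0x9C)  # True
```

The carved bytes can then be handed straight to your LNK parser of choice, rather than fiddling with offsets in a hex editor.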

This document appears to be identical to what was discussed in the ProofPoint blog post, in figure 16.  In the figure above, we can see a reference to "VID_20170809_1102376.mp4.lnk", and the "word\document.xml" file contains the text, "this is what we recorded, double click on the video icon to view it. The video is about 15 minutes."

I'd also downloaded the file from the IOCs section of the blog post, referred to as the "LNK object", and parsed it.  Most of the metadata was as one would expect...the time stamps embedded in the LNK file referred to the PowerShell executable on that system, so they were uninteresting.  However, there were a couple of items of interest:

machineID          john-win764
birth_obj_id_node  00:0c:29:ac:13:81 (VMWare)
vol_sn             CC9C-E694

We can see the volume serial number that was listed in the ProofPoint blog, and we see the MAC address, as well.  An OUI lookup of the MAC address tells us that it's assigned to a VMWare interface.  Does this mean that the development environment is a VMWare guest?  Not necessarily.  I'd done research in the past and found that LNK files created on my host system, when I had VMWare installed, would "pick up" the MAC address of the VMWare interface on the host.  What was interesting in that research was that the LNK file remained and functioned correctly long after I had removed VMWare and installed VirtualBox.  Not surprising, I know...but it did verify that at one point, when the LNK file was created, I had had VMWare installed on my system.
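
A trivial version of that OUI check (the prefix list below covers just the VMware-assigned OUIs I'm aware of, not a full OUI database):

```python
# Minimal OUI lookup for the example above; a real implementation would
# consult the full IEEE OUI registry rather than a hard-coded set.
VMWARE_OUIS = {"00:05:69", "00:0c:29", "00:1c:14", "00:50:56"}

def is_vmware_mac(mac):
    """True if the MAC's first three octets match a VMware OUI."""
    return mac.lower()[:8] in VMWARE_OUIS

print(is_vmware_mac("00:0c:29:ac:13:81"))  # True
```

Handy when you're triaging a pile of parsed LNK metadata and want to flag the ones built in (or near) a VM.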

As a side note, I have to say that this is the first time that I've seen an organization publicizing threat intel and incorporating metadata from artifacts sent to the victim.  This may well have been done before, and honestly, I can't see everything...but I did find it extremely interesting that the authors would not only parse the LNK file metadata, but tie it back to a previous (2014) campaign.  That is very cool!

In the above metadata, we also see that the NetBIOS name of the system on which the LNK object was created is "john-win764".  Something not visible in the metadata but easily found via strings is the SID, S-1-5-21-3345294922-2424827061-887656146-1000.
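
Pulling that sort of thing out of a pile of LNK files is easy to script; a sketch (the buffer below is synthetic, standing in for raw LNK file contents):

```python
import re

# The same thing strings + grep would give you: pull machine SIDs out of
# raw LNK file contents with a regex, for feeding into a Yara rule or a
# VT retro-hunt.
SID_RE = re.compile(rb"S-1-5-21-\d+-\d+-\d+-\d+")

def extract_sids(data):
    """Return the unique SID strings found in a binary blob."""
    return sorted({m.decode() for m in SID_RE.findall(data)})

# Synthetic buffer carrying the SID quoted above.
blob = b"\x00\x01john-win764\x00S-1-5-21-3345294922-2424827061-887656146-1000\x00"
print(extract_sids(blob))
```

Run the same function across every sample in a campaign, and clustering by SID (and machineID, and volume serial number) falls out almost for free.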

This also gives us some very interesting elements that we can use to put together a Yara rule and submit as a VT retrohunt, and determine if there are other similar LNK files that originated from the same system.  From there, hopefully we can tie them to specific campaigns.

Okay, so what does all this get us?  Well, as an incident responder in the private sector, attribution is a distraction.  Yes, there are folks who ask about it, but honestly, when you're having to understand a breach so that you can brief your board, your shareholders, and your clients as to the impact, the "who" isn't as important as the "what", specifically, "what is the risk/impact?"  However, if you're in the intel side of things, the above elements can assist you with attribution, particularly when it's been developed further through not only your own stores, but also via available resources such as VirusTotal.

On the ransomware front, there's more good news!! 

Not only have recently-observed Cerber variants been seen stealing credentials and Bitcoin wallets, but Spora is reportedly now able to also steal credentials, with the added whammy of logging key strokes!  The article also goes on to state that the ransomware can also access browser history.

Over the past 18 months, the ransomware cases that I've been involved with have changed direction markedly.  Initially, I thought folks wanted to know the infection vector so that they could take action...with no engagement beyond the report (such is the life of DFIR), it's impossible to tell how the information was used.  However, something that started happening quite a bit was that questions regarding access to sensitive (PHI, PII, PCI) data were being asked.  Honestly, my first thought...and likely the thought of any number of analysts...was, "...it's ransomware...".  But then I started to really think about the question, and I quickly realized that we didn't have the instrumentation and visibility to answer it.  Only with some recent cases did clients have Process Tracking enabled in the Windows Event Log...while capture of the full command line wasn't enabled, we did at least get some process names that corresponded closely to what had been seen via testing.

So, in short, without instrumentation and visibility, the answer to the question, "....was sensitive data accessed and/or exfiltrated?" is "we don't know."

However, one thing is clear...there are folks out there who are exploring ways to extend and evolve the ransomware business model.  Over the past two years we've seen evolutions in ransomware itself, such as this blog post from Kevin Strickland of SecureWorks.  The business model of ransomware has also evolved, with players producing ransomware-as-a-service.  In short, this is going to continue to evolve and become an even greater threat to organizations.

Saturday, September 02, 2017


Office Maldocs, Sans Macros
HelpNetSecurity had a fascinating blog post recently on a change in tactics that they'd observed (actually, it originated from a SANS handler diary post), in that an adversary was using a feature built into MS Word documents to infect systems, rather than embedding malicious macros in the documents.  The "feature" is one in which links embedded in the document are updated when the document is opened.  In the case of the observed activity, the link update downloaded an RTF document, and things just sort of took off from there.

I've checked my personal system (Office 2010) as well as my corp system (Office 2016), and in both cases, this feature is enabled by default.

This is a great example of an evolution of behavior, and illustrates that "arms race" that is going on every day in the DFIR community.  We can't detect all possible means of compromise...quite frankly, I don't believe that there's a list out there that we can use as a basis, even if we could.  So, the blue team perspective is to instrument in a way that makes sense so that we can detect these things, and then respond as thoroughly as possible.

WMI Persistence
TrendMicro recently published a blog post that went into some detail discussing WMI persistence observed with respect to cryptocurrency miner infections.  While such infections aren't necessarily damaging to an organization, in the sense that they don't deprive or restrict the organization's ability to access its own assets and information (I've observed several that went undetected for months...), they are the result of someone breaching the perimeter and obtaining access to a system and its resources.

Matt Graeber tweeted that on Windows 10, the creation of the WMI persistence mechanism appears in the Windows Event Logs.  While I understand that organizations cannot completely ignore their investment in systems and infrastructure, there needs to be some means by which older OSs are rolled out of inventory as they reach the end of manufacturer support.  I have seen, or known that others have seen, active Windows XP and 2003 systems as recently as August, 2017.  Again, I completely understand that organizations have invested a great deal of money, time, and other resources into maintaining the infrastructure that they'd developed (or legacy infrastructures), but from an information security perspective, there needs to be an eye toward (and an investment in) updating systems that have reached end-of-life.
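Assuming Matt's observation refers to the Microsoft-Windows-WMI-Activity/Operational log (where Event ID 5861 has been documented to record the creation of a FilterToConsumerBinding on Windows 10), a minimal triage sketch over already-parsed records might look like the following.  The record field names ('event_id', 'message') are my assumption about an export format (JSON, CSV, etc.), not a fixed schema:

```python
# Permanent WMI event subscriptions used for persistence typically pair a
# filter with a CommandLineEventConsumer or ActiveScriptEventConsumer;
# the 5861 record includes the consumer definition in its message body.
SUSPECT_CONSUMERS = ("CommandLineEventConsumer", "ActiveScriptEventConsumer")

def flag_wmi_persistence(records):
    """Return the Event ID 5861 records that name a suspect consumer type.

    'records' is an iterable of dicts with 'event_id' and 'message' keys,
    as parsed from the WMI-Activity/Operational log export.
    """
    flagged = []
    for rec in records:
        if rec.get("event_id") != 5861:
            continue
        if any(c in rec.get("message", "") for c in SUSPECT_CONSUMERS):
            flagged.append(rec)
    return flagged
```

Of course, this only works if someone is actually collecting that log before the incident...which brings us back to instrumentation and visibility.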

I'd had a blog post published on my previous employer's corporate site last year; we'd discovered a similar persistence mechanism as a result of creating a mini-timeline to analyze one of several systems infected with Samas ransomware.  In this particular case, prior to the system being compromised and used as a jump host to map the network and deploy the ransomware, the system had been compromised via the same vulnerability and a cryptocoin miner installed.  There was a WMI persistence mechanism created at about the same time, and another artifact (i.e., the LastWrite time on the Win32_ClockProvider Registry key had been modified...) on the system pointed us in that direction.
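For anyone who hasn't built one, a mini-timeline is conceptually very simple: normalize events from multiple sources (event log records, Registry key LastWrite times, file system metadata) to a common format, and sort them.  Here's a minimal sketch; the tuple layout is my own, loosely mirroring a pipe-delimited timeline format:

```python
from datetime import datetime, timezone

def build_timeline(events):
    """Normalize (timestamp, source, description) tuples into sorted,
    pipe-delimited timeline lines, oldest first.

    Timestamps should be timezone-aware datetime objects so that events
    drawn from different sources sort correctly against one another.
    """
    return [
        f"{ts.isoformat()}|{source}|{desc}"
        for ts, source, desc in sorted(events, key=lambda e: e[0])
    ]
```

The value isn't in the code, it's in the analysis: seeing the WMI persistence mechanism and the Win32_ClockProvider key modification land at about the same time is what pointed us in the right direction.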

InfoSec Program Maturity
Going back just a bit to the topic of the maturity of IT processes and, by extension, infosec programs: one of the things I've seen a lot of over the past year to 18 months, beyond the surge in ransomware cases that started in Feb, 2016, is the questions that clients who've been hit with ransomware have been asking.  These have actually been really good questions, such as, "...was sensitive data exposed or exfiltrated?"  In most ransomware cases, the immediate urge was to respond, "...it was ransomware...", but pausing for a bit, the real answer was, "...we don't know."  Why didn't we know?  We had no way of knowing, because the systems weren't instrumented, and we didn't have the necessary visibility to be able to answer the questions.  Not at all.

More recently with the NotPetya issues, we'd seen where a client had Process Tracking enabled in the Windows Event Log, so that the Security Event Log was populated with pertinent records, albeit without the full command line.  As such, we could see the sequence of commands associated with NotPetya, and we could say with confidence that no additional commands had been run, but without the full command lines, we couldn't state definitively that nothing else untoward had also been done.
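As a side note, on Windows 8.1/Server 2012 R2 and later (and reportedly back-ported to some earlier versions via update), the full command line can be included in the 4688 process creation events through the "Include command line in process creation events" Group Policy setting, or the equivalent Registry value.  The command below is a sketch based on Microsoft's documented policy path; verify it against your own environment (and note that Audit Process Creation itself must also be enabled) before deploying anything:

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 /f
```

Had that been in place, the "anything else untoward" question would have been answerable.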

So, some things to consider when thinking about or discussing the maturity of your IT and infosec programs include asking yourself, "...what are the questions we would have in the case of this type of incident?", and then, "...do we have the necessary instrumentation and visibility to answer those questions?"  Anyone who has sensitive data (PHI, PII, PCI, etc...) is going to have the question of "...was sensitive data exposed?", so the question would be, how would you determine that?  Were you tracking full process command lines to determine if sensitive data was marshaled and prepared for exfil?

Another aspect of this to consider is, if this information is being tracked because you do, in fact, have the necessary instrumentation, what's your aperture?  Are you covering just the domain controllers, or have you included other systems, including workstations?  Then, depending on what you're collecting, how quickly can you answer the questions?  Is it something you can do easily, because you've practiced and tweaked the process, or is it something you haven't even tried yet?

Something that's demonstrated (to me) on a daily basis is how mature the bad guy's process is, and I'm not just referring to targeted nation-state threat actors.  I've seen ransomware engagements where the bad guy got into an RDP server, and within 10 min escalated privileges (his exploit included the CVE number in the file name), deployed ransomware and got out.  There are plenty of blog posts that talk about how targeted threat actors have been observed reacting to stimulus (i.e., attempts at containment, indications of being detected, etc.), and returning to infrastructures following eradication and remediation.

The folks at JPCERT recently (June) published their research on using Windows Event Logs to track lateral movement within an infrastructure.  This is really good stuff, but is dependent upon system owners properly configuring systems in order to actually generate the log records they refer to in the report (we just talked about infosec programs and visibility above...).
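A hedged sketch of the kind of mapping the JPCERT report builds is below.  The event IDs are standard Security/System log events commonly associated with lateral movement; which of them actually appear on a given system depends entirely on the configured audit policy, which is exactly the point the report makes.  The record field names are my assumption about a parsed-log format:

```python
# Windows Event IDs that, taken together, can indicate lateral movement.
LATERAL_MOVEMENT_EVENTS = {
    4624: "Logon (type 3 = network; type 10 = RemoteInteractive/RDP)",
    4648: "Logon using explicit credentials (runas and similar)",
    4688: "Process creation (requires Audit Process Creation)",
    5140: "Network share accessed (e.g., ADMIN$, C$)",
    5145: "Detailed network share access (requires additional auditing)",
    7045: "Service installed (System log; PsExec-style tools)",
}

def annotate(records):
    """Attach a lateral-movement note to parsed event records (dicts
    with an 'event_id' key) that match the mapping above."""
    for rec in records:
        note = LATERAL_MOVEMENT_EVENTS.get(rec.get("event_id"))
        if note:
            rec["lm_note"] = note
    return records
```

The report itself goes much deeper, correlating which artifacts each lateral-movement tool leaves behind; this is just the skeleton of that approach.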

This is also an inherent issue with instrumentation in general...no amount of technology will be useful if you're not populating it with the appropriate information.

New RegRipper Plugin
James shared a link to a one-line PowerShell command designed to detect the presence of the CIA's AngelFire infection.  After reading this, it took me about 15 min to write a RegRipper plugin for it and upload it to the Github repository.

Tuesday, August 22, 2017

Beyond Getting Started

I blogged about getting started in the industry back in April (as well as here and here), and after having recently addressed the question on an online forum again, I thought I'd take things a step further.  Everyone has their own opinion as to the best way to 'get started' in the industry, and if you look far and wide enough, you'll start to see that those who post well-thought-out articles have some elements in common.

In the beginning...
We all start learning through imitation and repetition, because that's how we are taught.  Here's the process, follow the process.  This is true in civilian life, and it's even more true in the military.  You're given some information as to the "why", and then you're given the "how".  You do the "how", and you keep doing the "how" until you're getting the "how" right.  Once you've gotten along for a bit with the "how", you start going back to the "why", and sometimes you find out that based on the "why", the "how" that you were taught is pretty darned good.   Based on a detailed understanding of the "why", the "how" was painstakingly developed over time, and it's just about the best means for addressing the "why".

In other cases, some will start to explore doing the "how" better, or different, questioning the "why".  What are the base assumptions of the "why", and have they changed?  How has the "why" changed since it was first developed, and does that affect the "how"?

This is where critical thinking comes into play.  Why am I using this tool or following this process?  What are my base assumptions?  What are my goals, and how does the tool or process help me achieve those goals?  The worst thing you could ever do is justify following a process with the phrase, "...because this is how we've always done it."  That statement clearly shows that neither the "why" nor the "how" is understood, and you're just going through the motions.

Years ago, when I had the honor and the pleasure of working with Don Weber, he would regularly ask me "why"...why were we doing something and why were we doing it this way?  This got me to consider a lot about the decisions I was making and the actions I was taking as a team leader or lead responder, and I often found that my decisions were based not just on the technical aspects of what we were doing, but also the business aspects and the impact to the client.  I did not take offense at Don's questions, and actually appreciated them.

Learn to program
Lots of folks say it's important to learn a programming language, and some even go so far as to specify the particular language.  Thirty-five years ago, I started learning BASIC, programming on an Apple IIe.  Later, it was PASCAL, then MatLab and Java, and then Perl.  Now it seems that Python is the "de facto standard" for DFIR work...or is it?  Not long before NotPetya rocked the world, the folks at RawSec posted an article regarding carving EVTX records, and released a tool written in Go.  If you're working on Windows systems or in a Windows environment, PowerShell might be your programming language of choice...it all depends on what you want to do.

There is a great deal of diversity on this topic, and I'd suggest that the programming language you choose should be based on your needs.  The main point is that learning to program helps you see big problems as a series of smaller problems, some of which must be performed in a serial fashion.  What we learn from programming is how to break bigger problems into smaller, logical steps.

Engage in the community
Within the DFIR "community", there's too much "liking" and retweeting, and not enough doing and asking of questions, nor actively engaging with others.  Not long ago, James Habben posted an excellent article on his blog on "being present", and he made a lot of important points that we can all learn from.  Further, he put a name to something that I've been aware of for some time; when presenting at a conference, there's often that one person who completely forgets that they're in a room full of other people, and kidnaps and dominates the presenter's time.  There are also those who attend the presentation (or training session) who spend the majority of their time engaged in something else entirely.

Rafal Los recently posted a fascinating article on the SecurityWeek web site.  I found his article well-considered and insightful, and extremely relevant.  It's also something I can relate to...like others, I get connection requests on LinkedIn from folks who've done nothing more than clicked a button.  I also find that after having accepted most connection requests, I never hear from the requester again.  I find that if I write a blog post (like this one) and share the link on Twitter and LinkedIn, I'll get "likes" and retweets, but not much in the way of comments.  If I ask someone what they "like" about the article...and I have done this...more often than not the response is that they didn't actually read it; they wanted to share it with their community.  Given that, there is effectively no difference between having written and published the article, and not having done so.

Engaging in the community is not only a great way to learn, but also a great way to extend the community itself.  A friend recently asked me which sandbox I use for malware analysis, and why.  For me to develop a response beyond just, "I don't", I really had to think about the reasons why I don't use a sandbox.   I learned a little something from the engagement, just as I hope my friend did, as well.

An extension of engaging in the community is to write your own stuff.  Share your thoughts.  Instead of clicking "like" on a link to a blog post, add a comment to the post, or ask a question in the comments.  Instead of just clicking "like" or retweeting, share your reasons for doing so.  If it takes more than 140 characters to do so, write a blog post or comment, and share *that*.

I guess the overall point is this...if you're going to ask the question, "how do I get started in the DFIR industry?", the question itself presupposes some sort of action.  If you're just going to follow others, "like" and retweet all the things, and not actually read, engage, and think critically, then you're not really going to 'get started'.