Sunday, March 26, 2017

Links, Updates

LNK Attachments
Through my day job, we've seen a surge in spam campaigns lately in which Windows shortcut (LNK) files were sent to the targets as email attachments.  A good bit of the focus has been on the embedded commands within the LNK files, and on how those commands have been obfuscated in order to avoid detection or analysis.  There is some great work being done in this area (discovery, analysis, etc.), but at the same time, a good bit of data and some potential intelligence is being left "on the floor", in that the LNK files themselves are not being parsed for embedded data.

DFIR analysts are probably most familiar with LNK files being used either to maintain malware persistence on an infected system, or to indicate that a user opened a file on their system.  In instances such as these, the MS API for creating LNK files embeds information from the local system within the binary contents of the LNK file.  However, if a user is sent an LNK file, that file must have been created on another system altogether...which means that unless a specific script was used to create the LNK file on a non-Windows system, or to modify the embedded information, we can assume that the embedded information (MAC address, system NetBIOS name, volume serial number, SID) came from the system on which the LNK file was created.
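As a rough illustration of what "parsing the LNK file for embedded data" can look like, the sketch below scans a shortcut's raw bytes for the TrackerDataBlock (signature 0xA0000003, defined in MS-SHLLINK) and pulls out the NetBIOS machine name, plus the MAC address encoded in the node field of the version-1 UUID in the file droid.  This is a minimal sketch under those assumptions, not a full LNK parser (the volume serial number lives in the VolumeID structure and isn't handled here), and the function name is my own:

```python
import struct
import uuid

# TrackerDataBlock layout, per MS-SHLLINK:
# BlockSize (4) | BlockSignature (4) | Length (4) | Version (4) |
# MachineID (16) | Droid (2 GUIDs, 32) | DroidBirth (2 GUIDs, 32)
TRACKER_SIG = struct.pack("<I", 0xA0000003)

def extract_tracker_info(lnk_bytes):
    """Scan raw LNK file contents for the TrackerDataBlock and pull out
    the embedded NetBIOS machine name and the MAC address from the
    version-1 UUID node field of the file droid (hypothetical helper)."""
    idx = lnk_bytes.find(TRACKER_SIG)
    if idx < 4:
        return None
    block = lnk_bytes[idx - 4:]           # back up to the BlockSize field
    machine_id = block[16:32].split(b"\x00")[0].decode("ascii", "replace")
    file_droid = uuid.UUID(bytes_le=block[48:64])
    mac = "%012x" % file_droid.node       # node = last 48 bits of the UUID
    mac = ":".join(mac[i:i + 2] for i in range(0, 12, 2))
    return {"machine_id": machine_id, "mac_address": mac}
```

Note that the MAC address recovered this way is only meaningful if the droid GUIDs are genuine version-1 UUIDs; tooling that fabricates or zeroes those fields will defeat this.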

I'd blogged on this before (yes, eight years ago), and while researching that blog, had found this reference to %LnkGet% at Microsoft.

I recently ran across this fascinating write-up from NVISO Labs regarding the analysis of an LNK file shipped embedded within an old-style (re: OLE) MSWord document.  While the write-up contains hex excerpts of the LNK file, and focuses on the command used to launch bitsadmin.exe to download malware, what it does not do is extract embedded artifacts (SID, system NetBIOS name, MAC address, volume serial number) from the binary contents of the LNK file.

So what?  Who cares?  How can you even use this information?  Well, if you're maintaining case notes in a collaboration portal, you can search for various findings across engagements, long after those engagements have closed out or analysts have left (retired, moved on, etc.), developing a "bigger picture" view of activity, as well as maintaining intelligence from those engagements.  For example, keeping case notes across engagements will allow a perhaps less experienced analyst to see what's been done on previous engagements, and illustrate what further work can be done (just-in-time learning).  Then, of course, there's correlating multiple engagements with marked similarities (intel gathering).  Also consider that there are Windows Event Log records that include the NetBIOS name of the remote system when a login occurs, and you might be able to correlate that information with what's embedded in LNK files (intel collection/development).

MS-SHLLINK: Shell Link Binary File Format

AutoRun - ServiceStartup
Adam's found yet another autostart location in the Registry, this one the ServiceStartup key.  I haven't seen a system with this key populated, but it's definitely something to look out for, as it would make a great RegRipper plugin, or a great addition to an existing one.

However, while I was looking around at a Software hive to see if I could find a ServiceStartup key with any values, I ran across the following key that looked interesting:


Digging into the key a bit, I found a couple of interesting articles (here, and here).  This is something I plan to keep a close eye on, as it looks as if it could be very interesting.

The guys from Carbon Black recently shared some really fascinating analysis describing how "a malicious Excel document was used to create a PowerShell script, which then used the Domain Name System (DNS) to communicate with an Internet Command and Control (C2) server."

This is really interesting stuff, not only because you get to see something you've never seen before, but because you also get to see how such things can be discovered (how would I instrument or improve my environment to detect this?), as well as analyzed.

Imposter Syndrome
I was catching up on blog post reading recently, and came across a blog post that centered on how the imposter syndrome caused one DFIR guy to sabotage himself.  Not long after reading that, I read James' post about his experiences at BSidesSLC, and his offer to help other analysts...and it sounded to me like James was offering a means for DFIR folks who are interested to overcome their own imposter syndrome.

On a similar note, not too long ago I was asked to give some presentations on cyber-security at a local high school, and during one of the sessions, one of the students asked an excellent question: how does someone with no experience apply for an entry-level position that requires experience?  To James' point, I recommended that the students with an interest in cyber-security start working on projects and blogging about them, sharing what they did and what they found.  There are plenty of resources available, including system images that can be downloaded and analyzed, malware that can be downloaded and analyzed, etc.  Okay, I know folks are going to say, "yeah, but others have already done that..."...really?  I've looked at a bunch of the images that are available for download, and one of the things I don't see is many write-ups in response...yes, there are some, but not many.  So, download an image, conduct and document your analysis, and write it up.  So what if others have already done this?  The point is that you're sharing your experiences, your findings, and how you conducted your analysis, and most importantly, you're demonstrating your ability to communicate through the written word.  This is very important, because pretty much every job in the adult world includes some form of written communication, be it email, reports, etc.

Here's what I would look for if I were looking to fill a DFIR analysis position...even if the image had already been analyzed by others, I'd look for completeness of analysis based on currently available information, and how well it was communicated.  I'd also look for things like, was any part of the analysis taken a step or two further?  Did the "analyst" attempt to do all of the work themselves, or was there collaboration?  I'd want to see (or hear, during the interview) justification for the analysis steps along the way, not to second guess, but in order to understand thought processes.  I'd also want to see if there was a reliance on commercial tools, or if the analyst was able to incorporate (or better yet, modify) open source tools.

Not all of these aspects are required, but these are things I'd look for.  These are also things I'd look at when bringing new analysts onto a team, and mentoring them.

There was a recent post about some interesting PowerShell findings that pertain to PowerShell version 5.  I found these artifacts while examining a Windows 10 system, but if someone has updated the PowerShell installation on their Windows 7 and 8 systems, they should also be able to take advantage of them.

However, something popped up recently that simply reiterated the need to clearly identify and understand the version of Windows (and of PowerShell) that you're examining...PowerShell downgrade attacks, in which a script is launched under the older version 2 engine (e.g., via "powershell -Version 2"), sidestepping the logging capabilities added in version 5.

This SecurityAffairs article provides an example of how PowerShell has been used.

Wednesday, March 15, 2017

Incorporating AmCache data into Timeline Analysis

I've had an opportunity to examine some Windows 10 systems lately, and recently got a chance to examine a Windows 2012 server system.  While preparing to examine the Windows 2012 system, I extracted a number of files from the image, in order to incorporate the data from those files into a timeline for analysis.  I also grabbed the AmCache.hve file (I've blogged about this file previously...), and parsed it using the RegRipper plugin.  What I'm going to do in this post is walk through an example of something I found after the initial analysis.

From the ShimCache data from the system, I found the following reference:

SYSVOL\Users\Public\Downloads\badfile.exe  Fri Jan 13 11:16:40 2017 Z

Now, we all know that the time stamp associated with the entry in the ShimCache data is the file system last modification time of the file (NOT the execution time), and that if you create a timeline, this data would be best represented by an entry that includes "M..." to indicate the context of the information.
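To make that context explicit in a timeline, a ShimCache entry can be emitted in the five-field TLN format (time|source|system|user|description) with the "M..." tag carried in the description.  A minimal sketch of that conversion (the function name and field choices here are my own):

```python
from datetime import datetime, timezone

def shimcache_to_tln(path, mtime, system=""):
    """Emit a TLN-format line for a ShimCache entry.  The time stamp is
    the file system last modification time, NOT an execution time, so the
    description is prefixed with "M..." to preserve that context."""
    epoch = int(mtime.replace(tzinfo=timezone.utc).timestamp())
    return "%d|SHIMCACHE|%s||M... %s" % (epoch, system, path)

# The entry above would become:
line = shimcache_to_tln(r"SYSVOL\Users\Public\Downloads\badfile.exe",
                        datetime(2017, 1, 13, 11, 16, 40))
```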

I then looked at the RegRipper plugin output to see if there was an entry for this file, and I found the following:

File Reference    : 720000150e1
LastWrite           : Sun Jan 15 07:53:53 2017 Z
Path                    : C:\Users\Public\Downloads\badfile.exe
Company Name : FileManger
Product Name    : Fileppp
File Descr          : FileManger
Lang Code         : 0
SHA-1               : 00002861a7c280cfbb10af2d6a1167a5961cf41accea
Last Mod Time  : Fri Jan 13 11:16:40 2017 Z
Last Mod Time2: Fri Jan 13 11:16:40 2017 Z
Create Time       : Sun Jan 15 07:53:26 2017 Z
Compile Time    : Fri Jan 13 03:16:40 2017 Z

We know from Yogesh's research that the "File Reference" is the file reference number from the MFT; that is, the sequence number and the MFT record number.  In the above output, the "LastWrite" entry is the LastWrite time for the key whose name is given in the "File Reference" entry.  You'll also notice some additional information that could be pretty useful...some of these fields (Lang Code, Product Name, File Descr) are values that I added to the plugin today (I also updated the plugin repository on GitHub).

You'll also notice that there are a few time stamps, in addition to the key LastWrite time.  I thought that it would be interesting to see what effect those time stamps would have on a timeline; so, I wrote a new plugin (also uploaded to the repository today) that would allow me to add that data to my timeline.  After adding the AmCache.hve time stamp data to my timeline, I went looking for the file in question, and found the following entries:

Sun Jan 15 07:53:53 2017 Z
  AmCache       - Key LastWrite   - 720000150e1:C:\Users\Public\Downloads\badfile.exe
  REG                 User - [Program Execution] UserAssist - C:\Users\Public\Downloads\badfile.exe (1)

Sun Jan 15 07:53:26 2017 Z
  AmCache       - ...B  720000150e1:C:\Users\Public\Downloads\badfile.exe
  FILE              - .A.B [286208] C:\Users\Public\Downloads\badfile.exe

Fri Jan 13 11:16:40 2017 Z
  FILE               - M... [286208] C:\Users\Public\Downloads\badfile.exe
  AmCache       - M...  720000150e1:C:\Users\Public\Downloads\badfile.exe

Fri Jan 13 03:16:40 2017 Z
  AmCache       - PE Compile time - 720000150e1:C:\Users\Public\Downloads\badfile.exe

Clearly, a great deal more analysis and testing needs to be performed, but this timeline excerpt illustrates some very interesting findings.  For example, the AmCache entries for the M and B dates line up with those from the MFT.

Something else that's very interesting is that the AmCache key LastWrite time appears to correlate to when the file was executed by the user.

For the sake of completeness, let's take a look at the parsed MFT entry for the file:

86241      FILE Seq: 114  Links: 1
    M: Fri Jan 13 11:16:40 2017 Z
    A: Sun Jan 15 07:53:26 2017 Z
    C: Fri Feb 10 11:37:25 2017 Z
    B: Sun Jan 15 07:53:26 2017 Z
  FN: badfile.exe  Parent Ref: 292/1
  Namespace: 3
    M: Sun Jan 15 07:53:26 2017 Z
    A: Sun Jan 15 07:53:26 2017 Z
    C: Sun Jan 15 07:53:26 2017 Z
    B: Sun Jan 15 07:53:26 2017 Z
[$DATA Attribute]
File Size = 286208 bytes

We know we have the right file...if we convert the MFT record number (86241) to hex, and prepend it with the sequence number (also converted to hex), we get the file reference number from the AmCache.hve file.  We also see that the creation date for the file is the same in both the $STANDARD_INFORMATION and $FILE_NAME attributes from the MFT record, and they're also the same as the value extracted from the AmCache.hve file.
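That arithmetic is simple enough to sanity-check in a couple of lines.  The exact zero-padding of the key name can vary in how it's displayed, so the hypothetical helper below only verifies that the reference begins with the sequence number and ends with the record number (both in hex):

```python
def check_file_reference(ref_name, seq, record):
    """Verify that an AmCache key name (the MFT file reference in hex)
    matches a given sequence number and MFT record number."""
    ref = ref_name.lower()
    return ref.startswith("%x" % seq) and ref.endswith("%x" % record)

# From the output above: sequence 114 -> 0x72, record 86241 -> 0x150e1
print("%x" % 114, "%x" % 86241)   # 72 150e1
print(check_file_reference("720000150e1", 114, 86241))
```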

There definitely needs to be more research and work done, but it appears that the AmCache data may be extremely valuable with respect to files that no longer exist on the system, particularly if (and I say "IF") the key LastWrite time corresponds to the first time that the file was executed.  Review of data extracted from a Windows 10 system illustrated similar findings, in that the key LastWrite time for a specific file reference number correlated to the same time that an "Application Popup/1000" event was recorded in the Application Event Log, indicating that the application had an issue; four seconds later, events (EVTX, file system) indicated an application crash.  I'd like to either work an engagement where process creation information is also available, or conduct testing and analysis of a Win2012 or Win10 system that has Sysmon installed, as it appears that this data may indicate/correlate to a program execution finding.

Now, clearly, the AmCache.hve file can contain a LOT of data, and you might not want all of it.  You can minimize what's added to the timeline by using the reference to the "Public\Downloads" folder, for example, as a pivot point.  You can run the amcache_tln plugin and pipe the output through the find command to get just those entries that include files in the "Public\Downloads" folder, in the following manner:

rip -r amcache.hve -p amcache_tln | find "public\downloads" /i

An alternative is to run the plugin, output all of the entries to a file, and then use the type command to search for specific entries:

rip -r amcache.hve -p amcache_tln > amcache.txt
type amcache.txt | find "public\downloads" /i

Either of these methods will allow you to minimize the data that's incorporated into a timeline and create overlays, or simply create micro-timelines solely from data within the AmCache.hve file.

Oh, and hey...more information on language ID codes can be found here and here.

Addendum: Additional Sources
So, I'm not the first one to mention the use of AmCache.hve entries to illustrate program execution...others have previously mentioned this artifact category:
Digital Forensics Survival Podcast episode #020
Willi's parser