Monday, June 11, 2012

Timeline Analysis

As I've been preparing for our upcoming timeline analysis course, I've been putting some work into updating the tools that I use for creating timelines, which are also provided to attendees for their use, along with the other course materials.  Some of the updates are intended to bring a new level of capability to the analyst, and to really illustrate the power that timelines bring to an analysis.

One of the things Rob Lee has talked about in his timeline analysis courses is the idea of "pivot points", events within a timeline that are likely to serve as anchor points for your analysis, or for whatever it is you're interested in determining.  I've had some conversations with folks at work and some extensive email exchanges with Corey Harrell lately, both of which have involved determining what some of these pivot or anchor points might be.  One place from which I've obtained pivot points is the initial triage phone call with the customer; maybe the customer had logged into a system and found that, at some point, WinRAR had been installed.  Or maybe they saw a pop-up from the AV application.  Or maybe banking or credit card fraud was reported by the bank to have occurred on a certain date, so you know that access to the system had to have occurred prior to that date.  The point is that we usually have some piece of information that leads us to the decision to create a timeline; after all, we wouldn't simply create a timeline "...because that's what we've always done."  I say, nay, nay...we're not likely to index an entire image unless we're planning to perform a keyword search, and we're equally unlikely to create a timeline unless we have a good reason for doing so.

My point is this...when we sit down to analyze an acquired image, how many times have we opened the image in our commercial analysis framework and just started poking around aimlessly?  The answer is probably...far too often.  What if I had a timeframe ("prior to April 5th"), or better yet, a specific event ("WinRAR was installed") that led me to create a timeline?  I would then have a specific point within the timeline where I could begin my analysis.

A Brief Word About Goals...
During the recent Intro to Windows Forensic Analysis course, one of the attendees asked me, "How do you determine the goals of your analysis if your customer doesn't even know them?"  Well, the fact of the matter is that they do have goals, even if they don't realize it...as an analyst, it's your job to work with them and draw those goals out.  Start with something simple, such as why the customer called you in the first place, or what led them to identify a system that they want you to acquire and analyze; then you just need to determine...analyze it for what?  Sometimes, this can involve something (AV alert, pop-up, etc.) having occurred on a specific date, and that's a start...you'll at least know that you're looking for an event that occurred on a certain date, and that becomes your initial anchor point.

And Now, Back To Timelines...
When I'm building a timeline, I like to use LogParser to extract data from the Windows Event Log (*.evtx) files.  I then use a Perl script, evtxparse.pl (provided as a standalone Windows executable), to transition the resulting .csv format logs into TLN format for inclusion in the timeline.  One of the tool updates I've completed recently is to add an event ID mapping capability to evtxparse.pl: as it parses the output of LogParser, it checks each event against a lookup table, keyed on event source and event ID pairs, to determine the type or category of the event, and adds an identifier or tag to the event description.  For example, there are event records that tell you when a program has been installed, removed, or launched.  There are event records that tell you when the system has been connected to a wireless access point.  And there are a LOT of event records that indicate login attempts, either locally or remotely.  As I was working my way through an analysis, I thought that it might be useful to be able to see at a glance what I was looking for...login events, program execution, etc.
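
Just to illustrate the idea, here's a minimal sketch of the tagging logic...this is not the actual evtxparse.pl code, and the %event_map contents, field order, and output handling are assumptions for the example only:

#! c:\perl\bin\perl.exe
# Minimal sketch of the tagging idea (NOT evtxparse.pl itself): check each
# event's source/event ID pair against a lookup table and prepend a tag
use strict;

# hypothetical lookup table; the real one is read from a flat text file
my %event_map = ("Service Control Manager/7035" => "[Service Control]",
                 "Microsoft-Windows-TerminalServices-LocalSessionManager/21" => "[Session Logon]");

while (<>) {
  chomp;
  next if ($. == 1);   # skip the LogParser CSV header row
  # assumed CSV field order: RecordNumber, time, event ID, source, computer, SID, strings
  my ($rec,$time,$id,$source,$computer,$sid,$strings) = split(/,/,$_,7);
  my $tag = $event_map{$source."/".$id} || "";
  # TLN-style output (time|source|system|user|description); a real script
  # would convert $time to a Unix epoch value first
  print $time."|EVTX|".$computer."|".$sid."|".$tag." ".$source."/".$id." - ".$strings."\n";
}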

Note: While I refer to the tools as Perl scripts, I also provide course attendees with copies of the scripts compiled into Windows executables via Perl2Exe.

The event ID mapping file is a flat text file, and in my modifications to the evtxparse.pl script, I included the ability to add comments to the file.  As part of my research, I've included links to Microsoft resources (as comments) that identify what certain events mean; so, it's not me saying that a particular event means that a program was installed, it's Microsoft stating that that's what the event identifies, and I provide a link to a vendor resource that analysts can use to validate that information.  This provides a great facility for an analyst to not only easily research the event, but also add their own event identifiers.  I've also taken this a step further by adding similar identifiers to the TLN output of other tools, including the RegRipper plugins and other data parsers that are provided along with the course materials.
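
Purely as an illustration (the file provided with the course materials may use a different layout), a mapping file of this sort might look something like the following, with the vendor reference included as a comment above each entry:

# eventmap - "source/event ID" pairs mapped to tags; "#" lines are comments
#
# Per the MS resource describing this event, ID 7035 from the Service
# Control Manager source indicates a start/stop control was sent to a service
Service Control Manager/7035,[Service Control]
#
# Remote Desktop Services session logon succeeded (see the corresponding
# MS/TechNet page for this event)
Microsoft-Windows-TerminalServices-LocalSessionManager/21,[Session Logon]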

Another potential means for identifying pivot or anchor points for your analysis is to add an additional layer of filtering to the tools.  For example, I wrote a RegRipper plugin that replicates Mandiant's shimcache.py Python script, and we know from the published research that the entries identify programs that had been executed on the system.  Now, what if we were to not only tag these entries in our timeline as identifying program execution, but also scan each entry and identify those in particular that were run from a directory that includes the word "temp" in the path (such as "Local Settings\Temp" or "Temporary Internet Files")?  With all of the available data in a timeline, adding tags to identify pivot or anchor points in this way would likely be extremely useful.
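
As a rough sketch of that extra check (hypothetical data structures and placeholder values, not the actual plugin code), the filtering could be as simple as a single regex test against each parsed entry:

#! c:\perl\bin\perl.exe
# Sketch only - assumes the AppCompatCache entries have already been parsed
# into a list of hashes with 'path' and 'modtime' keys (illustrative data below)
use strict;

my $system  = "SYSTEM_NAME";   # placeholder TLN system field
my @entries = ({path => 'C:\Users\user\AppData\Local\Temp\installer.exe', modtime => 1339200000});

foreach my $e (@entries) {
  my $tag = "[Program Execution]";
  # flag entries run from a "temp" directory as likely pivot points
  $tag .= " [Temp Path]" if ($e->{path} =~ m/temp/i);
  print $e->{modtime}."|REG|".$system."||".$tag." ".$e->{path}."\n";
}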

This isn't anything new, and I'm not the only one to look at things this way.  At the SANS360 event last year, Rob Lee spent his 6 minutes talking about an Excel macro he had created to color-code events in a similar manner, so that they could be easily identified in the output of log2timeline.  Rob's also created a poster of these event categories.

Tools
Some analysts have asked me about the timeline analysis course that we're offering, and why I don't use other, perhaps more popular tools when I perform my analysis.  I'm not against the use of other tools; in fact, if you have the time and interest, I strongly encourage you to use multiple tools to look at data.  Creating my own tools serves two purposes: it forces me to better understand the actual data and how it can be useful, and it allows me a finer, more granular level of access to that data.  Sometimes, I don't want a full timeline...in fact, I may not even want a timeline created from just one source.  Other times, I may not have a complete image to work with; rather, I will have a selected set of files from a system.  I conduct various levels of analysis using a selected set of files in cases where it takes far too long to obtain and ship a full image file, when working with compromised systems that may contain sensitive data or illicit images, or when working to assist another analyst, to name but a few instances.  Or, based on the questions that the customer has, I may want a timeline created solely from a subset of one data source, such as a timeline of remote logins and from where they originated.  If that were the case, I might use a command line such as the following:

Logparser -i:evt -o:csv "Select RecordNumber,TO_UTCTIME(TimeGenerated),EventID,SourceName,ComputerName,SID,Strings from Log.evtx" | find "TerminalServices-LocalSessionManager/XX" > remote_logins.csv

Now that I have just the remote login messages, I can (a) parse these into a timeline, and (b) quickly write up a Perl script that will run through the information in the .csv file and provide me with a list of unique IP addresses.
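
Something as quick and dirty as the following would do it...a sketch that assumes the source IP address appears somewhere in the Strings field of remote_logins.csv:

#! c:\perl\bin\perl.exe
# Sketch: pull unique IPv4 addresses out of remote_logins.csv
use strict;

my %ips = ();
open(FH,"<","remote_logins.csv") || die "Could not open file: $!\n";
while (my $line = <FH>) {
  # grab anything that looks like an IPv4 address, anywhere on the line
  foreach my $ip ($line =~ m/\b(\d{1,3}(?:\.\d{1,3}){3})\b/g) {
    $ips{$ip} = 1;
  }
}
close(FH);
print $_."\n" foreach (sort keys %ips);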

Again, I'm not averse to using other tools, and I definitely wouldn't advocate against their use.  This is simply how I prefer to go about creating timelines, and I think that it serves as an excellent foundation from which to teach timeline creation and analysis.
