Category Archives: Intrusion Analysis

Digital Forensics – PlugX and Artifacts left behind

When an attacker conducts an intrusion using technique A, B or C, some of his actions leave artifacts X, Y or Z behind. So, based on the scenario from the last article about PlugX, I collected a disk image and a memory image from the domain controller. Over the past years I wrote several articles on how to perform acquisition, mounting and processing of such images and how to analyze them by creating super timelines, looking at different artifacts like Event Logs, Prefetch, ShimCache, AMCache, etc., analyzing NTFS metadata or looking for artifacts related to interactive sessions. Today, I’m not going to perform the analysis itself; instead, I’m going to give a quick overview of some of the Windows endpoint artifacts that might give us evidence about the actions executed in the previous scenario and help us produce a meaningful timeline. In addition, I list some tools that could be used to analyze those artifacts.

Scenario 1: The attacker placed the file “kas.exe” in the folder “c:\PerfLogs\Admin”. Which artifacts could record evidence about this action?

  • NTFS MFT
    • Description: The Master File Table (MFT) is a special system file that resides in the root of every NTFS partition. The file is named $MFT and is not accessible via user mode APIs, but it can be seen when you have raw access to the disk, e.g., a forensic image. This special file is a database whose records contain a series of attributes about a file or directory, indicate where it resides on the physical disk and whether it is active or inactive. The size of each MFT record is usually 1024 bytes. Each record contains a set of attributes. Some of the most important attributes in an MFT entry are $STANDARD_INFORMATION, $FILENAME and $DATA. The first two are rather important because, among other things, they contain the file time stamps. Each MFT entry for a given file or directory will contain 8 timestamps: 4 in the $STANDARD_INFORMATION and another 4 in the $FILENAME. These time stamps are known as MACE.
    • Tools: Parse and analyze it with The Sleuth Kit, originally written by Brian Carrier, MFT2CSV from Joakim Schicht, or PLASO/log2timeline, originally created by Kristinn Gudjonsson.
  • NTFS INDX Attribute
    • Description: The MFT records for directories contain a special attribute called $I30. This attribute contains information about the file names and directories that are stored inside a directory. This special attribute is also known as $INDX and consists of three attributes: $INDEX_ROOT, $INDEX_ALLOCATION and $BITMAP. So what? Well, this attribute stores information in a B-tree data structure that keeps data sorted so the operating system can perform fast searches to determine if a file is present. In addition, this attribute grows to keep track of the file names inside the directory. However, when you delete a file from a directory, the B-tree re-balances itself, but the tree node with metadata about the deleted file remains in the form of slack space until it gets reused. This means we can view the $I30 attribute contents and might find evidence of files that once existed in a directory but are no longer there.
    • Tools: Parse and analyze it with INDXParse from William Ballenthin or MFT2CSV from Joakim Schicht. A small example of enumerating a directory’s entries, including deleted names, is sketched after this list.
  • NTFS $LogFile
    • Description: NTFS has been developed over the years with many features in mind, one of them being data recovery. One of the features NTFS uses to perform data recovery is journaling. The NTFS journal is kept inside the NTFS metadata in a file called $LogFile, which is stored in MFT entry number 2. Every time there is a change in the NTFS metadata, a transaction is recorded in the $LogFile so that file system operations can be redone or undone. After the transaction has been logged, the file system can perform the change. When the change is done, another transaction is logged in the form of a commit. The $LogFile allows the file system to recover from metadata inconsistencies such as transactions that don’t have a commit. The size of the $LogFile can be queried and changed using “chkdsk /L” and by default is 65536 KB. Why would the $LogFile be important for our investigation? Because it keeps a record of all operations that occurred in the NTFS volume, such as file creation, deletion, renaming, copy, etc. Therefore, we might find relevant evidence in there.
    • Tools: Parse it and analyze it with LogFileParser from Joakim Schicht
  • NTFS $UsnJrnl
    • Description: The change journal ($UsnJrnl) records changes made to files and directories on an NTFS volume and contains a wealth of information that shouldn’t be overlooked. Another interesting aspect of the change journal is that it allocates and deallocates space as it grows, and its records are not overwritten, unlike the $LogFile. This means we can find old journal records in unallocated space on an NTFS volume. How to obtain those? Luckily, the USN Record Carver tool written by PoorBillionaire can carve journal records from binary data and thus recover these records.
    • Tools: Parse and analyze it with UsnJrnl2Csv from Joakim Schicht or from unallocated space with USN Record Carver from PoorBillionaire.
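
Tying these NTFS artifacts together, a quick way to check a specific directory such as “c:\PerfLogs\Admin” for current and deleted entries is to walk it with The Sleuth Kit Python bindings (pytsk3). This is a minimal sketch, not a full parser; the image path and partition offset are assumptions you would adjust to your own evidence.

```python
import datetime
import pytsk3

IMAGE_PATH = "dc01-disk.dd"          # assumption: raw image of the domain controller
PARTITION_OFFSET = 1048576           # assumption: NTFS volume starts at 1 MiB

img = pytsk3.Img_Info(IMAGE_PATH)
fs = pytsk3.FS_Info(img, offset=PARTITION_OFFSET)

def ts(epoch):
    """Render a TSK epoch timestamp as UTC, or '-' when not set."""
    return datetime.datetime.utcfromtimestamp(epoch).isoformat() if epoch else "-"

# Iterate the directory index ($I30); unallocated entries are names of files
# that were deleted but whose index records still linger in slack space.
for entry in fs.open_dir(path="/PerfLogs/Admin"):
    name = entry.info.name.name.decode("utf-8", "replace")
    if name in (".", ".."):
        continue
    deleted = bool(entry.info.name.flags & pytsk3.TSK_FS_NAME_FLAG_UNALLOC)
    meta = entry.info.meta
    created = ts(meta.crtime) if meta else "-"
    modified = ts(meta.mtime) if meta else "-"
    print(f"{name:30} deleted={deleted} created={created} modified={modified}")
```

Running this against the image should surface “kas.exe”, or its leftover index entry if it was later removed, which you can then pivot on in the $LogFile and $UsnJrnl records.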

Scenario 2: Which account did the attacker use to log into the system when he placed “kas.exe” on the file system?

  • Windows Event Logs
    • Description: The Windows Event Logs record activities of the operating system and its applications. What is logged depends on the audit features that are turned on, which impacts the information one can obtain. From a forensic perspective the Event Logs capture a wealth of information. The three main Windows Event Logs are Application, System and Security, and on Windows Vista and beyond they are saved in %SystemRoot%\System32\winevt\Logs in a binary format. For example, Event IDs 4624 (successful logon) and 4625 (failed logon) might give us answers.
    • Tools: Parse and analyze it with PLASO/log2timeline, LibEvtx-utils from Joakim Schicht, python-evtx from William Ballenthin or Event Log Explorer. You will likely get better results if your environment has consistent and enhanced audit policy settings defined that track both successes and failures. In case the attacker deletes the Windows Event Logs, there is the possibility to recover Windows Event Log records from the pagefile.sys, from unallocated space, from Volume Shadow Copies or even from system memory. You could use EVTXtract from Willi Ballenthin to attempt to recover Event Log records from raw data. A minimal parsing sketch follows below.
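
As a minimal sketch of what that parsing could look like with python-evtx, the snippet below pulls logon events (4624/4625) out of an exported Security.evtx and prints the account and source address fields. The field extraction is a simple regex over the record XML rather than a full schema-aware parser, and the file path is an assumption.

```python
import re
from Evtx.Evtx import Evtx

EVTX_PATH = "Security.evtx"          # assumption: exported from the domain controller
LOGON_EVENTS = ("4624", "4625")      # successful and failed logons

def data(xml, name):
    """Pull a single EventData value out of the record XML (best effort)."""
    m = re.search(r'<Data Name="%s">([^<]*)</Data>' % name, xml)
    return m.group(1) if m else "-"

with Evtx(EVTX_PATH) as log:
    for record in log.records():
        xml = record.xml()
        event_id = re.search(r"<EventID[^>]*>(\d+)</EventID>", xml)
        if not event_id or event_id.group(1) not in LOGON_EVENTS:
            continue
        print(event_id.group(1),
              data(xml, "TargetUserName"),
              data(xml, "LogonType"),
              data(xml, "IpAddress"),
              data(xml, "WorkstationName"))
```

Filtering the output around the time “kas.exe” appeared on disk narrows down which account and source host were used for the logon.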

Scenario 3: The attacker executed the “kas.exe” binary. Which artifacts might record this evidence?

  • Windows Prefetch / Superfetch
    • Description: To improve customer experience, Microsoft introduced a memory management technology called Prefetch. This functionality was introduced in Windows XP and Windows Server 2003. This mechanism analyses the applications that are most frequently used and preloads them in advance in order to speed up operating system booting and application launching. On Windows Vista, Microsoft enhanced the algorithm and introduced SuperFetch, which is an improved version of Prefetch. The Prefetch files are stored in the %SYSTEMROOT%\Prefetch directory and have a .pf extension. The SuperFetch files have a .db extension. Prefetch files keep track of programs that have been executed on the system even if the original file is no longer present. In addition, Prefetch files can tell you when the program was executed, how many times and from which path.
    • Tools: PLASO/log2timeline, Windows-Prefetch-Parser from Adam Witt, Prefetch Parser from Eric Zimmerman. For Superfetch you could use SuperFetch tools.
  • ShimCache either from Registry or from Kernel Memory
    • Description: Microsoft introduced the ShimCache in Windows 95 and it remains today a mechanism to ensure backward compatibility of older binaries on new versions of Microsoft operating systems. When new Microsoft operating systems are released, some old and legacy applications might break. To fix this, Microsoft has the ShimCache, which acts as a proxy layer between the old application and the new operating system. A good overview of the ShimCache is available in the article written by Tim Newton on the Microsoft Blog, “Demystifying Shims – or – Using the App Compat Toolkit to make your old stuff work with your new stuff“. The interesting part, from a forensics perspective, is that the ShimCache is valuable because it tracks metadata for binaries that were executed and stores it in the cache.
    • Tools: From kernel memory, you can parse and analyze it with the Volatility ShimCache and ShimCacheMem plugins. From the registry you can use ShimCacheParser (https://github.com/mandiant/ShimCacheParser). You can also use RegRipper from Harlan Carvey or AppCompatCacheParser from Eric Zimmerman. In addition, to analyze ShimCache artifacts at scale you can use AppCompatProcessor from Matias Bevilacqua.
  • AMCache
    • Description: On Windows 8, Amcache.hve replaced the RecentFileCache.bcf file, a registry file used in Windows 7 as part of the Application Experience and Compatibility feature to ensure compatibility of existing software between different versions of Windows. Similar to its predecessor, Amcache.hve is a small registry hive that stores a wealth of information about recently run applications and programs, including full path, file timestamps, and file SHA1 hash value. Amcache.hve is commonly found at the following location: C:\Windows\AppCompat\Programs\Amcache.hve. The Amcache.hve file is standard within the Windows 8 operating system, but has been found to exist on Windows 7 systems as well.
    • Tools: To read the Amcache hive you could use RegRipper, Willi Ballenthin’s stand-alone script or Eric Zimmerman’s AmcacheParser. To analyze AMCache artifacts at scale you can use AppCompatProcessor from Matias Bevilacqua. A small parsing sketch is included after this list.
  • Windows Event Logs. 
    • The Windows Event Logs – for example Event ID 4688 (process creation) – could track binary execution if you have the proper audit settings or you use Sysmon.
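
As a small illustration of pulling execution evidence for “kas.exe” out of the Amcache hive with Willi Ballenthin’s python-registry module, the sketch below walks the Root\File subtree. The value IDs used (“15” for full path, “101” for SHA1) are assumptions based on published Amcache research and may differ between Windows builds, so treat this as a starting point rather than a reference parser.

```python
from Registry import Registry

AMCACHE_PATH = "Amcache.hve"    # assumption: exported from C:\Windows\AppCompat\Programs
TARGET = "kas.exe"

reg = Registry.Registry(AMCACHE_PATH)
file_root = reg.open("Root\\File")

# Root\File holds one subkey per volume GUID, each containing one entry per file.
for volume in file_root.subkeys():
    for entry in volume.subkeys():
        values = {v.name(): v.value() for v in entry.values()}
        full_path = values.get("15", "")
        if TARGET.lower() not in str(full_path).lower():
            continue
        print("Path   :", full_path)
        print("SHA1   :", values.get("101", "-"))   # usually prefixed with four zero characters
        print("KeyTime:", entry.timestamp())        # last write time of the registry key
```

A hit here gives you the path, a SHA1 you can search for on other systems or in threat intelligence, and a timestamp to anchor in the timeline.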

Scenario 4: The execution of “kas.exe” dropped three files on disk that used DLL Search Order Hijacking to achieve persistence and install the malicious payload. Which artifacts might help identifying this technique?

Identifying evidence of DLL Search Order Hijacking is not easy if no other leads are available. You will likely need a combination of artifacts. The following artifacts and tools might help; a small correlation heuristic is sketched after the list.

  • NTFS MFT, INDX, $LogFile, $UsnJrnl.
  • Prefetch / SuperFetch.
  • ShimCache either from Registry or from Kernel Memory.
  • AMCache.
  • Windows Event Logs could track process execution and give you leads if you have the proper audit settings or you use Sysmon.
  • Volatility to perform memory analysis.
  • RegRipper – One thing you could try, among the many others that this powerful tool allows, is to identify the different persistence mechanisms that could have resulted from the DLL Search Order Hijacking technique.
  • AppCompatProcessor to analyze ShimCache and AMCache at scale combined with PlugX signatures.
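
Since PlugX typically drops a legitimate signed executable, a same-named DLL and an encrypted payload into one directory, a simple heuristic is to look for directories outside the usual program locations where an .exe and a .dll were created within seconds of each other. The sketch below runs that check over a CSV file listing (for example, one exported from an MFT parser); the column names and path whitelist are assumptions you would map to your own tooling and environment.

```python
import csv
from collections import defaultdict
from datetime import datetime
from itertools import product

CSV_PATH = "filelist.csv"                    # assumption: columns "path" and "created" (ISO 8601)
NORMAL_PREFIXES = ("c:\\windows\\", "c:\\program files")   # assumption: tune per environment
WINDOW_SECONDS = 10

by_dir = defaultdict(lambda: {"exe": [], "dll": []})
with open(CSV_PATH, newline="") as fh:
    for row in csv.DictReader(fh):
        path = row["path"].lower()
        if path.startswith(NORMAL_PREFIXES):
            continue
        folder, _, name = path.rpartition("\\")
        if name.endswith(".exe") or name.endswith(".dll"):
            by_dir[folder][name[-3:]].append((name, datetime.fromisoformat(row["created"])))

for folder, files in by_dir.items():
    for (exe, t_exe), (dll, t_dll) in product(files["exe"], files["dll"]):
        if abs((t_exe - t_dll).total_seconds()) <= WINDOW_SECONDS:
            print(f"possible side-loading triad in {folder}: {exe} + {dll}")
```

Hits from this heuristic are only leads; each flagged directory still needs to be reviewed against the execution artifacts listed above.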

Scenario 5: The PlugX dropped files have their NTFS timestamps manipulated, i.e., the malware copies the timestamps from the operating system file ntdll.dll and uses them to set the timestamps on the dropped files. What artifacts could be used to detect this?

The time modification will cause a discrepancy between the NTFS $STANDARD_INFORMATION and $FILENAME timestamps. You could combine the NTFS artifacts with the execution artifacts to spot such anomalies. Another technique is to use AppCompatProcessor, which has a TimeStomp functionality that searches for appcompat entries outside of the Windows, System and SysWOW64 folders with last modification dates matching a list of known operating system files.
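
A minimal way to hunt for that discrepancy is to compare the two sets of creation timestamps in a parsed MFT export. The sketch below assumes a CSV produced by an MFT parser with columns named "filename", "si_created" and "fn_created" in ISO 8601 format; adjust the names to whatever your parser actually emits.

```python
import csv
from datetime import datetime

CSV_PATH = "mft_timestamps.csv"     # assumption: export from your MFT parser of choice

with open(CSV_PATH, newline="") as fh:
    for row in csv.DictReader(fh):
        si = datetime.fromisoformat(row["si_created"])
        fn = datetime.fromisoformat(row["fn_created"])
        # $FILENAME times are set by the kernel and are hard to modify from user mode,
        # so a $STANDARD_INFORMATION creation time earlier than $FILENAME is suspicious.
        if si < fn:
            print(f"{row['filename']}: $SI created {si} precedes $FN created {fn} by {fn - si}")
```

The same comparison applied to the PlugX dropped files should flag them, since their $STANDARD_INFORMATION times were copied from ntdll.dll while the $FILENAME times reflect the real drop time.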

Scenario 6: The attacker used the PlugX controller to invoke a command shell and execute Windows built-in commands. Are there any artifacts left behind that could help understand the commands executed?

  • ShimCache either from Registry or from Kernel Memory.
  • Memory analysis with Volatility: look for process creation and console history using the cmdscan and consoles plugins (a small wrapper sketch follows this list).
  • The Windows Event logs could track process execution if you have the proper audit settings or you use Sysmon.
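
A small wrapper for the memory side, assuming Volatility 2 is installed as vol.py and that you know the image profile, could look like the sketch below; both the image name and the profile are assumptions.

```python
import subprocess

MEMORY_IMAGE = "dc01-memory.raw"      # assumption
PROFILE = "Win2008R2SP1x64"           # assumption: confirm with the imageinfo/kdbgscan plugins

for plugin in ("pslist", "cmdline", "cmdscan", "consoles"):
    result = subprocess.run(
        ["vol.py", "-f", MEMORY_IMAGE, "--profile=" + PROFILE, plugin],
        capture_output=True, text=True)
    with open(f"{plugin}.txt", "w") as out:
        out.write(result.stdout)
    print(f"[+] {plugin}: {len(result.stdout.splitlines())} lines written to {plugin}.txt")
```

The cmdscan and consoles output in particular can recover command history typed into the cmd.exe instance that PlugX spawned.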

Scenario 7: The attacker established a persistence mechanism using either a service or a registry key.

  • Producing a timeline of the registry would help identify the last modification dates of the registry keys. You could use RegRipper from Harlan Carvey or RECmd from Eric Zimmerman. The Windows Event Logs would also help in case a service was created on the operating system. For example, Event IDs 7009, 7030, 7035, 7036, 7040, 7023 or 7045 could help. In addition, to list the services and their properties you could perform memory analysis with Volatility or use RegRipper.
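
As a minimal sketch of the registry side, the snippet below uses python-registry to walk the Services key of an exported SYSTEM hive and print each service’s key last-write time and image path, so recently created or modified services stand out. The hive path and the choice of ControlSet001 are assumptions.

```python
from Registry import Registry

SYSTEM_HIVE = "SYSTEM"                            # assumption: exported from the target system
reg = Registry.Registry(SYSTEM_HIVE)
services = reg.open("ControlSet001\\Services")    # assumption: check Select\Current for the live set

rows = []
for svc in services.subkeys():
    try:
        image = svc.value("ImagePath").value()
    except Registry.RegistryValueNotFoundException:
        image = ""
    rows.append((svc.timestamp(), svc.name(), image))

# Newest key write times first: a service created by the attacker should float to the top.
for when, name, image in sorted(rows, reverse=True)[:20]:
    print(when, name, image)
```

Cross-referencing the newest entries with Event IDs 7045/7036 from the System log quickly confirms whether a service-based persistence mechanism was installed.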

Scenario 8: The attacker accessed the Active Directory database using the “ntdsutil.exe” command. What could be used to detect this activity?

  • As we saw previously, command execution could be identified using the ShimCache, either from the registry or from kernel memory. Because “ntdsutil.exe” would be executed on a server system, Prefetch won’t help here because it’s not enabled by default on server systems. One of the most useful artifacts would be the Windows Event Logs, but you need to have the right settings so they track binary execution and the interactions with the Active Directory. In case the memory image was acquired not long after the attacker activity, performing memory analysis and creating a timeline of the artifacts with Volatility might help identify the process creation and its parent(s). In addition, you might get interesting leads just by running strings (little and big endian) on the pagefile.sys. Other than that, the way “ntdsutil.exe” was executed in this scenario leaves artifacts behind in the NTFS metadata.
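
For the strings-on-pagefile idea, a small sketch that searches pagefile.sys for “ntdsutil” in both ASCII and UTF-16LE (the two encodings you would otherwise cover with strings runs) could look like this; the pagefile path is an assumption and the context size is arbitrary.

```python
NEEDLES = [b"ntdsutil", "ntdsutil".encode("utf-16-le")]
PAGEFILE = "pagefile.sys"            # assumption: extracted from the mounted image
CHUNK = 64 * 1024 * 1024
CONTEXT = 64

with open(PAGEFILE, "rb") as fh:
    offset = 0
    tail = b""
    while True:
        chunk = fh.read(CHUNK)
        if not chunk:
            break
        data = tail + chunk
        for needle in NEEDLES:
            pos = data.find(needle)
            while pos != -1:
                hit = data[pos:pos + len(needle) + CONTEXT]
                print(hex(offset + pos - len(tail)), hit)
                pos = data.find(needle, pos + 1)
        tail = data[-64:]            # overlap so hits spanning chunk boundaries are not lost
        offset += len(chunk)
```

Hits often include surrounding command-line fragments that directly corroborate how the Active Directory database was copied.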

That’s it for today. With this article I presented a quick listing of some artifacts and tools that can help you perform forensic analysis on a system and answer your investigative questions. Many other tools and artifacts would be available depending on the attacker activities, for example if the attacker logged into a system interactively, but the ones listed might give you a starting point and might help you understand what happened and when. One thing that would greatly complement the findings of a system forensic analysis is network data, such as Firewall, Router, IDS or Proxy logs or any other kind of networking logs you might have. Especially if the attacker is using a C2 and is clearing evidence, such as the threat group that used a file named “a.bat” to clean several artifacts as illustrated in the “Paranoid PlugX” article written by Tom Lancaster and Esmid Idrizovic from Unit 42.

Happy hunting, and if you have dealt with a security incident where PlugX was used, please leave your comments about the tools or techniques you used to detect it.


Malware Analysis – PlugX – Part 2

Following my previous article on PlugX, I would like to continue the analysis, but now using the PlugX controller to mimic some of the steps that might be executed by an attacker. As you know, the traditional steps of an attack lifecycle normally follow a predictable sequence of events, i.e., reconnaissance, initial compromise, establish foothold, escalate privileges, internal reconnaissance, move laterally, maintain persistence, complete mission. For the sake of brevity I will skip most of the steps and will focus on the lateral movement. I will use the PlugX controller and C2 functionality to simulate an attacker that established a foothold inside an environment and obtained admin access to a workstation. Following that, the attacker moved laterally to a Windows Domain Controller. I will use the PlugX controller to accomplish this scenario and observe how an attacker would operate within a compromised environment.

As we saw previously, the PlugX controller interface allows an operator to build payloads, set campaigns and define the preferred method for the compromised hosts to check in and communicate with the controller. In the PlugX controller, English version from Q3 2013, an operator can build the payload using two techniques. One is the “DNS Online” technique, which allows the operator to define the C2 address, e.g., a URL or IP address, that will be used by the payload to speak with the C2. The other method is “Web Online”, which allows the operator to tell the payload from where it should fetch the C2 address. This method allows the operator to have more control over the campaign. The following diagram illustrates how the “Web Online” technique works.

Why do I say this technique allows an attacker to have more control? Consider the case of an organization compromised by a threat actor using this PlugX technique. If the C2 is discovered, the impacted organization could block the IP or URL on the existing boundary security controls as a normal reaction to the concern of having an attacker inside the network. However, the attacker could just change the C2 string and point it to a different system. If the organization was not able to scope the incident and understand the TTPs (Tactics, Techniques and Procedures), then the attacker would still maintain persistence in the environment. This is an example of why, when conducting incident response, among other things, you need visibility into the tools and techniques the attacker is using, so you can properly scope the incident and perform effective and efficient containment and eradication steps. As an example of this technique, below is a screenshot from a GitHub page that has been used by an unknown threat actor to leverage this method.

So, how to leverage this technique in the PlugX builder? The picture below shows how the operator could create a payload that uses the “Web Online” technique. The C2 address would be fetched from a specified site, e.g. a Pastebin address, which in turn would redirect the payload to the real C2 address. The string “DZKSAFAAHHHHHHOCCGFHJGMGEGFGCHOCDGPGNGDZJS” in this case decodes to the real C2 address, which is “www.builder.com”. In the “PlugX: some uncovered points” article, Fabien Perigaud writes about how to decode this string. Palo Alto Unit42 gives another example of this technique in the “Paranoid PlugX” article. The article “Winnti Abuses GitHub for C&C Communications” from Cedric Pernet illustrates an APT group leveraging this technique using GitHub.

For the sake of simplicity, in this article I’m going to use the DNS Online technique with “www.builder.com” as the C2 address. Next, on the “First” tab I specify the campaign ID and the password used by the payload to connect to the C2.

Next, on the Install tab I specify the persistence settings. In this case I’m telling the payload to install a service, and I can specify different settings including where to deploy the binaries, the service name and the service description. In addition, I can specify that if the service persistence mechanism fails for some reason, the payload should install the persistence mechanism using the registry, and I can specify which hive should be used.

Then, in the Inject tab I specify which process will be injected with the malicious payload. In this case I choose “svchost.exe”. This will make PlugX start a new instance of “svchost.exe” and then inject the malicious code into the svchost.exe process address space using the process hollowing technique.

Other than that, the operator could define a schedule and determine which time of the week the payload should communicate with the C2. Also the operator could define the Screen Recording capability that will take screenshots at a specific frequency and will save them encrypted in a specific folder.

The last settings, on the “Option” tab, allow the operator to enable the keylogger functionality and specify whether the payload should hide itself and also delete itself after execution.

Finally, after all the settings are defined, the operator can create/download the payload in different formats: an executable, binary form (shellcode), or an array in C that can then be plugged into another delivery mechanism, e.g., PowerShell or MsBuild. After deploying and installing the payload on a system, that system will check in to the PlugX controller and an operator can call the “Manager” to perform different actions. In this example I show how an attacker, after having compromised a system, uses the C2 interface to:

  • Browse the network

  • Access remote systems via UNC path

  • Upload and execute a file e.g., upload PlugX binary

  • Invoke a command shell and perform remote commands e.g., execute PlugX binary on a remote system

The previous pictures illustrate actions that the attacker could perform to move laterally and, for example, at some point in time, access a domain controller via UNC path, upload the PlugX payload to a directory of his choice and execute it. In this case the pictures show that the PlugX payload was dropped into the c:\PerfLogs\Admin folder and then executed using WMI. The example below shows the attacker’s view with two C2 sessions: one for a workstation and another for a domain controller.

Having access to a domain controller is likely one of the goals of the attacker so he can obtain all the information he needs about an organization from the Active Directory database.

To access the Active Directory database, the attacker could, for example, run the “ntdsutil.exe” command to create a copy of the “NTDS.dit” file using the Volume Shadow Copy technique. Then, the attacker can access the database and download it to a system under his control using the PlugX controller interface. The picture below illustrates how the attacker obtained the relevant data that was produced by the “ntdsutil.exe” command.

Finally, the attacker might delete the artifacts that were left behind on the file system as a consequence of running “ntdsutil.exe”.

So, in summary, we briefly looked at the different techniques a PlugX payload can be configured with to speak with a command and control server. We built, deployed and installed a payload, compromised a system and obtained the PlugX operator’s perspective. We then moved laterally to a domain controller, installed the PlugX payload and used a command shell to obtain the Active Directory database. Of course, as you noted, the scenario was accomplished with an old version of the PlugX controller. Newer versions likely have many new features and capabilities. For example, the screenshot below is from a PlugX builder from 2014 (MD5: 534d28ad55831c04f4a7a8ace6dd76c3) which can create different payloads that perform DLL Search Order Hijacking using Lenovo’s RGB LCD Display Utility for ThinkPad (tplcdclr.exe) or Steve Gibson’s Domain Name System Benchmarking Utility (sep_NE.exe). The article from Kaspersky “PlugX malware: A good hacker is an apologetic hacker” outlines a summary of it.

That’s it! With this article we set the table for the next article, focusing on artifacts that might help us uncover the hidden traits left behind by the attacker’s actions during this scenario. Stay tuned and have fun!



Threat Hunting in the Enterprise with AppCompatProcessor

Last April, at the SANS Threat Hunting and IR Summit, among other things, there was a new tool and technique released by Matias Bevilacqua. Matias’s presentation was titled “ShimCache and AmCache enterprise-wide hunting, evolving beyond grep” and he released the tool AppCompatProcessor. Matias also wrote a blog post “Evolving Analytics for Execution Trace Data” with more details.

In this article, I want to go over a quick exercise on how to use this tool and expand the existing signatures. First, let me say that, in case you have a security incident and you are doing enterprise incident response, or you are performing threat hunting as part of your security operations duties, this is a fantastic tool that you should become familiar with and have in your toolkit. Why? Because it allows security teams to digest, parse and analyze, at scale, two forensic artifacts that are very useful. These forensic artifacts are part of the Windows Application Experience and Compatibility features and are known as the ShimCache and the AMCache.

To give you more context, the ShimCache can be obtained from the registry and from it we can obtain information about all executable binaries that have been executed on the system since it was rebooted. Furthermore, it tracks each binary’s size and last modified date. In addition, the ShimCache tracks executables that have not been executed but were browsed, for example through explorer.exe. This makes it a valuable source of evidence, for example to track executables that were on the system but weren’t executed – consider an attacker that used a directory on a system to move around his toolkit. The AMCache is stored in a file and from it we can retrieve information for every executable that ran on the system, such as the path, the last modified and created timestamps, the SHA1 hash and PE properties. You can read more about those two artifacts in the article I wrote last year.

So, I won’t go over how to acquire this data at scale – feel free to share your technique in the comments – but AppCompatProcessor digests data that has been acquired by ShimCacheParser.py, Redline and MIR, and also consumes raw ShimCache and AMCache registry hives. I will go directly to the features. At the time of this writing the tool version is 0.8 and one of the features I would like to focus on today is the search module. This module allows us to search for known bad using regex expressions. The search module was coded with performance in mind, which means the regex searches are quite fast. By default, the tool includes more than 70 regex signatures for all kinds of interesting things an analyst will look for when performing threat hunting. Signatures include searching for dual-usage tools like psexec, looking for binaries in places where they shouldn’t normally be, commonly named credential dumpers, etc. The great thing is that you can easily include your own signatures. Just add a regex line with your signature!

For this exercise, I want to use the search module to search for binaries that are commonly used by the PlugX backdoor family and friends. This backdoor is commonly used by different threat groups in targeted attacks. PlugX is also referred to as KORPLUG, SOGU or DestroyRAT and is a modular backdoor that is designed to rely on the execution of signed and legitimate executables to load malicious code. PlugX normally has three main components: a DLL, an encrypted binary file and a legitimate executable that is used to load the malware using a technique known as DLL search order hijacking. I won’t discuss the details of PlugX in this article, but you can read the white paper “PlugX – Payload Extraction” done by Kevin O’Reilly from Context, the presentation about PlugX at Black Hat Asia in 2014 given by Takahiro Haruyama and Hiroshi Suzuki, the analysis done by the Computer Incident Response Center Luxembourg and the Ahnlab threat report. With these and other reports you could start compiling information about different PlugX payloads. However, Adam Blaszczyk from Hexacorn already did that job and wrote an article where he outlines different PlugX payloads seen in the wild.

Ok, with this information, we start creating the PlugX regex signatures. Essentially we will be looking for the signed and legitimate executables, but in places where they won’t normally be. The syntax to create a new regex signature is simple and you can add your own signatures to the existing AppCompatSearch.txt file or just create a new file called AppCompatSearch-PlugX.txt, which will be consumed automatically by the tool. The figure below shows the different signatures that I produced. At the time of this writing, this is still work in progress but it is a starting point.
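
To make the idea concrete without reproducing the exact signature file from the figure, the sketch below applies a few PlugX-style checks in plain Python against a ShimCacheParser CSV export: it flags known side-loading host executables mentioned in the reports above (MsMpEng.exe, tplcdclr.exe, sep_NE.exe) whenever they appear outside their expected install paths. The column name and the expected-path lists are assumptions you would tune, and the real AppCompatSearch-PlugX.txt syntax should follow the examples shipped with AppCompatProcessor.

```python
import csv

SHIMCACHE_CSV = "shimcache.csv"     # assumption: output of ShimCacheParser.py
EXPECTED = {                        # host executables abused for DLL side-loading (paths are assumptions)
    "msmpeng.exe": ("c:\\program files\\windows defender", "c:\\program files\\microsoft security client"),
    "tplcdclr.exe": ("c:\\program files\\lenovo",),
    "sep_ne.exe": ("c:\\program files\\dns benchmark",),
}

with open(SHIMCACHE_CSV, newline="") as fh:
    for row in csv.DictReader(fh):
        path = (row.get("Path") or "").lower()       # assumption: column name "Path"
        name = path.rpartition("\\")[2]
        if name in EXPECTED and not path.startswith(EXPECTED[name]):
            print("suspicious side-loading host:", row)
```

The same filename list translates directly into regex lines for the AppCompatSearch-PlugX.txt file, which then benefits from AppCompatProcessor’s multiprocessing and reporting instead of this toy loop.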

Next step, launch AppCompatProcessor against our data set using the newly created signatures. The following picture shows what the output of the search module looks like. In this particular case the search produced 25 hits, and a nicely presented summary of the hits is displayed in a histogram. The raw dumps of the hits are saved in a file called Output.txt. As an analyst or investigator, you would look at the results and verify which ones are worth investigating further and which ones are false positives. For this exercise, there was a hit that triggered on the file “c:\Temp\MsMpEng.exe”. This file is part of the Windows Defender suite but could be used by PlugX as part of the DLL search order hijacking technique. Basically, the attacker will craft a malicious DLL named MpSvc.dll, place it in the same directory as the MsMpEng.exe file and execute MsMpEng.exe. The DLL would need to be crafted in a special way, but that is what PlugX specializes in. This will load the attacker’s code.

Following these findings, we would want to look at the system that triggered the signature and view all the entries. The picture below shows this step, where we use the dump module. The output shows all the ShimCache entries for this particular system. The entries are normally sorted in order of execution from bottom to top and, in this case, adjacent to the “c:\Temp\MsMpEng.exe” file there are several Windows built-in commands that were executed and a file named “c:\Temp\m64.exe”. This is what Matias calls a strong temporal execution correlation. It is indicative that an attacker obtained access to the system, executed several Windows built-in commands and executed a file called “m64.exe”, which is likely Mimikatz or a cousin.

Following those leads, you might want to obtain those binaries from the system and perform malware analysis in order to extract indicators of compromise such as the C&C address, look at other artifacts such as Windows Event Logs, UsnJrnl, memory, etc., and gather additional leads. In addition, you might want to further use AppCompatProcessor to search for the “m64.exe” file and also use the tstack module to search across the whole data set for binaries that match the dates of those two binaries. With these findings, among other things, you would need to scope the incident by understanding which systems the attacker accessed, find new investigation leads and pivot on the findings. AppCompatProcessor is a tool that helps with that. This kind of finding would definitely trigger your incident response processes and procedures.

That’s it. Hopefully, AppCompatProcessor will reduce the entry barrier for your security operations center or incident response teams to start performing threat hunting in your environment and produce actionable results. If you find this useful, contribute your threat hunting signatures to the AppCompatProcessor GitHub repo, and happy hunting!



RIG Exploit Kit Analysis – Part 3

Over the course of the last two articles (part 1 & part 2), I analyzed a recent drive-by download campaign that was delivering the RIG Exploit Kit. In this article, I will complete the analysis by looking at the shellcode that is executed when the exploit code is successful. As mentioned in the previous articles, each one of the two exploits has shellcode that is used to run malicious code on the victim’s system. The shellcode objective is the same across the exploits: download, decrypt and execute the malware.

In part 1, when analyzing the JavaScript that was extracted from the RIG landing page, we saw that at the end there was a function that contained a hex string of 2505 bytes. This is the shellcode for the exploit CVE-2013-2551. In a similar way, but to exploit CVE-2015-5122, one of the decrypted strings inside DefineBinaryData tag 3 of the second stage Flash file contained a hex string of 2828 bytes.

So, how can we analyze this shellcode and determine what it does?

You can copy the shellcode and create a skeletal executable that can then be analyzed using a debugger or a disassembler. First, the shellcode needs to be converted into hex notation (\x). This can be done by copying the shellcode string into a file and then running the following Perl one liner “$cat shellcode | perl -pe ‘s/(..)/\\x$1/g’ >shellcode.hex”. Then generate the skeletal shellcode executable with the shellcode2exe.py script written by Mario Villa and later tweaked by Anand Sastry. The command is “$shellcode2exe.py –s shellcode shellcode.exe”. The result is a Windows executable for the x86 platform that can be loaded into a debugger. Another way to convert shellcode is to use the converter tool from http://www.kahusecurity.com.
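
If Perl is not at hand, the same conversion can be done with a few lines of Python; this sketch assumes the file "shellcode" holds the raw hex string copied from the landing page and writes both a \x-escaped version and a raw .bin that a disassembler or an emulator can consume.

```python
# Read the hex string dumped from the landing page / Flash file.
with open("shellcode") as fh:
    hex_string = "".join(fh.read().split())        # strip whitespace and newlines

raw = bytes.fromhex(hex_string)

# \x-escaped form, equivalent to the Perl one-liner above.
with open("shellcode.hex", "w") as fh:
    fh.write("".join("\\x%02x" % b for b in raw))

# Raw binary form for shellcode2exe.py, a disassembler or an emulator.
with open("shellcode.bin", "wb") as fh:
    fh.write(raw)

print(f"{len(raw)} bytes of shellcode written")
```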

The next step is to load the generated executable into OllyDbg. Stepping through the code, one can see that the shellcode contains a deobfuscation routine. In this case, the shellcode author is using a XOR operation with key 0x84. After looping through the routine, the decoded shellcode reveals a one-line command.
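
The decoding can also be reproduced offline once you know the key. The sketch below simply XORs the raw shellcode bytes with 0x84 and prints any printable ASCII runs, which is usually enough to surface the embedded command line. It XORs the whole blob, including the decoder stub itself, so expect some garbage before the readable strings; the assumption is that the bulk of the blob is encoded with the single-byte key mentioned above.

```python
import re

KEY = 0x84

with open("shellcode.bin", "rb") as fh:
    decoded = bytes(b ^ KEY for b in fh.read())

# Pull out runs of 6+ printable characters, similar to running strings on the result.
for match in re.finditer(rb"[ -~]{6,}", decoded):
    print(hex(match.start()), match.group().decode("ascii", "replace"))
```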

[Figure: decoded shellcode in OllyDbg]

After completing the XOR de-obfuscation routine, the shellcode has to dynamically resolve the Windows APIs in order to make the necessary system calls in the environment where it is being executed. To make system calls, the shellcode needs to know the memory address of the DLL that exports the required function. Popular API calls among shellcode writers are LoadLibrary and GetProcAddress. These functions are used frequently because they are available in Kernel32.dll, which is loaded into almost every Windows process. The author can then get the address of any user mode API call.

Therefore, the first step of the shellcode is to locate the base address of the memory image of Kernel32.dll. It then needs to scan its export table to locate the address of the functions needed.

How does the shellcode locate Kernel32.dll? On 32-bit systems, malware authors use a well-known technique that takes advantage of a structure that resides in memory and is available to all processes: the Process Environment Block (PEB). This structure, among other things, contains linked lists with information about the DLLs that have been loaded into memory. How do we access this structure? A pointer to the PEB resides inside another structure known as the Thread Information Block (TIB), which is always located at the FS segment register and can be identified as FS:[0x30] (Zeltser, L.). Given the memory address of the PEB, the shellcode author can then browse through the different PEB linked lists, such as the InLoadOrderModuleList, which contains the list of DLLs that have been loaded by the process in load order. The third element of this list corresponds to Kernel32.dll. The code can then retrieve the base address of the DLL. This technique was pioneered by one of the members of the well-known and prominent virus and worm coder group 29A and written up in volume 6 of their e-zine in 2002. The figure below shows a snippet of the shellcode that contains the sequence of assembly instructions the code uses to find Kernel32.dll.

[Figure: shellcode routine that locates Kernel32.dll and resolves APIs]

The next step is to retrieve the address of the required function. This can be obtained by navigating through the Export Directory Table of the DLL. In order to find the right API, the shellcode compares each export against a string. When it matches, it fetches its location and proceeds. This technique was pioneered and is well described in the paper “Win32 Assembly Components” written in 2002 by The Last Stage of Delirium Research Group (LSD). Finally, the code invokes the desired API. In this case, the shellcode uses the CreateProcessA API to spawn a new process that will carry out the command specified in the command line string.

[Figure: CreateProcessA invocation in the shellcode]

This command will launch a new instance of the Windows command interpreter, navigate to the user’s %tmp% folder and then redirect a set of JavaScript commands to a file. Finally it will invoke Windows Script Host and launch this JavaScript file with two parameters. One is the decryption key and the other is the URL from where to fetch the malicious payload. Essentially this shellcode is a downloader. The full command is shown in the figure below.

[Figure: decoded downloader command line]

If the exploits are successful and the shellcode is executed, then the payload is downloaded. In this case the payload is a variant of the Locky ransomware, and a user on a Windows 7 box would see the User Account Control dialog box popping up asking to run it.

[Figure: User Account Control prompt]

That’s it! We now have a better understanding of how this variant of the RIG Exploit Kit works, what it does and how. All stages of the RIG Exploit Kit enforce different protection mechanisms that slow down analysis, prevent code reuse and evade detection. It begins with multiple layers of obfuscated JavaScript using junk code and string encoding that hides the code logic and launches a browser exploit. Then it goes further by having multiple layers of encrypted Flash files with obfuscated ActionScript. The ActionScript is then responsible for invoking an exploit with encoded shellcode that downloads an encrypted payload. In addition, the modular backend framework allows the threat actors to use different distribution mechanisms to reach victims globally. Based on this modular backend, different filtering rules are enforced and different payloads can be delivered based on the victim’s geolocation, browser and operating system. This complexity makes these threats a very interesting case study and difficult to defend against. Against these capable and dynamic threats, no single solution is enough. The best strategy for defending against this type of attack is to understand it and to use a defense in depth strategy – multiple security controls at different layers.

References:

SANS FOR610: Reverse-Engineering Malware: Malware Analysis Tools and Techniques
Neutrino Exploit Kit Analysis and Threat Indicators


RIG Exploit Kit Analysis – Part 2

Continuing with the analysis of the RIG Exploit Kit, let’s start where we left off and understand the part that contains the malicious Adobe Flash file. We saw, in the last post, that the RIG Exploit Kit landing page contains heavily obfuscated and encoded JavaScript. One of the things the JavaScript code does is verify whether the browser is vulnerable to CVE-2013-2551. If it is, it will launch an exploit followed by shellcode, as we saw in the last post. If the browser is not vulnerable, it continues and the browser is instructed to download a malicious Flash file. The HTTP request made to fetch the Flash file is made to the domain add.alislameyah.org. As you can see in the figure below, the HTTP answer is of content type x-shockwave-flash and the data downloaded starts with CWS (characters ‘C’,’W’,’S’, or bytes 0x43, 0x57, 0x53). This is the signature for a compressed Flash file.

[Figure: HTTP response containing the compressed Flash file]

The next step of our analysis? Analyze this Flash file. Before we start, let’s go over a quick overview of Flash. There are two main reasons why Flash is an attractive target for malware authors. One is its presence on almost every modern endpoint, available across different browsers and content displayers. The other is its features and capabilities.

Let’s go over some of the features. Adobe Flash supports a scripting language known as ActionScript. ActionScript is interpreted by the Adobe ActionScript Virtual Machine (AVM). Current Flash versions support two different versions of the ActionScript scripting language, ActionScript 2 (AS2) and ActionScript 3 (AS3), which are interpreted by different AVMs. AS3 appeared in 2006 with Adobe Flash Player 9 and uses AVM2. The creation of a Flash file consists of compiling ActionScript code into byte code and then packaging that byte code into a SWF container. The combination of the complex SWF file format and the powerful AS3 makes Adobe Flash an attractive attack surface. For example, SWF files contain containers called tags that can be used to store ActionScript code or data. This is an ideal place for exploit writers and malware authors to conceal their intentions and to use as a vehicle for launching attacks against client side vulnerabilities. Furthermore, both AS2 and AS3 have the capability to load embedded SWF files stored inside tags at runtime, using loadMovie and the Loader class respectively. AS3 even goes further by allowing referencing objects from one SWF to another SWF. As stated by Wressnegger et al. in the paper “Analyzing and Detecting Flash-based Malware using Lightweight Multi-Path Exploration”, this allows sophisticated capabilities that can leverage encrypted payloads, polymorphism and runtime packers.

Now, let’s go over the analysis and dissection of the Flash file. This is achieved using a combination of dynamic and static analysis. First, we look at the file capabilities and functionality by looking at its metadata. The command line tool ExifTool created by Phil Harvey can display the metadata included in the analyzed file. In this case, it shows that the file takes advantage of ActionScript 3.0 functionality. More comprehensive information is available using the swfdump.exe tool that is part of the Adobe Flex SDK, which displays the different components of the Flash file. The output of swfdump shows that the SWF file contains DoABC and DefineBinaryData tags. This suggests the usage of ActionScript 3.0 and binary data containing other elements that might hold malicious code executed at runtime.

Second, we will go deeper and dissect the SWF file. Open source tools to dissect SWF files exist, such as Flare and Flasm written by Igor Kogan. Regrettably, they do not support ActionScript 3. Another option is the Adobe SWF Investigator. This tool was created by Peleus Uhley and released as open source by Adobe Labs. The tool can analyze and disassemble ActionScript 2 (AS2) and ActionScript 3 (AS3) SWFs and includes many other features. Unfortunately, the tool is sometimes unable to parse SWF files that have been packed with commercial tools like secureSWF and DoSWF.

One good alternative is to use JPEXS Flash File Decompiler (FFDec). FFDec is a powerful, feature-rich and open source Flash decompiler built in Java and originally written by Jindra Petřík. One key feature of FFDec is that it includes an ActionScript debugger that can be used to add breakpoints and to step into or over the code. Another feature is that it shows the decompiled ActionScript and its respective p-code.

One popular tool among Flash malware writers is DoSWF. DoSWF is a commercial product used to protect the intellectual property of businesses that use Adobe Flash technology and want to prevent others from copying it. Malware authors take advantage of this and use it for their own purposes. This tool can enforce different protections at the code level in order to defeat decompilers. In addition to the different protections applied to the code logic, DoSWF can encrypt literal strings using RC4 or AES. Also, it can be used to wrap an encrypted SWF inside another SWF file using the encrypted loader function. The decryption occurs at runtime and the decrypted file is loaded into memory.

Opening the SWF file in FFDec and observing its structure and ActionScript, you can see different strings referencing DoSWF, which is an indication that the file has been obfuscated with DoSWF. FFDec has a P-code deobfuscation feature that can restore the control flow, remove traps and remove dead code. In addition, there is a plugin that can help rename invalid identifiers. The figure below shows a snippet of the ActionScript code after it has been deobfuscated by FFDec.

[Figure: deobfuscated ActionScript of the first stage Flash file]

As seen in other exploit kits such as Angler and Neutrino, the first Flash file is only used as a carrier, and malicious code such as exploit code or further Flash files is encrypted and obfuscated inside this first stage Flash file. The goal here is to perform static analysis of the ActionScript code and determine what is happening behind the scenes. Normally, the DefineBinaryData tags contain further Flash files or exploit code, but you need to understand the code and get the encryption keys in order to extract the data. Because my strengths are not in programming, I tried to overcome this step before rolling up my sleeves and spending hours trying to understand the ActionScript code like I did for Neutrino with the help of some friends. One way to carve the data is to use the ActionScript debugger available in FFDec: essentially, set a breakpoint on the loadBytes() method, run the Flash file and, when the breakpoint is triggered, use the FFDec “Search SWF in memory” plugin in order to find SWF files inside the FFDec process memory address space. But for this sample I used SULO.

During Black Hat USA 2014, Timo Hirvonen presented a novel tool to perform dynamic analysis of malicious Flash files. He released an open source tool named SULO. This tool uses the Intel Pin framework to perform binary instrumentation in order to analyze Flash files dynamically. This method enables automated unpacking of embedded Flash files that are either obfuscated or encrypted using commercial tools like secureSWF and DoSWF. The code is available for download in the F-Secure GitHub repository (https://github.com/F-Secure/Sulo) and it should be compiled with Visual Studio 2010. The compilation process creates a .DLL file that can be used in conjunction with the Intel Pin kit for Visual Studio 2010. There are, however, limitations in the versions of Adobe Flash Player supported by SULO. At the time of writing, only Flash versions 10.3.181.23 and 11.1.102.62 are supported. Nonetheless, one can use SULO to extract the packed Flash file in a simple and automated manner. In this case, I used the standalone Flash player flashplayer11_1r102_62_win_sa_32bit.exe.

When using SULO to analyze the Flash file, the second stage Flash file is extracted automatically. The command shown in figure below will run and extract the packed SWF file.

[Figure: running SULO to extract the packed SWF]

In this particular case, SULO manages to extract 2 SWF files. So, the next step is to analyze these second stage SWF files, once again using FFDec to observe their structure and ActionScript code. The second stage Flash files are interesting because one contains the code and the other makes extensive use of DefineBinaryData tags to store encrypted data. As a starting point, the analysis steps here are the same: invoke the P-code deobfuscation feature in order to restore the control flow, remove traps and remove dead code, and run the plugin to rename invalid identifiers. After performing these two steps, the ActionScript code is more readable.

In this post I won’t bother you with the details of the ActionScript code. Nonetheless, one thing to mention about the code is that if you follow the site malware.dontneedcoffee.com and the amazing work done by Kafeine on hunting down, analyzing and documenting exploit kits, you might have noticed that he calls this version of RIG “RIG-v Neutrino-ish“. The reason might be the usage of the RC4 key to decrypt the payload and also the similarities in the way the Flash files are encoded and obfuscated.

Anyhow, understanding the code is important, but it is also important to understand what the Flash file is hiding from us. In a nutshell, one of the second stage Flash files contains several DefineBinaryData tags which contain encrypted strings that are used throughout the code in the other second stage Flash file. The data can be obtained by decrypting it using RC4 keys that are also stored inside DefineBinaryData tags. The figure below illustrates this.

[Figure: encrypted strings inside the second stage Flash DefineBinaryData tags]

In summary, DefineBinaryData tag 2 contains an array of one 16-byte RC4 key. DefineBinaryData tag 1 contains 19 (0x13) RC4-encrypted strings. The first dword contains the total number of strings. Then each string starts with a dword that contains the size of the string, followed by the RC4-encrypted data.

The DefineBinaryData tag 3 contains an array of three 16-byte RC4 keys. DefineBinaryData tag 4 contains 6 (0x06) RC4-encrypted strings. The first dword contains the total number of strings. Then each string starts with a dword that contains the size of the string, followed by the RC4-encrypted data. The RC4 decryption routine uses the 3 RC4 keys iteratively across the 19 strings.
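
Given the layout described above, recovering the plaintext strings outside the Flash player is straightforward once the binary tag data has been exported from FFDec. The sketch below implements the dword-count/dword-length framing and plain RC4; the way a key is picked per string (cycling through the key array) is an assumption matching the "iteratively" behaviour described above, and the exported file names are placeholders.

```python
import struct

def rc4(key, data):
    # Standard RC4: key-scheduling followed by the pseudo-random generation loop.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def load_keys(blob):
    # The key tag is an array of 16-byte RC4 keys.
    return [blob[i:i + 16] for i in range(0, len(blob), 16)]

def load_strings(blob, keys):
    count = struct.unpack_from("<I", blob, 0)[0]
    offset, strings = 4, []
    for idx in range(count):
        size = struct.unpack_from("<I", blob, offset)[0]
        offset += 4
        strings.append(rc4(keys[idx % len(keys)], blob[offset:offset + size]))
        offset += size
    return strings

keys = load_keys(open("binaryData3.bin", "rb").read())      # tag holding the key array
for s in load_strings(open("binaryData4.bin", "rb").read(), keys):
    print(repr(s))
```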

The decrypted strings are used on different parts of the code. One of the strings is relevant because it contains shellcode that is nearly identical to the one seen in the JavaScript exploit inside the landing page.

Based on the decrypted strings, it seems the Flash file contains code to exploit CVE-2015-5122. This exploit has a CVSS score of 10 and is known as the Adobe Flash ActionScript 3 opaqueBackground use-after-free vulnerability. This exploit was found as a result of the public disclosure of the Hacking Team leak. In a matter of hours, the exploit was incorporated into the Angler Exploit Kit.

… and with such a lengthy post, that’s it for today. In the following post I will cover how to analyze the shellcode to understand what is done behind the scenes when one of the exploits is successfully triggered.

First stage Flash file MD5: d11c936fecc72e44416dde49a570beb5
Second stage Flash file MD5: 574353ed63276009bc5d456da82ba7c1
Second stage Flash file MD5: e638fa878b6ea20fa8253d79b989fd7e
References:

Neutrino Exploit Kit Analysis and Threat Indicators


RIG Exploit Kit Analysis – Part 1

One of the exploit kits that has been in the news lately is the RIG Exploit Kit. Some of the infections seen by the community seem to be part of a campaign called Afraidgate. I had the chance to capture one infection from this campaign, so I decided to give it a try and write a small write-up about this multistage weaponized malware kit. The following analysis focuses on a drive-by download campaign observed a few days ago. It leverages the RIG Exploit Kit to infect systems and drop a new version of the Locky ransomware (Odin).

Due to the complex nature of exploit kits, I use a combination of both dynamic and static analysis techniques to perform the analysis. For the dynamic analysis part, I used an enhanced version of the setup described in “Dynamic Malware Analysis with REMnux”.

As usual, the infection starts with an innocent victim browsing to a compromised website. The compromised website replies with an HTTP response similar to the one in the figure below. The response does not always include the malicious payload. The compromised web server contacts the RIG back-end infrastructure in order to perform various checks before it delivers the malicious JavaScript code. Checks include verification of the victim’s IP address and its geolocation. Furthermore, within the malicious JavaScript code, there are new domain names and URLs that are generated dynamically by the backend for each new infection. As you can see in the figure, inside the HTTP response, blended with the page content, there is code to invoke malicious JavaScript code. In this particular case, the malicious code is retrieved from the URL “/js/stream.js” hosted on “monro.nillaraujo.com”. Noteworthy is the fact that for each new request to the compromised site there is a new domain and URL generated dynamically by the exploit kit. This is a clever technique that makes it much more challenging to build defenses that block these sites.

[Figure: HTTP response from the compromised website]

To reach the server “monro.nillaraujo.com”, the operating system performs a DNS query in order to find its IP address. The DNS response is given by the name server (NS) that is authoritative for the domain. In this case the NS for this domain is “ns1.afraid.org”, which belongs to the FreeDNS service (afraid.org). They provide everyone with free DNS hosting, and in this case the threat actors take advantage of it. I think the name of the Afraidgate campaign might have derived from the fact that the DNS domains used in the gates are being answered by afraid.org. Brad Duncan might be able to answer this! Another interesting fact is that the answers received from the DNS server have a short time to live (TTL). This technique is often leveraged by the threat actors behind the EK because it makes the domain available only for a limited amount of time, allowing them to shift infrastructure quickly. This makes blocking and analysis much more difficult. The figure below shows the DNS answer for the domain “monro.nillaraujo.com”.

[Figure: DNS reply from the afraid.org name server]

This particular domain at the time of the analysis was resolved to the IP 139.59.171.176. Then, after the DNS resolution, the browser makes the HTTP request in order to fetch the malicious JavaScript code. The HTTP answer is shown below.

[Figure: HTTP response with the malicious JavaScript]

The line of code that contains the <iframe> tag is instrumental in the infection chain. This line of code will instruct the browser to make a request to the URL / ?xniKfreZKRjLCYU=l3SKfPrfJxzFGMSUb-nJDa9BNUXCRQLPh4SGhKrXCJ-ofSih17OIFxzsmTu2KTKvgJQyfu0SaGyj1BKeO10hjoUeWF8Z5e3x1RSL2x3fipSA9weEYQ4U-ZWVE7g-iVukmrITIs0uxRKA4DRYnuJJVlJD4xgY0Q
that is hosted on the server add.ALISLAMEYAH.ORG. This is the server hosting the RIG Exploit Kit landing page for this particular infection, and it points to an IP address in Russia. When the browser processes this request, the victim lands on the exploit kit landing page, which in turn delivers an HTML page with a script tag defined in its body. This script tag contains heavily encoded and obfuscated JavaScript code.

The widespread install base of JavaScript allows malware authors to produce malicious web code that runs in every browser and operating system version. Due to its flexibility, the malware authors can be very creative when obfuscating the code within the page content. In addition, due to the control the threat actors have over the compromised sites, they utilize advanced scripting techniques that can generate polymorphic code. This polymorphic code allows the JavaScript to be slightly different each time the user visits the compromised site. This technique is a challenge for both security analysts and security controls. For each new victim request there is a different landing URL and a slightly different payload. The figure below shows the JavaScript code inside the HTTP response. This is the RIG Exploit Kit landing page.

[Figure: RIG Exploit Kit landing page JavaScript]

From an analysis perspective, the goal here is to understand the result of the obfuscated JavaScript. To be able to perform this analysis one needs to have a script debugger and a script interpreter. There are good JavaScript interpreters like SpiderMonkey or Google Chrome v8 that can help in this task. SpiderMonkey is a standalone command line JavaScript interpreter released by the Mozilla Foundation (SpiderMonkey). Google Chrome v8 is an open source JavaScript engine and an alternative to SpiderMonkey (Introduction Chrome V8).

In this particular case, the JavaScript contains dependencies of HTML components. Because of this, it is necessary to use a tool that can interpret both HTML and JavaScript. One tool option is JSDetox created by Sven Taute (JSDetox). JSDetox allows us to statically analyze and deobfuscate JavaScript.

Another great JavaScript debugging suite is the Microsoft Internet Explorer Developer Tools, which includes a debugger for both JavaScript and VBScript. This tool allows the user to set breakpoints. In this case, by stepping through the code using the Microsoft IE Developer Tools and watching the content of the different variables, the deobfuscation can be done. Another option is to use Visual Studio’s client side script debugging functionality in conjunction with Internet Explorer.

In this case, I used the Microsoft Internet Explorer Developer Tools. After some time analyzing the deobfuscation loop, stepping over the lines of code, inserting breakpoints on key lines and watching the different variables, some of the code is revealed. The result is JavaScript code that triggers the exploit code based on the browser version and, if you have Adobe Flash installed, triggers the download of a malicious Flash file.

For the IE browser, the RIG EK should leverage two or more exploits at any given time. In my setup I was limited to the JavaScript code that seems to leverage CVE-2013-2551. This exploit has a CVSS score of 9.3 and exploits a use-after-free vulnerability in Microsoft Internet Explorer 6 through 10. This vulnerability was initially discovered by VUPEN and demonstrated during the Pwn2Own contest at CanSecWest in 2013. After the detailed post from VUPEN, different exploit kits started to adopt it. According to the NTT Global Threat Intelligence Report 2015, this highly reliable exploit made its way to the top as one of the most popular exploits used across all exploit kits today.

[Figure: last part of the deobfuscated RIG EK exploit code for Internet Explorer, containing the shellcode plus the URL and RC4 key used to fetch and decrypt the payload]

The code is quite large so I won't post it here, but in the figure above you can see the last part of it. It contains the shellcode and the URL plus RC4 key that are used to fetch the malicious payload and decrypt it, e.g., Locky ransomware. A small decryption sketch follows.
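
To make that last step concrete, below is a minimal sketch, in C, of how such a payload could be decrypted once the RC4 key and the encrypted bytes have been recovered from the deobfuscated script and the network capture. The key and ciphertext are placeholders for illustration, not values taken from this infection.

```c
#include <stdio.h>
#include <stddef.h>

/* Minimal RC4 (KSA + PRGA). The exploit's shellcode fetches the encrypted
 * payload from the URL and decrypts it with the embedded key; this sketch
 * reproduces only the decryption step with placeholder data. */
static void rc4(const unsigned char *key, size_t keylen,
                unsigned char *data, size_t datalen)
{
    unsigned char S[256], tmp;
    size_t i;
    int j = 0;

    for (i = 0; i < 256; i++)                 /* key-scheduling algorithm */
        S[i] = (unsigned char)i;
    for (i = 0; i < 256; i++) {
        j = (j + S[i] + key[i % keylen]) & 0xff;
        tmp = S[i]; S[i] = S[j]; S[j] = tmp;
    }

    int a = 0;
    j = 0;
    for (i = 0; i < datalen; i++) {           /* pseudo-random generation */
        a = (a + 1) & 0xff;
        j = (j + S[a]) & 0xff;
        tmp = S[a]; S[a] = S[j]; S[j] = tmp;
        data[i] ^= S[(S[a] + S[j]) & 0xff];   /* XOR keystream over payload */
    }
}

int main(void)
{
    unsigned char key[] = "placeholder-rc4-key";          /* hypothetical key   */
    unsigned char payload[] = { 0xde, 0xad, 0xbe, 0xef }; /* fetched ciphertext */

    rc4(key, sizeof(key) - 1, payload, sizeof(payload));
    for (size_t i = 0; i < sizeof(payload); i++)          /* dump plaintext     */
        printf("%02x ", payload[i]);
    printf("\n");
    return 0;
}
```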

That’s it for today. In the following post I will cover the malicious Flash file and how to analyze the Shellcode to understand what is done behind the scenes when the exploit is successfully triggered.

References:

Neutrino Exploit Kit Analysis and Threat Indicators


Evolution of Stack Based Buffer Overflows

On the 2nd of November 1988, the Morris Worm became the first blended threat affecting multiple systems on the Internet. One of the things the worm did was exploit a buffer overflow in the fingerd daemon caused by the use of the gets() library function. In this particular case the fingerd program had a 512-byte buffer for gets(). However, this function would not verify whether the received input was bigger than the allocated buffer, i.e., it performed no boundary checking. Due to this, Morris was able to craft a 536-byte exploit that would fill the gets() buffer and overwrite parts of the stack. More precisely, it overwrote the saved return address of the stack frame with a new address. This new address pointed back into the stack, where the crafted input had been stored. The shellcode consisted of a series of opcodes that performed the execve("/bin/sh", 0, 0) system call, giving a shell prompt to the attacker. A detailed analysis was written by Eugene Spafford, an American professor of computer science at Purdue University. This was a big event and made buffer overflows gain notoriety. A minimal sketch of this class of bug follows.
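
The sketch below is not the fingerd source, just the pattern it suffered from: a fixed-size stack buffer filled by gets() with no length check.

```c
#include <stdio.h>

/* gets() was removed from C11 and modern libcs no longer declare it, so the
 * prototype is spelled out here; the symbol is still exported for backwards
 * compatibility. This is a sketch of the bug class only, not fingerd code. */
extern char *gets(char *s);

int main(void)
{
    char buffer[512];      /* same size as the fingerd buffer                 */

    gets(buffer);          /* no boundary check: input longer than 512 bytes  */
                           /* spills over the buffer and overwrites the saved */
                           /* return address on the stack                     */
    printf("read: %s\n", buffer);
    return 0;
}
```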

Time passed and the security community had to wait for information about the closely guarded technique to become publicly available. One of the first articles on how to exploit buffer overflows was written in the fall of 1995 by Peiter Zatko a.k.a. Mudge, at the time a member of the prominent hacker group L0pht. One year later, in the summer of 1996, the 49th issue of the Phrack e-zine was published. With it came the notorious step-by-step article "Smashing the Stack for Fun and Profit" written by Elias Levy a.k.a. Aleph1. This article is still a reference today for academia and industry when it comes to understanding buffer overflows. In addition to these two articles, another was written in 1997 by Nathan Smith named "Stack Smashing vulnerabilities in the UNIX Operating System." These three articles, especially the one from Aleph1, allowed the security community to learn and understand the techniques needed to perform such attacks.

Meanwhile, in April 1997, Alexander Peslyak a.k.a. Solar Designer posted to the Bugtraq mailing list a Linux patch designed to defeat this kind of attack. His work consisted in changing the memory permissions of the stack to read and write instead of read, write and execute. This defeats buffer overflows where the malicious code resides on the stack and needs to be executed from there.

Nonetheless, Alexander went further, and in August 1997 he was the first to demonstrate how to get around a non-executable stack with a technique known as return-to-libc. Essentially, when executing a buffer overflow, the limits of the original buffer are exceeded by the malicious input and the adjacent memory is overwritten, in particular the saved return address of the stack frame. The return address is overwritten with a new address that, instead of pointing into the stack, points to a memory address occupied by the libc library, e.g., system(). Libc is the C library that contains the core system functions on Linux such as printf(), system() and exit(). This is an ingenious technique which bypasses a non-executable stack and doesn't need shellcode. It can be achieved in three steps; as Linus Torvalds wrote in 1998, you do something like this (a payload layout is sketched after the list):

  • Overflow the buffer on the stack, so that the return value is overwritten by a pointer to the “system()” library function.
  • The next four bytes are crap (a “return pointer” for the system call, which you don’t care about)
  • The next four bytes are a pointer to some random place in the shared library again that contains the string “/bin/sh” (and yes, just do a strings on the thing and you’ll find it).
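
Below is a minimal sketch of what such a payload could look like for a hypothetical 32-bit program. The buffer size, padding and addresses are placeholders chosen for illustration; in a real case they would come from debugging the target and inspecting its libc.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical layout of a classic 32-bit return-to-libc payload, following
 * the three steps above. All sizes and addresses are placeholders. */
#define BUFLEN      64            /* hypothetical vulnerable buffer size     */
#define PADDING      4            /* hypothetical filler up to the saved EIP */

int main(void)
{
    unsigned int system_addr = 0xb7e3c850; /* placeholder: address of system()      */
    unsigned int fake_ret    = 0xdeadbeef; /* "crap": return address for system()   */
    unsigned int binsh_addr  = 0xb7f5d3cf; /* placeholder: "/bin/sh" string in libc */
    unsigned char payload[BUFLEN + PADDING + 12];

    memset(payload, 'A', BUFLEN + PADDING);                  /* fill the buffer      */
    memcpy(payload + BUFLEN + PADDING,     &system_addr, 4); /* step 1: saved EIP    */
    memcpy(payload + BUFLEN + PADDING + 4, &fake_ret,    4); /* step 2: dummy return */
    memcpy(payload + BUFLEN + PADDING + 8, &binsh_addr,  4); /* step 3: argument     */

    fwrite(payload, 1, sizeof(payload), stdout);             /* feed to the target   */
    return 0;
}
```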

Apart from pioneering the demonstration of this technique, Alexander also improved his non-executable stack patch with a technique called ASCII armoring. ASCII armoring makes buffer overflows harder to exploit because it maps the shared libraries at memory addresses that contain a zero byte, such as 0xb7e39d00. This is another clever defense, because one of the root causes of buffer overflows is the way the C language handles string routines like strcpy(), gets() and many others. These routines are designed to handle strings that terminate with a null byte, i.e., a NUL character. An attacker crafting a malicious payload therefore avoids NUL bytes, and the string handling routine keeps copying with catastrophic consequences because it does not know where to stop. By introducing a null byte into the library addresses, payloads that have to pass through string handling routines break, as the sketch below illustrates.
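
A small illustration of the idea, assuming the hypothetical armored address 0xb7e39d00 from the paragraph above: once strcpy() reaches the embedded zero byte, everything after it is dropped.

```c
#include <stdio.h>
#include <string.h>

/* An ASCII-armored libc address such as 0xb7e39d00 contains a zero byte, and
 * string routines stop copying at the first NUL, truncating the payload. */
int main(void)
{
    /* "AAAA" padding followed by 0xb7e39d00 encoded little-endian */
    char payload[] = "AAAA\x00\x9d\xe3\xb7";
    char buffer[32];

    strcpy(buffer, payload);                      /* copy stops at the NUL byte */
    printf("copied %zu of %zu payload bytes\n",
           strlen(buffer), sizeof(payload) - 1);  /* prints: copied 4 of 8 ...  */
    return 0;
}
```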

Building on the work from Alexander Peslyak, Rafal Wojtczuk a.k.a. Nergal wrote to the Bugtraq mailing list in January 1998, describing another way to perform return-to-libc attacks in order to defeat the non-executable stack. This new method was not confined to returning to system() in libc and could use other functions, such as strcpy(), and chain them together.

Meanwhile, in October 1999, Taeho Oh wrote "Advanced Buffer Overflow Exploits", describing novel techniques to create shellcode that could be used in buffer overflow attacks.

Following all this activity, Crispin Cowan presented at the 7th USENIX Security Symposium in January 1998 a technology known as StackGuard. StackGuard was a compiler extension that introduced the concept of "canaries". To prevent buffer overflows, binaries compiled with this technology get a special value, generated during the function prologue and pushed onto the stack next to the saved return address of the stack frame. This special value is referred to as the canary. When performing the epilogue of the function call, StackGuard checks whether the canary, and therefore the adjacent return address, has been preserved. If the value has been altered, the execution of the program is terminated.

As always in the never-ending cat and mouse game of the security industry, after this new security technique was introduced, others had to innovate and take things to the next level in order to circumvent the implemented measures. The first information about bypassing StackGuard was published in November 1999 by the Polish hacker Mariusz Wołoszyn on the Bugtraq mailing list. Following that, in January 2000, Mariusz, a.k.a. Kil3r, and Bulba published in Phrack 56 the article "Bypassing StackGuard and StackShield". A further step was made in 2002 by Gerardo Richarte from CORE Security, who wrote the paper "Four different tricks to bypass StackShield and StackGuard protection".

The non-executable stack patch developed by Alexander was not adopted by all Linux distributions, and the industry had to wait until the year 2000 for something to be adopted more widely. In August 2000, the PaX team (now part of grsecurity) released a protection mechanism known as Page-eXec (PaX) that makes some areas of the process address space, namely the stack and the heap, non-executable by changing the way memory paging is done. A non-executable stack is nowadays the default in the GNU Compiler Collection (GCC) toolchain and can be turned off with the linker flag "-z execstack"; a small demonstration is sketched below.
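
As a hedged illustration, the sketch below tries to execute a single byte of machine code placed in a stack buffer. Built normally it crashes with a segmentation fault; built with "gcc -z execstack nx_demo.c" (the file name is just an example) the stack is marked executable again and the call succeeds.

```c
#include <stdio.h>

/* Attempts to run code stored on the stack. With the default non-executable
 * stack this faults; linking with "-z execstack" marks the stack executable
 * and the call returns normally. The single byte 0xc3 is an x86 RET. */
int main(void)
{
    unsigned char code[] = { 0xc3 };                    /* x86 "ret" opcode       */
    void (*func)(void) = (void (*)(void))(void *)code;  /* treat data as code     */

    func();                                             /* SIGSEGV if stack is NX */
    printf("the stack was executable\n");
    return 0;
}
```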

Then, in 2001, the PaX team implemented and released another mechanism known as Address Space Layout Randomization (ASLR). This method defeats the predictability of addresses in virtual memory: ASLR randomly arranges the virtual memory layout of a process, so the addresses of shared libraries and the location of the stack and heap are randomized. This makes return-to-libc attacks more difficult because the address of C library functions such as system() cannot be determined in advance.

By 2001, the Linux kernel had two measures to protect against unwarranted code execution: the non-executable stack and ASLR. Nonetheless, Mariusz Wołoszyn wrote a breakthrough paper in issue 58 of Phrack in December 2001. The article was called "The Advanced return-into-lib(c) exploits" and introduced a new technique known as return-to-plt, which was able to defeat the first ASLR implementation. The PaX team then strengthened the ASLR implementation and introduced a new feature to defend against return-to-plt. As expected, this didn't last long without a comprehensive study on how to bypass it: in August 2002, Tyler Durden published an article in Phrack issue 59 titled "Bypassing PaX ASLR protection".

Today, ASLR is adopted by many Linux distributions. It is built into the Linux kernel and, on Debian and Ubuntu based systems, is controlled by the parameter /proc/sys/kernel/randomize_va_space. This setting can be changed with the command "echo <value> > /proc/sys/kernel/randomize_va_space", where value can be one of the following (a small demonstration follows the list):

  • 0 – Disable ASLR. This setting is applied if the kernel is booted with the norandmaps boot parameter.
  • 1 – Randomize the positions of the stack, virtual dynamic shared object (VDSO) page, and shared memory regions. The base address of the data segment is located immediately after the end of the executable code segment.
  • 2 – Randomize the positions of the stack, VDSO page, shared memory regions, and the data segment. This is the default setting.
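
To see the effect of these settings, a sketch like the one below can be run a few times. The addresses it prints are illustrative; the point is how they behave across runs under each setting.

```c
#include <stdio.h>
#include <stdlib.h>

/* Prints the address of a stack variable and of a heap allocation. Across
 * several runs: with randomize_va_space = 1 or 2 the stack address changes
 * every time, the heap (data segment) address only changes with setting 2,
 * and with setting 0 both stay constant. */
int main(void)
{
    int on_stack = 0;
    void *on_heap = malloc(16);

    printf("stack: %p\n", (void *)&on_stack);
    printf("heap : %p\n", on_heap);

    free(on_heap);
    return 0;
}
```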

Interestingly, on 32-bit Linux machines an attacker with local access could disable ASLR for shared libraries just by running the command "ulimit -s unlimited" before launching the target. A patch has just been released to fix this weakness.

Following the work on StackGuard, the IBM researcher Hiroaki Etoh developed ProPolice in 2000. ProPolice is known today as Stack Smashing Protection (SSP) and was created on the StackGuard foundations. However, it brought new techniques, such as protecting not only the saved return address of the stack frame, as StackGuard did, but also the frame pointer, and a new way to generate the canary values. Nowadays this feature is standard in the GNU Compiler Collection (GCC) and can be turned on with the flag "-fstack-protector"; a small example is sketched below. Ben Hawkes presented at Ruxcon 2006 a technique to bypass the ProPolice/SSP stack canaries using brute force methods to find the canary value.
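
A minimal sketch of the protection in action: compiled with "gcc -fstack-protector canary.c" (the file name is illustrative), the oversized copy below corrupts the canary sitting next to the saved return address, and the program aborts with a "stack smashing detected" message instead of returning through a corrupted address.

```c
#include <stdio.h>
#include <string.h>

/* The 16-byte buffer is instrumented by -fstack-protector; overflowing it
 * clobbers the canary, which is detected in the function epilogue. */
static void copy(const char *input)
{
    char buffer[16];

    strcpy(buffer, input);        /* no boundary check: overruns buffer[] */
    printf("copied: %s\n", buffer);
}

int main(void)
{
    copy("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");   /* 40 bytes > 16 */
    return 0;
}
```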

Time passed, and in 2004 Jakub Jelinek from RedHat introduced a new technique known as RELRO. This mitigation was implemented to harden the data sections of ELF binaries: the internal ELF data sections are reordered and, with Full RELRO, the entire Global Offset Table is (re)mapped read-only, so in the case of a buffer overflow in the .data or .bss section the attacker cannot use a GOT-overwrite attack, which defeats format string and 4-byte-write attacks. Today this feature is standard in GCC and comes in two flavours: Partial RELRO (-z relro) and Full RELRO (-z relro -z now). More recently, Chris Rohlf wrote an article about it here and Tobias Klein wrote about it in a blog post.

Also in 2004, a new mitigation technique was introduced by RedHat engineers, known as Position Independent Executable (PIE). PIE is essentially ASLR for the ELF binary itself. ASLR works at the kernel level and makes sure shared libraries and memory segments are placed at randomized addresses; ordinary executables, however, don't have this property. The addresses of the compiled binary, when loaded into memory, are not randomized and become a weak spot in the protection against buffer overflows. To mitigate this weakness, RedHat introduced the PIE flag in GCC (-pie). Binaries compiled with this flag are loaded at random addresses, as the sketch below shows.
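
The difference is easy to observe with a small sketch: built without PIE, the address of main() is identical on every run, while built with "gcc -pie -fPIE pie_demo.c" (file name illustrative) it changes each time because the whole executable is loaded at a random base. On distributions where GCC already defaults to PIE, "-no-pie" can be used to see the fixed behaviour.

```c
#include <stdio.h>

/* Prints the load address of main(). Without PIE the value is fixed at link
 * time; with -pie -fPIE (and ASLR enabled) it is randomized on every run. */
int main(void)
{
    printf("main() is at %p\n", (void *)main);
    return 0;
}
```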

The combination of RELRO, ASLR, PIE and a non-executable stack significantly raised the bar against buffer overflow exploitation using the return-to-libc technique and its variants. However, this didn't last long. First, Sebastian Krahmer from SUSE developed a new variant of the return-to-libc attack for x86-64 systems and wrote a paper called "x86-64 buffer overflow exploits and the borrowed code chunks exploitation technique".

Then, with an innovative paper published at ACM CCS in 2007, Hovav Shacham wrote "The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86)". Hovav introduced the concept of return oriented programming and what he called gadgets, extending the return-to-libc technique to bypass the different mitigations enforced by the Linux operating system. The technique builds on the work from Solar Designer and Nergal and does not need to inject code: it reuses instruction sequences already present in the binary or its libraries and chains them together using the RET instruction to manipulate the program's control flow and execute code of the attacker's choice. This is a difficult technique to perform but it is powerful, and it is known as ROP; a conceptual sketch of a chain follows below. A summary was presented by Hovav at Black Hat 2008.
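
To make the idea of gadgets more tangible, here is a purely illustrative sketch of the shape of a 32-bit ROP chain: a list of addresses and immediate values that would be written over the stack starting at the saved return address, so that each gadget's final RET transfers control to the next entry. All addresses are made-up placeholders, and a real chain would need additional gadgets to set up every register.

```c
#include <stdio.h>
#include <inttypes.h>

/* Conceptual shape of a ROP chain: addresses of short instruction sequences
 * ("gadgets") ending in RET, interleaved with the values they pop. */
int main(void)
{
    uint32_t rop_chain[] = {
        0x080484f6,   /* gadget: pop eax ; ret   (load a value into eax)   */
        0x0000000b,   /* value popped into eax: execve() syscall number    */
        0x08048523,   /* gadget: pop ebx ; ret   (load the first argument) */
        0x0804a028,   /* placeholder address of a "/bin/sh" string         */
        0x08048641,   /* gadget: int 0x80 ; ret  (trigger the system call) */
    };

    /* In a real exploit this array overwrites the stack starting at the
     * saved return address; here it is only printed for illustration. */
    for (size_t i = 0; i < sizeof(rop_chain) / sizeof(rop_chain[0]); i++)
        printf("0x%08" PRIx32 "\n", rop_chain[i]);
    return 0;
}
```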

Also in 2008, Tilo Müller wrote "ASLR Smack & Laugh Reference", a comprehensive study explaining the different attacks against ASLR. In 2009, the paper "Surgically returning to randomized lib(c)" from Giampaolo Fresi Roglia also explains how to bypass a non-executable stack and ASLR.

In 2010, Black Hat had three talks about return-oriented exploitation. More recently, and to facilitate ROP exploitation, the French security researcher Jonathan Salwan wrote a tool in Python called ROPgadget. This tool supports many CPU architectures and allows the attacker to find the different gadgets needed to build a ROP chain. Jonathan also gives lectures and makes his material accessible; here is the 2014 course lecture on Return Oriented Programming and ROP chain generation. ROP is the current attack method of choice for exploitation, and research is ongoing both on mitigations and on its further evolution.

Hopefully this gives you good reference material and a solid overview of the evolution of the different attacks against, and defenses for, stack based buffer overflows. There are other types of buffer overflows, like format strings, integer overflows and heap based overflows, but those are more complex, and stack based buffer overflows are a good starting point before tackling them. Apart from all the material linked in this article, good resources for learning about this topic are the books Hacking: The Art of Exploitation by Jon Erickson, The Shellcoder's Handbook: Discovering and Exploiting Security Holes by Chris Anley et al., and A Bug Hunter's Diary: A Guided Tour Through the Wilds of Software Security by Tobias Klein.

 
