MS17-010: EternalBlue’s Large Non-Paged Pool Overflow in SRV Driver

The EternalBlue exploit took the spotlight last May as the common thread binding a spate of malware attacks over the past few weeks: the pervasive WannaCry, the fileless ransomware UIWIX, the Server Message Block (SMB) worm EternalRocks, and the cryptocurrency-mining malware Adylkuzz.

EternalBlue (patched by Microsoft via MS17-010) is a security flaw related to how a Windows SMB 1.0 (SMBv1) server handles certain requests. If successfully exploited, it can allow attackers to execute arbitrary code in the target system. The severity and complexity of EternalBlue, alongside the other exploits released by hacking group Shadow Brokers, can be considered medium to high.

We further delved into EternalBlue’s inner workings to better understand how the exploit works and provide technical insight on the exploit that wreaked havoc among organizations across various industries around the world.

Vulnerability Analysis

The Windows SMBv1 implementation is vulnerable to a buffer overflow in large non-paged kernel pool memory through the processing of File Extended Attributes (FEAs) in the kernel function srv!SrvOs2FeaListToNt. This function calls srv!SrvOs2FeaListSizeToNt to calculate the size of the received FEA list before converting it to an NTFEA (Windows NT FEA) list. The following sequence of operations happens:

  1. srv!SrvOs2FeaListSizeToNt calculates the FEA list size and updates the received FEA list size
  2. The resulting FEA size is greater than the original value because of an incorrect WORD cast
  3. When the FEA list is iterated to convert it to an NTFEA list, the non-paged pool overflows because the total size of the list was miscalculated
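The effect of the WORD cast can be sketched in a few lines (a simplified model of the logic, not the actual srv.sys code): writing the recomputed size back through a 16-bit cast preserves the high word of the original cbList, inflating the list size.

```python
def buggy_update_cblist(cb_list: int, walked_size: int) -> int:
    """Model the srv.sys bug: the recomputed size is stored through a
    WORD (16-bit) cast, so only the low word of cbList is overwritten
    while the original high word survives."""
    return (cb_list & 0xFFFF0000) | (walked_size & 0xFFFF)

# Values observed later in this analysis: cbList starts at 0x10000 and
# the low word written back is 0xFF5D, yielding the inflated 0x1FF5D.
print(hex(buggy_update_cblist(0x10000, 0xFF5D)))  # 0x1ff5d
```

With a correct 32-bit store, cbList would simply become the walked size; the truncated store is what makes the list appear larger than the buffer that will be allocated for it.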

Overflow Analysis
Our analysis of the overflow applies to srv.sys 6.1.7601.17514_x86. The vulnerable code can be triggered using srv!SrvSmbOpen2. The trace is as follows:
00 94527bb4 82171149 srv!SrvSmbOpen2
➜ SrvOs2FeaListSizeToNt() 
01 94527bc8 821721b8 srv!ExecuteTransaction+0x101
02 94527c00 8213b496 srv!SrvSmbTransactionSecondary+0x2c5
03 94527c28 8214a922 srv!SrvProcessSmb+0x187
04 94527c50 82c5df5e srv!WorkerThread+0x15c
05 94527c90 82b05219 nt!PspSystemThreadStartup+0x9e
06 00000000 00000000 nt!KiThreadStartup+0x19

To analyze the overflow, we set the following breakpoint:

bp srv!SrvSmbOpen2+0x79 ".printf \"feasize: %p indatasize: %p fealist addr: %p\\n\",edx,ecx,eax;g;"

When the breakpoint is hit, we have the following (in hex and decimal values):

  • feasize: 00010000 (65536)
  • indatasize: 000103d0 (66512)
  • fealist addr: 89e980d8

From here we can see that the IN-DATA size, 66512 (the same value as the Total Data Count in the NT Trans Request), is bigger than the FEA list size, 65536.

Figure 1: Snapshot of code showing IN-DATA size (highlighted in blue) and FEA list size (highlighted in red)

What’s notable here is that the pointer to IN-DATA will be cast to the FEA List structure, as shown below:

Figure 2: FEA List structure

After casting the IN-DATA buffer, the FEA size 00010000 (65536) is stored in FEALIST ➜ cbList. The SMB driver's next step is to allocate a buffer in which to convert the FEA list to an NT FEA list. This requires calculating the NTFEA list size, which is done by calling srv!SrvOs2FeaListSizeToNt.
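For reference, the FEA list layout can be modeled with a short packing helper (field layout per the OS/2 FEA format documented in MS-CIFS; the helper names are ours, not driver symbols):

```python
import struct

def pack_fea(flags: int, name: bytes, value: bytes) -> bytes:
    # One OS/2 FEA entry: fEA (1 byte), cbName (1 byte),
    # cbValue (2-byte LE), then the null-terminated name and the value.
    return struct.pack("<BBH", flags, len(name), len(value)) + name + b"\x00" + value

def pack_fealist(entries) -> bytes:
    # FEALIST: cbList (4-byte LE, counting the whole list including
    # this header) followed by the packed FEA entries.
    body = b"".join(pack_fea(*e) for e in entries)
    return struct.pack("<I", 4 + len(body)) + body

blob = pack_fealist([(0, b"EA1", b"\x00" * 8)])
print(len(blob), hex(struct.unpack("<I", blob[:4])[0]))
```

It is this cbList header field, taken from attacker-controlled IN-DATA, that srv!SrvOs2FeaListSizeToNt later overwrites with the truncated WORD value.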

To see the values returned by this function, we set the following breakpoints:

bp srv!SrvOs2FeaListToNt+0x10 ".printf \"feasize before: %p\\n\",poi(edi);r $t0 = @edi;g;"

bp srv!SrvOs2FeaListToNt+0x15 ".printf \"NTFEA size: %p feasize after: %p\\n\",eax,poi(@$t0);g;"

After breaking we get:

  • feasize before: 00010000
  • feasize after: 0001ff5d
  • NTFEA size: 00010fe8

Accordingly, we found that FEALIST ➜ cbList was updated from 0x10000 to 0x1ff5d. But what part of the code is making the wrong calculation? The code below shows how the error happens:

Figure 3: Code snapshot showing error in calculating FEALIST ➜ cbList

In the code snapshot above, line 40 onwards shows an example of the calculation error. Because the original FEA list size was updated, the iteration that copies values to the NTFEA list goes beyond the NTFEA size returned in v6 (which was 00010fe8). Note that if the function returns at line 28 or at line 21, the FEA list size is not updated. The other condition that leads to the update of v1, besides the one used by EternalBlue, is when there is trailing data at the end of the FEA list that is not large enough to store another FEA structure.

We also analyzed what happens in kernel memory during a buffer overflow of the large non-paged kernel pool. When SrvOs2FeaListSizeToNt returns, the size required to store the NTFEA list is 00010fe8, which requires a large kernel pool allocation in srv.sys. The following breakpoints help track exactly what happens when the FEA list is converted to an NTFEA list:

bp srv!SrvOs2FeaListToNt+0x99 ".printf \"NEXT: FEA: %p NTFEA: %p\\n\",esi,eax;g;"
bp srv!SrvOs2FeaToNt+0x4d ".printf \"MOV2: dst: %p src: %p size: %p\\n\",ebx,eax,poi(esp+8);g;"

bp srv!SrvOs2FeaListToNt+0xd5

To sum it up, once SrvOs2FeaListSizeToNt has been called and the pool allocated, the function SrvOs2FeaToNt is used while iterating over the FEA list to convert its elements. Inside SrvOs2FeaToNt there are two _memmove operations where all the buffer copies happen. With the aforementioned breakpoints, it is possible to track what happens during the FEA list conversion, though the trace takes quite some time.

Figure 4: Code snapshot showing copy operations

After the trace, the breakpoint srv!SrvOs2FeaListToNt+0xd5 is hit and we can collect all the data required to analyze the buffer overflow. There are 605 copy operations of size 0 because the beginning of the payload consists of 605 zero-length FEA structures. The next FEA size is F383 (copy 606), and the resulting copy ends at 85915ff0.

After copy operation 606, the write pointer is at the end of the buffer: 85905008 + 10FE8 = 85915FF0. However, another FEA iteration happens, this time with size A8, which overwrites the next memory area. Note that the overwritten data lies in a different pool, in this case the one owned by srvnet.sys.
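A quick sanity check of the overflow arithmetic, using the values from the trace:

```python
pool_base  = 0x85905008   # start of the NTFEA allocation in the large pool
alloc_size = 0x10FE8      # size returned by SrvOs2FeaListSizeToNt

copy_606_end = pool_base + alloc_size
assert copy_606_end == 0x85915FF0   # copy 606 ends exactly at the buffer end

# Copy 607 still executes because cbList was inflated to 0x1FF5D,
# writing 0xA8 more bytes past the allocation into the adjacent pool.
overflow_bytes = 0xA8
print(hex(copy_606_end), "->", hex(copy_606_end + overflow_bytes))
```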

The FEA after copy operation 607 is corrupted, so once the last FEA in the NT Transaction has been sent to the server, it returns STATUS_INVALID_PARAMETER (0xC000000D).

Figure 5: Code snapshot showing the corrupted FEA and server return

EternalBlue’s Exploitation Capabilities

The overflow happens in non-paged pool memory, specifically in the large non-paged pool. Large non-paged pool allocations do not have a pool header. Because of this, the buffer allocated after the large pool buffer can be one owned by a driver, holding that driver's data.

Therefore, the attack has to manipulate the pool buffer that comes after the overflowed buffer. EternalBlue's technique is to take control of the SRVNET driver's buffer structures. To achieve this, both buffers should be aligned in memory, so the non-paged kernel pool has to be sprayed. The technique is as follows:

  1. Create multiple SRVNET buffers (grooming the pool)
  2. Free some of the buffers to create some holes where the SRV buffer will be copied
  3. Send the SRV buffer to overflow the SRVNET buffer.

Exploitation Mechanism

The vulnerable buffer overflow happens in kernel non-paged memory, specifically in the large non-paged pool. These pools do not have any pool headers embedded at the beginning of the page, so special techniques are required to exploit them. The technique requires reverse engineering a structure that can be allocated in the overflow area, as shown below:

Figure 6: EternalBlue’s exploit mechanism

The creation of multiple SRVNET buffers (kernel grooming) shown here approximates what happens in memory and is used simply to represent the idea. Note that we have also intentionally omitted other details to prevent our analysis from being misused.

Figure 7: EternalBlue’s exploit chain

EternalBlue’s Exploit Chain

EternalBlue goes through a chain of processes in order to successfully exploit a vulnerable system or network, as shown above.

EternalBlue first sends the SRV buffer, withholding the last packet, because the large non-paged pool buffer is created only when the last data in the transaction arrives at the server. The SMB server accumulates the data in an input buffer until all transaction data has been read; the total transaction size is specified in the initial TRANS packet. Once all transaction data has arrived, the SMB server processes it. In this case, the data is dispatched to the SrvOpen2 function, which reads it via the Common Internet File System (CIFS) protocol.

At this point, EternalBlue ensures that all sent data has been received by the server by sending an SMB ECHO packet. Because the attack can be carried out over a slow network, this echo command is important.

In our analysis, even after the initial data is sent, the vulnerable buffer is not yet created in memory. Kernel grooming tries to allocate the vulnerable SRV buffer just before an SRVNET buffer, using these steps:

  1. FreeHole_A: EternalBlue starts creating kernel hole A by sending an SMBv1 packet
  2. SMBv2_1n: Send a group of SMBv2 packets
  3. FreeHole_B: Send another free-hole buffer; this one must be sent before the previous hole is freed to make sure a separate buffer is created
  4. FreeHole_A_CLOSE: Close connection A so its buffer is freed, creating hole A
  5. SMBv2_2n: Send a group of SMBv2 packets
  6. FreeHole_B_CLOSE: Close connection B so its buffer is freed, creating hole B
  7. FINAL_Vulnerable_Buffer: Send the last packet of the vulnerable buffer
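The grooming sequence above can be modeled with a toy first-fit allocator. This is a deliberate simplification: real kernel pool reuse depends on allocator internals and chunk sizes, but it captures why the vulnerable buffer ends up directly before an SRVNET buffer.

```python
pool = []  # one entry per large-pool chunk, in address order

def alloc(tag):
    """Reuse the first free slot if any, otherwise append a new chunk."""
    for i, owner in enumerate(pool):
        if owner is None:
            pool[i] = tag
            return i
    pool.append(tag)
    return len(pool) - 1

def free(i):
    pool[i] = None

hole_a = alloc("FreeHole_A")        # step 1
for _ in range(3):
    alloc("SRVNET")                 # step 2: SMBv2_1n
hole_b = alloc("FreeHole_B")        # step 3
free(hole_a)                        # step 4: FreeHole_A_CLOSE
for _ in range(3):
    alloc("SRVNET")                 # step 5: SMBv2_2n (one fills hole A)
free(hole_b)                        # step 6: FreeHole_B_CLOSE
vuln = alloc("SRV_vulnerable")      # step 7: lands in hole B

# The vulnerable buffer now sits directly before an SRVNET buffer,
# so the overflow spills into SRVNET-owned memory.
print(pool, "overflow target:", pool[vuln + 1])
```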

The vulnerable buffer is thus created in memory just before an SRVNET buffer, and part of the SRVNET buffer is overwritten. The conversion from FEA list to NTFEA list returns an error because the FEA structs are invalid after a certain point, at which the server returns STATUS_INVALID_PARAMETER (0xC000000D).

Patch your systems

Given how EternalBlue served as the doorway for much of the malware that severely impacted end users and enterprises worldwide, it also serves as a lesson on the importance of applying the latest patches and keeping systems and networks updated. Microsoft has already issued a fix for EternalBlue on Windows systems, including unsupported operating systems.

Apart from implementing regular patch management for systems and networks, IT/system administrators are also advised to adopt best practices such as enabling intrusion detection and prevention systems, disabling outdated or unnecessary protocols and ports (like 445), proactively monitoring network traffic, safeguarding endpoints, and deploying security mechanisms such as data categorization and network segmentation to mitigate damage in case of exposure. Employing virtual patching can also help defend against unknown vulnerabilities.

Trend Micro Solutions

Trend Micro™ Deep Security™ and Vulnerability Protection provide virtual patching that protects endpoints from threats such as fileless infections and those that abuse unpatched vulnerabilities. OfficeScan’s Vulnerability Protection shields endpoints from identified and unknown vulnerability exploits even before patches are deployed.

Trend Micro™ Deep Discovery™ provides detection, in-depth analysis, and proactive response to attacks using exploits and other similar threats through specialized engines, custom sandboxing, and seamless correlation across the entire attack lifecycle, allowing it to detect these kinds of attacks even without any engine or pattern update.

More in-depth information on Trend Micro’s solutions for EternalBlue and the malware that leverage the exploit can be found in these technical support pages:

Updated as of June 5, 2017, 7:45 PM PDT with minor corrections on some wordings and terms used in the article. 

Post from: Trendlabs Security Intelligence Blog – by Trend Micro
