Thursday, October 2, 2014

World War Zero Access — When Zombie botnets come alive

To kick off cyber security awareness month and, of course, the Halloween month, today we discuss how to bring botnet zombies (or zombots, as we sometimes call them) back from the dead, and why you should care.

Botnets have become big business in criminal circles, and a scourge on the Internet. At any given moment there are many botnets active, controlling millions of unwilling computers to send spam, steal personal information, or whatever someone paying the botnet controllers wants them to do.

The industry, naturally, has taken measures against these botnets. Some of them are controlled by command-and-control servers that are reachable at continuously changing, algorithm-generated domains; hijacking these domains effectively stops the malware from receiving new commands.
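To illustrate the idea, here is a toy domain generation algorithm (DGA) sketch. Everything in it — the seed, the hash, the label length — is purely illustrative and does not correspond to any real malware family's algorithm:

```python
# Toy domain generation algorithm (DGA) sketch -- purely illustrative,
# not the scheme of any real malware family.
import hashlib
from datetime import date

def generate_domains(seed, day, count=5):
    """Derive `count` rendezvous domains for a given day from a seed."""
    domains = []
    for i in range(count):
        material = '%s|%s|%d' % (seed, day.isoformat(), i)
        digest = hashlib.sha256(material.encode()).hexdigest()
        # Map hex digits onto letters to get a plausible-looking label
        label = ''.join(chr(ord('a') + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + '.com')
    return domains

# Both the malware and whoever controls (or seizes) the seed and date
# can precompute the same list and register those domains in advance.
print(generate_domains('s3cr3t', date(2014, 10, 2)))
```

This determinism is exactly what makes domain hijacking work: defenders who recover the algorithm can register tomorrow's domains before the botnet does.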

More recent botnets, such as Zeus Gameover, are more decentralized and implement their own peer-to-peer protocols to make taking them down harder. The usual approach there is to sinkhole the peers, i.e., to inject numerous bogus peers into the botnet until they crowd out the real ones. This is a half-measure at best, and botnets are quickly adapting to resist this sort of attack.

Fighting botnets is hard work.

The end result is that, unless the threat is removed directly from the infected machines, the malware may linger there indefinitely. This can have several consequences, one of which is posthumous reactivation after the original operators are gone.

Case study: Zero Access

Take the Zero Access malware, for example. Originally a kernel-level rootkit, it infects machines and acts as a delivery framework for other malicious payloads. Its communication protocol was peer-to-peer, with a backup C&C server, on a few TCP ports, with two layers of communication:

 - 'getL', 'retL', 'newL' commands, which manage the dispersion of new infected nodes;
 - 'getF', which downloads a module which executes an actual malicious payload. Files are signed using a 512-bit RSA key.

Sometime around 2012, Zero Access upgraded their protocol: they switched to UDP, and switched packet encryption from RC4 with a hardcoded key to a custom 'cipher':

    void encrypt(unsigned char * msg, size_t msg_size) {
      uint32_t k = 0x66747032;
      for(size_t i = 0; i < msg_size; i += 4) {
        /* XOR each little-endian 32-bit word with k */
        store32_le(msg + i, load32_le(msg + i) ^ k);
        k = rotate_left(k, 1);  /* 32-bit rotate left by one */
      }
    }

This later version of Zero Access also upgraded the RSA key to sign payloads to 1024 bits. Besides that, the protocol didn't change much, beyond some basic IP filtering to prevent sinkholing.
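Since this 'cipher' is just XOR against a rotating public constant, encryption and decryption are the same operation, and anyone can strip the layer off captured packets. A quick Python model of the C routine above makes this plain (the sample packet bytes are illustrative):

```python
# Python model of the ZeroAccess v2 packet 'cipher': XOR each 32-bit
# little-endian word with a key that rotates left by 1 bit per word.
import struct

def rotl32(x, n):
    """Rotate a 32-bit value left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def za_crypt(msg):
    """Encrypts (or, identically, decrypts) a message whose length is a multiple of 4."""
    out = bytearray()
    k = 0x66747032  # hardcoded initial key; the hex digits spell 'ftp2'
    for i in range(0, len(msg), 4):
        (w,) = struct.unpack_from('<I', msg, i)
        out += struct.pack('<I', w ^ k)
        k = rotl32(k, 1)
    return bytes(out)

packet = b'getLabcd'  # illustrative 8-byte payload
assert za_crypt(za_crypt(packet)) == packet  # the cipher is its own inverse
```

With the constant public, this layer provides obfuscation, not confidentiality.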

A 2013 survey by Christian Rossow et al. found around 50,000 Zero Access 1 nodes still around; other, more recent crawls by Peter Kleissner show that this number has remained fairly stable. Indeed, there are still many Zero Access 1 bots floating around that have not upgraded to version 2 for one reason or another. Hijacking these nodes to do one's bidding is one RSA-512 key away:

-----END PUBLIC KEY-----
Is this a problem? Not at all. The first 512-bit RSA key was factored 15 years ago by a team at CWI; it took many machines and months of work. In the years since, the task has become dramatically easier. Today, with tools like GGNFS, msieve, or CADO-NFS, you can factor keys of this size (and larger) in just a couple of days on a beefy machine. And we did. For all intents and purposes, every remaining old Zero Access node is controllable by anyone with a spare machine to do this.
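To make concrete what factoring buys you, here is a toy example with a deliberately tiny modulus. Real attacks need number field sieve tools (CADO-NFS and friends) to find the factors of a 512-bit modulus; the primes below are textbook-sized and only show the arithmetic that follows once factoring succeeds:

```python
# Toy illustration: once n = p*q is factored, the private exponent
# follows directly from p and q. The numbers here are illustrative;
# a real RSA-512 modulus requires NFS tools to factor.
p, q = 61, 53             # the 'factored' primes
n = p * q                 # public modulus (3233)
e = 17                    # public exponent
phi = (p - 1) * (q - 1)   # Euler's totient of n
d = pow(e, -1, phi)       # private exponent recovered from the factorization

msg = 42
sig = pow(msg, d, n)          # 'sign' with the recovered private key
assert pow(sig, e, n) == msg  # verifies under the original public key
```

At that point the attacker signs payloads that every remaining bot will accept as genuine.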

To show we did in fact factor the key, here is a Base64-encoded RSA-PSS signature for the message 'Braaaaaaaaaains!':

You can verify the signature with the following Python script:

    from base64 import b64decode
    from Crypto.Signature import PKCS1_PSS
    from Crypto.Hash import SHA
    from Crypto.PublicKey import RSA

    pub = '''-----BEGIN PUBLIC KEY-----
    -----END PUBLIC KEY-----'''
    pk  = RSA.importKey(pub)
    verifier = PKCS1_PSS.new(pk)

    msg = 'Braaaaaaaaaains!'
    sig = '''HnwaZghZhtiPJ+OHbTnNDAVcg/yvciDdmBi+SRoKwWXV4k

    h = SHA.new(msg)
    print verifier.verify(h, b64decode(sig))

What does this imply? An organization infected with a dead botnet's malware can be specifically targeted, by combining botnet crawling with WHOIS information. The malware could then be taken over for information extraction. Keep in mind that botnets are not solely after monetary profit by way of credit card info. They are often rented to the highest bidder, and can be used for corporate espionage, spam, or as a delivery platform for more advanced intrusions.

This is less farfetched than you think. Among the many Snowden revelations of the last year, one of them was that the NSA has been busy taking over botnets for their own purposes, going as high as 140,000 hijacked peers.
It is safe to assume that they are not the only organization doing this. Until the infections themselves are killed by their respective organizations, this is a latent risk.

Crypto is hard

The buck does not stop here. Malware has routinely used cryptography in its operation for a while, be it for C&C communication, encrypting files for ransom, or authenticating payloads. Like any other software, however, malware regularly gets it wrong, often with hilarious effects.

The point here is that malware authors make cryptographic mistakes just like any other developer, and this can often translate to direct control of the infected machine by entities other than the author. In other words, even if the original authors are taken down, even if the original command and control servers disappear, so long as the malware is still present, in a dormant state, it is still a significant risk to the whole ecosystem.

But wait, there's more. The most common algorithm and key length in all the malware we have analyzed is 1024-bit RSA. This was once impregnable, like RSA-512 before it, but 1024-bit keys have already reached end-of-life status and are gradually becoming factorable, if they are not already (by well-funded agencies).

Arjen Lenstra estimated, back in 2009, that RSA-1024 was probably secure against community efforts (that is, relatively underfunded ones) until around 2014. A few years from now, we may see many of these botnets, even if abandoned by then, come alive again at the hands of the first attacker to factor those keys.
Yes, master...I will do your bidding


Botnets have always been quick to adapt to new landscapes, and are increasingly harder to fully take down as a result of the industry's attempts to do just that. Some of the more advanced malware today is not far from being an APT. In fact, some malware is APT. Therefore it is very dangerous to dismiss infections, even from allegedly inactive botnets, as low-risk. Most of them are equipped with information-stealing capabilities, and can fall into the wrong hands, even after the original owners are gone.

How can you know if your organization is plagued by malware or zombots? One way is by listening in to our breach intelligence feed, Vantage, which continuously monitors the Internet for common botnet activity, among other threats.

So be careful, because the next time you hear trick or treat, it might be a new zombie horde knocking at your ports! Happy Halloween Month ;)

Wednesday, July 2, 2014

H.323 Registration Weaknesses: Part 2

Last time, we saw how Avaya's default H.323 gateway authentication method was vulnerable to an attacker able to passively intercept communications between a client and the gateway. Upon hearing this, we were pointed to the secure authentication method employed in Avaya's gear, with the proprietary OID 2.16.840.1.114187.1.6.2.

This new authentication method is based on DH-EKE, a password-based authenticated key exchange (PAKE). Invented by Bellovin and Merritt in the early 1990s, PAKE schemes kill two birds with one stone:

  • They prevent dictionary attacks that recover passwords, even by an attacker able to view all traffic;
  • They prevent man-in-the-middle attacks if an active attacker does not know the password. This is an improvement over regular Diffie-Hellman key agreement.

Avaya's PAKE scheme is DH-EKE, one of the variants originally proposed by Bellovin and Merritt. Many more PAKE schemes have popped up over the years, like SPEKE, SRP, J-PAKE, or SPAKE2. Many of them are hindered by patents; EKE has the advantage that its patent expired a few years ago.

In the secure scheme, the following message sequence happens between the gateway (\(G\)) and user (\(U\)):

  • gatekeeperRequest (\(U \rightarrow G\)): \(U\) generates their own Diffie-Hellman public key, \(\text{DH}_U\), and encrypts it using AES-128 in CTR mode using a key derived from the first 16 bytes of SHA-1(password). \(U\) sends this encrypted public key to \(G\), along with a nonce \(R_e\), and \(IV\), the CTR mode nonce.
  • gatekeeperConfirm (\(G \rightarrow U\)): \(G\) generates its own Diffie-Hellman key \(\text {DH}_G\) and computes the shared key \(K_m\) after decrypting \(U\)'s public key. \(G\) then sends back \(\text{DH}_G\) (unencrypted), a nonce \(R_g\), and the HMAC-SHA1-96 authenticator of this message (with a key derived from \(K_m\)).
  • registrationRequest (\(U \rightarrow G\)): This request proceeds without any significant cryptographic material, except that an HMAC-SHA1-96 authenticator is appended to ensure that \(U\) also knows the shared key \(K_m\).
  • registrationConfirm/registrationRejected (\(G \rightarrow U\)): This message notifies the client whether the registration failed or succeeded.

A good implementation of this protocol makes both offline password bruteforcing and man-in-the-middle attacks infeasible, as pointed out above. When we analyzed Avaya's client software, however, we found the following optimization: the client did not generate a new Diffie-Hellman key for each registration. Instead, it cached the same \(\text{DH}_U\) for every registration within the same session. This allows offline bruteforcing of the password, which can in turn lead to man-in-the-middle attacks.

How does this work? Suppose that an attacker captures two message exchanges like the above for the same user and password. Let \(\text{AES-CTR}_{K_p}(IV)\) be the keystream obtained by encrypting \(\text{IV}, \text{IV}+1\), etc, using \(K_p\) = SHA1(password)[0:16]. Since \(\text{DH}_U\) is the same for both exchanges, the encrypted content of the two gatekeeperRequest packets can be represented as follows:

\[ C_0 = \text{AES-CTR}_{K_p}(\text{IV}_0) \oplus \text{DH}_U \\ C_1 = \text{AES-CTR}_{K_p}(\text{IV}_1) \oplus \text{DH}_U \]

We can use this fact to test whether a guessed password is correct. Suppose we guess a password \(p\); then it is the correct password only if

\[ \text{AES-CTR}_{K_p}(\text{IV}_0) \oplus C_0 = \text{AES-CTR}_{K_p}(\text{IV}_1)  \oplus C_1 \]

Therefore one password check requires only two AES encryptions and one SHA-1 evaluation, and can be made extremely fast. GPU crackers could have a field day with this.
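The check is also embarrassingly parallel across password guesses. The sketch below mimics the structure of the attack; for simplicity it stands in a SHA-1-based keystream for AES-128-CTR (the real protocol derives \(K_p\) = SHA1(password)[0:16] and runs AES-CTR), so the keystream construction and all sample values here are illustrative:

```python
# Sketch of the offline password test enabled by DH_U reuse.
# NOTE: the real protocol uses AES-128-CTR keyed with SHA1(password)[0:16];
# a SHA-1-based keystream stands in here so the sketch runs with only the
# standard library. The structure of the check is identical.
import hashlib

def keystream(password, iv, length):
    kp = hashlib.sha1(password).digest()[:16]  # K_p, as in the protocol
    out, counter = b'', 0
    while len(out) < length:
        out += hashlib.sha1(kp + iv + counter.to_bytes(4, 'big')).digest()
        counter += 1
    return out[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def check_password(guess, iv0, c0, iv1, c1):
    """True iff decrypting both captures with `guess` yields the same DH_U."""
    d0 = xor(keystream(guess, iv0, len(c0)), c0)
    d1 = xor(keystream(guess, iv1, len(c1)), c1)
    return d0 == d1

# Simulate two captured gatekeeperRequests that reuse the same DH_U:
dh_u = b'\xaa' * 32
iv0, iv1 = b'\x00' * 8, b'\x01' * 8
c0 = xor(keystream(b'1234', iv0, 32), dh_u)
c1 = xor(keystream(b'1234', iv1, 32), dh_u)

assert check_password(b'1234', iv0, c0, iv1, c1)      # correct guess
assert not check_password(b'0000', iv0, c0, iv1, c1)  # wrong guess
```

Had the client generated a fresh \(\text{DH}_U\) per registration, the two decryptions would differ for every guess and this test would tell the attacker nothing.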

Unlike the case in the previous post, a sufficiently strong password is enough to thwart bruteforce here, since this scheme uses stronger cryptographic primitives. But it is a safe assumption that most passwords are weak, especially on phones, where the password is more likely than not a simple PIN.

I should also stress that this bug has been fixed; Avaya's PSST was rather forthcoming and professional in the handling of our report, and made the necessary steps to resolve this issue as quickly as possible.

Tuesday, June 24, 2014

H.323 Registration Weaknesses: Part 1

Voice-over-IP (VoIP) is a rather obscure niche. There are essentially two main standards: SIP and H.323. While the former may be more well-known by the Internet crowd, H.323 is still the dominant standard today for large deployments, due to the telecom heritage of ITU-T standards.

Some time ago, we found a vulnerability in the endpoint registration phase of several Avaya products. In fact, we found two vulnerabilities: one in the "default" authentication method for Avaya phones, and another one in the "secure" method. This post describes the first one. I will describe the second, more interesting, flaw in the next post.

The default authentication mode for Avaya's phones was based on the ISO/IEC 9798-2 standard. This is essentially a challenge-response protocol, wherein the user must prove to the gateway that they know the password by encrypting a gateway-chosen challenge with a key derived from that password. When a client wants to authenticate itself to a gateway, both knowing a shared secret (e.g., a password), the following interaction happens between the user (\(U\)) and the gatekeeper (\(G\)):

  • gatekeeperRequest (\(U \rightarrow G\)): this message contains the authenticationCapability field set to pwdSymEnc, and the algorithmOIDs field set to the OID, which is defined to be DES in CTS mode.
  • gatekeeperConfirm (\(G \rightarrow U\)): \(G\) sends back to \(U\) a ClearToken containing the challenge. This challenge consists of a timestamp, a random integer, and generalID, an identifier of the gateway.
  • registrationRequest (\(U \rightarrow G\)): \(U\) sends the ClearToken back to \(G\), encrypted under the user's key \(K_{p}\) using the aforementioned DES-ECB-CTS. The key derivation is virtually nonexistent, and resembles the weak Norton Diskreet function breakable with Copacobana:

        unsigned char deskey[8] = {0,0,0,0,0,0,0,0};
        for(i=0; i < pwlen; ++i)
          deskey[i%8] ^= 2*password[i];

  • registrationConfirm/registrationRejected (\(G \rightarrow U\)): Depending on whether the authentication succeeded, the gateway sends back one of these messages to the user.

So what is the problem here? The authentication protocol, ISO/IEC 9798-2 (Mechanism 2), seems to be sound according to its requirements. This protocol, however, does nothing to prevent offline attacks that retrieve the shared key; since this key is derived from a (potentially weak) password, bruteforce works quite effectively.

One might argue that this is not really a weakness, and that a strong password should be able to keep attackers at bay. Unfortunately, due to the poor cipher choice (56-bit DES in ECB-CTS mode), poor key derivation function, and known plaintext (generalID), one is able to bruteforce any password, regardless of its strength.
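The key derivation can be reproduced in a few lines, and counting the keys it can actually produce shows how small the effective keyspace is. This sketch reimplements the C snippet above in Python; the enumeration of 4-digit PINs is our own illustration:

```python
# Reimplementation of the DES key derivation from the handshake above.
from itertools import product

def derive_deskey(password):
    deskey = bytearray(8)
    for i, ch in enumerate(password):
        deskey[i % 8] ^= (2 * ch) & 0xFF  # the C code's 2*password[i], mod 256
    return bytes(deskey)

# Every 4-digit PIN maps to a distinct key -- but that is only 10,000
# keys out of a nominal 2^56, trivially enumerable on any machine.
keys = {derive_deskey(bytes(pin)) for pin in product(b'0123456789', repeat=4)}
print(len(keys))  # 10000
```

Note also that every derived key byte is even, since 2*x mod 256 always clears the low bit, so the derivation can never reach most of the DES keyspace.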

Consider the following gatekeeperRequest sample packet:

0000   45 00 01 00 c0 17 24 09 12 02 6a df 14 00 44 00  E.....$...j...D.
0010   45 00 46 00 49 00 4e 00 49 00 54 00 59 00 2d 00  E.F.I.N.I.T.Y.-.
0020   47 00 4b                                         G.K             

We have known plaintext there, namely the UTF-16 string "DEFINITY-GK". Given the encrypted registrationRequest packet,

0000   3d 7a d2 d5 98 9a 04 1c 07 b0 b2 c0 ca 9b fb cb  =z..............
0010   c6 ab 9c 5a 74 e5 0f 4d 80 3e bf fe bb bb d9 ca  ...Zt..M.>......
0020   1d 33 ef                                         .3.             

we can obviously bruteforce the DES key out of a network capture. In this example, the password employed was "3002".

DES bruteforcing is a very well studied problem, and specialized hardware can break any key quickly and at relatively low cost. With a weak key derivation like the one above, it becomes much easier. Consider, for example, a long password that only uses the alphabet A-Z. This restricts each password byte to the bit pattern 010XXXXX, or 5 useful bits per byte, for at most \(2^{40}\) distinct keys, a keyspace already smaller than that of bruteforcing a 9-character password (\(\approx 2^{42}\) possibilities).
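A quick sanity check on those keyspace numbers (the counting here is ours, following the bit-pattern argument above):

```python
# Sanity check on the keyspace estimates. A-Z bytes match the bit
# pattern 010XXXXX, so each of the 8 key bytes carries at most 5 bits.
from math import log2

alphabet_keys = 2 ** (5 * 8)   # upper bound on keys from an A-Z password
nine_char = 26 ** 9            # exhaustive search of a 9-char A-Z password
print(log2(alphabet_keys))         # 40.0
print(round(log2(nine_char), 1))   # 42.3
```

So attacking the derived key directly is cheaper than guessing the password once the password exceeds about 8 characters.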

One can go even further, by noticing that similar products use similar generalID strings. This allows for precomputation to break those: using time-memory tradeoff techniques, an attacker can perform a single large precomputation that allows for much faster key recovery. Martin Hellman showed that a precomputed table of \(2^{38}\) values allows recovery of the key in about \(2^{38}\) iterations, after a \(2^{56}\)-iteration table generation procedure. The state of the art today is rainbow tables, which provide a significant, albeit non-asymptotic, speedup over Hellman's method.

This is it for the old default authentication method. In short, the symmetric authentication method allows offline password recovery and, to make matters worse, no password is strong enough to withstand it. In the next post, we will see Avaya's public-key solution to the offline cracking problem, and how it, too, was broken.