Looking back at Teredo, IPv6 deployment, and protocol design

I just read the paper on Teredo published in the Computer Communication Review: Investigating the IPv6 Teredo Tunnelling Capability and Performance of Internet Clients, by Sebastian Zander, Lachlan L. H. Andrew, Grenville Armitage, Geoff Huston and George Michaelson. It is a very well done study, which used image links on web sites to test whether clients could connect over IPv4 or IPv6, and, for IPv6, to distinguish native connections from Teredo connections. Their conclusion is that many more hosts would use IPv6 if Microsoft shipped Teredo in the “active” rather than the “dormant” state in Windows Vista and Windows 7, but that communications using Teredo incur very long delays, at least 1.5 seconds more to fetch a page than with native IPv4 or IPv6 connections. Both of these issues can be traced to specific elements of the protocol design, and especially to our emphasis on security over performance.

I proposed the first Teredo drafts back in 2000, shortly after joining Microsoft. The idea was simple: develop a mechanism to tunnel IPv6 packets over the IPv4 Internet, in a fashion that works automatically across NATs and firewalls. It seemed obvious, but it was also quite provocative – the part about working automatically across firewalls did not sit well with firewall vendors and other security experts. In fact, this was so controversial that I had to revise the proposal almost 20 times between July 2000 and the eventual publication of RFC 4380 in February 2006. Some of the revisions dealt with deployment issues, such as minimizing the impact on the server, but most were responses to security concerns. When Microsoft finally shipped Teredo, my colleagues added quite a few restrictions of their own, again largely to mitigate security risks and address some deployment issues.

The connection between Teredo clients and IPv6 servers is certainly slowed down by decisions we made in the name of security. When a Teredo client starts a connection with an IPv6 server, the first packet is not the TCP “SYN,” but rather a Teredo “bubble” encapsulated in an IPv6 ICMP echo request (ping) packet. The client waits to receive the response through the Teredo relay closest to the server, and only then sends the SYN packet through that relay. Instead of a single round trip, we have at least two: one for the ICMP exchange and another for the SYN itself. That means at a minimum twice the setup delay. But in fact, since the client is dormant, it will first send a qualification request to the Teredo server, to make sure that the server will be able to relay the exchange, adding another round trip, for a total of three. Teredo servers also tend to be quite overloaded, and queuing delays in the servers can add quite a bit of extra latency. This is very much what the study demonstrates.
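To make the cost concrete, here is a rough back-of-the-envelope model of those round trips, written as a small Python sketch. The function and the timing values are illustrative assumptions, not numbers taken from the study:

    # Rough, illustrative model of the extra round trips a dormant Teredo
    # client pays before any data flows. All timings are assumptions.
    def dormant_teredo_setup_delay(rtt_qualification, rtt_icmp_probe, rtt_syn,
                                   server_queuing=0.0):
        """Lower bound (in seconds) on connection setup time.

        rtt_qualification: client <-> Teredo server qualification exchange
        rtt_icmp_probe:    ICMP echo out via the server, reply back via the
                           relay closest to the IPv6 peer
        rtt_syn:           TCP SYN / SYN-ACK through the discovered relay
        """
        return rtt_qualification + rtt_icmp_probe + rtt_syn + server_queuing

    # With 200 ms per round trip and half a second of queuing at an overloaded
    # server, setup already exceeds one second before the first byte of data.
    print(dormant_teredo_setup_delay(0.2, 0.2, 0.2, server_queuing=0.5))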

We could have engineered the exchange for greater speed. For example, instead of first sending a ping to the server, we could have just sent the TCP SYN to the server, and used the SYN response to discover the relay. This would probably have increased the connection success rate, as many servers are “protected” by firewalls that discard ICMP packets. But at the time we convinced ourselves that it would be too easy to attack: a hacker could send a well-timed spoofed TCP SYN response and hijack the connection. The randomness of the source port number and of the initial TCP sequence number provides some protection against spoofing, but these are only 16 and 32 bits respectively, and that was deemed too risky. The ICMP exchange, in contrast, can carry a large random number and is almost impossible to spoof by attackers who are not on the path. So the protocol design picked the “slow and secure” option.

The connection to IPv6 hosts is just one example of these design choices favoring security over performance. There are quite a few other parts of the protocol where we could have chosen more aggressive options, using optimistic early transmission instead of relying on preliminary synchronization. But we really wanted to deliver a secure solution – secure computing was indeed becoming a core tenet of the Microsoft culture by that time. We were also concerned that if Teredo were perceived as insecure, more and more firewalls would simply block it, and our deployment efforts would fail. All of these were valid reasons, but the long latencies observed in the study are also an impediment to deployment. If I were doing it again, I would probably shift the balance a bit more towards performance.

But then, the really cool part of the study is their point that removing some of the restrictions on Teredo would almost triple the number of hosts capable of downloading Internet content over IPv6, bringing IPv6 capability to 15-16% of Internet hosts. That would be very nice, and I would be happy to see that!


Making self-signed TLS happen

The IETF is in a pickle again. The security-conscious folks would very much like to make TLS the norm for HTTP/2.0, but there is this little problem of certificate management. TLS requires that the server present a public key certificate. Browsers like IE9 only trust the certificate if it is signed by a certification authority, but getting a signature like that costs money. A few hundred dollars here or there is not a problem for the big sites, but it is a significant expense for the smaller folks. They prefer a simpler solution, deploying TLS with “self-signed” certificates, which they can manufacture freely. But then, self-signed certificates are not all that secure…

Suppose that www.example.com tries to secure its TLS connections with a self-signed certificate. Alice clicks on the link, but a hacker has managed to pollute her DNS server, and the request is redirected to a hacker-controlled server. That server can provide its own self-signed certificate. Alice sees a “secure” connection and believes she has landed at the right place, but her connection is really controlled by the attacker. Many see that as a fatal flaw of the “self-signed certificate” approach. The simple analysis is “self-signed bad, authority-signed good.” But things are not so simple.

Self-signed certificates do have some advantages over plain text connections, since the data is encrypted. Forging the site’s certificate requires an active attack: hacking the DNS, forcing the client to use an untrusted proxy, or somehow affecting the routing of IP packets. In the absence of active attacks, the encryption will protect against passive eavesdropping. Since non-encrypted connections can also be affected by active attacks, it would seem that self-signed TLS only has advantages over plain text HTTP. The main disadvantage is psychological: the client might come to trust the encryption and take more risks than with a plain text exchange.

The counterpoint to that argument is that authority-signed certificates are not entirely secure either. I discussed that in a previous post, The Comodo Hacker, PKI and Internet Standards. Hackers can compromise TLS by obtaining a fake certificate, either from a rogue authority or through some compromised service. In that post, I proposed a set of solutions, based on the user’s password, a shared secret between the site and the user. We can devise exchanges that provide mutual authentication by proving to both parties that they know the password. We can also use the password exchange to verify that the TLS exchange has not been compromised. It requires some work on standards, and some sharp use of cryptography, but it is entirely doable today.

The good news is that the password-based proofs would also prevent the hacks against the self-signed certificates. I have not attended the IETF for some time, I do not relish the idea of engaging in one more long standardization slog, and my current job description at Microsoft does not include spending time at the IETF. But securing the Internet is a really important outcome. I will have to find a way…

 


Dreaming of open Wi-Fi

Was it only yesterday that people could sit on a bench in a foreign city, scan the airwaves, and just connect to an open Wi-Fi network? It wasn’t called Wi-Fi stealing then, and I know many people who enjoyed the connectivity. But you cannot do that anymore. Today, a scan of the Wi-Fi band reveals lots of protected networks. The networks that are left open are either commercial or run by hackers. Commercial networks want to charge you $10 for an hour of connection. Hackers want you to connect to their network so they can do something nefarious with your data or your computer. No wonder we end up paying through the nose for a flimsy cellular Internet connection.

I actually had a part in the closing of Wi-Fi networks. A few years ago, I was managing the development of the Wi-Fi stack in Windows, and it was pretty obvious that we needed to do something about Wi-Fi security. Many people may not object to sharing their Internet bandwidth, especially when they do not use it. But sharing your Wi-Fi connection does more. It provides direct access to all the computers in your home. Chances are that some of these computers or appliances are old, or unpatched, or poorly configured. So we made it really easy for Windows users to manage Wi-Fi passwords. Together with the networking industry, we engaged in an education campaign to ask people to secure their network. It seems that we were really successful.

Of course, we will not go back to a time when all networks were open. Besides the security risks, there is also a potential liability. If someone connects to the Internet through your network and does something nefarious, the IP traces will point to your router. You may well have to go to court and try to prove that no, it wasn’t you. That defense may or may not work well. In fact, Germany just passed a law that requires every Wi-Fi user to secure their access point, precisely so that people cannot hide behind open Wi-Fi connections. So, we have to accept that networks in the future will be protected. If we want to enable sharing, we will have to work through that.

All that does not mean we cannot enable Wi-Fi sharing. We just need to engineer a solution that is secure for the Wi-Fi owner and also secure for the visitors. The visitor’s traffic shall be properly isolated from the home network, in order not to expose the local computers and appliances. The visitor shall be kept accountable, which probably requires signing in with a verifiable network identity, and keeping logs of who visited what, and when. The visitor shall be able to verify that the shared network is in good standing, if only to avoid wasting time connecting to poorly connected services.

None of that seems overly hard to do. In fact, we have examples of such a deployment between universities worldwide, with EDUROAM. When a student or a professor from some EDUROAM member, say “A.EDU”, visits another university, say “B.EDU”, they can connect to the local network, say “B-WIFI.” They are requested to sign in, and they do that by providing their identity at their home university, e.g. john.smith@A.EDU, and their own password. This is done using professional-grade equipment and protocols. Wi-Fi authentication uses 802.1X, which is also used in most enterprise networks. The authentication requests are routed using RADIUS, and they are typically encrypted so that a man in the middle cannot access the password. It appears to work very well.
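The heart of that arrangement is realm-based routing of the authentication request. Here is a toy Python sketch of the pattern, with made-up host names and realms; it illustrates the idea rather than any actual EDUROAM implementation:

    # Toy sketch of EDUROAM-style realm routing: the visited network looks at
    # the realm in "user@realm" and either authenticates locally or forwards
    # the request towards the home institution. Names are illustrative.
    LOCAL_REALM = "B.EDU"
    FEDERATION_PROXY = "radius-proxy.federation.example"

    def route_auth_request(identity):
        user, _, realm = identity.partition("@")
        if realm.lower() == LOCAL_REALM.lower():
            return "authenticate against the local RADIUS server"
        # Unknown realm: proxy the request towards the user's home institution.
        # The password stays inside the EAP tunnel, so neither the visited
        # network nor the intermediate proxies ever see it.
        return "forward to {} for realm {}".format(FEDERATION_PROXY, realm)

    print(route_auth_request("john.smith@A.EDU"))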

Of course, EDUROAM only serves universities, and there are big differences between universities and homes. For example, EDUROAM does not make too many efforts to isolate visitors. Visiting scholars may well need to access local computers, so isolation would be counterproductive. Also, a university network is typically shared by thousands of students, and is thus not very isolated to start with. Consumer-grade Wi-Fi routers often do not support 802.1X, which is considered a “professional” feature. RADIUS is an “infrastructure” protocol that is well suited to linking universities, but would be hard to deploy between millions of homes. But EDUROAM outlines one of the key components of the solution: network-based authentication.

Now, we just have to fill the gaps. Can we deploy solutions that are simple enough for home routers, yet have all the good properties of EDUROAM? I have seen many people and companies try, but they have yet to succeed. Maybe it is just too hard, and we have to try something else, like the mobile connection sharing developed by Open Garden. But we can still dream of open Wi-Fi…


Locator-ID split in a new transport protocol

It seems that Internet engineers like to periodically revisit old discussions. For example, every year or so, there will be intense exchanges on the IETF mailing list about the continued use of ASCII for formatting RFCs. The discussion will explore a variety of options before choosing to just maintain the status quo. Of course, the ASCII RFC is just one of these recurring topics. There are a couple more, notably the “ID-locator separation,” which periodically resurfaces as one of the opportunities that were supposedly missed during the selection of IP next generation, now IPv6.

The ID-locator separation argument is fundamentally about routing, and how to scale the routing infrastructure to match an ever larger Internet. In the current Internet, IP addresses have two roles. They tell the network the destination of a packet, i.e. the location of the target, and they tell the TCP layer the context of the packet, i.e. the identity of the peer. The argument is that if we separated these two roles, for example by splitting the address in two parts, we could use one part as a host identifier that never changes, and the other part as a “locator” that could change over time to accommodate multi-homing and mobility, or to facilitate NAT traversal. TCP connections would be identified by the identifier part, and that same identifier part could also be used by firewalls and other filters.

The proposal looks nice, but it is actually quite hard to deploy. It makes the most sense if the locators can be rewritten inside the network, e.g. when crossing the boundaries between management areas, but that can only be done if we design and deploy a new service that retrieves the adequate “locator” for a given identifier. This seems expensive, and the practical solutions are merely variations of NAT, which only rewrite the locators when entering a “private” network. More importantly, the proposals belong to the “infrastructure” category, i.e. new functions that must be deployed by everybody before anybody reaps the full benefits. That means deploying new protocol stacks in pretty much every host and every router of the Internet. We saw with IPv6 how long this kind of deployment takes. See you back in 10 years!

Even if we could actually deploy it, we would have to resolve two nasty issues: security and privacy. The current linkage of address and identity provides a minimal form of security, by checking that the return address works. If host A sends a message to host B and receives back a valid response, host A has a reasonable assurance that the initial message was delivered to the intended address, and that the response is indeed coming from host B. The assumption is false if the packet routing was somehow hacked, but hacking packet routing is hard, so there is some level of protection. If there is no linkage between location and identity, and locators can be changed at will, we lose that minimal security.

Today, hosts get new addresses when they move to different network locations. That provides some measure of privacy. Of course hosts can still be tracked by other means, e.g. cookies, but at least the address is different. If we give hosts a strong “network identifier” that does not change with network location, we enable a new way of tracking. If the identifier cannot be changed, then network services can track users by their network identifier, even if these users take the precaution of deleting the web cookies in their browsers. Of course, the privacy issue could be mitigated by changing the identifier often, but that contradicts the idea of a stable identifier and a variable locator.

The more I think about it, the more I believe that the locator-ID split would be better addressed by building the ID into the transport protocol, instead of trying to redefine the network layer. Basically, the IP addresses, with all their warts and their NATs, become the locators, and the identifiers are entirely managed by the transport protocol. Of course, changing TCP is just as hard as changing IP, but we don’t necessarily have to “change TCP.” There can only be a single network protocol, but there can definitely be multiple transport protocols. We could define a new transport protocol for use by applications that want to manage multi-homing and mobility, or even NAT traversal. The broad lines are easily drawn:

  • Each connection has its own identifier, a large number that is independent of the IP addresses used to route the packet.
  • The identifier is negotiated during the initial exchange, using some cryptographic procedure.
  • The cryptographic procedure generates a secret key used to prove the authenticity of packets.
  • Packets can be sent or received through any pair of IP addresses, as long as they arrive at the destination.
  • Apart from the change in identifiers, the protocol behaves like TCP.

This would provide a large part of the advantages of the ID/locator separation, without requiring any update to the network nodes. If I had to design it now, I would define an encapsulation on top of UDP, so that the packets could be sent across existing NATs. In fact, adding a NAT traversal function similar to ICE would enable deployment behind NATs. The main issue there is the design of the connection identifier negotiation, and its linkage to some form of host identity. There are many possibilities, from variations of Diffie-Hellman to designs similar to TLS. Clearly, that needs some work. But we would get a secure transport protocol that supports multi-homing, renumbering, mobility and NAT traversal. Seems like we should start writing the Internet Drafts!
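To make the idea a bit more concrete, here is a minimal Python sketch of the receive path of such a transport: packets are demultiplexed by the negotiated connection identifier and authenticated with a key derived during the handshake, so the source address can change freely. Everything here, from the packet layout to the key handling, is an illustrative assumption:

    # Sketch: demultiplex by connection identifier, not by the address 4-tuple.
    import hashlib, hmac, os

    class Connection:
        def __init__(self, conn_id, secret):
            self.conn_id = conn_id      # large random identifier, address independent
            self.secret = secret        # stands in for the key derived at handshake time
            self.current_peer = None    # last verified (IP, port); may change freely

    connections = {}

    def open_connection():
        conn_id = os.urandom(16)        # negotiated during the initial exchange
        secret = os.urandom(32)         # placeholder for the Diffie-Hellman result
        connections[conn_id] = Connection(conn_id, secret)
        return conn_id

    def receive(packet, source_addr):
        conn_id, tag, payload = packet  # assume the encapsulation exposes these fields
        conn = connections.get(conn_id)
        if conn is None:
            return None
        expected = hmac.new(conn.secret, conn_id + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None                 # forged or corrupted packet: drop it
        conn.current_peer = source_addr # the peer may have moved or been renumbered
        return payload

    # A packet is accepted from any source address, as long as it carries the
    # right connection identifier and a valid authentication tag.
    cid = open_connection()
    tag = hmac.new(connections[cid].secret, cid + b"hello", hashlib.sha256).digest()
    print(receive((cid, tag, b"hello"), ("192.0.2.7", 4433)))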


Thwarting Google Analytics with Internet Explorer options

Folks seem to be discovering that Google knows a whole lot about us. “Google seems to know your age, what you search…” Well, big news. Google makes money by selling advertisements, and they hope to make even more money by selling even better advertisements, advertisements so well tuned to your preferences that you are very likely to be interested, click, and buy something. That implies knowing you as well as possible or, put another way, spying on you as efficiently as possible.

You may protest, “How can they do that to their customers?” But you are not the customer of Google. The customers are the folks that pay Google money, and these are the advertisers. Hence, the whole business of providing free services is not so much about helping you as it is about knowing you, capturing your attention, and selling your “eyeballs” to the advertisers. The search system may well have started from the noble goal of “indexing the world’s information,” but the side benefit of selling advertisements quickly became the dominant motivation. Gmail quickly “improved” on that: free e-mail, but Google reads all of it so they know you better and can sell more ads. Chrome is a very nice and fast browser, but if you use it, Google knows all the pages that you look at. Google Plus is the latest adjunct to these offers, because Google cannot possibly accept that Facebook would know something about you that they don’t. I don’t know whether Nathan Newman was the first to coin the phrase, but his Huff Post paper makes the point: with Google, “you are not the customer, you are the product.”

Of course, the simple answer is to not use Google’s products. That’s what I do. I use Bing for search, I maintain my own mail server and never use Gmail, and I browse the web with IE9 without ever using Chrome. But it turns out that the simple answer is not sufficient. Google’s marketing is pretty good, and they have convinced a pretty large number of web sites to cooperate with their data collection. Many web pages include Google’s “analytics” service, many outsource their internal search to Google, and then there are other niceties like the “plus” button. Each time I visit a page that has been spiked by one of these Google services, Google will get a “cookie” from my web browser, and somewhere in their gigantic databases they will add an indication that I visited this or that page. I hate that, and I decided to do something about it.

As I mentioned above, I use IE9, the most recent version of Internet Explorer. IE9, Chrome and Firefox are basically equivalent in terms of standards and performance. I don’t want to use Chrome because of Google’s data collection practices. Firefox is nice, but I am more used to the IE9 UI and its optimizations, like the “accelerators.” I work at Microsoft, of course, so I may well have some bias, but I also have good exposure to the company’s practices regarding privacy, which I trust a lot more than those of Google. And the good thing is that, with IE9, I can easily block Google’s cookies, and drastically diminish their capacity to track me.

IE allows us to set various options. The “security” tab provides ways to tune IE security levels for various categories of sites. I am interested in blocking cookies, and that’s the default option of the “restricted sites.” So, here we go, I simply select the restricted sites option, and click on the “sites” button. Then, I add as untrusted a series of sites that should not be tracking me:

Voila, no more cookies to Google, no more tracking for me. In fact, I don’t just block Google, but also a variety of advertisement providers. I will still see the ads on the web pages, and I will be able to click on them if I really want more information, but they won’t get a cookie. They also won’t be allowed to run scripts, open popup windows, and do a bunch of other really annoying things.
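For readers who would rather script this than click through the dialog, the same zone assignments live in the per-user registry. Here is a minimal Python sketch, assuming a Windows machine whose Internet Explorer settings are not managed by group policy; the domains listed are just examples:

    # Add domains to the Internet Explorer "Restricted sites" zone (zone 4)
    # by writing the per-user ZoneMap registry entries. Illustrative sketch.
    import winreg

    RESTRICTED_ZONE = 4
    BASE = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"

    def restrict_domain(domain):
        key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, BASE + "\\" + domain)
        # The value name "*" applies the zone to every protocol for that domain.
        winreg.SetValueEx(key, "*", 0, winreg.REG_DWORD, RESTRICTED_ZONE)
        winreg.CloseKey(key)

    for tracker in ("google-analytics.com", "doubleclick.net"):
        restrict_domain(tracker)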

Of course, there is a downside. If I really want to use Google’s services, Google won’t let me because the sites require that the cookie option be enabled. But there is a simple workaround. If for some reason I really have to use a Google site, I just use Firefox!

 


The Comodo Hacker, PKI and Internet Standards

It has been about 10 months since the “Comodo Hacker” managed to get false PKI certificates from the Italian reseller of the Comodo certification authority. Then, by the end of August or early September 2011, the same hacker managed to get another batch of forged PKI certificates by hacking the servers of Diginotar, a Dutch certification authority. In both cases, the forged certificates were used by the Iranian authorities to spoof servers like Gmail, presumably to spy on their opponents. The attack was stopped by revoking the forged certificates, but that was a one-time reaction. As long as we do not fix the PKI standard, or the way we use it in HTTPS and other secure connections, we remain at risk. Yet, I see very little activity in that direction. The IETF is in charge of “transport layer security,” the SSL and TLS protocols used in HTTPS, but there is no particular initiative to fix the problem and prevent future attacks. So, here is my analysis, and a couple of modest proposals.

PKI stands for “Public Key Infrastructure,” the standard that describes how web servers can publish their public keys, which web clients then use to establish secure HTTP connections. The public keys are just large numbers that have no meaning by themselves, unless they are tied to the name of the server by a certificate, signed by a trusted “Certification Authority.” Web browsers come loaded with the certificates of certification authorities, so these signatures can be verified. For example, in Internet Explorer, the list can be seen in the “Internet Options” under the “Content” tab, by clicking the “Certificates” button. The list looks like this picture:

To simplify, we can say that according to PKI all certification authorities are equal. Any authority in the list can sign a certificate for Google, Facebook, Microsoft, or any other server. The list itself is already long, but behind these “root” authorities is a set of resellers, also known as “intermediate authorities,” who can present special certificates signed by one of the root authorities, and thereby gain the right to sign certificates for any web site. These resellers too can sign certificates for Google, Facebook, Microsoft, or any other server.

The Comodo Hacker obtained forged certificates that allowed the Iranian authorities to present, for example, a fake Gmail server. Some Iranian user would try to contact Gmail using the secure web protocol, HTTPS. The Iranian ISP, instead of sending the request directly to Google, would send it to an intermediate “spoofing” server, presumably managed by some kind of secret service. This server would show the forged credential, and the user would establish the connection. The user’s browser would indicate that everything is fine and secure, and the HTTP connection would indeed be encrypted. But it would only be encrypted up to the spoofing server. That server would probably relay the connection to Gmail, so the user could read his mail. But everything the user reads or writes can be seen and copied by the spoofing server: user name, password, email, list of friends. Moreover, once the password is known, the spies can come back and read more mail. The opponents believed they were secure, but the secret police could read their messages and understand their networks. The Gestapo would have loved that.

Of course, the companies who manage the certification authorities quickly reacted to the incident. The forged Comodo certificates were revoked, and the Diginotar authority was decertified. Users were told to change their passwords. This particular attack was stopped. But stopping one attack does not eliminate the risk. There have been similar attacks in the past, when various “national firewalls” colluded with intermediate regional authorities to mount pretty much the same attack as the Iranians. In 2009, security researchers found ways to spoof certificates by exploiting a bug in Microsoft’s security code. Of course, bugs were fixed and attacks were stopped, but the structural problem remains. In fact, there is a cottage industry of “SSL logging” products, like this one from “Red Line Software”. They sell these products to enterprises who really want to monitor what their employees do on the web, encrypted or not. The enterprise installs its own “trusted authority certificate” on the users’ PCs, and voila, SSL spoofing is enabled.

The PKI “certification chain” has been the Achilles’ heel of HTTPS for a long time, and Internet technologists are not doing much about it. OK, that includes me. I too should be doing something there. So here is a small proposal. In fact, here are three modest proposals. One is very easy to implement: protect the user’s password, even when using HTTPS. The second is a bit harder: detect spoofing, and warn the user when someone else is listening on the connection. The third is the most difficult but also the most interesting: compute the encryption keys in a way that is impervious to SSL/TLS spoofing. All these proposals rely on the user’s password, and assume that the first action on the connection is the verification of that password.

Let’s start with the simplest proposal: protecting the user’s password. Many web sites ask the user to provide a password. They ask for it after the TLS connection is established. Since the connection is encrypted, they just send the password “in clear text,” and thus reward any SSL/TLS spoof with a clear text copy of the user’s password. This is silly. And it is relatively easy to fix. The password dialogue is typically implemented in JavaScript. Instead of just using JavaScript to display a form and read the password, web site programmers should include some code to send the password securely. The simplest mechanism would be to use a challenge/response protocol: the server sends a random number, and the client answers with a cryptographic hash of the random number and the client’s password. The spoofing server will see a random string instead of the password. In theory, the spoof is defeated… but the practice is different, so please continue to the next paragraph.
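Here is a minimal Python sketch of that challenge/response idea. The function names are illustrative and, as the next paragraph explains, this construction is still vulnerable to dictionary attacks, so it is only a starting point:

    # Challenge/response sketch: the password never crosses the wire, only a
    # hash of the random challenge and the password.
    import hashlib, hmac, os

    def make_challenge():
        return os.urandom(32)           # server picks a fresh random number

    def client_response(challenge, password):
        return hashlib.sha256(challenge + password.encode()).hexdigest()

    def server_verify(challenge, response, stored_password):
        expected = hashlib.sha256(challenge + stored_password.encode()).hexdigest()
        return hmac.compare_digest(response, expected)

    challenge = make_challenge()
    resp = client_response(challenge, "correct horse battery staple")
    print(server_verify(challenge, resp, "correct horse battery staple"))   # True
    # A spoofing proxy sees only `challenge` and `resp`, not the password.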

OK, anyone who knows a bit of cryptography knows that challenge/response protocols are susceptible to dictionary attacks. In our example, the spoofing server will see the challenge and then see the response. It can try a set of candidate passwords from a dictionary, until it finds a password that produces the target response when hashed with the challenge. With modern computers, these attacks will retrieve the password within minutes. So, instead of challenge/response, one should use a secure password protocol such as SRP or DH-EKE. Both protocols use cryptographic tricks to make sure that even if the password is weak, the authentication is strong. The idea is simple: plan for the worst case scenario, and protect the password as if it were exchanged on an unencrypted connection. This causes a bit of additional computation on the server, but provides assurance that even if the PKI certificate is spoofed, the password will be protected.
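For illustration, here is what that offline dictionary attack looks like in a few lines of Python, assuming the attacker recorded one challenge/response pair built with the hash construction from the previous sketch; the candidate list is of course illustrative:

    # Offline dictionary attack against a recorded challenge/response pair.
    import hashlib

    def response_for(challenge, password):
        return hashlib.sha256(challenge + password.encode()).hexdigest()

    def dictionary_attack(challenge, observed_response, candidates):
        for guess in candidates:
            if response_for(challenge, guess) == observed_response:
                return guess            # weak password recovered offline
        return None

    challenge = b"\x01" * 32            # the values the attacker saw on the wire
    observed = response_for(challenge, "sunshine")
    print(dictionary_attack(challenge, observed, ["123456", "password", "sunshine"]))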

Let’s look now at the slightly more complex proposal: detecting the spoof. Detecting the attack is simple if the client already knows the authentic value of the server’s certificate. In practice, that happens when the client uses a specialized application. For example, my copy of Outlook routinely detects the presence of SSL spoofing proxies, like those deployed by some enterprises. It does so because the client is configured with the “real” certificate of the mail server, and can immediately detect when something is amiss. Arguably, the simplest way to detect these spoofs would be to just keep copies of the valid certificates of the most used servers. But this simple solution may not always work. It fails if the certificate changes, or if the client is not properly configured. Knowing the server certificate in advance is good, but we may need a second protection for “defense in depth.”
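As an illustration of that “known certificate” check, here is a small Python sketch that compares the certificate presented by a TLS peer against a pinned fingerprint. The host name and fingerprint are placeholders, and a real deployment would also need a way to update the pins when certificates legitimately change:

    # Pin check: the TLS handshake may succeed against a rogue-but-trusted
    # certificate, but the fingerprint comparison will still catch it.
    import hashlib, socket, ssl

    PINNED = {
        # placeholder: hex SHA-256 of the server's real DER-encoded certificate
        "mail.example.com": "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def connection_is_clean(hostname, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                der = tls.getpeercert(binary_form=True)
        return PINNED.get(hostname) == hashlib.sha256(der).hexdigest()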

The secure password verification protocols, SRP and DH-EKE, can do more than merely verify a password. They provide mutual authentication, and allow clients and servers to negotiate a strong encryption key. If the authentication succeeds, the server has verified the client’s identity, but the client has also verified that the server knew the password. At that point, the client knows that it is speaking to the right server, although there may still be a “man in the middle” spying on the exchange. Suppose now that we augment the password exchange with a certificate verification exchange. The server sends to the client the value of its real certificate, encrypted with the strong key that was just negotiated: the client can decrypt it, detect any mismatch, and thus detect the “man in the middle.” We could also let the client send to the server a copy of the certificate received during the SSL/TLS negotiation, allowing the server to detect the man in the middle and take whatever protective action is necessary. The users could be warned that “someone is spying on your connection,” and the value of the attack would be greatly diminished.
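Here is a hedged Python sketch of that verification step. Instead of encrypting the whole certificate, this variant sends a MAC of the certificate fingerprint computed with the PAKE-derived key; the detection logic is the same, and all the names and values are illustrative:

    # After SRP or DH-EKE has produced a strong shared key, the server
    # attests to its real certificate and the client compares it with the
    # certificate actually seen during the TLS handshake.
    import hashlib, hmac

    def server_attestation(shared_key, real_cert_der):
        fingerprint = hashlib.sha256(real_cert_der).digest()
        return hmac.new(shared_key, fingerprint, hashlib.sha256).digest()

    def client_detects_mitm(shared_key, cert_seen_in_tls, attestation):
        seen = hashlib.sha256(cert_seen_in_tls).digest()
        expected = hmac.new(shared_key, seen, hashlib.sha256).digest()
        # A mismatch means the certificate presented in the TLS handshake is
        # not the one the real server holds: someone is in the middle.
        return not hmac.compare_digest(expected, attestation)

    key = b"k" * 32                                  # stands in for the PAKE-derived key
    attestation = server_attestation(key, b"real certificate")
    print(client_detects_mitm(key, b"forged certificate", attestation))   # True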

Password protection and spoof detection that rely on scripts can themselves be spoofed, and that is a big weakness. Both will typically be implemented in JavaScript, and the scripts will be transmitted with the web page content. The attacker could rewrite the scripts and replace the protected password exchange with a clear text exchange. The client would then send the password in clear text to the spoofing server, which would execute the protected exchanges with the real server. The protection would be defeated. This code spoofing attack is quite hard to prevent without some secure way of installing code on the client. It leads to a cat-and-mouse game, in which the servers keep obfuscating the code in their web pages to prevent detection and replacement by the spoof, while the spies keep analyzing the new versions of the web pages to make sure that they can keep intercepting exchanges. Arguably, the cat-and-mouse game is better than nothing. In any case, it greatly increases the cost of managing spoofing servers. But we want a better solution.

We could solve the problem by installing dedicated software on the client’s machine. Instead of using a web browser to access Gmail or Facebook, the client would use an app. The application would include a copy of the real certificate of the server, and would ensure that the connection is properly secured. The problem with that solution is scale. To stay secure, we would have to install an app for each of the web sites that we may visit, and the web sites would have to develop an app for each of the platforms that their clients may use. This is a tall order. It might work for very big services like Facebook, but it will be very difficult for small web sites.

My more complex proposal is to redefine the TLS protocol, to defend against certificate spoofing. In the most used TLS mechanism, the server sends a certificate to the client. The client picks a random number from which the encryption keys will be derived, encrypts this number with the public key in the certificate, and sends the encrypted value to the server. The security of that mechanism relies entirely on the security of the server’s certificate. If we want a mechanism that resists spoofing of the server certificate, we need an alternative.

There is a variant of TLS authentication that relies only on a secure password exchange. SRP-TLS uses the extension mechanisms defined in the TLS 1.2 framework to implement mutual authentication and key negotiation using the strong password verification protocol, SRP. An extension to the client “Hello” message carries the client name, while the “Server Key Exchange” and “Client Key Exchange” messages carry the values defined in the SRP algorithm. Compared to classic TLS, SRP-TLS has the advantage of not requiring a PKI certificate, and is thus impervious to PKI spoofing. It has the perceived disadvantage of relying only on passwords for security, exposing a security hole whenever the password is compromised or guessed, arguably a more frequent event than certificate spoofing. It also has the disadvantage of sending the client identity in clear text; this is hard to avoid, as the server needs to build the key exchange data using its copy of the client’s password, and thus needs to know the client’s identity before any exchange. SRP-TLS is not yet widely used, and is not implemented in the Microsoft security suites that are used by major web browsers.

As we embrace the challenge of building a spoofing-resistant exchange, we probably do not want to give up the strength of public key cryptography. Instead of just replacing the certificate by a password, we probably want to combine certificate and password. We also do not want to make the user identity public, at least when the server certificate is not spoofed. The main problem is that we have to insert a new message in the initial TLS handshake, to send the identity of the client encrypted with the server’s public key. This extra message changes the protocol specification, essentially from a single exchange (server certificate and client key) to a double exchange (identity request and client identity response, followed by the exchange of keys). The principle is simple, but the execution requires updating a key Internet standard, and that can take time.

While we wait for standards to be updated, we should apply some immediate defense in depth. Protect the password exchange instead of sending it in the clear, and if possible add code to detect a man-in-the-middle attack. For the web services that can afford it, develop applications that embed a copy of the sites’ certificates and some rigorous checks. There is no reason to reward attackers who spoof security certificates.

 


 


Your IP Address is not anonymous!

There was a thread on Slashdot today about an “Anonymous hacker” who got caught by the FBI after launching a denial of service attack. As explained in this news report, this young man downloaded the “Low Orbit Ion Cannon” tool favored by the “Anonymous” hackers, and targeted it at www.genesimmons.com. He wanted to retaliate against Gene Simmons’ statements about punishing P2P music downloaders. I may have my own opinion about music downloads, but mostly I am puzzled by the naivety of the attacker. He seems to have used his own computer to launch the attack!

Using your own computer to launch a DoS attack with the LOIC tool should really mark you as a newbie.

The LOIC tool works by issuing a large number of HTTP queries to the target site. That can be quite effective. The distributed denial of service attack triggered by the Blaster worm against the Microsoft web site used exactly that: each infected computer was launching hundreds of HTTP queries towards the site, there were thousands of infected computers, and the total load arriving at the site was really large. But the attack against Gene Simmons involved far fewer computers – at most a few hundred, if we believe the analysis of “Anonymous” attacks by Craig Labovitz. And since Craig has a really good reputation, I would rather believe him.

The source IP addresses of incoming HTTP queries are routinely logged by web sites. If someone launches hundreds of queries, the address will be quite visible in the logs. It will probably take the FBI no more than a couple of minutes to associate the IP address with an ISP, and obtain the name and address of the person who is paying for the subscription. Even tracking a few hundred addresses will not be too difficult. That’s probably how the young man in the story got caught, and he really should have known better.
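For illustration, spotting such a source in ordinary access logs takes only a few lines of Python; the log format (source address as the first field, as in common web server logs) and the threshold are assumptions:

    # Count requests per source address; a flood source floats to the top.
    from collections import Counter

    def top_talkers(log_lines, threshold=1000):
        counts = Counter(line.split()[0] for line in log_lines if line.strip())
        return [(ip, n) for ip, n in counts.most_common() if n >= threshold]

    # Example: feed it an access log, hand the top entries to the ISP.
    # print(top_talkers(open("access.log")))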

Before someone asks: no, you cannot really hide behind dynamic IP addresses or NAT. The ISPs who allocate dynamic IP addresses keep their own logs of which subscriber got what address at what time, and will of course politely answer the FBI’s requests. There may be multiple computers in your house behind the home router, but there may well be enough cues in the HTTP headers to identify a specific computer. Failing that, a police investigation will probably easily find out who did it among the people with access to the home router. You may try to argue that some random stranger somehow got access to your wireless network, or that the attack was due to a virus on your computer, but that won’t work too well if the police also find some copies of the Anonymous manifesto lying around…

Your IP address is really easy to track. But when we think about privacy, we had better accept that as a reality. Any system that exposes our IP address to random third parties can get us tracked!
