A security researcher has published details about two Tor security issues and promises to release three more.
Over the past week, a security researcher has published technical details about two vulnerabilities impacting the Tor network and the Tor browser.
In blog posts published last week and today, Dr. Neal Krawetz said he was going public with details on two alleged zero-days after the Tor Project repeatedly failed to address multiple security issues he reported over the past few years.
The researcher also promised to reveal at least three more Tor zero-days, including one that can reveal the real-world IP address of Tor servers.
Approached about Dr. Krawetz’s plans, the Tor Project did not reply to a request for comment or provide additional details on its stance on the matter.
THE TWO TOR SECURITY ISSUES
Dr. Krawetz, who operates multiple Tor nodes himself and has a long history of finding and reporting Tor bugs, disclosed the first Tor security issue last week.
In a blog post dated July 23, the researcher described how companies and internet service providers could block users from connecting to the Tor network by scanning network connections for “a distinct packet signature” that is unique to Tor traffic.
This signature could be used to prevent Tor connections from being initiated and effectively ban Tor altogether, an issue that oppressive regimes are very likely to abuse.
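What such a filter might look like in practice can be sketched very roughly. The Python example below is purely illustrative: it assumes the tell-tale is a self-signed TLS certificate whose subject and issuer names look machine-generated (a long-documented Tor characteristic), which is not necessarily the exact signature Dr. Krawetz describes, and the regular expression, flow records, and function names are inventions for the example.

```python
import re

# Purely illustrative pattern: Tor relays have long been known to present
# self-signed TLS certificates with machine-generated hostnames such as
# "www.<random>.com". This is an assumption for the example and may not be
# the exact signature described in Dr. Krawetz's post.
TOR_LIKE_NAME = re.compile(r"^www\.[a-z0-9]{8,20}\.(com|net)$")

def looks_like_tor_handshake(subject_cn: str, issuer_cn: str) -> bool:
    """Flag a TLS handshake when both certificate names match the
    assumed Tor-like, machine-generated hostname pattern."""
    return bool(TOR_LIKE_NAME.match(subject_cn)) and bool(TOR_LIKE_NAME.match(issuer_cn))

# Hypothetical metadata a stateful packet inspection engine might extract
# from the ServerHello/Certificate records of observed connections.
observed_flows = [
    {"dst": "203.0.113.7:9001", "subject": "www.x2kf93jd8a.com", "issuer": "www.p0q1r2s3t4.net"},
    {"dst": "198.51.100.4:443", "subject": "example.org", "issuer": "Let's Encrypt R3"},
]

for flow in observed_flows:
    verdict = "block" if looks_like_tor_handshake(flow["subject"], flow["issuer"]) else "allow"
    print(f"{verdict} {flow['dst']}")
```

A real deployment would sit inline on a deep packet inspection appliance and drop matching connections rather than print a verdict, but the shape of the check is the same.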
Earlier today, in a blog post shared with ZDNet, Dr. Krawetz disclosed a second issue. This one, like the first, allows network operators to detect Tor traffic.
However, while the first issue could be used to detect direct connections to the Tor network (to Tor guard nodes), the second one can be used to detect indirect connections.
These are connections made to Tor bridges, a special type of entry point that users can fall back on when companies and ISPs block direct access to the Tor network.
Tor bridges act as proxy points and relay connections from the user to the Tor network itself. Because bridges are sensitive servers, the list of Tor bridges is constantly updated to make it harder for ISPs to block them.
But Dr. Krawetz says connections to Tor bridges can also be easily detected, using a similar technique that tracks specific TCP packets.
“Between my previous blog entry and this one, you now have everything you need to enforce the policy [of blocking Tor on a network] with a real-time stateful packet inspection system. You can stop all of your users from connecting to the Tor network, whether they connect directly or use a bridge,” Dr. Krawetz said.
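Purely as an illustration of this general approach, and not Dr. Krawetz’s actual method, the sketch below applies a toy decision tree to the first payload of each TCP flow and flags traffic that begins with high-entropy, structureless bytes (the behavior of fully encrypted transports such as obfs4); every feature and threshold here is an assumption made for the example.

```python
import math
import os

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy (bits per byte) of a packet payload."""
    if not payload:
        return 0.0
    total = len(payload)
    return -sum(
        payload.count(b) / total * math.log2(payload.count(b) / total)
        for b in set(payload)
    )

def flag_possible_bridge_flow(first_payload: bytes) -> bool:
    """Toy decision tree over the first payload-carrying packet of a TCP flow.
    The features and thresholds are invented for illustration and are not the
    values from Dr. Krawetz's post. Intuition: a fully encrypted transport's
    first bytes already look like uniform random data, unlike HTTP or TLS,
    whose handshakes begin with recognizable plaintext structure."""
    if len(first_payload) < 20:                           # too little data to judge
        return False
    if first_payload[:2] in (b"\x16\x03", b"GE", b"PO"):  # TLS record / HTTP verbs
        return False
    return byte_entropy(first_payload) > 7.2              # random-looking payload

# Usage: feed the first payload of each new outbound TCP flow.
print(flag_possible_bridge_flow(os.urandom(600)))                               # likely True
print(flag_possible_bridge_flow(b"GET / HTTP/1.1\r\nHost: a.example\r\n\r\n"))  # False
```

A production censor would combine many more features and calibrate the thresholds against live traffic, which is the calibration work the Tor Project alludes to in its response further below.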
Both issues are especially concerning for Tor users residing in countries with oppressive regimes.
DISSATISFACTION WITH THE TOR PROJECT’S SECURITY STANCE
The reason Dr. Krawetz is publishing these Tor issues is that he believes the Tor Project does not take the security of its networks, tools, and users seriously enough.
The security researcher cites previous incidents in which he reported bugs to the Tor Project only to be told that the organization was aware of the issue and working on a fix, a fix that was never actually deployed. These include:
- A bug that allows websites to detect and fingerprint Tor browser users by the width of their scrollbar, which the Tor Project has known about since at least June 2017.
- A bug that allows network adversaries to detect Tor bridge servers using their OR (Onion routing) port, reported eight years ago.
- A bug that lets attackers identify the SSL library used by Tor servers, reported on December 27, 2017.
None of these issues has been fixed, which led Dr. Krawetz to abandon his collaboration with the Tor Project in early June 2020 and adopt his current approach of publicly shaming the organization into taking action.
I'm giving up reporting bugs to Tor Project. Tor has serious problems that need to be addressed, they know about many of them and refuse to do anything.
I'm holding off dropping Tor 0days until the protests are over. (We need Tor now, even with bugs.) After protests come 0days.
— Dr. Neal Krawetz (@hackerfactor) June 4, 2020
Updated at 20:30 ET, July 30:
The Tor Project has responded to Dr. Krawetz’s two blog posts with a lengthy statement addressing each issue, which we are reproducing in full below. In summary, the Tor Project says it is aware of the issues the researcher reported but disagrees on the threat they pose to users, arguing that the detection techniques cannot be enforced at scale. The full reply is below:
“We have been working on the first issue raised in the blog post published 7/23 (scrollbar width) here: https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/22137. The blog post claims that the scrollbar width of a Tor Browser user can be used to distinguish which operating system they are using. There are other ways a Tor Browser user’s operating system can be discovered. This is known and publicly documented. When Tor Browser does not communicate the operating system of its user, usability decreases. Commonly used websites cease to function (ie, Google Docs). The security downside of operating system detection is mild (you can still blend with everybody else who uses that operating system), while the usability tradeoff is quite extreme. Tor Browser has an end goal of eliminating these privacy leaks without breaking web pages, but it is a slow process (especially in a web browser like Firefox) and leaking the same information in multiple ways is not worse than leaking it once. So, while we appreciate (and need) bug reports like this, we are slowly chipping away at the various leaks without further breaking the web, and that takes time.
“The second claim in the first blog post published 7/23 outlines a way to recognize vanilla Tor traffic based on how it uses TLS with firewall rules. Fingerprinting Tor traffic is a well-known and documented issue. It’s an issue that has been discussed for more than a decade. (Example: https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/proposals/106-less-tls-constraint.txt). Fixing the way Tor traffic can be fingerprinted by its TLS use is a very small step in the censorship arms race. We decided that we should not try to imitate normal SSL certs because that’s a fight we can’t win. Our goal is to help people connect to Tor from censored networks. Research has shown that making your traffic look like some other form of traffic usually leads to failure (http://www.cs.utexas.edu/~amir/papers/parrot.pdf). The strategy Tor has decided to take is better and more widely applicable, and that strategy is developing better pluggable transports. Tor has an entire anti-censorship team tackling this problem and has funding earmarked for this specific purpose.
“The blog post published 7/30 is correct in suggesting that a finely-calibrated decision tree can be highly effective in detecting obfs4; this is a weakness of obfs4. However, what works in someone’s living room doesn’t necessarily work at nation-scale: running a decision tree on many TCP flows is expensive (but not impossible) and it takes work to calibrate it. When considering the efficacy of this, one also has to take into account the base rate fallacy: the proportion between circumvention traffic and non-circumvention traffic is not 1:1, meaning that a false positive/negative rate of 1% (which seems low!) can still result in false positives significantly outweighing true positives. That said, obfs4 is certainly vulnerable to this class of attack. The post says “However, I know of no public disclosure for detecting and blocking obfs4.” There’s work in the academic literature. See Wang et al.’s CCS’15 paper: https://censorbib.nymity.ch/#Wang2015a. See also Frolov et al.’s NDSS’20 paper: https://censorbib.nymity.ch/#Frolov2020a. The blog post cites Dunna’s FOCI’18 paper to support his claim that the GFW can detect obfs4. This must be a misunderstanding. On page 2, the paper says: “We find that the two most popular pluggable transports (Meek [7] and Obfs4 [18]) are still effective in evading GFW’s blocking of Tor (Section 5.1).” The blog post also cites another post to support the same claim: https://medium.com/@phoebecross/using-tor-in-china-1b84349925da. This blog post correctly points out that obfs4 bridges that are distributed over BridgeDB are blocked whereas private obfs4 bridges work. This means that censors are not blocking the obfs4 protocol, but are able to intercept bridge information from our distributors. One has to distinguish the protocol from the way one distributes endpoints.
“The findings published today (7/30) are variants of existing attacks (which is great!) but not 0-days. They are worth investigating but are presented with little evidence that they work at scale.”
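To put the base-rate argument in the Tor Project’s reply into concrete terms, here is a back-of-the-envelope calculation; the traffic mix and error rates are purely illustrative and were not published by either party.

```python
# Illustrative numbers only: suppose 1 in 10,000 TCP flows on a large
# network is circumvention traffic, and the classifier has a 1% false
# positive rate and a 1% false negative rate.
base_rate = 1 / 10_000
false_positive_rate = 0.01
true_positive_rate = 0.99

flows = 1_000_000
real_circumvention = flows * base_rate                                  # 100 flows
true_positives = real_circumvention * true_positive_rate               # ~99 flows
false_positives = (flows - real_circumvention) * false_positive_rate   # ~9,999 flows

precision = true_positives / (true_positives + false_positives)
print(f"true positives:  {true_positives:,.0f}")
print(f"false positives: {false_positives:,.0f}")
print(f"precision:       {precision:.1%}")  # roughly 1%: most blocked flows would be innocent
```

Under these assumed numbers, roughly 99 out of every 100 flagged connections would be ordinary traffic, which is the scale problem the Tor Project points to.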
The Tor Project also disagreed with Dr. Krawetz’s classification of the issues he detailed on his blog as zero-days. The title of this article has been updated accordingly.
Source: ZDNet (https://www.zdnet.com/)