Generally, allow digits in flag names.
We support searches for relays by hashed fingerprint and for bridges by
hashed hashed fingerprint. The reason is that applications should always
hash full fingerprints in order not to accidentally leak non-hashed bridge
fingerprints.
However, the spec is vague about searching for beginnings of hashed relay
fingerprints and hashed hashed bridge fingerprints. The current code did
not support those, but it should. This commit changes that.
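The hashing scheme described above can be sketched as follows. This is an illustrative sketch, not Onionoo's actual code: the class and method names are made up, but SHA-1 over the raw fingerprint bytes is the digest Tor uses for hashed fingerprints, and bridges are matched against the hash of the hash.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Illustrative sketch: prefix matching on (doubly) hashed fingerprints. */
public class HashedFingerprintSearch {

  /** SHA-1 of the given hex fingerprint's raw bytes, as upper-case hex. */
  static String sha1Hex(String hexFingerprint) {
    try {
      byte[] raw = new byte[hexFingerprint.length() / 2];
      for (int i = 0; i < raw.length; i++) {
        raw[i] = (byte) Integer.parseInt(
            hexFingerprint.substring(2 * i, 2 * i + 2), 16);
      }
      byte[] digest = MessageDigest.getInstance("SHA-1").digest(raw);
      StringBuilder sb = new StringBuilder();
      for (byte b : digest) {
        sb.append(String.format("%02X", b));
      }
      return sb.toString();
    } catch (NoSuchAlgorithmException e) {
      throw new RuntimeException(e);
    }
  }

  /** True if the search term is a prefix of the hashed (relay) or
   *  hashed hashed (bridge) fingerprint. */
  static boolean matches(String fingerprint, boolean isBridge, String term) {
    String hashed = sha1Hex(fingerprint);
    String compare = isBridge ? sha1Hex(hashed) : hashed;
    return compare.startsWith(term.toUpperCase());
  }
}
```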
Implements #8678.
Implements step 3 of #6509.
Implements step 2 of #6509.
When set, respond to all requests with 503 Service Unavailable.
Also fix a potential bug in the servlet's filtering and sorting code.
It's unclear whether this really was a bug, but let's clean up the code
just in case.
Implements Onionoo side of #8374.
The IP-to-city database to be deployed with Onionoo needs to have its "A1"
("Anonymous Proxy") entries fixed just like Tor's IP-to-country file. See
Tor's src/config/README.geoip for detailed information.
- Ship with a variant of Tor's deanonymind.py that removes A1 entries from
IP-to-city databases. Also ship with a custom geoip-manual for manual
replacements.
- Use our own GeoIP file parser, because MaxMind's library doesn't work
with .csv files. On the plus side this removes a dependency and makes
it easier to build Onionoo. On the minus side it adds a bunch of new
code.
- Update index.html to say that some _name entries may be missing if
empty.
- Update .gitignore and INSTALL.
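A range-based CSV lookup like the one described above can be sketched as follows. This is a simplified illustration, not Onionoo's parser: the three-column "startNum,endNum,city" format and the sample data are made up for the example, while real MaxMind city CSV files carry more columns.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of a range-based GeoIP CSV lookup (simplified format). */
public class SimpleGeoipParser {

  /** Converts a dotted-quad IPv4 address to its numeric value. */
  static long ipToNumber(String ip) {
    String[] parts = ip.split("\\.");
    long result = 0L;
    for (String part : parts) {
      result = (result << 8) + Long.parseLong(part);
    }
    return result;
  }

  private final List<long[]> ranges = new ArrayList<>();
  private final List<String> cities = new ArrayList<>();

  /** Parses one CSV line of the form "startNum,endNum,city". Lines must
   *  be added in ascending order of start address. */
  void addLine(String line) {
    String[] parts = line.split(",");
    ranges.add(new long[] { Long.parseLong(parts[0]),
        Long.parseLong(parts[1]) });
    cities.add(parts[2]);
  }

  /** Binary-searches the ranges; null if the address isn't covered. */
  String lookup(String ip) {
    long address = ipToNumber(ip);
    int lo = 0, hi = ranges.size() - 1;
    while (lo <= hi) {
      int mid = (lo + hi) / 2;
      long[] range = ranges.get(mid);
      if (address < range[0]) {
        hi = mid - 1;
      } else if (address > range[1]) {
        lo = mid + 1;
      } else {
        return cities.get(mid);
      }
    }
    return null;
  }
}
```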
99.9% of details documents contain parts from the relay's or bridge's
server descriptor. But 0.01% of these server descriptors cannot be found.
Handle these missing descriptor parts correctly, and don't produce invalid
JSON.
Bug found by gsathya.
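The fix for missing descriptor parts amounts to leaving a field out rather than emitting a null or truncated fragment, so the document stays valid JSON. A minimal sketch with illustrative field names (not Onionoo's actual writer):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch: omit fields whose descriptor part is missing instead of
 *  writing literal nulls, so the output stays valid JSON. */
public class DetailsJsonWriter {

  static String write(Map<String, String> fields) {
    StringBuilder sb = new StringBuilder("{");
    boolean first = true;
    for (Map.Entry<String, String> e : fields.entrySet()) {
      if (e.getValue() == null) {
        continue;  // descriptor part missing: leave the field out entirely
      }
      if (!first) {
        sb.append(",");
      }
      sb.append("\"").append(e.getKey()).append("\":\"")
          .append(e.getValue()).append("\"");
      first = false;
    }
    return sb.append("}").toString();
  }
}
```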
Fixes #7701, found by hellais.
Non-positive offsets were previously ignored, and that is still the case.
However, non-positive limits were ignored, too, which seems wrong. If a
client asks for fewer than one document, we shouldn't respond with all the
documents we have. Instead, we should return an empty response.
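The intended semantics can be sketched like this; the class and method names are illustrative, not Onionoo's actual request handler:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the intended offset/limit semantics: non-positive offsets
 *  are ignored, a zero or negative limit yields an empty response. */
public class OffsetLimitFilter {

  static <T> List<T> apply(List<T> documents, Integer offset, Integer limit) {
    List<T> result = new ArrayList<>(documents);
    if (offset != null && offset > 0) {
      result = result.subList(Math.min(offset, result.size()), result.size());
    }
    if (limit != null) {
      if (limit <= 0) {
        return new ArrayList<>();  // fewer than one document means none
      }
      result = result.subList(0, Math.min(limit, result.size()));
    }
    return new ArrayList<>(result);
  }
}
```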
With 10 relays and 10 bridges, setting an offset of 11 should have
returned 9 bridges (relays are removed first), but it returned 10. An
offset of 12 would have returned 9 bridges instead of 8, and so on.
Here's why: we erroneously added a null value to the set of relays before
applying the offset parameter. After throwing out the last relay, we threw
out that null value instead of the first bridge.
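The fixed behavior can be sketched as follows; this is an illustration of the ordering logic, not Onionoo's actual code:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the fixed behavior: relays come first in the ordered result,
 *  so an offset removes relays before it removes any bridges. The bug was
 *  a stray null element that absorbed one offset step. */
public class OffsetAcrossNodeTypes {

  static List<String> apply(List<String> relays, List<String> bridges,
      int offset) {
    List<String> ordered = new ArrayList<>(relays);  // no null added here
    ordered.addAll(bridges);
    return new ArrayList<>(
        ordered.subList(Math.min(offset, ordered.size()), ordered.size()));
  }
}
```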
The spec says that we support looking up and searching for hashed full
fingerprints, but only the former was implemented. Implement the latter,
too.
Implements step 1 of 3 as suggested in #6509.
Running the hourly cronjob twice an hour leads to resetting
guard/middle/exit weights, because we don't store Wxx weights anywhere.
We usually don't run that cronjob more than once per hour, so it should be
fine to just add a warning. This should prevent bugs like #7596 in the
future, or at least make us aware of them before users have to tell us.
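The warning could be wired up roughly like this; the class, method, and message text are illustrative, not what the commit actually adds:

```java
/** Sketch: warn when the hourly updater runs again less than an hour
 *  after its previous run. Names are illustrative. */
public class HourlyRunGuard {

  static final long ONE_HOUR_MILLIS = 60L * 60L * 1000L;

  private long lastRunMillis = -1L;

  /** Returns a warning string if called again within the hour, else null,
   *  and records the current run time. */
  String checkAndRecord(long nowMillis) {
    String warning = null;
    if (lastRunMillis >= 0 && nowMillis - lastRunMillis < ONE_HOUR_MILLIS) {
      warning = "Hourly updater ran again after only "
          + (nowMillis - lastRunMillis) / 1000L + " seconds; "
          + "weights may be reset (see #7596).";
    }
    lastRunMillis = nowMillis;
    return warning;
  }
}
```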
Prevents bugs like #7587 where not updating metrics-lib led to a nasty
graphing problem in Atlas. Now we'd at least learn about the problem.
Also specify bridges[_published] in weights documents.
Right now, there are 300k bandwidth history files and 140k weights history
files in Onionoo's status directory. weasel suggested storing those files
in subdirectories, e.g., 1/2/1234567.., to speed up access. Let's do that.
The real fix is to use a database, of course. But until we have that,
let's try to make the file-system-based solution not suck.
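The subdirectory layout can be sketched as follows, assuming the scheme implied by the 1/2/1234567.. example (first character, then second character, then the full file name); the class name is illustrative:

```java
import java.io.File;

/** Sketch of the subdirectory layout: a history file named after a
 *  fingerprint goes into <firstChar>/<secondChar>/<fingerprint>, so no
 *  single directory holds hundreds of thousands of files. */
public class ShardedStatusDirectory {

  static File shardedFile(File statusDir, String fingerprint) {
    return new File(statusDir,
        fingerprint.substring(0, 1) + File.separator
        + fingerprint.substring(1, 2) + File.separator
        + fingerprint);
  }
}
```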
Using 4 updater threads reduces the time to process 1 consensus from 20 to
11 seconds here.
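Parallel updating along these lines can be sketched with a fixed thread pool; the class name and the per-relay placeholder work are illustrative, with 4 threads being the count mentioned above rather than a tuned constant:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch: split per-relay status updates across a small thread pool. */
public class ParallelUpdater {

  static int updateAll(List<String> fingerprints) {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try {
      List<Future<Integer>> futures = new ArrayList<>();
      for (final String fingerprint : fingerprints) {
        futures.add(pool.submit(new Callable<Integer>() {
          public Integer call() {
            // Placeholder for the real per-relay status update.
            return fingerprint.length();
          }
        }));
      }
      int updated = 0;
      for (Future<Integer> future : futures) {
        try {
          future.get();  // propagate any failure from the worker thread
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
        updated++;
      }
      return updated;
    } finally {
      pool.shutdown();
    }
  }
}
```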