Doesn't make the test pass, but it was masking the actual issue, which is
"SocksError: [6] TTL expired".
======================================================================
ERROR: test_attachstream
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/atagar/Desktop/stem/test/integ/control/controller.py", line 1206, in test_attachstream
our_stream = [stream for stream in streams if stream.target_address == host][0]
UnboundLocalError: local variable 'streams' referenced before assignment
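
As an aside, the masking pattern is easy to reproduce. A self-contained sketch
(the names below are made up, not the actual test code): when the assignment
only happens on the success path, the later lookup raises UnboundLocalError and
hides the error we actually care about.

def make_socks_connection(host):
  raise IOError('SocksError: [6] TTL expired')  # stand-in for the real failure

def find_stream(host):
  try:
    make_socks_connection(host)
    streams = ['fake stream entries']  # only assigned if the connection worked
  except IOError:
    pass  # swallowing the failure hides what actually went wrong

  # 'streams' was never assigned, so this raises the UnboundLocalError above
  # rather than the SocksError
  return [stream for stream in streams if host in stream][0]

find_stream('example.com')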
Balking if provided with unrecognized arguments, and more intuitive handling
for targets...
https://trac.torproject.org/projects/tor/ticket/14804
Actually, on reflection, if the user only provides attribute targets (ex.
'--target ONLINE') there's no point in erroring. They clearly want to keep the
default.
Sebastian was understandably confused on ticket 14804 when running '--target
ONLINE' didn't run anything.
Interesting, I thought getopt did this. Both run_tests.py and tor-prompt now
provide an error when given unrecognized arguments. Caught by Sebastian on...
https://trac.torproject.org/projects/tor/ticket/14804
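
For illustration, roughly the shape of that check (the option strings here are
made up rather than run_tests.py's actual arguments): getopt raises GetoptError
for unknown options, but stray positional arguments are just handed back in the
remainder, so they have to be rejected explicitly.

import getopt
import sys

try:
  opts, remainder = getopt.getopt(sys.argv[1:], 't:h', ['target=', 'help'])
except getopt.GetoptError as exc:
  print("Unrecognized option: %s" % exc)
  sys.exit(1)

if remainder:
  # getopt doesn't complain about these, so we do
  print("Unrecognized arguments: %s" % ', '.join(remainder))
  sys.exit(1)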
The '-t' argument was an alias for '--target' and '--test'. In practice and
according to the help output it was actually just the former. This shouldn't
actually change anything, just clarifying the code. Found as part of...
https://trac.torproject.org/projects/tor/ticket/14804
This test compares the output to 'GETINFO version' so being unable to connect
to tor breaks the test.
Turns out I make foo test paths a lot. Running 'mkdir /tmp/foo' for something
else broke the unit tests for me. Easy enough to do the proper thing and pick a
random unused path.
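
Something along these lines (a sketch of the idea, not necessarily the exact
helper this adds):

import os
import tempfile
import uuid

def no_such_path():
  # a random name under the system temp dir rather than a hardcoded '/tmp/foo',
  # so a directory someone happens to have can't collide with the test
  return os.path.join(tempfile.gettempdir(), 'stem_test_%s' % uuid.uuid4().hex)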
Moving a revised version of tor's test_cmdline_args.py into our process tests.
Now that tor has a make target for using Stem to supplement its tests this is
the right home for them.
https://trac.torproject.org/projects/tor/ticket/14109
Test for specifying a configuration via arguments, stdin, or torrc that doesn't
exist.
Checking both in the happy case and when we don't provide an argument.
Simple port of test_cmdline_args.py's test_hush().
Simple port of test_cmdline_args.py's test_help().
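
Roughly what these ports look like (a sketch that shells out directly; the real
tests go through our own test runner and the tor binary it's configured with):

import subprocess
import unittest

class TestHelp(unittest.TestCase):
  def test_help(self):
    output = subprocess.check_output(['tor', '--help']).decode('utf-8')

    # just sanity checks here, the real test asserts the actual usage text
    self.assertTrue(output.strip())
    self.assertIn('tor', output)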
Simple port of test_cmdline_args.py's test_quiet().
Improved version of test_cmdline_args.py's test_version(). This is kind of a
duplicate of test/integ/version.py's test_get_system_tor_version_value(), but
checks this a bit more directly. Besides, it gets us to lay the groundwork for
the other tests.
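
The gist of the comparison (a loose sketch; the actual test is stricter about
how it parses the output):

import subprocess
import stem.version

def check_version():
  output = subprocess.check_output(['tor', '--version']).decode('utf-8')
  assert output.startswith('Tor version ')

  # stem parses '--version' output from the same binary, so the version it
  # reports should show up in what we just ran
  reported = str(stem.version.get_system_tor_version()).split()[0]
  assert reported in output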
Spotted that we're missing a test for this while looking into an issue where we
possibly leave lingering tor processes...
https://trac.torproject.org/projects/tor/ticket/14419
Feature from Foxboron for adding tox support to make it easier to test against
multiple versions of python...
https://trac.torproject.org/projects/tor/ticket/14091
Added a gitignore entry for the autogenerated .tox directory, but I'm less
tolerant of stem.egg-info since it isn't a hidden directory. Iirc it's not
terribly important, so cleaning it up after our run.
Bit of rewording and adding it to our changelog.
We have a few 'if python2 do X, if python3 do Y' spots to have compatibility
with both versions. Pyflakes understandably complained about not having long,
unicode, or raw_input under python3...
https://trac.torproject.org/projects/tor/ticket/14559
Adding these to our ignore pattern. Now, when running the tests under python3,
the pyflakes and pep8 static checks pass.
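
The pattern in question looks roughly like this (a generic sketch, not our
exact code): the python2-only branch never runs under python3, but pyflakes
still flags its names as undefined.

import sys

if sys.version_info[0] >= 3:
  str_type, int_type, prompt = str, int, input
else:
  # pyflakes run under python3 reports 'unicode', 'long', and 'raw_input' as
  # undefined names here, even though this branch is only taken on python2
  str_type, int_type, prompt = unicode, long, raw_input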
Our docs state that...
* 'strict' is true: we can exit to *all* instances of the given address or port
* 'strict' is false: we can exit to *any* instance of the given address or port
Caught thanks to nskinkel...
https://trac.torproject.org/projects/tor/ticket/14314
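
For example (a quick sketch with a made-up policy), asking about a port alone,
'strict' decides whether every address must permit it or just some address...

from stem.exit_policy import ExitPolicy

policy = ExitPolicy('reject 1.2.3.4:*', 'accept *:80', 'reject *:*')

print(policy.can_exit_to(port = 80))                 # True, some address can exit to port 80
print(policy.can_exit_to(port = 80, strict = True))  # False, 1.2.3.4:80 is rejected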
Not quite keeping with backward compatibility, but it's what most users want.
When parsing descriptors validation is now opt-in rather than opt-out. With our
recent change to lazy load this is much quicker. This change also helped me
catch a few lazy-loading bugs.
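
In practice this just flips the default (the path below is only an example)...

from stem.descriptor import parse_file

# the new default: no validation, fields are parsed lazily as they're accessed
for desc in parse_file('/home/atagar/.tor/cached-descriptors'):
  print(desc.fingerprint)

# opt back in if you want malformed content to raise a ValueError up front
for desc in parse_file('/home/atagar/.tor/cached-descriptors', validate = True):
  print(desc.fingerprint)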
I've been wanting to do this for years.
When reading a descriptor we parsed every field in it. This is necessary if
we're validating it, but usually users don't care about validation and only
want an attribute or two.
When parsing without validation we now lazy load the document, meaning we
parse fields on-demand rather than everything upfront. This naturally greatly
improves our performance for reading descriptors...
  Server descriptors: 27% faster
  Extrainfo descriptors: 71% faster
  Microdescriptors: 43% faster
  Consensus: 37% faster
It comes at a small cost to our performance when we read with validation, but
not one big enough to be a concern. As an added benefit this actually
makes our code a lot more maintainable too!
https://trac.torproject.org/projects/tor/ticket/14011
--------------------------------------------------------------------------------
Benchmarking script
--------------------------------------------------------------------------------
import time

from stem.descriptor import parse_file

start_time, fingerprints = time.time(), []

for desc in parse_file('/home/atagar/.tor/cached-descriptors', validate = True):
  fingerprints.append(desc.fingerprint)

count, runtime = len(fingerprints), time.time() - start_time
print 'read %i descriptors with validation, took %0.2f seconds (%0.5f seconds per descriptor)' % (count, runtime, runtime / count)

start_time, fingerprints = time.time(), []

for desc in parse_file('/home/atagar/.tor/cached-descriptors', validate = False):
  fingerprints.append(desc.fingerprint)

count, runtime = len(fingerprints), time.time() - start_time
print 'read %i descriptors without validation, took %0.2f seconds (%0.5f seconds per descriptor)' % (count, runtime, runtime / count)
--------------------------------------------------------------------------------
Results
--------------------------------------------------------------------------------
Please keep in mind these are just the results on my system. These are, of
course, influenced by your system and background load...
Server descriptors:
  before: read 6679 descriptors with validation, took 10.71 seconds (0.00160 seconds per descriptor)
  before: read 6679 descriptors without validation, took 4.46 seconds (0.00067 seconds per descriptor)
  after: read 6679 descriptors with validation, took 11.48 seconds (0.00172 seconds per descriptor)
  after: read 6679 descriptors without validation, took 3.25 seconds (0.00049 seconds per descriptor)

Extrainfo descriptors:
  before: read 6677 descriptors with validation, took 7.91 seconds (0.00119 seconds per descriptor)
  before: read 6677 descriptors without validation, took 7.64 seconds (0.00114 seconds per descriptor)
  after: read 6677 descriptors with validation, took 8.91 seconds (0.00133 seconds per descriptor)
  after: read 6677 descriptors without validation, took 2.22 seconds (0.00033 seconds per descriptor)

Microdescriptors:
  before: read 10526 descriptors with validation, took 2.41 seconds (0.00023 seconds per descriptor)
  before: read 10526 descriptors without validation, took 2.34 seconds (0.00022 seconds per descriptor)
  after: read 10526 descriptors with validation, took 2.74 seconds (0.00026 seconds per descriptor)
  after: read 10526 descriptors without validation, took 1.34 seconds (0.00013 seconds per descriptor)

Consensus:
  before: read 6688 descriptors with validation, took 2.11 seconds (0.00032 seconds per descriptor)
  before: read 6688 descriptors without validation, took 2.04 seconds (0.00030 seconds per descriptor)
  after: read 6688 descriptors with validation, took 2.47 seconds (0.00037 seconds per descriptor)
  after: read 6688 descriptors without validation, took 1.28 seconds (0.00019 seconds per descriptor)
Just some quick non-impactful stylistic revisions.
As discussed in the prior commit it doesn't seem to help performance in
practice and hurts maintainability, so leaving it out for now.
In theory popping from the head of our unread lines is inefficient since we
create a new list with every line read. However, oddly, changing this doesn't
seem to have a benefit in terms of performance...
It's more readable how it was, so I'm including this commit and its reversion
in our history so we can loop back and include this if we later discover it is
beneficial.
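
For the record, the trade-off being weighed (illustrative, not our exact reader
code)...

from collections import deque

def read_all(lines):
  remaining, read = list(lines), []

  while remaining:
    line, remaining = remaining[0], remaining[1:]  # builds a new list per line read
    read.append(line)

  return read

def read_all_deque(lines):
  remaining, read = deque(lines), []

  while remaining:
    read.append(remaining.popleft())  # O(1), no copying

  return read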
Not truly a line attribute, but we can treat it in a similar fashion.
Every descriptor type does the bytes => unicode conversion for it. Might as
well just do this in the helper itself.
These lines are special in that they're raw bytes rather than unicode (that is
to say, they're not necessarily recognizable text). As such I'd left their
parsing alone as eager loading, but on reflection we get a nice performance
boost by making these lazy too.
For lazy loading to work we need this class to behave more like other
descriptor types. Starting by merging the footer rather than the header since
it's simpler.
Limiting their scope to locals so we can merge them into the document itself.
The v3 network status document is gonna be a bit trickier since it delegates
parsing to sub-objects. Essentially it acts as a collection of sub-documents,
then adds those attributes to itself.
Starting by moving the header parsing to helpers like the other document types.
Oops, this parsing function appended to the descriptor's existing dictionary
rather than assigning one of its own. Assuming it ran over the same content
this wasn't an issue in practice since it would clobber the existing results,
but it still wasn't right.
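
In miniature, the difference being fixed (names here are made up)...

class Document(object):
  def __init__(self):
    self.routers = {}

def parse_routers_wrong(doc, entries):
  doc.routers.update(entries)  # appends into whatever the document already had

def parse_routers_fixed(doc, entries):
  doc.routers = dict(entries)  # the result belongs to this parse alone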
Another subsection of network status documents.
Lazy loading support for part of network status documents.
Simplest descriptor type so pretty simple switch.
Moving a couple common helpers to the common descriptor __init__.py module.
Implement lazy loading for extrainfo descriptors. This highlighted a bug in
that we need a shallow copy of our default values. Otherwise defaults like
lists and dictionaries will be shared between descriptors.
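
The bug in miniature (an illustrative sketch, not our actual attribute table):
without copying, every descriptor built from the defaults shares the same
mutable values.

import copy

DEFAULTS = {'or_addresses': [], 'nickname': None}

class Desc(object):
  def __init__(self, defaults):
    for attr, value in defaults.items():
      # without the copy.copy() both instances below share one list
      setattr(self, attr, copy.copy(value))

first, second = Desc(DEFAULTS), Desc(DEFAULTS)
first.or_addresses.append('1.2.3.4:443')
print(second.or_addresses)  # [] with the copy, ['1.2.3.4:443'] without it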
Shifting lazy-loading methods to the Descriptor parent class so all subclasses
will be able to take advantage of it. Actually, this should let this whole
module become a little nicer and more succinct than when we started.
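
Roughly the shape this takes (a simplified sketch rather than the real
implementation): the parent class knows each attribute's default and parser,
and __getattr__ invokes the parser the first time an attribute is actually
requested.

class Descriptor(object):
  ATTRIBUTES = {}  # attribute name => (default value, parsing function)

  def __init__(self, raw_contents):
    self._raw_contents = raw_contents

  def __getattr__(self, name):
    # only reached when 'name' hasn't been set on the instance yet
    if name in self.ATTRIBUTES:
      default, parse_func = self.ATTRIBUTES[name]
      parse_func(self, self._raw_contents)  # sets whatever attributes it handles
      return self.__dict__.get(name, default)

    raise AttributeError(name)

def _parse_nickname(desc, raw):
  desc.nickname = raw.split()[1]

class ServerDescriptor(Descriptor):
  ATTRIBUTES = {'nickname': (None, _parse_nickname)}

print(ServerDescriptor('router caerSidi 71.35.133.197 9001 0 0').nickname)  # caerSidi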