Commit message

* check lengths of country, search, lookup parameters
* make error_msg more modular
* refactor
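
A minimal sketch of the kind of length check this describes; the limits
and the error_msg()/check_lengths() helpers below are illustrative
assumptions, not the actual pyonionoo code:

    # Bound parameter lengths before they reach the SQL layer.
    MAX_PARAM_LENGTHS = {'country': 2, 'search': 40, 'lookup': 40}

    def error_msg(param, limit):
        # One place to assemble error text, so handlers stay uniform.
        return "parameter '%s' must be at most %d characters" % (param, limit)

    def check_lengths(params):
        for name, limit in MAX_PARAM_LENGTHS.items():
            value = params.get(name)
            if value is not None and len(value) > limit:
                return error_msg(name, limit)
        return None  # all parameters within bounds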
Now that we have a lookup column that is case-insensitive,
we can make fingerprint and hashed_fingerprint case-sensitive.
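
In SQLite this split can be expressed per column with a collation; a
minimal sketch, where the schema details are assumed:

    import sqlite3

    conn = sqlite3.connect(':memory:')
    # lookup compares case-insensitively; fingerprint and
    # hashed_fingerprint keep the default case-sensitive BINARY collation.
    conn.execute("""CREATE TABLE summary (
                        fingerprint TEXT,
                        hashed_fingerprint TEXT,
                        lookup TEXT COLLATE NOCASE)""")
    conn.execute("INSERT INTO summary VALUES ('AbCd', 'AbCd', 'AbCd')")
    print(conn.execute("SELECT COUNT(*) FROM summary "
                       "WHERE lookup = 'abcd'").fetchone())  # (1,)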
There were two problems with requests containing the lookup parameter:
- We currently mix AND and OR conditions in the SELECT statement without
correctly parenthesizing them. This could lead to surprising results
when combining the lookup parameter with other parameters.
- Lookup requests took 1.6 seconds on my local machine, compared to 20
milliseconds for other requests.
Using the same LIKE-based approach for the lookup parameter that we
already use for the search parameter fixes both problems.
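
The precedence problem, and the shape of the fix, in a sketch
(build_where() and the exact clauses are assumptions for illustration):

    # AND binds tighter than OR, so without parentheses
    #     WHERE type = ? AND fingerprint = ? OR hashed_fingerprint = ?
    # is really
    #     WHERE (type = ? AND fingerprint = ?) OR hashed_fingerprint = ?
    # and the type filter silently stops applying to hashed matches.
    # With a single LIKE clause on the lookup column, every condition
    # can simply be ANDed together:
    def build_where(lookup=None, country=None):
        clauses, args = [], []
        if lookup is not None:
            clauses.append("lookup LIKE ?")
            args.append(lookup + '%')
        if country is not None:
            clauses.append("country = ?")
            args.append(country)
        return " AND ".join(clauses) or "1", args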
get_timestamp() queries the database for the latest known
published relay consensus and bridge network status document
and returns a tuple containing these two values.
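
A sketch of that shape; the table and column names below are assumed:

    def get_timestamp(conn):
        # Latest known relay consensus publication time...
        relay = conn.execute("SELECT MAX(time_published) FROM summary "
                             "WHERE type = 'r'").fetchone()[0]
        # ...and latest known bridge network status publication time.
        bridge = conn.execute("SELECT MAX(time_published) FROM summary "
                              "WHERE type = 'b'").fetchone()[0]
        return (relay, bridge)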
Exclude the '$' if prefixed. 'filter' is a Python built-in,
so let's use 'search_string' instead.
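
The '$' handling amounts to a guarded slice; a sketch, with the helper
name invented here:

    def clean_search_string(search_string):
        # Fingerprints are often written with a leading '$'; drop it so
        # it doesn't end up in the search pattern.
        if search_string.startswith('$'):
            search_string = search_string[1:]
        return search_string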
Bridges without any flags have two subsequent spaces in the summary file.
Pass a value for sep when calling split(), so that "consecutive delimiters
are not grouped together and are deemed to delimit empty strings" (Python
documentation).
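
The difference is easy to see on a line with an empty field (the field
layout here is only illustrative):

    # Without sep, runs of whitespace collapse, the empty flags field
    # vanishes, and later fields shift position:
    print("nick fp  1.2.3.4".split())     # ['nick', 'fp', '1.2.3.4']
    # With an explicit sep, consecutive delimiters delimit empty
    # strings, keeping every field at a fixed index:
    print("nick fp  1.2.3.4".split(" "))  # ['nick', 'fp', '', '1.2.3.4']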
SQLite doesn't understand OFFSET without LIMIT, which is why the current
code used a local counter to request TOTAL_ROWS-offset_value as the LIMIT.
However, that code is not transaction-safe. A better solution is to pass
-1 as the LIMIT, which has the same effect.
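
For example (the table name follows earlier commits; the rest is a
sketch):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE summary (nickname TEXT)")
    conn.executemany("INSERT INTO summary VALUES (?)",
                     [('r%d' % i,) for i in range(20)])
    # "SELECT ... OFFSET 10" alone is a syntax error in SQLite; a
    # negative LIMIT means "no limit", so no row count is needed:
    rows = conn.execute(
        "SELECT nickname FROM summary LIMIT -1 OFFSET 10").fetchall()
    print(len(rows))  # 10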
Use the summary table to handle all the data. Add search, flags,
and addresses columns to the summary table.
get_router_tuple(fields) returns the values corresponding to the
given fields.
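
One plausible reading of get_router_tuple(); whether it is a Router
method or takes the router as an argument isn't clear from the message,
so this sketch passes it in:

    def get_router_tuple(router, fields):
        # Values of the requested summary columns, in the caller's order.
        return tuple(getattr(router, field) for field in fields)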
SQLite doesn't understand OFFSET without a LIMIT
clause. Use TOTAL_ROWS-offset_value to find the
LIMIT.
This is a pretty big commit with lots of refactoring:
- Remove certain global vars; use the config file instead
- Remove freshen_database(); it's now part of update_databases()
- Rename get_database() to get_database_conn(), and remove the
unused conn.row_factory setting
- Rename update_database() to update_databases()
- Add more logging info
- Make the insertion statements modular
- get_summary_routers() returns relays/bridges that are now Router
objects
- Refactor code
* Remove unnecessary return statements and whitespace
* Make multiline statements more pythonic
Probably most paths should be specified in pyonionoo.conf; this
is a quick hack just to make the codebase marginally more
portable. pyonionoo.conf must be edited with correct paths.
Probably command-line options could be used as well, but I haven't
looked into this.
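
Reading such a file might look like this; the section and key names are
invented for illustration:

    import ConfigParser  # stdlib on Python 2.6

    config = ConfigParser.RawConfigParser()
    config.read('pyonionoo.conf')
    summary_file = config.get('paths', 'summary_file')
    database_file = config.get('paths', 'database_file')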
Pyonionoo needs to run on Python 2.6.
In retrospect, a 30-minute interval was silly.
Each request is handled by a separate thread for querying the database
and assembling the results. Also fixed the (incorrect) regular
expressions that define the URLs we respond to.
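
A hedged sketch of the per-request threading described here; the names
and the surrounding server machinery are assumptions:

    import threading

    def run_query(params):
        # Placeholder for querying the database and assembling results.
        return {'relays': [], 'bridges': []}

    def handle_request(params, respond):
        # Query and assemble off the main thread, so one slow request
        # doesn't stall the connections behind it.
        worker = threading.Thread(target=lambda: respond(run_query(params)))
        worker.start()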
We use a Timer object that updates the database every
database.DB_UPDATE_INTERVAL seconds. As a Timer, the update does not
run in the main thread, so it doesn't block incoming connections.
Happily, the database update is automatically a single transaction, so
the update is not visible to clients until it is complete.
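
threading.Timer fires only once, so a periodic update has to re-arm
itself; a minimal sketch of that pattern (DB_UPDATE_INTERVAL comes from
the message above, the rest is assumed):

    import threading

    DB_UPDATE_INTERVAL = 3600  # seconds; illustrative value

    def update_databases():
        pass  # placeholder; the real update is a single transaction

    def schedule_update():
        update_databases()
        # Re-arm the timer; it runs on its own thread, so the main
        # thread keeps accepting connections in the meantime.
        timer = threading.Timer(DB_UPDATE_INTERVAL, schedule_update)
        timer.daemon = True
        timer.start()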
of date.
We have not tested whether transactions are doing the "right thing,"
letting us update the database while other threads continue reading
the "not-yet-updated" database. See also the comments at the beginning
of database.py about using an in-memory database.
Having done so, we can now treat each row returned by a SELECT
as a dictionary, with keys equal to the table field names. This
simplifies the summary handler.
Note that there are also a few comments related to implementing
the 'search' parameter, which hasn't been done yet.
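
With the sqlite3 module this is the row_factory hook, presumably set to
sqlite3.Row; a minimal sketch:

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.row_factory = sqlite3.Row  # rows now support access by name
    conn.execute("CREATE TABLE summary (nickname TEXT, type TEXT)")
    conn.execute("INSERT INTO summary VALUES ('somerelay', 'r')")
    row = conn.execute("SELECT * FROM summary").fetchone()
    print(row['nickname'])  # 'somerelay'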
Added handlers/arguments.py, which has a single function, parse().
parse() takes the GET request parameter dictionary as an argument and
returns a dictionary mapping strings to values. That dictionary can be
used as the keyword-arguments dictionary for the database module
functions, so the database module no longer needs to understand
anything about GET requests.
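
A sketch of that shape; the exact parameters handled and their
conversions are assumptions:

    # handlers/arguments.py (sketch)
    def parse(get_params):
        # Map raw GET parameters (name -> list of values) onto the
        # keyword arguments the database module functions expect.
        kwargs = {}
        for name in ('type', 'country', 'search', 'lookup'):
            if name in get_params:
                kwargs[name] = get_params[name][0]
        if 'running' in get_params:
            kwargs['running'] = get_params['running'][0] == 'true'
        return kwargs

    # e.g.: database.get_summary_routers(**parse(request.arguments))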
statement.
Made a few other minor refactorings, like having database() return
the connection in order to eliminate the global CURSOR variable.
Main tasks still to do: externalize request-parameter handling to
the handlers module; handle the 'search' parameter; create the
database in another thread; use transactions on the database.
been implemented.
'flags' and 'addresses' tables have been added to the Summary database
in addition to the 'summary' table. Each flag and each address has its
own row in its respective table, keyed by row id to the corresponding
router in the 'summary' table. Users can now also order the requested
routers by consensus_weight.
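
A sketch of such a schema; the exact column names are assumptions:

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.executescript("""
        CREATE TABLE summary (id INTEGER PRIMARY KEY, nickname TEXT,
                              consensus_weight INTEGER);
        -- One row per flag and per address, keyed back to the
        -- router's row id in the summary table:
        CREATE TABLE flags (router_id INTEGER, flag TEXT);
        CREATE TABLE addresses (router_id INTEGER, address TEXT);
    """)
    # Ordering the requested routers by consensus weight then reads:
    rows = conn.execute("SELECT nickname FROM summary "
                        "ORDER BY consensus_weight").fetchall()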
Created a database to store the information of each router according to
the different fields. One table (summary) contains all the fields except
for 'flags' and 'addresses'. The database and the summary table are
created using database(). get_summary_routers() takes the arguments from
SummaryHandler, reads the summary table, and appends the necessary
routers to a tuple of lists that is passed back to the SummaryHandler.
'flags' and 'addresses' will be in their own separate tables; the next
step will be to add these two tables to the database.
Ensured SummaryHandler is returning JSON to the client. get_router.py
has been renamed to database.py, and it reads the summary document upon
each request. The next phase entails starting database work and storing
the contents in an in-memory SQLite database.
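
Assuming a Tornado-style handler (which the handlers package suggests),
returning JSON is serializing and writing; a minimal sketch with
placeholder contents:

    import json
    import tornado.web

    class SummaryHandler(tornado.web.RequestHandler):
        def get(self):
            document = {'relays': [], 'bridges': []}  # placeholder
            self.set_header("Content-Type", "application/json")
            self.write(json.dumps(document))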
Code that goes through a handler's arguments and filters the routers is
now in get_router.py. As a result, the handlers can now focus solely on
creating their respective documents. Timestamps for relays and bridges
have been added, but more work needs to be done on the bandwidth
handler, as well as on some of the fields in the details handler.
The Detail handler has been added and is working properly along with
the Summary handler. Users can now provide the following parameters for
both handlers: type, running, search, lookup, country. Three further
parameters have not been implemented yet: order, offset, limit. The
Bandwidth handler was started but is not finished yet. Note that some
of the relay information in the Detail handler is incomplete.
Added a new module, get_router.py, and created a handlers package
which will contain all handlers, including summary, details, and
bandwidth. Summary.py has been moved to this package.