* add docusaurus scalar api documentation structure
* bump openapi 3.0 to 3.1 so we can mark internal endpoints
* improve search api docs
* webgraph api docs
* point docs to prod
* overall structure for new webgraph store
* webgraph schema structure and HostLinksQuery
* deserialize edge
* forward/backlink queries
* full edge queries and iter smalledges
* [wip] use new store in webgraph
* remove id2node db
* shortcircuit link queries
* [wip] remote webgraph trait structure
* [wip] shard awareness
* finish remote webgraph trait structure
* optimize read
* merge webgraphs
* construct webgraph store
* make sure 'just configure' works and everything looks correct
* [WIP] live index code structure with a ton of todos
* update meta file with segment changes
* add endpoint to index webpages into live index
* compact segments by date
* cleanup old segments
* fix clippy warnings
* fix clippy warnings
Doesn't handle concurrent writes, and flushes after each write. This will cause a lot of fsyncs, which will impact performance, but since this will be used for the live index, where each item (a full webpage) is quite large, this will hopefully not be too detrimental.
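a minimal sketch of what such a flush-after-each-write path could look like (illustrative names and a simple length-prefixed format assumed, not the actual implementation):

```rust
use std::fs::{File, OpenOptions};
use std::io::{self, Write};

/// Illustrative append-only writer that flushes after every write.
/// Each write costs an fsync, which is acceptable here because items
/// (full webpages) are large, so writes are relatively infrequent.
struct LiveSegmentWriter {
    file: File,
}

impl LiveSegmentWriter {
    fn open(path: &str) -> io::Result<Self> {
        let file = OpenOptions::new().create(true).append(true).open(path)?;
        Ok(Self { file })
    }

    fn write(&mut self, item: &[u8]) -> io::Result<()> {
        // length-prefix the item so it can be read back later
        self.file.write_all(&(item.len() as u64).to_le_bytes())?;
        self.file.write_all(item)?;
        // fsync so the item is durable before we return
        self.file.sync_all()?;
        Ok(())
    }
}
```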
this should fix a reported stack overflow (might be related to https://github.com/maciejhirsz/logos/issues/384) and should also make it easier to add additional scripts besides latin in the future
most of the time, we want to fetch multiple columns for each document in the result set. by ordering the fields row-wise, we can fetch all the relevant fields for a document with a minimal number of IO operations, whereas we would need at least one IO operation per field if they were column ordered
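a rough illustration of the two layouts with hypothetical in-memory stores (the real store is on disk; this only shows why row ordering needs fewer reads):

```rust
/// Hypothetical row-ordered store: all fields of a document are stored
/// contiguously, so one seek + one read returns every field.
struct RowStore {
    /// one contiguous serialized record per document
    rows: Vec<Vec<u8>>,
}

impl RowStore {
    /// one "IO operation": the whole record for `doc` in a single read
    fn fetch_all_fields(&self, doc: usize) -> &[u8] {
        &self.rows[doc]
    }
}

/// Hypothetical column-ordered store: each field lives in its own region,
/// so fetching n fields for a document needs at least n separate reads.
struct ColumnStore {
    /// one region per field, indexed by doc id inside each region
    columns: Vec<Vec<Vec<u8>>>,
}

impl ColumnStore {
    /// n "IO operations": one read per field region
    fn fetch_all_fields(&self, doc: usize) -> Vec<&[u8]> {
        self.columns.iter().map(|col| col[doc].as_slice()).collect()
    }
}
```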
our segments are starting to grow too big, so the assumption that the number of position bytes fits in a u32 no longer holds. storing it in a u64 might not be what regular users of tantivy want, as our use of the library most likely doesn't resemble the average user's. forking tantivy allows us to customize it directly for our use case
this allows us to skip links to tag pages etc. when calculating harmonic centrality which should greatly improve the centrality values for the page graph
* allow connection reuse by not taking ownership in send methods
* [sonic] continuously handle requests from each connection in the server as long as the connection is not closed
* add connection pool to sonic based on deadpool
* use connection pool in remote webgraph and distributed searcher
* hopefully fix flaky test
* hopefully fix flaky test
'c++' gets tokenized as ['c', '+', '+'], which we use in a phrase query to enforce that the result must contain 'c++' in sequence instead of simply having 'c' somewhere on the page and '+' somewhere else. however, some fields don't have the necessary position data stored, which caused these queries to crash when trying to perform the phrase query on those fields
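a sketch of the guard this fix implies, using hypothetical query types rather than tantivy's actual API: only build a phrase query for fields that store positions, and fall back to plain term matching otherwise

```rust
/// Hypothetical query representation, used only for illustration.
enum Query {
    /// terms must appear in sequence; requires position data in the index
    Phrase(Vec<String>),
    /// all terms must appear somewhere in the field; no positions needed
    All(Vec<String>),
}

/// Build a query for a compound token like "c++" (tokenized as ["c", "+", "+"]).
/// If the field stores positions we can enforce the sequence with a phrase
/// query; otherwise we must not issue one, since a phrase query cannot be
/// evaluated on a field without position data.
fn compound_token_query(tokens: Vec<String>, field_has_positions: bool) -> Query {
    if field_has_positions {
        Query::Phrase(tokens)
    } else {
        Query::All(tokens)
    }
}
```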
this allows us to short-circuit the query by default, which significantly improves performance as we then don't have to iterate the non-scored results simply to count them
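one way to picture the short-circuit with a hypothetical collector (simplified: it ignores score ordering and just keeps the first k hits):

```rust
/// Hypothetical result count, mirroring the idea that the total is only
/// exact when the caller explicitly asks for it.
enum ResultCount {
    Exact(usize),
    /// at least this many results exist; we stopped counting early
    AtLeast(usize),
}

/// Collect up to k doc ids from an iterator of (doc, score) pairs.
/// When `exact_count` is false we can stop as soon as we have k results,
/// instead of draining the iterator just to produce a total count.
fn collect_top_k(
    mut hits: impl Iterator<Item = (u64, f64)>,
    k: usize,
    exact_count: bool,
) -> (Vec<u64>, ResultCount) {
    let mut top: Vec<u64> = Vec::with_capacity(k);
    let mut seen = 0;

    while let Some((doc, _score)) = hits.next() {
        seen += 1;
        if top.len() < k {
            top.push(doc);
        } else if !exact_count {
            // short-circuit: we have enough results and nobody asked
            // for an exact total
            return (top, ResultCount::AtLeast(seen));
        }
    }

    (top, ResultCount::Exact(seen))
}
```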
* implement random access index in file_store where keys are u64 and values are serialised to a constant size
* cleanup: move all webgraph store writes into store_writer
* add a 'ConstIterableStore' that can store items on disk without needing to interleave headers in the case that all items can be serialized to a constant number of bytes known up front
* change edges file format to make edges for a given node iterable.
this allows us to only load a subset of the edges for a node in the future
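a rough sketch of the constant-size idea behind the store and the new edge format (illustrative 16-byte records, not the actual on-disk layout): when every item serializes to the same number of bytes, the i-th item lives at offset i * ITEM_SIZE, so no per-item headers are needed and a node's edges can be iterated lazily from a record range

```rust
/// Illustrative fixed-size edge record: (from, to) as two u64s = 16 bytes.
const EDGE_SIZE: usize = 16;

/// Read the i-th edge from a flat byte buffer of fixed-size records.
/// Because every record has the same size, the offset is simply
/// i * EDGE_SIZE and no interleaved headers are needed.
fn edge_at(buf: &[u8], i: usize) -> (u64, u64) {
    let off = i * EDGE_SIZE;
    let from = u64::from_le_bytes(buf[off..off + 8].try_into().unwrap());
    let to = u64::from_le_bytes(buf[off + 8..off + 16].try_into().unwrap());
    (from, to)
}

/// Lazily iterate a node's edges given its record range, so we can stop
/// early and only load the subset of edges we actually need.
fn edges_for_node(
    buf: &[u8],
    range: std::ops::Range<usize>,
) -> impl Iterator<Item = (u64, u64)> + '_ {
    range.map(move |i| edge_at(buf, i))
}
```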
* compress webgraph labels in blocks of 128
* ability to limit number of edges returned by webgraph
* sort edges in webgraph store by the host rank of the opposite node
to truncate the database, we would have to implement deletes and possibly also some kind of auto merging strategy in speedy_kv. to keep things simple, we use redb for this db instead.
this basically describes most of our workloads. as an example, in the webgraph we know that we only ever get inserts when constructing the graph, after which all the reads will happen.
the key-value database consists of the following components:
* an fst index (key -> blob_id)
* a memory mapped blob index (blob_id -> blob_ptr)
* a memory mapped blob store (blob_ptr -> blob)
this allows us to move everything over from rocksdb to speedy_kv, thereby removing the rocksdb dependency.
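a simplified sketch of the lookup path through those three components (the fst crate's Map is real; the blob index and blob store are shown as plain Vecs standing in for the memory mapped files):

```rust
/// Hypothetical speedy_kv-style reader, illustration only.
struct SpeedyKvSketch {
    key_index: fst::Map<Vec<u8>>, // fst index: key -> blob_id
    blob_index: Vec<u64>,         // blob index: blob_id -> blob_ptr (byte offset)
    blob_store: Vec<u8>,          // blob store: blob_ptr -> blob (length-prefixed here)
}

impl SpeedyKvSketch {
    fn get(&self, key: &[u8]) -> Option<&[u8]> {
        // 1) fst index: key -> blob_id
        let blob_id = self.key_index.get(key)? as usize;
        // 2) blob index: blob_id -> blob_ptr
        let ptr = self.blob_index[blob_id] as usize;
        // 3) blob store: read the length prefix, then the blob itself
        let len =
            u64::from_le_bytes(self.blob_store[ptr..ptr + 8].try_into().unwrap()) as usize;
        Some(&self.blob_store[ptr + 8..ptr + 8 + len])
    }
}
```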
* [WIP] structure for mapreduce -> ampc and introduce tables in dht
* temporarily disable failing lints in ampc/mod.rs
* establish dht connection in ampc
* support batch get/set in dht
* ampc implementation (not tested yet)
* dht upsert
* no more todos in ampc harmonic centrality impl
* return 'UpsertAction' instead of bool from upserts
this makes it easier to see what action was taken from the caller's perspective. a bool is not particularly descriptive
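a sketch of what such a return type might look like; the variant names here are assumptions, not necessarily the actual ones:

```rust
/// Hypothetical return type for upserts; more descriptive than a bool.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum UpsertAction {
    /// the key did not exist, so the value was inserted as-is
    Inserted,
    /// the key existed and the new value was merged with the old one
    Merged,
}

fn handle(action: UpsertAction) {
    // the caller can match on the action instead of interpreting a bare bool
    match action {
        UpsertAction::Inserted => println!("new key"),
        UpsertAction::Merged => println!("existing key updated"),
    }
}
```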
* add ability to have multiple dht tables for each ampc algorithm
gives better type safety, as each table can then have its own key-value type pair
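one way to get that type safety, sketched with hypothetical names rather than the actual dht client API: tie the key and value types to the table handle with a phantom type parameter, so every use of a table is checked at compile time

```rust
use std::marker::PhantomData;

/// Hypothetical typed handle to a dht table; `K` and `V` exist only at the
/// type level here, the wire format would still be bytes.
struct Table<K, V> {
    name: String,
    _marker: PhantomData<(K, V)>,
}

impl<K, V> Table<K, V> {
    fn new(name: impl Into<String>) -> Self {
        Self { name: name.into(), _marker: PhantomData }
    }
}

// illustrative key/value types for a harmonic centrality run
struct NodeId(u64);
struct Centrality(f64);

fn tables_for_harmonic_centrality() -> (Table<NodeId, u64>, Table<NodeId, Centrality>) {
    // each algorithm declares its own tables, so using the wrong value type
    // with a table becomes a compile error instead of a runtime surprise
    (Table::new("counters"), Table::new("centrality"))
}
```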
* some bundled bug/correctness fixes.
* await currently scheduled jobs after there are no more jobs to schedule.
* execute each mapper fully at a time before scheduling next mapper.
* compute centrality scores from set cardinalities.
* refactor into smaller functions
* happy path ampc dht test and split ampc into multiple files
* correct harmonic centrality calculation in ampc
* run distributed harmonic centrality worker and coordinator from cli
* stream key/values from dht using range queries in batches
* benchmark distributed centrality calculation
* faster hash in shard selection and drop table in background thread
* Move all rpc communication to bincode2. This should give a significant serialization/deserialization performance boost
* dht store copy-on-write for keys and values to make table clone faster
* fix flaky dht test and improve .set performance using entries
* dynamic batch size based on number of shards in dht cluster
* refactor data that is re-used across fields for a particular page during indexing into an 'FnCache'
* automatically generate ALL_FIELDS and ALL_SIGNALS arrays with strum macro. ensures the arrays are always fully up to date
* split up schema fields into submodules
* add textfield trait with enum-dispatch
* add fastfield trait with enum-dispatch
* move field names into trait
* move some trivial functions from 'FastFieldEnum' and 'TextFieldEnum' into their respective traits
* move methods from Field into TextField and FastField traits
* extract html .as_tantivy into textfield trait
* extract html .as_tantivy into fastfield trait
* extract webpage .as_tantivy into field traits
* fix indexer example cleanup
* model that inbody:..., intitle:... etc. can have either a simple term or a phrase query as their subterm
* re-write query parser using nom
* all-whitespace queries should return an empty terms vec