* add docusaurus scalar api documentation structure
* bump openapi 3.0 to 3.1 so we can mark internal endpoints
* improve search api docs
* webgraph api docs
* point docs to prod
Doesn't handle concurrent writes and flushes after each write. This will cause a lot of fsyncs, which will impact performance, but since this will be used for the live index where each item (a full webpage) is quite large, it will hopefully not be too detrimental.
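a minimal sketch of the approach, assuming a hypothetical `AppendOnlyStore` type (names and layout are illustrative, not the actual live-index store):

```rust
use std::fs::{File, OpenOptions};
use std::io::{self, Write};
use std::path::Path;

/// Hypothetical append-only store that flushes (and fsyncs) after every write.
/// Not safe for concurrent writers; callers must serialize access themselves.
struct AppendOnlyStore {
    file: File,
}

impl AppendOnlyStore {
    fn open(path: &Path) -> io::Result<Self> {
        let file = OpenOptions::new().create(true).append(true).open(path)?;
        Ok(Self { file })
    }

    /// Write a single item (e.g. a serialized webpage) and fsync immediately.
    /// One fsync per item is expensive, but acceptable when items are large.
    fn append(&mut self, item: &[u8]) -> io::Result<()> {
        // length prefix so items can be read back sequentially
        self.file.write_all(&(item.len() as u64).to_le_bytes())?;
        self.file.write_all(item)?;
        self.file.sync_all()?; // flush after each write
        Ok(())
    }
}
```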
this allows us to skip links to tag pages etc. when calculating harmonic centrality, which should greatly improve the centrality values for the page graph
* allow connection reuse by not taking ownership in send methods
* [sonic] continuously handle requests from each connection in the server as long as the connection is not closed
* add connection pool to sonic based on deadpool
* use connection pool in remote webgraph and distributed searcher
* hopefully fix flaky test
* hopefully fix flaky test
this allows us to short-circuit the query by default, which significantly improves performance as we don't have to iterate the non-scored results simply to count them
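purely illustrative (the actual change lives in the query/collector layer), the difference boils down to stopping early instead of visiting every match just to count it:

```rust
// counting every match forces a full iteration over all (non-scored) results
fn exact_count(matches: impl Iterator<Item = u64>) -> usize {
    matches.count()
}

// collecting just the top-k results can short-circuit after k items
fn top_k(matches: impl Iterator<Item = u64>, k: usize) -> Vec<u64> {
    matches.take(k).collect()
}
```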
* implement random access index in file_store where keys are u64 and values are serialised to a constant size
* cleanup: move all webgraph store writes into store_writer
* add a 'ConstIterableStore' that can store items on disk without needing to interleave headers in the case that all items can be serialized to a constant number of bytes known up front
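a rough sketch of the idea, assuming a hypothetical fixed-size-record store (the real 'ConstIterableStore' may differ):

```rust
use std::io::{self, Read, Seek, SeekFrom};

/// Hypothetical store for items that all serialize to exactly `item_size` bytes.
/// Because every record has the same size, no per-item headers are needed and
/// the i'th item lives at byte offset `i * item_size`.
struct ConstSizeStore<R> {
    reader: R,
    item_size: u64,
}

impl<R: Read + Seek> ConstSizeStore<R> {
    fn new(reader: R, item_size: u64) -> Self {
        Self { reader, item_size }
    }

    /// Random access by index: seek directly to `idx * item_size`.
    fn get(&mut self, idx: u64) -> io::Result<Vec<u8>> {
        self.reader.seek(SeekFrom::Start(idx * self.item_size))?;
        let mut buf = vec![0u8; self.item_size as usize];
        self.reader.read_exact(&mut buf)?;
        Ok(buf)
    }
}
```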
* change edges file format to make edges for a given node iterable.
this allows us to only load a subset of the edges for a node in the future
* compress webgraph labels in blocks of 128
* ability to limit number of edges returned by webgraph
* sort edges in webgraph store by the host rank of the opposite node
to truncate the database, we would have to implement deletes and possibly also some kind of auto merging strategy in speedy_kv. to keep things simple, we use redb for this db instead.
this basically describes most of our workloads. as an example, in the webgraph we know that we only ever get inserts when constructing the graph, after which all the reads will happen.
the key-value database consists of the following components:
* an fst index (key -> blob_id)
* a memory mapped blob index (blob_id -> blob_ptr)
* a memory mapped blob store (blob_ptr -> blob)
this allows us to move everything over from rocksdb to speedy_kv, thereby removing the rocksdb dependency.
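a hedged sketch of what a lookup through these components might look like (using the `fst` crate; struct and field names are assumptions, and the memory-mapped files are simplified to in-memory buffers here, so this is not the actual speedy_kv code):

```rust
use fst::Map; // fst = "0.4"

/// Hypothetical read path: fst index -> blob index -> blob store.
struct SpeedyKvSketch {
    /// key -> blob_id
    index: Map<Vec<u8>>,
    /// blob_id -> (offset, len), stored as fixed-size entries
    /// (memory-mapped in the real store, a Vec here for simplicity)
    blob_index: Vec<(u64, u64)>,
    /// blob_ptr -> blob bytes (again, memory-mapped in the real store)
    blob_store: Vec<u8>,
}

impl SpeedyKvSketch {
    fn get(&self, key: &[u8]) -> Option<&[u8]> {
        let blob_id = self.index.get(key)?; // fst lookup: key -> blob_id
        let (offset, len) = self.blob_index[blob_id as usize];
        Some(&self.blob_store[offset as usize..(offset + len) as usize])
    }
}
```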
* [WIP] structure for mapreduce -> ampc and introduce tables in dht
* temporarily disable failing lints in ampc/mod.rs
* establish dht connection in ampc
* support batch get/set in dht
* ampc implementation (not tested yet)
* dht upsert
* no more todos in ampc harmonic centrality impl
* return 'UpsertAction' instead of bool from upserts
this makes it easier to see what action was taken from the caller's perspective. a bool is not particularly descriptive
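the variant names below are hypothetical (the real 'UpsertAction' may name them differently); the point is just that an enum is self-describing where a bool is not:

```rust
/// Hypothetical variants for illustration.
enum UpsertAction {
    /// the key existed and the value was merged/updated
    Merged,
    /// the key did not exist and was inserted
    Inserted,
}

fn handle(action: UpsertAction) {
    match action {
        UpsertAction::Merged => println!("existing value updated"),
        UpsertAction::Inserted => println!("new key inserted"),
    }
}
```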
* add ability to have multiple dht tables for each ampc algorithm
gives better type-safety as each table can then have its own key-value type pair
* some bundled bug/correctness fixes.
* await currently scheduled jobs after there are no more jobs to schedule.
* execute each mapper fully at a time before scheduling next mapper.
* compute centrality scores from set cardinalities.
* refactor into smaller functions
* happy path ampc dht test and split ampc into multiple files
* correct harmonic centrality calculation in ampc
* run distributed harmonic centrality worker and coordinator from cli
* stream key/values from dht using range queries in batches
* benchmark distributed centrality calculation
* faster hash in shard selection and drop table in background thread
* Move all rpc communication to bincode2. This should give a significant serialization/deserialization performance boost
* dht store copy-on-write for keys and values to make table clone faster
* fix flaky dht test and improve .set performance using entries
* dynamic batch size based on number of shards in dht cluster
* refactor data that is re-used across fields for a particular page during indexing into an 'FnCache'
* automatically generate ALL_FIELDS and ALL_SIGNALS arrays with a strum macro. this ensures the arrays are always fully up to date
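a hedged sketch of the pattern with strum's derive macros (the enum here is made up; the actual schema enums and macro usage may differ):

```rust
use strum::IntoEnumIterator;
use strum_macros::EnumIter; // strum = "0.26", strum_macros = "0.26"

/// Made-up field enum for illustration; the real schema has many more variants.
#[derive(Debug, Clone, Copy, EnumIter)]
enum TextFieldSketch {
    Title,
    Body,
    Url,
}

fn main() {
    // the iterator is generated by the macro, so adding a new variant
    // automatically keeps the "all fields" list up to date
    let all_fields: Vec<TextFieldSketch> = TextFieldSketch::iter().collect();
    println!("{all_fields:?}");
}
```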
* split up schema fields into submodules
* add textfield trait with enum-dispatch
* add fastfield trait with enum-dispatch
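a minimal sketch of the enum_dispatch pattern used for these traits (trait and type names are illustrative, not the actual schema code):

```rust
use enum_dispatch::enum_dispatch; // enum_dispatch = "0.3"

/// Illustrative trait; the real TextField/FastField traits have many more methods.
#[enum_dispatch]
trait TextFieldSketch {
    fn name(&self) -> &'static str;
}

struct Title;
struct Body;

impl TextFieldSketch for Title {
    fn name(&self) -> &'static str {
        "title"
    }
}

impl TextFieldSketch for Body {
    fn name(&self) -> &'static str {
        "body"
    }
}

/// the macro generates `From` impls and forwards trait calls to the variants,
/// giving dynamic-dispatch ergonomics without trait objects
#[enum_dispatch(TextFieldSketch)]
enum TextFieldEnumSketch {
    Title(Title),
    Body(Body),
}

fn main() {
    let field: TextFieldEnumSketch = Title.into();
    assert_eq!(field.name(), "title");
}
```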
* move field names into trait
* move some trivial functions from 'FastFieldEnum' and 'TextFieldEnum' into their respective traits
* move methods from Field into TextField and FastField traits
* extract html .as_tantivy into textfield trait
* extract html .as_tantivy into fastfield trait
* extract webpage .as_tantivy into field traits
* fix indexer example cleanup
* [WIP] raft consensus using openraft on sonic networking
* handle rpc's on nodes
* handle get/set application requests
* dht get/set stubs that handles leader changes and retries
also improve sonic error handling. there is no need for handle to return a sonic::Result; it's better for the specific message to have a Result<...> as its response, as this can then be properly handled on the caller side
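a hedged sketch of the idea (types are illustrative, not the actual sonic service definitions): the application error travels inside the message's own response type instead of being conflated with transport errors.

```rust
use std::collections::HashMap;

#[derive(Debug)]
enum GetError {
    NotFound,
}

struct GetRequest {
    key: Vec<u8>,
}

// the response carries its own Result so the caller can match on it explicitly,
// separate from any transport-level error
type GetResponse = Result<Vec<u8>, GetError>;

fn handle(req: GetRequest, store: &HashMap<Vec<u8>, Vec<u8>>) -> GetResponse {
    store.get(&req.key).cloned().ok_or(GetError::NotFound)
}
```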
* join existing raft cluster
* make sure node state is consistent in case of crash -> rejoin
* ResilientConnection in sonic didn't retry requests, only connections, and was therefore a bit misleading. remove it and add a send_with_timeout_retry method to the normal connection, with sane defaults in the .send method
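a hedged sketch of the retry idea in generic form (not the actual sonic connection code; the real method also applies a timeout to each send):

```rust
use std::thread;
use std::time::Duration;

/// Hypothetical helper: retry an operation a few times with a backoff between
/// attempts, returning the first success or the last error.
fn send_with_retry<T, E>(
    mut attempt: impl FnMut() -> Result<T, E>,
    retries: usize,
    backoff: Duration,
) -> Result<T, E> {
    let mut last_err = None;
    for _ in 0..=retries {
        match attempt() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                thread::sleep(backoff);
            }
        }
    }
    Err(last_err.expect("at least one attempt is always made"))
}
```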
* add Response::Empty to raft in order to avoid having to send back hacky Response::Set(Ok(())) for internal raft entries
* change key/value in dht to be arbitrary bytes
* dht chaos proptest
* make dht tests more reliable
in raft, writes are written to a majority quorum. if we have a cluster of 3 nodes, this means that we can only be sure that 2 of the nodes get the data. the test might therefore fail if we are unlucky and check the node that didn't get the data yet. by having a cluster of 2 nodes instead, we can be sure that both nodes always receive all writes.
* sharded dht client
* change indexer to prepare webpages in batches
* some clippy lints
* split 'IndexingWorker::prepare_webpages' into more readable sub functions and fix more clippy pedantic lints
* use dual encoder to embed title and keywords of page during indexing
* make sure we don't open harmonic centrality rocksdb in core/src during test...
* add indexer example used for benchmark
* add option to only compute embeddings of top ranking sites.
this is not really ideal, but it turns out to be way too slow to compute the embeddings for all the sites in the index. this way, we at least get embeddings for the sites that are most likely to appear in the search results while it is still tractable to compute.
* store embeddings in index as bytes
* refactor ranking pipeline to statically ensure we score the different stages as expected
* use similarity between title and query embeddings during ranking
* use keyword embeddings during ranking
* handle missing fastfields in index gracefully
* remove unneeded Arc clone when constructing 'RecallRankingWebpage'
some websites, especially older ones, sometimes use a different encoding scheme than utf8 or latin1. before, we simply tried different encoding schemes until one successfully decoded the bytes, but this approach can fail unexpectedly, as bytes from one encoding can sometimes be decoded with the wrong encoding without any errors being reported.
we now use the encoding detection crate 'chardetng' which is also [used in firefox](https://github.com/hsivonen/chardetng?tab=readme-ov-file#purpose).
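a sketch of how chardetng-based detection can look (using the `chardetng` crate, which returns an `encoding_rs` encoding; the exact integration in the indexer may differ):

```rust
use chardetng::EncodingDetector; // chardetng = "0.1"

/// Detect the encoding of raw HTML bytes and decode them to a String.
fn decode_html(bytes: &[u8]) -> String {
    let mut detector = EncodingDetector::new();
    detector.feed(bytes, true); // `true` = this is the last chunk
    // no TLD hint; allow UTF-8 to be guessed
    let encoding = detector.guess(None, true);
    let (decoded, _encoding_used, _had_errors) = encoding.decode(bytes);
    decoded.into_owned()
}

fn main() {
    // windows-1252 bytes for "café"
    let bytes = [0x63, 0x61, 0x66, 0xE9];
    println!("{}", decode_html(&bytes));
}
```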
* move crates into a 'crates' folder
* added cargo-about to check dependency licenses
* create ggml-sys bindings and build as a static library.
simple addition sanity test passes
* update licenses
* yeet alice
* yeet qa model
* yeet fact model
* [wip] idiomatic rust bindings for ggml
* [ggml] mul, add and sub ops implemented for tensors.
i think it would be easier to try to implement a bert model in order to figure out which ops we should include in the binding. for instance, are view and concat needed?