An open source, non-profit search engine implemented in python


# Mwmbl - No ads, no tracking, no cruft, no profit


Mwmbl is a non-profit, ad-free, free-libre and free-lunch search engine with a focus on usability and speed. At the moment it is little more than an idea together with a proof-of-concept implementation of the web front-end and search technology on a very small index. A crawler is still to be implemented.

Our vision is a community working to provide top quality search particularly for hackers, funded purely by donations.

## Crawling

Update 2022-02-05: We now have a distributed crawler that runs on our volunteers' machines! If you have Firefox you can help out by installing our extension. This will crawl the web in the background, retrieving one page a second. It does not use or access any of your personal data. Instead it crawls the web at random, using the top scoring sites on Hacker News as seed pages. After extracting a summary of each page, it batches these up and sends the data to a central server to be stored and indexed.
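The extension's actual wire format isn't shown here, but the batching step it performs can be sketched roughly as follows. Everything in this snippet — the field names, the gzip compression, the `make_batch` helper — is an illustrative assumption, not the real batch schema:

```python
import gzip
import json


def make_batch(pages, user_id="anonymous"):
    """Bundle extracted page summaries into one compressed payload.

    Hypothetical schema for illustration only; the real Firefox
    extension defines its own batch format.
    """
    batch = {
        "user_id": user_id,
        "items": [
            {"url": url, "title": title, "extract": extract}
            for url, title, extract in pages
        ],
    }
    # Compress before upload so each batch costs little bandwidth.
    return gzip.compress(json.dumps(batch).encode())
```

A batch along these lines would then be sent to the central server for storage and indexing.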

## Why a non-profit search engine?

The motives of ad-funded search engines are at odds with providing an optimal user experience. These sites are optimised for ad revenue, with user experience taking second place. This means that pages are loaded with ads which are often not clearly distinguished from search results. Also, as eitland comments on Hacker News:

> Thinking about it it seems logical that for a search engine that practically speaking has monopoly both on users and as mattgb points out - [to some] degree also on indexing - serving the correct answer first is just dumb: if they can keep me going between their search results and tech blogs with their ads embedded one, two or five times extra that means one, two or five times more ad impressions.

## But what about...?

The space of alternative search engines has expanded rapidly in recent years. Here's a very incomplete list of some that have interested me:

- YaCy - an open source distributed search engine
- search.marginalia.nu - a search engine favouring text-heavy websites
- Gigablast - a privacy-focused search engine whose owner makes money by selling the technology to third parties
- Brave
- DuckDuckGo

Of these, YaCy is the closest in spirit to the idea of a non-profit search engine. The index is distributed across a peer-to-peer network. Unfortunately this design decision makes search very slow.

Marginalia Search is fantastic, but it is more of a personal project than an open source community.

All other search engines that I've come across are for-profit. Please let me know if I've missed one!

## Designing for non-profit

To be a good search engine, we need to store many items, but the cost of running the engine is at least proportional to the number of items stored. Our main consideration is thus to reduce the cost per item stored.

The design is founded on the observation that most items rank for a small set of terms. In the extreme version of this, where each item ranks for a single term, the usual inverted index design is grossly inefficient, since we have to store each term at least twice: once in the index and once in the item data itself.

Our design is a giant hash map. We have a single store consisting of a fixed number N of pages. Each page is of a fixed size (currently 4096 bytes to match a page of memory), and consists of a compressed list of items. Given a term for which we want an item to rank, we compute a hash of the term, a value between 0 and N - 1. The item is then stored in the corresponding page.

To serve a query, we compute the hash of each term in the user query, load the corresponding pages, filter the items to those containing the term, and rank the results. Since each page is small, this can be done very quickly.

Because we compress the list of items, we can rank for more than a single term and maintain an index smaller than the inverted index design. Well, that's the theory. This idea has yet to be tested out on a large scale.
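The page-store design above can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions (CRC32 as the hash function, JSON as the item encoding, items as `[title, url]` pairs), not the actual mwmbl implementation:

```python
import json
import zlib

NUM_PAGES = 1024   # N: number of fixed-size pages (kept small for illustration)
PAGE_SIZE = 4096   # bytes per page, matching a page of memory

# The store is a flat array of pages; each page holds a compressed
# JSON-encoded list of items (here an item is just [title, url]).
store = [zlib.compress(b"[]") for _ in range(NUM_PAGES)]


def page_index(term: str) -> int:
    """Hash a term to a page number between 0 and N - 1."""
    return zlib.crc32(term.encode()) % NUM_PAGES


def add_item(term: str, item: list) -> None:
    """Store an item in the page that its term hashes to."""
    i = page_index(term)
    items = json.loads(zlib.decompress(store[i]))
    items.append(item)
    compressed = zlib.compress(json.dumps(items).encode())
    if len(compressed) > PAGE_SIZE:
        raise ValueError("page full: a real index must evict items here")
    store[i] = compressed


def retrieve(query: str) -> list:
    """Load the page for each query term; keep items containing the term."""
    results = []
    for term in query.lower().split():
        items = json.loads(zlib.decompress(store[page_index(term)]))
        results += [item for item in items if term in item[0].lower()]
    return results
```

For example, after `add_item("search", ["Open search engine", "https://example.org"])`, a call to `retrieve("search")` only has to decompress the single page that the term hashes to. When a page overflows, a real index must decide which items to drop; this sketch simply refuses the write.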

## How to contribute

There are lots of ways to help.

If you would like to contribute in any way, thank you! Please join our Matrix chat server or email the main author (the email address is in the git commit history).

## Development

### Using Docker

1. Create a new folder called `data` in the root of the repository
2. Download the index file and place it in the new `data` folder
3. Run `docker build . -t mwmbl`
4. Run `docker run -p 8080:8080 mwmbl`

### Local Testing

1. Create and activate a Python 3.10 environment using any tool you like, e.g. poetry, venv, conda, etc.
2. Run `pip install .`
3. Run `mwmbl-tinysearchengine --config config/tinysearchengine.yaml`

## Frequently Asked Question

### How do you pronounce "mwmbl"?

Like "mumble". I live in Mumbles, which is spelt "Mwmbwls" in Welsh. But the intended meaning is "to mumble", as in "don't search, just mwmbl!"