-Let's see if this file shows up as we hope it does.
+This is the Docker website repository.
+
+Installation
+------------
+* Check out this repo to a local directory
+* Install sphinx: ``pip install sphinx``
+* Push this to dotcloud
+
+
+Usage
+-----
+* Run ``make docs``
+* Your static website can now be found in the ``_build`` dir
+* Change the .rst files with your favorite editor to your liking
+* Run ``make clean`` to clean up
+* Run ``make docs`` to build the new version
+
+
+Notes
+-----
+* The index.html file is copied from the source dir to the output dir without modification, so changes to
+ the index.html page should be made directly in HTML
+* A simple way to run locally: cd into ``_build`` and then run ``python -m SimpleHTTPServer 8000``
+* For the template, the CSS is compiled from Less. When changes are needed, they can be compiled using ``lessc main.less`` or watched using ``watch-lessc -i main.less -o main.css``
-An image is a root filesystem + some metadata. It is uniquely identified by a SHA256 hash, and it can be given a symbolic name as well (so you can `docker run mystuff` instead of `docker run 819f04e5706f5...`).
-
-The metadata is a JSON blob. It contains at least:
-
-- the hash of the parent (only if the image is based on another image; if it was created from scratch, then the parent is `null`),
-- the creation date (this is when the `docker commit` command was done).
-
-The hash of the image is defined as:
-
-`SHA256(SHA256(jsondata)+SHA256(tarball))`
-
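As a sketch, the ID computation might look like the following. Note this is only an illustration of the formula above: whether `+` concatenates the raw digests or their hex encodings is an assumption here, since the draft doesn't say.

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def image_id(json_blob: bytes, tarball: bytes) -> str:
    # SHA256(SHA256(jsondata) + SHA256(tarball)); "+" is assumed to
    # concatenate the two hex digests before hashing again.
    combined = sha256_hex(json_blob) + sha256_hex(tarball)
    return sha256_hex(combined.encode("ascii"))
```

Any change to either the metadata or the tarball changes the resulting ID.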
-When you run something into an image, it automatically creates a container. The container has a unique ID, and when you `docker commit <container_id> mystuff`, you are creating a new image, and giving it the nickname `mystuff`.
-
-
-Repository
-----------
-
-A repository:
-
-- belongs to a specific user,
-- has a given name chosen by the user,
-- is a set of tagged images.
-
-The typical use case is to group different versions of something under a repository.
-
-Example: you are John Doe, maintainer of a collection of PostgreSQL images based on different releases of Ubuntu. Your docker ID is `jdoe`; you decide that the repository name will be `pgsql`. You pull a bunch of base images for the different Ubuntu releases, then you set up different versions of PostgreSQL in them. You end up with the following set of images:
-
-- a base lucid image,
-- a base precise image,
-- a base quantal image,
-- PostgreSQL 9.1 installed on top of the lucid image,
-- PostgreSQL 9.2 installed on top of the lucid image,
-- PostgreSQL 9.1 installed on top of the precise image,
-- PostgreSQL 9.2 installed on top of the precise image,
-- PostgreSQL 9.1 installed on top of the quantal image,
-- PostgreSQL 9.2 installed on top of the quantal image,
-- PostgreSQL 9.3 installed on top of the quantal image.
-
-The first three won't be in the repository, but the other ones will. You decide that the tags will be lucid9.1, lucid9.2, precise9.1, etc.
-
-Note: those images do not have to share a common ancestor. In this case, we have three "root" images (one for each base Ubuntu release).
-
-When someone wants to use one of your images, they will run something like:
-
- docker run -p 5432 jdoe/pgsql@lucid9.2 postgres -D /var/lib/...
-
-Docker will do the following:
-
-- notice that the image name contains a slash, and is therefore a reference to a repository;
-- notice that the image name contains an arroba, and is therefore a reference to a specific version;
-- query the docker registry to resolve jdoe/pgsql@lucid9.2 into an image ID;
-- download the image metadata+tarball from the registry (unless it already has them locally);
-- recursively download all the parent images of the image (unless it already has them locally);
-- run the image.
-
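The first three steps above amount to parsing the image reference. A minimal sketch (the fall-back defaults for a missing user or missing version are assumptions for illustration):

```python
def parse_image_name(name: str):
    """Split an image reference into (user, repo, version).

    A slash marks a repository reference; an arroba pins a version.
    Assumed defaults: no user -> None (plain image), no version -> "latest".
    """
    version = "latest"
    if "@" in name:
        name, version = name.rsplit("@", 1)
    user = None
    if "/" in name:
        user, name = name.split("/", 1)
    return user, name, version
```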
-There is one special version: `latest`. When you don't request a specific version, you are implying that you want the `latest` version. When you push a version (any version!) to the repository, you are also pushing to `latest` as well.
-
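In other words, tagging behaves roughly like this (a sketch; `repo_tags` is a hypothetical in-memory stand-in for the repository's tag table):

```python
def push_tag(repo_tags, tag, image_id):
    # Pushing any version also updates `latest`; whether `latest`
    # should skip older commits is still an open question.
    repo_tags[tag] = image_id
    repo_tags["latest"] = image_id
    return repo_tags
```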
-QUESTION: do we want to update `latest` even if the commit date of the image is older than the current `latest` image?
-
-QUESTION: who should update `latest`? Should it be done by the docker client, or automatically done server-side?
-
-
-
-Confused?
----------
-
-Another way to put it: a "repository" is like the "download binaries" page for a given product of a software vendor. Once version 1.42.5 is there, it probably won't be modified (they will rather release 1.42.6), unless there was something really harmful or embarrassing in 1.42.5.
-The S3 storage is authoritative, i.e. the registry will very probably keep some cache of the metadata, but it will be just a cache.
-
-
-Storage of repositories
------------------------
-
-TBD
-
-
-Pull images
------------
-
-Pulling an image is fairly straightforward:
-
- GET /v1/images/<id>/json
- GET /v1/images/<id>/layer
- GET /v1/images/<id>/history
-
-The first two calls redirect you to their S3 counterparts. But before redirecting you, the registry checks (probably with `HEAD` requests) that both `json` and `layer` objects actually exist on S3. I.e., if there was a partial upload, when you try to `GET` the `json` or the `layer` object, the registry will give you a 404 for both objects, even if one of them does exist.
-
-The last one sends you a JSON payload, which is a list containing all the metadata of the image and all its ancestors. The requested image comes first, then its parent, then the parent of the parent, etc.
-
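Client-side, the ancestry list makes it easy to decide what to download. A sketch (stopping at the first locally-present image assumes that having an image implies having all of its ancestors):

```python
def images_to_pull(ancestry, local_ids):
    """ancestry: JSON payload of GET /v1/images/<id>/history,
    requested image first, then its parent, and so on.
    local_ids: set of image IDs already present locally."""
    missing = []
    for meta in ancestry:
        if meta["id"] in local_ids:
            break  # we have this one, so we assume we have its ancestors too
        missing.append(meta["id"])
    return missing
```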
-SUGGESTION: rename `history` to `ancestry` (it sounds more hipstery, but it's actually more accurate)
-
-SUGGESTION: add optional parameter `?length=X` to `history`, so you can limit to `X` ancestors, and avoid pulling 42000 ancestors in one go - especially if you have most of them already...
-
-
-Push images
------------
-
-The first thing is to push the meta data:
-
- PUT /v1/images/<id>/json
-
-Four things can happen:
-
-- invalid/empty JSON: the server tells you to go away (HTTP 400?)
-- image already exists with the same JSON: the server tells you that it's fine (HTTP 204?)
-- image already exists but is different: the server informs you that something's wrong (?)
-- image doesn't exist: the server puts the JSON on S3, then generates an upload URL for the tarball, and sends you an HTTP 200 containing this upload URL
-
-In the latter case, you want to move to the next step:
-
- PUT the tarball to whatever-URL-you-got-on-previous-stage
-
-SUGGESTION: consider a `PUT /v1/images/<id>/layer` with `Expect: 100-continue` and honor a 301/302 redirect. This might or might not be legal HTTP.
-
-The last thing is to try to push the parent image (unless you're sure that it is already in the registry). If the image is already there, stop. If it's not there, upload it, and recursively upload its parents in a similar fashion.
-
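The server-side decision table for the metadata upload can be sketched as follows. The 400 and 204 codes come from the draft above; the 409 for the conflicting case is an assumption, since the draft leaves that status code open.

```python
def handle_put_json(existing_json, new_json):
    # Decision table for PUT /v1/images/<id>/json
    if not new_json or not isinstance(new_json, dict):
        return 400  # invalid/empty JSON: the server tells you to go away
    if existing_json is not None:
        if existing_json == new_json:
            return 204  # identical image already stored: fine
        return 409  # same ID, different JSON: something's wrong (assumed code)
    return 200  # store JSON on S3, reply with an upload URL for the tarball
```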
-
-Pull repository
----------------
-
-This:
-
- GET /v1/users/<userid>/<reponame>
-
-Sends back a JSON dict mapping version tags to image versions.
-The request body should be the image version hash.
-
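For illustration, the returned mapping and a client-side lookup might look like this (tag names reuse the jdoe/pgsql example; the image IDs are placeholders, not real hashes):

```python
# Hypothetical body of GET /v1/users/jdoe/pgsql (placeholder image IDs)
repo_tags = {
    "lucid9.1": "image-id-1",
    "lucid9.2": "image-id-2",
    "latest": "image-id-2",
}


def resolve(tags, version="latest"):
    # Map a version tag to an image ID; unknown tags raise KeyError.
    return tags[version]
```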
-
-Example session
----------------
-
-First idea:
-
- # Automatically pull base, aka docker/base@latest, and run something in it
- docker run base ...
- (Output: 42424242)
- docker commit 42424242 databeze
- docker login jdoe s3s4me!
- # The following two commands are equivalent
- docker push jdoe/pgsql databeze
- docker push jdoe/pgsql 42424242
-
-Second idea:
-
- docker run base ...
- docker commit 42424242 jdoe/pgsql
- docker login jdoe s3s4me!
- docker push jdoe/pgsql
-
-Maybe this would work too:
-
- docker commit 42424242 pgsql
- docker push pgsql
-
-And maybe this too:
-
- docker push -a
-
-NOTE: when your commit overwrites an existing tag, the image should be marked "dirty" so that docker knows that it has to be pushed.
-
-NOTE: if a pull would cause some local tag to be overwritten, docker could refuse, and ask you to rename your local tag, or ask you to specify a -f flag to overwrite. Your local changes won't be lost, but the tag will be lost, so if you don't know the image ID it could be hard to figure out which one it was.
-
-NOTE: we probably need some commands to move or remove image tags.
-
-Collaborative workflow:
-
- alice# docker login mybigco p455w0rd
- bob# docker login mybigco p455w0rd
- alice# docker pull base
- alice# docker run -a -t -i base /bin/sh
- ... hard core authoring takes place ...
- alice# docker commit <container_id> wwwbigco
- alice# docker push wwwbigco
- ... the latter actually does docker push mybigco/wwwbigco@latest ...
- bob# docker pull mybigco/wwwbigco
- bob# docker run mybigco/wwwbigco /usr/sbin/nginx
+Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
+a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container.
+
+The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
+
+A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
+
+Standard operations
+-----------------------
+
+Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.
+
+
+Content-agnostic
+---------------------
+
+Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.
+
+
+Infrastructure-agnostic
+--------------------------
+
+Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.
+
+
+Designed for automation
+--------------------------
+
+Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.
+
+Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.
+
+Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.
+
+
+Industrial-grade delivery
+----------------------------
+
+There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
+
+With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
+:description: A simple hello world example with Docker
+:keywords: docker, example, hello world
+
+.. _hello_world:
+
+Hello World
+===========
+This is the most basic example available for using Docker.
+
+This example assumes you have Docker installed. It will download the busybox image and then use that image to run a simple echo command that echoes "hello world" back to the console over standard out.
+
+.. code-block:: bash
+
+ $ docker run busybox /bin/echo hello world
+
+**Explanation:**
+
+- **"docker run"** runs a command in a new container.
+- **"busybox"** is the image we want to run the command inside of.
+- **"/bin/echo"** is the command we want to run in the container.
+- **"hello world"** is the input for the echo command.
+:description: A simple hello world daemon example with Docker
+:keywords: docker, example, hello world, daemon
+
+.. _hello_world_daemon:
+
+Hello World Daemon
+==================
+The most boring daemon ever written.
+
+This example assumes you have Docker installed and the busybox image already imported. We will use the busybox image to run a simple hello world daemon that prints hello world to standard out every second. It will continue to do this until we stop it.
+
+**Steps:**
+
+.. code-block:: bash
+
+ $ CONTAINER_ID=$(docker run -d busybox /bin/sh -c "while true; do echo hello world; sleep 1; done")
+
+We are going to run a simple hello world daemon in a new container made from the busybox image.
+
+- **"docker run -d"** runs a command in a new container. We pass "-d" so it runs as a daemon.
+- **"busybox"** is the image we want to run the command inside of.
+- **"/bin/sh -c"** is the command we want to run in the container.
+- **"while true; do echo hello world; sleep 1; done"** is the mini script we want to run, which will print hello world once a second until we stop it.
+- **$CONTAINER_ID** is the output of the run command: a container ID we can use in future commands to see what is going on with this process.
+
+.. code-block:: bash
+
+ $ docker logs $CONTAINER_ID
+
+Check the logs to make sure it is working correctly.
+
+- **"docker logs"** returns the logs for a container.
+- **$CONTAINER_ID** is the ID of the container we want the logs for.
+
+.. code-block:: bash
+
+ $ docker attach $CONTAINER_ID
+
+Attach to the container to see the results in real time.
+
+- **"docker attach"** allows us to attach to a background process to see what is going on.
+- **$CONTAINER_ID** is the ID of the container we want to attach to.
+
+.. code-block:: bash
+
+ $ docker ps
+
+Check the process list to make sure it is running.
+
+- **"docker ps"** shows all running processes managed by Docker.
+
+.. code-block:: bash
+
+ $ docker stop $CONTAINER_ID
+
+Stop the container, since we don't need it anymore.
+
+- **"docker stop"** stops a container.
+- **$CONTAINER_ID** is the ID of the container we want to stop.
+:description: Building your own python web app using docker
+:keywords: docker, example, python, web app
+
+.. _python_web_app:
+
+Building a python web app
+=========================
+The goal of this example is to show you how you can author your own Docker images: starting from a parent image, making changes to it, and then saving the results as a new image. We will do that by making a simple hello flask web application image.
+
+**Steps:**
+
+.. code-block:: bash
+
+ $ docker import shykes/pybuilder
+
+We are importing the "shykes/pybuilder" docker image.
+We set a URL variable that points to a tarball of a simple helloflask web app.
+
+.. code-block:: bash
+
+ $ BUILD_JOB=$(docker run -t shykes/pybuilder:1d9aab3737242c65 /usr/local/bin/buildapp $URL)
+
+Inside of the "shykes/pybuilder" image there is a command called buildapp. We are running that command, passing the $URL variable from step 2 to it, and running the whole thing inside of a new container. BUILD_JOB will be set to the new container ID. "1d9aab3737242c65" came from the output of step 1 when importing the image; it is also available from 'docker images'.
+
+.. code-block:: bash
+
+ $ docker attach $BUILD_JOB
+ [...]
+
+We attach to the new container to see what is going on. Ctrl-C to disconnect.
+Save the changes we just made in the container to a new image called "_/builds/github.com/hykes/helloflask/master" and save the image ID in the BUILD_IMG variable.
+
+.. code-block:: bash
+
+ $ WEB_WORKER=$(docker run -p 5000 $BUILD_IMG /usr/local/bin/runapp)
+
+Use the new image we just created to start a new container with network port 5000 exposed, and store the returned container ID in the WEB_WORKER variable.
+
+.. code-block:: bash
+
+ $ docker logs $WEB_WORKER
+ * Running on http://0.0.0.0:5000/
+
+View the logs for the new container using the WEB_WORKER variable; if everything worked as planned you should see the line "Running on http://0.0.0.0:5000/" in the log output.
+Docker is 100% free and open source, so you can use it without paying.
+
+2. What open source license are you using?
+
+We are using the Apache License Version 2.0, see it here: https://github.com/dotcloud/docker/blob/master/LICENSE
+
+3. Does Docker run on Mac OS X or Windows?
+
+Not at this time; Docker currently only runs on Linux. But you can use VirtualBox to run Docker in a virtual machine on your box, and get the best of both worlds. Check out the getting started guides for help on setting up your machine.
+
+4. How do containers compare to virtual machines?
+
+Containers are more lightweight and can start in less than a second, and are great for lots of different tasks, but they aren't as full-featured as virtual machines.
+
+5. Can I help by adding some questions and answers?
+
+Definitely! You can fork the repo and edit the documentation sources right there.
+
+
+Looking for something else to read? Check out the :ref:`hello_world` example.
+We recommend having at least 2Gb of free disk space and 2Gb of RAM (or more).
+
+Opening a command prompt
+------------------------
+
+First open a cmd prompt. Press the Windows key and then the “R” key. This will open the RUN dialog box. Type “cmd” and press Enter. Or you can click on Start, type “cmd” in the “Search programs and files” field, and click on cmd.exe.
+
+.. image:: images/win/_01.gif
+ :alt: Git install
+ :align: center
+
+This should open a cmd prompt window.
+
+.. image:: images/win/_02.gif
+ :alt: run docker
+ :align: center
+
+Alternatively, you can also use a Cygwin terminal, or Git Bash (or any other command line program you are usually using). The next steps would be the same.
+
+Launch an Ubuntu virtual server
+-------------------------------
+
+Let’s download and run an Ubuntu image with docker binaries already installed.
+Congratulations! You are running an Ubuntu server with docker installed on it. You do not see it though, because it is running in the background.
+
+Log onto your Ubuntu server
+---------------------------
+
+Let’s log into your Ubuntu server now. To do so you have two choices:
+
+- Use Vagrant on Windows command prompt OR
+- Use SSH
+
+Using Vagrant on Windows Command Prompt
+```````````````````````````````````````
+
+Run the following command
+
+.. code-block:: bash
+
+ vagrant ssh
+
+You may see an error message starting with “`ssh` executable not found”. This means that you do not have SSH in your PATH; you can add it with the “set” command. For instance, if your ssh.exe is in the folder named “C:\Program Files (x86)\Git\bin”, you can run the following command:
+
+.. code-block:: bash
+
+ set PATH=%PATH%;C:\Program Files (x86)\Git\bin
+
+.. image:: images/win/run_03.gif
+ :alt: run docker
+ :align: center
+
+Using SSH
+`````````
+
+First step is to get the IP and port of your Ubuntu server. Simply run:
+
+.. code-block:: bash
+
+ vagrant ssh-config
+
+You should see output with HostName and Port information. In this example, HostName is 127.0.0.1, the port is 2222, and the User is “vagrant”. The password is not shown, but it is also “vagrant”.
+
+.. image:: images/win/ssh-config.gif
+ :alt: run docker
+ :align: center
+
+You can now use this information for connecting via SSH to your server. To do so you can:
+
+- Use putty.exe OR
+- Use SSH from a terminal
+
+Use putty.exe
+'''''''''''''
+
+You can download putty.exe from this page http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
+Launch putty.exe and simply enter the information you got from last step.
+
+.. image:: images/win/putty.gif
+ :alt: run docker
+ :align: center
+
+Open, and enter user = vagrant and password = vagrant.
+
+.. image:: images/win/putty_2.gif
+ :alt: run docker
+ :align: center
+
+SSH from a terminal
+'''''''''''''''''''
+
+You can also run this command on your favorite terminal (windows prompt, cygwin, git-bash, …). Make sure to adapt the IP and port from what you got from the vagrant ssh-config command.
+
+.. code-block:: bash
+
+ ssh vagrant@127.0.0.1 -p 2222
+
+Enter user = vagrant and password = vagrant.
+
+.. image:: images/win/cygwin.gif
+ :alt: run docker
+ :align: center
+
+Congratulations, you are now logged onto your Ubuntu server, running on top of your Windows machine!
+
+Running Docker
+--------------
+
+First you have to be root in order to run docker. Simply run the following command:
+
+.. code-block:: bash
+
+ sudo su
+
+You are now ready for Docker’s “hello world” example. Run
+
+.. code-block:: bash
+
+ docker run -a busybox echo hello world
+
+.. image:: images/win/run_04.gif
+ :alt: run docker
+ :align: center
+
+All done!
+
+Now you can continue with the :ref:`hello_world` example.
+ <pre>sudo ./docker run -i -t base /bin/bash</pre>
+ </div>
+ <p>Done!</p>
+ <p>Consider adding docker to your <code>PATH</code> for simplicity.</p>
+ </li>
+
+ Continue with the <a href="documentation/examples/hello_world.html#hello-world">Hello world</a> example.
+ </ol>
+ </section>
+
+ <section class="contentblock">
+ <h2>Contributing to Docker</h2>
+
+ <p>Want to hack on Docker? Awesome! We have some <a href="/documentation/contributing/contributing.html">instructions to get you started</a>. They are probably not perfect, please let us know if anything feels wrong or incomplete.</p>
+ </section>
+
+ </div>
+ <div class="span6">
+ <section class="contentblock">
+ <h2>Quick install on other operating systems</h2>
+ <p><strong>For other operating systems we recommend and provide a streamlined install with VirtualBox,
+ Vagrant and an Ubuntu virtual machine.</strong></p>
+
+ <ul>
+ <li><a href="documentation/installation/macos.html">Mac OS X and other Linux systems</a></li>
+ <h2 style="font-size: 16px; line-height: 1.5em;">Docker encapsulates heterogeneous payloads in Standard Containers, and runs them on any server with strong guarantees of isolation and repeatability.</h2>
+ <a class="btn btn-custom btn-large" href="gettingstarted.html">Let's get started</a>
+ </div>
+
+
+
+ <br style="clear: both"/>
+ </section>
+ </div>
+ </div>
+</div>
+
+<div class="container">
+ <div class="row" style="margin-top: 20px;">
+
+ <div class="span3">
+ <section class="contentblock">
+ <h4>Heterogeneous payloads</h4>
+ <p>Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.</p>
+ </section>
+ </div>
+ <div class="span3">
+ <section class="contentblock">
+ <h4>Any server</h4>
+ <p>Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.</p>
+ </section>
+ </div>
+ <div class="span3">
+ <section class="contentblock">
+ <h4>Isolation</h4>
+ <p>docker isolates processes from each other and from the underlying host, using lightweight containers.</p>
+ </section>
+ </div>
+ <div class="span3">
+ <section class="contentblock">
+ <h4>Repeatability</h4>
+ <p>Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.</p>
+ <em>John Feminella @superninjarobot:</em> So, @getdocker is pure excellence. If you've ever wished for arbitrary, PaaS-agnostic, lxc/aufs Linux containers, this is your jam!
+ </section>
+ </div>
+ </div>
+</div>
+
+<!-- <p>Docker encapsulates heterogeneous payloads in <a href="#container">Standard Containers</a>, and runs them on any server with strong guarantees of isolation and repeatability.</p>
+
+ <p>It is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.</p>
+ <li>Filesystem isolation: each process container runs in a completely separate root filesystem.</li>
+ <li>Resource isolation: system resources like cpu and memory can be allocated differently to each process container, using cgroups.</li>
+ <li>Network isolation: each process container runs in its own network namespace, with a virtual interface and IP address of its own.</li>
+ <li>Copy-on-write: root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap.</li>
+ <li>Logging: the standard streams (stdout/stderr/stdin) of each process container are collected and logged for real-time or batch retrieval.</li>
+ <li>Change management: changes to a container's filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.</li>
+ <li>Interactive shell: docker can allocate a pseudo-tty and attach to the standard input of any container, for example to run a throwaway interactive shell.</li>
+ </ul>
+
+ <h2>Under the hood</h2>
+
+ <p>Under the hood, Docker is built on the following components:</p>
+
+ <ul>
+ <li>The <a href="http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c">cgroup</a> and <a href="http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part">namespacing</a> capabilities of the Linux kernel;</li>
+ <li><a href="http://aufs.sourceforge.net/aufs.html">AUFS</a>, a powerful union filesystem with copy-on-write capabilities;</li>
+ <p>Docker encapsulates heterogeneous payloads in <a href="#container">Standard Containers</a>, and runs them on any server with strong guarantees of isolation and repeatability.</p>
+
+ <p>It is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.</p>
+
+ <ul>
+ <li><em>Heterogeneous payloads</em>: any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.</li>
+ <li><em>Any server</em>: docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.</li>
+ <li><em>Isolation</em>: docker isolates processes from each other and from the underlying host, using lightweight containers. </li>
+ <li><em>Repeatability</em>: because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.</li>
+ </ul>
+ </section>
+ </div>
+
+ <div class="span3">
+ <section class="contentblock">
+ <h4>Docker will be open source soon</h4>
+ <br>
+ <div id="wufoo-z7x3p3">
+ Fill out my <a href="http://dotclouddocker.wufoo.com/forms/z7x3p3">online form</a>.
+ <li>Filesystem isolation: each process container runs in a completely separate root filesystem.</li>
+ <li>Resource isolation: system resources like cpu and memory can be allocated differently to each process container, using cgroups.</li>
+ <li>Network isolation: each process container runs in its own network namespace, with a virtual interface and IP address of its own.</li>
+ <li>Copy-on-write: root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap and disk-cheap.</li>
+ <li>Logging: the standard streams (stdout/stderr/stdin) of each process container are collected and logged for real-time or batch retrieval.</li>
+ <li>Change management: changes to a container's filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.</li>
+ <li>Interactive shell: docker can allocate a pseudo-tty and attach to the standard input of any container, for example to run a throwaway interactive shell.</li>
+ </ul>
+
+ <h2>Under the hood</h2>
+
+ <p>Under the hood, Docker is built on the following components:</p>
+
+ <ul>
+ <li>The <a href="http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c">cgroup</a> and <a href="http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part">namespacing</a> capabilities of the Linux kernel;</li>
+ <li><a href="http://aufs.sourceforge.net/aufs.html">AUFS</a>, a powerful union filesystem with copy-on-write capabilities;</li>
+ <li><a href="http://lxc.sourceforge.net/">lxc</a>, a set of convenience scripts to simplify the creation of linux containers.</li>
+ </ul>
+
+ </section>
+ </div>
+ </div>
+
+ <div class="row">
+ <div class="span12">
+ <section id="container" class="contentblock">
+
+
+ <h1>What is a Standard Container?</h1>
+
+
+ <p>Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
+ a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container.</p>
+
+ <p>The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.</p>
+
+ <p>A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.</p>
+
+ <h3>1. STANDARD OPERATIONS</h3>
+
+ <p>Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.</p>
+
+ <h3>2. CONTENT-AGNOSTIC</h3>
+
+ <p>Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.</p>
+
+ <h3>3. INFRASTRUCTURE-AGNOSTIC</h3>
+
+ <p>Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.</p>
+
+ <h3>4. DESIGNED FOR AUTOMATION</h3>
+
+ <p>Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.</p>
+
+ <p>Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.</p>
+
+ <p>Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.</p>
+
+ <h3>5. INDUSTRIAL-GRADE DELIVERY</h3>
+
+ <p>There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in <em>less time</em> than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.</p>
+
+ <p>With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.</p>
+ </section>
+ </div>
+ </div>
+
+ <div class="row">
+ <div class="span6">
+ <section class="contentblock">
+ <h3 id="twitter">Twitter</h3>
+ <a class="twitter-timeline" href="https://twitter.com/getdocker" data-widget-id="312730839718957056">Tweets by @getdocker</a>
+ Docker is a project by <a href="http://www.dotcloud.com">dotCloud</a>. When released, find us on <a href="http://github.com/dotcloud/docker/">Github</a>
+.ui-tabs .ui-tabs-nav li.ui-tabs-selected a, .ui-tabs .ui-tabs-nav li.ui-state-disabled a, .ui-tabs .ui-tabs-nav li.ui-state-processing a { cursor: text; }
+.ui-tabs .ui-tabs-nav li a, .ui-tabs.ui-tabs-collapsible .ui-tabs-nav li.ui-tabs-selected a { cursor: pointer; } /* first selector in group seems obsolete, but required to overcome bug in Opera applying cursor: text overall if defined elsewhere... */