
now, with shiney markdown

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@fosiki.com> (github: SvenDowideit)
Sven Dowideit 11 years ago
parent
commit
ac999a9cb2
76 changed files with 14766 additions and 0 deletions
  1. 8 0
      docs/sources/articles.md
  2. 60 0
      docs/sources/articles/baseimages.md
  3. 438 0
      docs/sources/articles/runmetrics.md
  4. 258 0
      docs/sources/articles/security.md
  5. 7 0
      docs/sources/contributing.md
  6. 24 0
      docs/sources/contributing/contributing.md
  7. 149 0
      docs/sources/contributing/devenvironment.md
  8. 25 0
      docs/sources/examples.md
  9. 113 0
      docs/sources/examples/apt-cacher-ng.md
  10. 152 0
      docs/sources/examples/cfengine_process_management.md
  11. 50 0
      docs/sources/examples/couchdb_data_volumes.md
  12. 166 0
      docs/sources/examples/hello_world.md
  13. 109 0
      docs/sources/examples/https.md
  14. 89 0
      docs/sources/examples/mongodb.md
  15. 204 0
      docs/sources/examples/nodejs_web_app.md
  16. 157 0
      docs/sources/examples/postgresql_service.md
  17. 130 0
      docs/sources/examples/python_web_app.md
  18. 95 0
      docs/sources/examples/running_redis_service.md
  19. 138 0
      docs/sources/examples/running_riak_service.md
  20. 60 0
      docs/sources/examples/running_ssh_service.md
  21. 121 0
      docs/sources/examples/using_supervisord.md
  22. 218 0
      docs/sources/faq.md
  23. 1 0
      docs/sources/genindex.md
  24. 104 0
      docs/sources/http-routingtable.md
  25. 25 0
      docs/sources/installation.md
  26. 106 0
      docs/sources/installation/amazon.md
  27. 69 0
      docs/sources/installation/archlinux.md
  28. 104 0
      docs/sources/installation/binaries.md
  29. 95 0
      docs/sources/installation/cruxlinux.md
  30. 67 0
      docs/sources/installation/fedora.md
  31. 58 0
      docs/sources/installation/frugalware.md
  32. 80 0
      docs/sources/installation/gentoolinux.md
  33. 64 0
      docs/sources/installation/google.md
  34. 180 0
      docs/sources/installation/mac.md
  35. 65 0
      docs/sources/installation/openSUSE.md
  36. 88 0
      docs/sources/installation/rackspace.md
  37. 80 0
      docs/sources/installation/rhel.md
  38. 37 0
      docs/sources/installation/softlayer.md
  39. 330 0
      docs/sources/installation/ubuntulinux.md
  40. 72 0
      docs/sources/installation/windows.md
  41. 9 0
      docs/sources/reference.md
  42. 100 0
      docs/sources/reference/api.md
  43. 355 0
      docs/sources/reference/api/docker_io_accounts_api.md
  44. 256 0
      docs/sources/reference/api/docker_io_oauth_api.md
  45. 348 0
      docs/sources/reference/api/docker_remote_api.md
  46. 1238 0
      docs/sources/reference/api/docker_remote_api_v1.10.md
  47. 1242 0
      docs/sources/reference/api/docker_remote_api_v1.11.md
  48. 1255 0
      docs/sources/reference/api/docker_remote_api_v1.9.md
  49. 525 0
      docs/sources/reference/api/index_api.md
  50. 501 0
      docs/sources/reference/api/registry_api.md
  51. 691 0
      docs/sources/reference/api/registry_index_spec.md
  52. 89 0
      docs/sources/reference/api/remote_api_client_libraries.md
  53. 510 0
      docs/sources/reference/builder.md
  54. 7 0
      docs/sources/reference/commandline.md
  55. 1170 0
      docs/sources/reference/commandline/cli.md
  56. 422 0
      docs/sources/reference/run.md
  57. 10 0
      docs/sources/search.md
  58. 13 0
      docs/sources/terms.md
  59. 46 0
      docs/sources/terms/container.md
  60. 36 0
      docs/sources/terms/filesystem.md
  61. 40 0
      docs/sources/terms/image.md
  62. 35 0
      docs/sources/terms/layer.md
  63. 20 0
      docs/sources/terms/registry.md
  64. 39 0
      docs/sources/terms/repository.md
  65. 17 0
      docs/sources/toctree.md
  66. 13 0
      docs/sources/use.md
  67. 157 0
      docs/sources/use/ambassador_pattern_linking.md
  68. 180 0
      docs/sources/use/basics.md
  69. 75 0
      docs/sources/use/chef.md
  70. 63 0
      docs/sources/use/host_integration.md
  71. 142 0
      docs/sources/use/networking.md
  72. 140 0
      docs/sources/use/port_redirection.md
  73. 92 0
      docs/sources/use/puppet.md
  74. 121 0
      docs/sources/use/working_with_links_names.md
  75. 178 0
      docs/sources/use/working_with_volumes.md
  76. 235 0
      docs/sources/use/workingwithrepository.md

+ 8 - 0
docs/sources/articles.md

@@ -0,0 +1,8 @@
+# Articles
+
+## Contents:
+
+- [Docker Security](security/)
+- [Create a Base Image](baseimages/)
+- [Runtime Metrics](runmetrics/)
+

+ 60 - 0
docs/sources/articles/baseimages.md

@@ -0,0 +1,60 @@
+page_title: Create a Base Image
+page_description: How to create base images
+page_keywords: Examples, Usage, base image, docker, documentation, examples
+
+# Create a Base Image
+
+So you want to create your own [*Base
+Image*](../../terms/image/#base-image-def)? Great!
+
+The specific process will depend heavily on the Linux distribution you
+want to package. We have some examples below, and you are encouraged to
+submit pull requests to contribute new ones.
+
+## Create a full image using tar
+
+In general, you’ll want to start with a working machine that is running
+the distribution you’d like to package as a base image, though that is
+not required for some tools like Debian’s
+[Debootstrap](https://wiki.debian.org/Debootstrap), which you can also
+use to build Ubuntu images.
+
+It can be as simple as this to create an Ubuntu base image:
+
+    $ sudo debootstrap raring raring > /dev/null
+    $ sudo tar -C raring -c . | sudo docker import - raring
+    a29c15f1bf7a
+    $ sudo docker run raring cat /etc/lsb-release
+    DISTRIB_ID=Ubuntu
+    DISTRIB_RELEASE=13.04
+    DISTRIB_CODENAME=raring
+    DISTRIB_DESCRIPTION="Ubuntu 13.04"
+
+There are more example scripts for creating base images in the Docker
+GitHub Repo:
+
+-   [BusyBox](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-busybox.sh)
+-   CentOS / Scientific Linux CERN (SLC) [on
+    Debian/Ubuntu](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-rinse.sh)
+    or [on
+    CentOS/RHEL/SLC/etc.](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-yum.sh)
+-   [Debian /
+    Ubuntu](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-debootstrap.sh)
+
+## Creating a simple base image using `scratch`
+
+There is a special repository in the Docker registry called
+`scratch`, which was created using an empty tar
+file:
+
+    $ tar cv --files-from /dev/null | docker import - scratch
+
+which you can `docker pull`. You can then use that
+image to base your new minimal containers `FROM`:
+
+    FROM scratch
+    ADD true-asm /true
+    CMD ["/true"]
+
+The Dockerfile above is from an extremely minimal image -
+[tianon/true](https://github.com/tianon/dockerfiles/tree/master/true).

+ 438 - 0
docs/sources/articles/runmetrics.md

@@ -0,0 +1,438 @@
+page_title: Runtime Metrics
+page_description: Measure the behavior of running containers
+page_keywords: docker, metrics, CPU, memory, disk, IO, run, runtime
+
+# Runtime Metrics
+
+Linux Containers rely on [control
+groups](https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt)
+which not only track groups of processes, but also expose metrics about
+CPU, memory, and block I/O usage. You can access those metrics and
+obtain network usage metrics as well. This is relevant for "pure" LXC
+containers, as well as for Docker containers.
+
+## Control Groups
+
+Control groups are exposed through a pseudo-filesystem. In recent
+distros, you should find this filesystem under
+`/sys/fs/cgroup`. Under that directory, you will see
+multiple sub-directories, called devices, freezer, blkio, etc.; each
+sub-directory actually corresponds to a different cgroup hierarchy.
+
+On older systems, the control groups might be mounted on
+`/cgroup`, without distinct hierarchies. In that
+case, instead of seeing the sub-directories, you will see a bunch of
+files in that directory, and possibly some directories corresponding to
+existing containers.
+
+To figure out where your control groups are mounted, you can run:
+
+    grep cgroup /proc/mounts
+
+## Enumerating Cgroups
+
+You can look into `/proc/cgroups` to see the
+different control group subsystems known to the system, the hierarchy
+they belong to, and how many groups they contain.
+
+You can also look at `/proc/<pid>/cgroup` to see
+which control groups a process belongs to. The control group will be
+shown as a path relative to the root of the hierarchy mountpoint; e.g.
+`/` means “this process has not been assigned into a
+particular group”, while `/lxc/pumpkin` means that
+the process is likely to be a member of a container named
+`pumpkin`.
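For example, a quick way to see which control groups your current shell belongs to (the exact hierarchies listed will depend on your kernel configuration):

```shell
# List the cgroup membership of the current process. Each line is:
#   hierarchy-ID:subsystem-list:path-within-hierarchy
cat /proc/self/cgroup
```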
+
+## Finding the Cgroup for a Given Container
+
+For each container, one cgroup will be created in each hierarchy. On
+older systems with older versions of the LXC userland tools, the name of
+the cgroup will be the name of the container. With more recent versions
+of the LXC tools, the cgroup will be `lxc/<container_name>`.
+
+For Docker containers using cgroups, the container name will be the full
+ID or long ID of the container. If a container shows up as ae836c95b4c3
+in `docker ps`, its long ID might be something like
+`ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`.
+You can look it up with `docker inspect` or `docker ps -notrunc`.
+
+Putting everything together to look at the memory metrics for a Docker
+container, take a look at
+`/sys/fs/cgroup/memory/lxc/<longid>/`.
+
+## Metrics from Cgroups: Memory, CPU, Block IO
+
+For each subsystem (memory, CPU, and block I/O), you will find one or
+more pseudo-files containing statistics.
+
+### Memory Metrics: `memory.stat`
+
+Memory metrics are found in the "memory" cgroup. Note that the memory
+control group adds a little overhead, because it does very fine-grained
+accounting of the memory usage on your host. Therefore, many distros
+chose to not enable it by default. Generally, to enable it, all you have
+to do is to add some kernel command-line parameters:
+`cgroup_enable=memory swapaccount=1`.
+
+The metrics are in the pseudo-file `memory.stat`.
+Here is what it will look like:
+
+    cache 11492564992
+    rss 1930993664
+    mapped_file 306728960
+    pgpgin 406632648
+    pgpgout 403355412
+    swap 0
+    pgfault 728281223
+    pgmajfault 1724
+    inactive_anon 46608384
+    active_anon 1884520448
+    inactive_file 7003344896
+    active_file 4489052160
+    unevictable 32768
+    hierarchical_memory_limit 9223372036854775807
+    hierarchical_memsw_limit 9223372036854775807
+    total_cache 11492564992
+    total_rss 1930993664
+    total_mapped_file 306728960
+    total_pgpgin 406632648
+    total_pgpgout 403355412
+    total_swap 0
+    total_pgfault 728281223
+    total_pgmajfault 1724
+    total_inactive_anon 46608384
+    total_active_anon 1884520448
+    total_inactive_file 7003344896
+    total_active_file 4489052160
+    total_unevictable 32768
+
+The first half (without the `total_` prefix)
+contains statistics relevant to the processes within the cgroup,
+excluding sub-cgroups. The second half (with the `total_`
+prefix) includes sub-cgroups as well.
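As a quick sketch, here is one way to pull a couple of those counters out of the pseudo-file and convert them to megabytes. The `$CONTAINER_ID` variable and the `lxc/` path segment are assumptions matching the layout described earlier; adjust them for your system:

```shell
# Sketch: extract the rss and total_rss counters (in bytes) from a
# memory.stat pseudo-file and print them in megabytes.
STAT=/sys/fs/cgroup/memory/lxc/$CONTAINER_ID/memory.stat
awk '$1 == "rss" || $1 == "total_rss" {
    printf "%s %.1f MB\n", $1, $2 / (1024 * 1024)
}' "$STAT"
```

The same pattern works for any of the counters listed above; only the field name in the `awk` condition changes.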
+
+Some metrics are "gauges", i.e. values that can increase or decrease
+(e.g. swap, the amount of swap space used by the members of the cgroup).
+Some others are "counters", i.e. values that can only go up, because
+they represent occurrences of a specific event (e.g. pgfault, which
+indicates the number of page faults which happened since the creation of
+the cgroup; this number can never decrease).
+
+cache
+:   the amount of memory used by the processes of this control group
+    that can be associated precisely with a block on a block device.
+    When you read from and write to files on disk, this amount will
+    increase. This will be the case if you use "conventional" I/O
+    (`open`, `read`,
+    `write` syscalls) as well as mapped files (with
+    `mmap`). It also accounts for the memory used by
+    `tmpfs` mounts, though the reasons are unclear.
+rss
+:   the amount of memory that *doesn’t* correspond to anything on disk:
+    stacks, heaps, and anonymous memory maps.
+mapped\_file
+:   indicates the amount of memory mapped by the processes in the
+    control group. It doesn’t give you information about *how much*
+    memory is used; it rather tells you *how* it is used.
+pgfault and pgmajfault
+:   indicate the number of times that a process of the cgroup triggered
+    a "page fault" and a "major fault", respectively. A page fault
+    happens when a process accesses a part of its virtual memory space
+    which is nonexistent or protected. The former can happen if the
+    process is buggy and tries to access an invalid address (it will
+    then be sent a `SIGSEGV` signal, typically
+    killing it with the famous `Segmentation fault`
+    message). The latter can happen when the process reads from a memory
+    zone which has been swapped out, or which corresponds to a mapped
+    file: in that case, the kernel will load the page from disk, and let
+    the CPU complete the memory access. It can also happen when the
+    process writes to a copy-on-write memory zone: likewise, the kernel
+    will preempt the process, duplicate the memory page, and resume the
+    write operation on the process’ own copy of the page. "Major" faults
+    happen when the kernel actually has to read the data from disk. When
+    it just has to duplicate an existing page, or allocate an empty
+    page, it’s a regular (or "minor") fault.
+swap
+:   the amount of swap currently used by the processes in this cgroup.
+active\_anon and inactive\_anon
+:   the amount of *anonymous* memory that has been identified as
+    respectively *active* and *inactive* by the kernel. "Anonymous"
+    memory is the memory that is *not* linked to disk pages. In other
+    words, that’s the equivalent of the rss counter described above. In
+    fact, the very definition of the rss counter is **active\_anon** +
+    **inactive\_anon** - **tmpfs** (where tmpfs is the amount of memory
+    used up by `tmpfs` filesystems mounted by this
+    control group). Now, what’s the difference between "active" and
+    "inactive"? Pages are initially "active"; and at regular intervals,
+    the kernel sweeps over the memory, and tags some pages as
+    "inactive". Whenever they are accessed again, they are immediately
+    retagged "active". When the kernel is almost out of memory, and time
+    comes to swap out to disk, the kernel will swap "inactive" pages.
+active\_file and inactive\_file
+:   cache memory, with *active* and *inactive* similar to the *anon*
+    memory above. The exact formula is cache = **active\_file** +
+    **inactive\_file** + **tmpfs**. The exact rules used by the kernel
+    to move memory pages between active and inactive sets are different
+    from the ones used for anonymous memory, but the general principle
+    is the same. Note that when the kernel needs to reclaim memory, it
+    is cheaper to reclaim a clean (=non modified) page from this pool,
+    since it can be reclaimed immediately (while anonymous pages and
+    dirty/modified pages have to be written to disk first).
+unevictable
+:   the amount of memory that cannot be reclaimed; generally, it will
+    account for memory that has been "locked" with `mlock`.
+    It is often used by crypto frameworks to make sure that
+    secret keys and other sensitive material never gets swapped out to
+    disk.
+memory and memsw limits
+:   These are not really metrics, but a reminder of the limits applied
+    to this cgroup. The first one indicates the maximum amount of
+    physical memory that can be used by the processes of this control
+    group; the second one indicates the maximum amount of RAM+swap.
+
+Accounting for memory in the page cache is very complex. If two
+processes in different control groups both read the same file
+(ultimately relying on the same blocks on disk), the corresponding
+memory charge will be split between the control groups. It’s nice, but
+it also means that when a cgroup is terminated, it could increase the
+memory usage of another cgroup, because they are not splitting the cost
+anymore for those memory pages.
+
+### CPU metrics: `cpuacct.stat`
+
+Now that we’ve covered memory metrics, everything else will look very
+simple in comparison. CPU metrics will be found in the
+`cpuacct` controller.
+
+For each container, you will find a pseudo-file `cpuacct.stat`,
+containing the CPU usage accumulated by the processes of the
+container, broken down between `user` and
+`system` time. If you’re not familiar with the
+distinction, `user` is the time during which the
+processes were in direct control of the CPU (i.e. executing process
+code), and `system` is the time during which the CPU
+was executing system calls on behalf of those processes.
+
+Those times are expressed in ticks of 1/100th of a second. Actually,
+they are expressed in "user jiffies". There are `USER_HZ`
+*"jiffies"* per second, and on x86 systems,
+`USER_HZ` is 100. This used to map exactly to the
+number of scheduler "ticks" per second; but with the advent of higher
+frequency scheduling, as well as [tickless
+kernels](http://lwn.net/Articles/549580/), the number of kernel ticks
+wasn’t relevant anymore. It stuck around anyway, mainly for legacy and
+compatibility reasons.
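Putting that together, you can convert the tick counts to seconds with `getconf CLK_TCK` (which returns `USER_HZ`). The `cpuacct.stat` path and the `$CONTAINER_ID` variable below are assumptions matching the layout described earlier:

```shell
# Sketch: read cpuacct.stat and convert user/system ticks to seconds.
STAT=/sys/fs/cgroup/cpuacct/lxc/$CONTAINER_ID/cpuacct.stat
HZ=$(getconf CLK_TCK)   # USER_HZ, typically 100 on x86
awk -v hz="$HZ" '{ printf "%s %.2f seconds\n", $1, $2 / hz }' "$STAT"
```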
+
+### Block I/O metrics
+
+Block I/O is accounted in the `blkio` controller.
+Different metrics are scattered across different files. While you can
+find in-depth details in the
+[blkio-controller](https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt)
+file in the kernel documentation, here is a short list of the most
+relevant ones:
+
+blkio.sectors
+:   contains the number of 512-byte sectors read and written by the
+    processes that are members of the cgroup, device by device. Reads and writes
+    are merged in a single counter.
+blkio.io\_service\_bytes
+:   indicates the number of bytes read and written by the cgroup. It has
+    4 counters per device, because for each device, it differentiates
+    between synchronous vs. asynchronous I/O, and reads vs. writes.
+blkio.io\_serviced
+:   the number of I/O operations performed, regardless of their size. It
+    also has 4 counters per device.
+blkio.io\_queued
+:   indicates the number of I/O operations currently queued for this
+    cgroup. In other words, if the cgroup isn’t doing any I/O, this will
+    be zero. Note that the opposite is not true. In other words, if
+    there is no I/O queued, it does not mean that the cgroup is idle
+    (I/O-wise). It could be doing purely synchronous reads on an
+    otherwise quiescent device, which is therefore able to handle them
+    immediately, without queuing. Also, while it is helpful to figure
+    out which cgroup is putting stress on the I/O subsystem, keep in
+    mind that it is a relative quantity. Even if a process group does
+    not perform more I/O, its queue size can increase just because the
+    device load increases because of other devices.
+
+## Network Metrics
+
+Network metrics are not exposed directly by control groups. There is a
+good explanation for that: network interfaces exist within the context
+of *network namespaces*. The kernel could probably accumulate metrics
+about packets and bytes sent and received by a group of processes, but
+those metrics wouldn’t be very useful. You want per-interface metrics
+(because traffic happening on the local `lo`
+interface doesn’t really count). But since processes in a single cgroup
+can belong to multiple network namespaces, those metrics would be harder
+to interpret: multiple network namespaces means multiple `lo`
+interfaces, potentially multiple `eth0`
+interfaces, etc.; so this is why there is no easy way to gather network
+metrics with control groups.
+
+Instead we can gather network metrics from other sources:
+
+### IPtables
+
+IPtables (or rather, the netfilter framework for which iptables is just
+an interface) can do some serious accounting.
+
+For instance, you can set up a rule to account for the outbound HTTP
+traffic on a web server:
+
+    iptables -I OUTPUT -p tcp --sport 80
+
+There is no `-j` or `-g` flag,
+so the rule will just count matched packets and go to the following
+rule.
+
+Later, you can check the values of the counters, with:
+
+    iptables -nxvL OUTPUT
+
+Technically, `-n` is not required, but it will
+prevent iptables from doing DNS reverse lookups, which are probably
+useless in this scenario.
+
+Counters include packets and bytes. If you want to set up metrics for
+container traffic like this, you could execute a `for`
+loop to add two `iptables` rules per
+container IP address (one in each direction), in the `FORWARD`
+chain. This will only meter traffic going through the NAT
+layer; you will also have to add traffic going through the userland
+proxy.
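A sketch of that loop is shown below. The container addresses are placeholders (a real script would obtain them with `docker inspect`), and the rules are echoed rather than applied, since modifying iptables requires root:

```shell
# Sketch: generate one accounting rule per direction for each
# container IP address, in the FORWARD chain. No -j target, so the
# rules only count matching packets.
CONTAINER_IPS="172.17.0.2 172.17.0.3"   # placeholders
for ip in $CONTAINER_IPS; do
    echo iptables -I FORWARD -s "$ip"   # traffic sent by the container
    echo iptables -I FORWARD -d "$ip"   # traffic sent to the container
done
```

Dropping the leading `echo` (and running as root) would install the rules for real.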
+
+Then, you will need to check those counters on a regular basis. If you
+happen to use `collectd`, there is a nice plugin to
+automate iptables counters collection.
+
+### Interface-level counters
+
+Since each container has a virtual Ethernet interface, you might want to
+check the TX and RX counters of this interface directly. You will notice
+that each container is associated to a virtual Ethernet interface in
+your host, with a name like `vethKk8Zqi`. Figuring
+out which interface corresponds to which container is, unfortunately,
+difficult.
+
+But for now, the best way is to check the metrics *from within the
+containers*. To accomplish this, you can run an executable from the host
+environment within the network namespace of a container using **ip-netns
+magic**.
+
+The `ip-netns exec` command will let you execute any
+program (present in the host system) within any network namespace
+visible to the current process. This means that your host will be able
+to enter the network namespace of your containers, but your containers
+won’t be able to access the host, nor their sibling containers.
+Containers will be able to “see” and affect their sub-containers,
+though.
+
+The exact format of the command is:
+
+    ip netns exec <nsname> <command...>
+
+For example:
+
+    ip netns exec mycontainer netstat -i
+
+`ip netns` finds the "mycontainer" container by
+using namespaces pseudo-files. Each process belongs to one network
+namespace, one PID namespace, one `mnt` namespace,
+etc., and those namespaces are materialized under
+`/proc/<pid>/ns/`. For example, the network
+namespace of PID 42 is materialized by the pseudo-file
+`/proc/42/ns/net`.
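You can inspect these pseudo-files for any process you own; for instance, for the current shell:

```shell
# Each entry under /proc/<pid>/ns/ materializes one namespace of that
# process; the inode number in the link target identifies the namespace.
ls -l /proc/self/ns/
readlink /proc/self/ns/net
```

Two processes whose `ns/net` links show the same inode are in the same network namespace.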
+
+When you run `ip netns exec mycontainer ...`, it
+expects `/var/run/netns/mycontainer` to be one of
+those pseudo-files. (Symlinks are accepted.)
+
+In other words, to execute a command within the network namespace of a
+container, we need to:
+
+-   Find out the PID of any process within the container that we want to
+    investigate;
+-   Create a symlink from `/var/run/netns/<somename>`
+ to `/proc/<thepid>/ns/net`
+-   Execute `ip netns exec <somename> ....`
+
+Please review [*Enumerating Cgroups*](#run-findpid) to learn how to find
+the cgroup of a process running in the container of which you want to
+measure network usage. From there, you can examine the pseudo-file named
+`tasks`, which contains the PIDs that are in the
+control group (i.e. in the container). Pick any one of them.
+
+Putting everything together, if the "short ID" of a container is held in
+the environment variable `$CID`, then you can do
+this:
+
+    TASKS=/sys/fs/cgroup/devices/$CID*/tasks
+    PID=$(head -n 1 $TASKS)
+    mkdir -p /var/run/netns
+    ln -sf /proc/$PID/ns/net /var/run/netns/$CID
+    ip netns exec $CID netstat -i
+
+## Tips for high-performance metric collection
+
+Note that running a new process each time you want to update metrics is
+(relatively) expensive. If you want to collect metrics at high
+resolutions, and/or over a large number of containers (think 1000
+containers on a single host), you do not want to fork a new process each
+time.
+
+Here is how to collect metrics from a single process. You will have to
+write your metric collector in C (or any language that lets you do
+low-level system calls). You need to use a special system call,
+`setns()`, which lets the current process enter any
+arbitrary namespace. It requires, however, an open file descriptor to
+the namespace pseudo-file (remember: that’s the pseudo-file in
+`/proc/<pid>/ns/net`).
+
+However, there is a catch: you must not keep this file descriptor open.
+If you do, when the last process of the control group exits, the
+namespace will not be destroyed, and its network resources (like the
+virtual interface of the container) will stay around forever (or until
+you close that file descriptor).
+
+The right approach would be to keep track of the first PID of each
+container, and re-open the namespace pseudo-file each time.
+
+## Collecting metrics when a container exits
+
+Sometimes, you do not care about real time metric collection, but when a
+container exits, you want to know how much CPU, memory, etc. it has
+used.
+
+Docker makes this difficult because it relies on `lxc-start`,
+which carefully cleans up after itself, but it is still
+possible. It is usually easier to collect metrics at regular intervals
+(e.g. every minute, with the collectd LXC plugin) and rely on that
+instead.
+
+But, if you’d still like to gather the stats when a container stops,
+here is how:
+
+For each container, start a collection process, and move it to the
+control groups that you want to monitor by writing its PID to the tasks
+file of the cgroup. The collection process should periodically re-read
+the tasks file to check if it’s the last process of the control group.
+(If you also want to collect network statistics as explained in the
+previous section, you should also move the process to the appropriate
+network namespace.)
+
+When the container exits, `lxc-start` will try to
+delete the control groups. It will fail, since the control group is
+still in use; but that’s fine. Your process should now detect that it is
+the only one remaining in the group. Now is the right time to collect
+all the metrics you need!
+
+Finally, your process should move itself back to the root control group,
+and remove the container control group. To remove a control group, just
+`rmdir` its directory. It’s counter-intuitive to
+`rmdir` a directory as it still contains files; but
+remember that this is a pseudo-filesystem, so usual rules don’t apply.
+After the cleanup is done, the collection process can exit safely.

+ 258 - 0
docs/sources/articles/security.md

@@ -0,0 +1,258 @@
+page_title: Docker Security
+page_description: Review of the Docker Daemon attack surface
+page_keywords: Docker, Docker documentation, security
+
+# Docker Security
+
+> *Adapted from* [Containers & Docker: How Secure are
+> They?](blogsecurity)
+
+There are three major areas to consider when reviewing Docker security:
+
+-   the intrinsic security of containers, as implemented by kernel
+    namespaces and cgroups;
+-   the attack surface of the Docker daemon itself;
+-   the "hardening" security features of the kernel and how they
+    interact with containers.
+
+## Kernel Namespaces
+
+Docker containers are essentially LXC containers, and they come with the
+same security features. When you start a container with
+`docker run`, behind the scenes Docker uses
+`lxc-start` to execute the Docker container. This
+creates a set of namespaces and control groups for the container. Those
+namespaces and control groups are not created by Docker itself, but by
+`lxc-start`. This means that as the LXC userland
+tools evolve (and provide additional namespaces and isolation features),
+Docker will automatically make use of them.
+
+**Namespaces provide the first and most straightforward form of
+isolation**: processes running within a container cannot see, and even
+less affect, processes running in another container, or in the host
+system.
+
+**Each container also gets its own network stack**, meaning that a
+container doesn’t get privileged access to the sockets or interfaces
+of another container. Of course, if the host system is set up
+accordingly, containers can interact with each other through their
+respective network interfaces — just like they can interact with
+external hosts. When you specify public ports for your containers or use
+[*links*](../../use/working_with_links_names/#working-with-links-names)
+then IP traffic is allowed between containers. They can ping each other,
+send/receive UDP packets, and establish TCP connections, but that can be
+restricted if necessary. From a network architecture point of view, all
+containers on a given Docker host are sitting on bridge interfaces. This
+means that they are just like physical machines connected through a
+common Ethernet switch; no more, no less.
+
+How mature is the code providing kernel namespaces and private
+networking? Kernel namespaces were introduced [between kernel version
+2.6.15 and
+2.6.26](http://lxc.sourceforge.net/index.php/about/kernel-namespaces/).
+This means that since July 2008 (date of the 2.6.26 release, now 5 years
+ago), namespace code has been exercised and scrutinized on a large
+number of production systems. And there is more: the design and
+inspiration for the namespaces code are even older. Namespaces are
+actually an effort to reimplement the features of
+[OpenVZ](http://en.wikipedia.org/wiki/OpenVZ) in such a way that they
+could be merged within the mainstream kernel. And OpenVZ was initially
+released in 2005, so both the design and the implementation are pretty
+mature.
+
+## Control Groups
+
+Control Groups are the other key component of Linux Containers. They
+implement resource accounting and limiting. They provide a lot of very
+useful metrics, but they also help to ensure that each container gets
+its fair share of memory, CPU, disk I/O; and, more importantly, that a
+single container cannot bring the system down by exhausting one of those
+resources.
+
+So while they do not play a role in preventing one container from
+accessing or affecting the data and processes of another container, they
+are essential to fend off some denial-of-service attacks. They are
+particularly important on multi-tenant platforms, like public and
+private PaaS, to guarantee a consistent uptime (and performance) even
+when some applications start to misbehave.
+
+Control Groups have been around for a while as well: the code was
+started in 2006, and initially merged in kernel 2.6.24.
+
+## Docker Daemon Attack Surface
+
+Running containers (and applications) with Docker implies running the
+Docker daemon. This daemon currently requires root privileges, and you
+should therefore be aware of some important details.
+
+First of all, **only trusted users should be allowed to control your
+Docker daemon**. This is a direct consequence of some powerful Docker
+features. Specifically, Docker allows you to share a directory between
+the Docker host and a guest container; and it allows you to do so
+without limiting the access rights of the container. This means that you
+can start a container where the `/host` directory
+will be the `/` directory on your host; and the
+container will be able to alter your host filesystem without any
+restriction. This sounds crazy? Well, you have to know that **all
+virtualization systems allowing filesystem resource sharing behave the
+same way**. Nothing prevents you from sharing your root filesystem (or
+even your root block device) with a virtual machine.
+
+This has a strong security implication: if you instrument Docker from
+e.g. a web server to provision containers through an API, you should be
+even more careful than usual with parameter checking, to make sure that
+a malicious user cannot pass crafted parameters causing Docker to create
+arbitrary containers.
+
+For this reason, the REST API endpoint (used by the Docker CLI to
+communicate with the Docker daemon) changed in Docker 0.5.2, and now
+uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the
+latter being prone to cross-site-scripting attacks if you happen to run
+Docker directly on your local machine, outside of a VM). You can then
+use traditional UNIX permission checks to limit access to the control
+socket.
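For illustration, the permission model involved is ordinary UNIX file permissions; the sketch below uses a scratch file in place of the real control socket (whose path is typically `/var/run/docker.sock`, though its ownership and group depend on your installation):

```shell
# Stand-in for the Docker control socket, so this can be run anywhere;
# on a real host you would inspect /var/run/docker.sock itself.
SOCK=$(mktemp)

# Allow read/write for the owner and a trusted group, nothing for others.
chmod 660 "$SOCK"

# Show the resulting octal mode (prints 660).
stat -c '%a' "$SOCK"

rm -f "$SOCK"
```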
+
+You can also expose the REST API over HTTP if you explicitly decide to
+do so. However, if you do, be aware of the above-mentioned security
+implications: you should ensure that the API is reachable only from a
+trusted network or VPN, or protected with e.g. `stunnel`
+and client SSL certificates.
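A minimal `stunnel` configuration for that setup might look like the sketch below; the port numbers, certificate paths and service name are assumptions, not Docker defaults:

```ini
; Terminate SSL on port 4243 and forward to a Docker REST API that
; listens on localhost only. verify = 2 rejects any client that does
; not present a certificate signed by the trusted CA.
cert    = /etc/stunnel/docker-server.pem
CAfile  = /etc/stunnel/trusted-clients.pem
verify  = 2

[docker]
accept  = 4243
connect = 127.0.0.1:4244
```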
+
+Recent improvements in Linux namespaces will soon allow running
+full-featured containers without root privileges, thanks to the new user
+namespace. This is covered in detail
+[here](http://s3hh.wordpress.com/2013/07/19/creating-and-using-containers-without-privilege/).
+Moreover, this will solve the problem caused by sharing filesystems
+between host and guest, since the user namespace allows users within
+containers (including the root user) to be mapped to other users in the
+host system.
+
+The end goal for Docker is therefore to implement two additional
+security improvements:
+
+-   map the root user of a container to a non-root user of the Docker
+    host, to mitigate the effects of a container-to-host privilege
+    escalation;
+-   allow the Docker daemon to run without root privileges, and delegate
+    operations requiring those privileges to well-audited sub-processes,
+    each with its own (very limited) scope: virtual network setup,
+    filesystem management, etc.
+
+Finally, if you run Docker on a server, it is recommended to run
+Docker exclusively on that server, and to move all other services into
+containers controlled by Docker. Of course, it is fine to keep your
+favorite admin tools (probably at least an SSH server), as well as
+existing monitoring/supervision processes (e.g. NRPE, collectd, etc).
+
+## Linux Kernel Capabilities
+
+By default, Docker starts containers with a very restricted set of
+capabilities. What does that mean?
+
+Capabilities turn the binary "root/non-root" dichotomy into a
+fine-grained access control system. Processes (like web servers) that
+just need to bind on a port below 1024 do not have to run as root: they
+can just be granted the `net_bind_service`
+capability instead. And there are many other capabilities, for almost
+all the specific areas where root privileges are usually needed.
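For instance, on a host with the libcap tools installed, a web server binary can be granted just that one capability instead of full root; the binary path below is hypothetical:

```shell
# Grant only the right to bind ports below 1024; for everything else
# the program still runs as an ordinary, unprivileged user.
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/mywebserver

# List the file capabilities now attached to the binary.
getcap /usr/local/bin/mywebserver
```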
+
+This means a lot for container security; let’s see why!
+
+Your average server (bare metal or virtual machine) needs to run a bunch
+of processes as root. Those typically include SSH, cron, syslogd;
+hardware management tools (to e.g. load modules), network configuration
+tools (to handle e.g. DHCP, WPA, or VPNs), and much more. A container is
+very different, because almost all of those tasks are handled by the
+infrastructure around the container:
+
+-   SSH access will typically be managed by a single server running in
+    the Docker host;
+-   `cron`, when necessary, should run as a user
+    process, dedicated and tailored for the app that needs its
+    scheduling service, rather than as a platform-wide facility;
+-   log management will also typically be handled by Docker, or by
+    third-party services like Loggly or Splunk;
+-   hardware management is irrelevant, meaning that you never need to
+    run `udevd` or equivalent daemons within
+    containers;
+-   network management happens outside of the containers, enforcing
+    separation of concerns as much as possible, meaning that a container
+    should never need to perform `ifconfig`,
+    `route`, or `ip` commands (except when a container
+    is specifically engineered to behave like a router or firewall, of
+    course).
+
+This means that in most cases, containers will not need "real" root
+privileges *at all*. And therefore, containers can run with a reduced
+capability set, meaning that "root" within a container has far fewer
+privileges than the real "root". For instance, it is possible to:
+
+-   deny all "mount" operations;
+-   deny access to raw sockets (to prevent packet spoofing);
+-   deny access to some filesystem operations, like creating new device
+    nodes, changing the owner of files, or altering attributes
+    (including the immutable flag);
+-   deny module loading;
+-   and many others.
+
+This means that even if an intruder manages to escalate to root within a
+container, it will be much harder to do serious damage, or to escalate
+to the host.
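You can observe this directly: the kernel reports every process's capability sets in `/proc`, and comparing the values inside and outside a container shows the shrunken set. The sketch below simply reads them for the current shell; the hex values differ from system to system:

```shell
# CapEff is the effective set; CapBnd is the bounding set - the hard
# upper limit on what any process in this tree, even root, can gain.
grep '^Cap' /proc/self/status
```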
+
+This won’t affect regular web apps, but malicious users will find that
+the arsenal at their disposal has shrunk considerably! You can see [the
+list of dropped capabilities in the Docker
+code](https://github.com/dotcloud/docker/blob/v0.5.0/lxc_template.go#L97),
+and a full list of available capabilities in [Linux
+manpages](http://man7.org/linux/man-pages/man7/capabilities.7.html).
+
+Of course, you can always enable extra capabilities if you really need
+them (for instance, if you want to use a FUSE-based filesystem), but by
+default, Docker containers will be locked down to ensure maximum safety.
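If you do need the full set (for the FUSE case above, for example), it can currently be restored at run time; this is a blunt instrument, and the exact flag may differ between Docker versions:

```shell
# Run a fully privileged container: almost all of the restrictions
# described above are lifted, so use this sparingly.
sudo docker run --privileged -t -i ubuntu bash
```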
+
+## Other Kernel Security Features
+
+Capabilities are just one of the many security features provided by
+modern Linux kernels. It is also possible to leverage existing,
+well-known systems like TOMOYO, AppArmor, SELinux, GRSEC, etc. with
+Docker.
+
+While Docker currently only enables capabilities, it doesn’t interfere
+with the other systems. This means that there are many different ways to
+harden a Docker host. Here are a few examples.
+
+-   You can run a kernel with GRSEC and PAX. This will add many safety
+    checks, both at compile-time and run-time; it will also defeat many
+    exploits, thanks to techniques like address randomization. It
+    doesn’t require Docker-specific configuration, since those security
+    features apply system-wide, independently of containers.
+-   If your distribution comes with security model templates for LXC
+    containers, you can use them out of the box. For instance, Ubuntu
+    comes with AppArmor templates for LXC, and those templates provide
+    an extra safety net (even though it overlaps greatly with
+    capabilities).
+-   You can define your own policies using your favorite access control
+    mechanism. Since Docker containers are standard LXC containers,
+    there is nothing “magic” or specific to Docker.
+
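With the LXC execution driver, for example, a host-defined AppArmor profile can be passed straight through to the container; the profile name below is an assumption:

```shell
# Ask LXC to confine this container under a specific AppArmor profile
# that is loaded on the host.
sudo docker run --lxc-conf="lxc.aa_profile=my-custom-profile" -t -i ubuntu bash
```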
+Just like there are many third-party tools to augment Docker containers
+with e.g. special network topologies or shared filesystems, you can
+expect to see tools to harden existing Docker containers without
+affecting Docker’s core.
+
+## Conclusions
+
+Docker containers are, by default, quite secure; especially if you take
+care of running your processes inside the containers as non-privileged
+users (i.e. non-root).
+
+You can add an extra layer of safety by enabling Apparmor, SELinux,
+GRSEC, or your favorite hardening solution.
+
+Last but not least, if you see interesting security features in other
+containerization systems, you will be able to implement them as well
+with Docker, since everything is provided by the kernel anyway.
+
+For more context and especially for comparisons with VMs and other
+container systems, please also see the [original blog
+post](blogsecurity).

+ 7 - 0
docs/sources/contributing.md

@@ -0,0 +1,7 @@
+# Contributing
+
+## Contents:
+
+- [Contributing to Docker](contributing/)
+- [Setting Up a Dev Environment](devenvironment/)
+

+ 24 - 0
docs/sources/contributing/contributing.md

@@ -0,0 +1,24 @@
+page_title: Contribution Guidelines
+page_description: Contribution guidelines: create issues, conventions, pull requests
+page_keywords: contributing, docker, documentation, help, guideline
+
+# Contributing to Docker
+
+Want to hack on Docker? Awesome!
+
+The repository includes [all the instructions you need to get
+started](https://github.com/dotcloud/docker/blob/master/CONTRIBUTING.md).
+
+The [developer environment
+Dockerfile](https://github.com/dotcloud/docker/blob/master/Dockerfile)
+specifies the tools and versions used to test and build Docker.
+
+If you’re making changes to the documentation, see the
+[README.md](https://github.com/dotcloud/docker/blob/master/docs/README.md).
+
+The [documentation environment
+Dockerfile](https://github.com/dotcloud/docker/blob/master/docs/Dockerfile)
+specifies the tools and versions used to build the Documentation.
+
+Further interesting details can be found in the [Packaging
+hints](https://github.com/dotcloud/docker/blob/master/hack/PACKAGERS.md).

+ 149 - 0
docs/sources/contributing/devenvironment.md

@@ -0,0 +1,149 @@
+page_title: Setting Up a Dev Environment
+page_description: Guides on how to contribute to docker
+page_keywords: Docker, documentation, developers, contributing, dev environment
+
+# Setting Up a Dev Environment
+
+To make it easier to contribute to Docker, we provide a standard
+development environment. It is important that the same environment be
+used for all tests, builds and releases. The standard development
+environment defines all build dependencies: system libraries and
+binaries, go environment, go dependencies, etc.
+
+## Install Docker
+
+Docker’s build environment itself is a Docker container, so the first
+step is to install Docker on your system.
+
+You can follow the [install instructions most relevant to your
+system](https://docs.docker.io/en/latest/installation/). Make sure you
+have a working, up-to-date docker installation, then continue to the
+next step.
+
+## Install tools used for this tutorial
+
+Install `git`; honest, it’s very good. You can use
+other ways to get the Docker source, but they’re not anywhere near as
+easy.
+
+Install `make`. This tutorial uses our base Makefile
+to kick off the docker containers in a repeatable and consistent way.
+Again, you can do it in other ways but you need to do more work.
+
+## Check out the Source
+
+    git clone https://github.com/dotcloud/docker.git
+    cd docker
+
+To check out a different revision, just use `git checkout`
+with the name of the branch or the revision number.
+
+## Build the Environment
+
+The following command will build a development environment using the
+Dockerfile in the current directory. Essentially, it will install all
+the build and runtime dependencies necessary to build and test Docker.
+This command will take some time to complete when you first execute it.
+
+    sudo make build
+
+If the build is successful, congratulations! You have produced a clean
+build of docker, neatly encapsulated in a standard build environment.
+
+## Build the Docker Binary
+
+To create the Docker binary, run this command:
+
+    sudo make binary
+
+This will create the Docker binary in
+`./bundles/<version>-dev/binary/`.
+
+### Using your built Docker binary
+
+The binary is available outside the container in the directory
+`./bundles/<version>-dev/binary/`. You can swap your
+host docker executable with this binary for live testing - for example,
+on ubuntu:
+
+    sudo service docker stop ; sudo cp $(which docker) $(which docker)_ ; sudo cp ./bundles/<version>-dev/binary/docker-<version>-dev $(which docker);sudo service docker start
+
+Note
+
+It’s safer to run the tests below before swapping your host’s docker
+binary.
+
+## Run the Tests
+
+To execute the test cases, run this command:
+
+    sudo make test
+
+If the tests are successful, the tail of the output should look
+something like this:
+
+    --- PASS: TestWriteBroadcaster (0.00 seconds)
+    === RUN TestRaceWriteBroadcaster
+    --- PASS: TestRaceWriteBroadcaster (0.00 seconds)
+    === RUN TestTruncIndex
+    --- PASS: TestTruncIndex (0.00 seconds)
+    === RUN TestCompareKernelVersion
+    --- PASS: TestCompareKernelVersion (0.00 seconds)
+    === RUN TestHumanSize
+    --- PASS: TestHumanSize (0.00 seconds)
+    === RUN TestParseHost
+    --- PASS: TestParseHost (0.00 seconds)
+    === RUN TestParseRepositoryTag
+    --- PASS: TestParseRepositoryTag (0.00 seconds)
+    === RUN TestGetResolvConf
+    --- PASS: TestGetResolvConf (0.00 seconds)
+    === RUN TestCheckLocalDns
+    --- PASS: TestCheckLocalDns (0.00 seconds)
+    === RUN TestParseRelease
+    --- PASS: TestParseRelease (0.00 seconds)
+    === RUN TestDependencyGraphCircular
+    --- PASS: TestDependencyGraphCircular (0.00 seconds)
+    === RUN TestDependencyGraph
+    --- PASS: TestDependencyGraph (0.00 seconds)
+    PASS
+    ok      github.com/dotcloud/docker/utils        0.017s
+
+If `$TESTFLAGS` is set in the environment, it is passed as extra
+arguments to `go test`. You can use this to select certain tests to run,
+e.g.:
+
+    TESTFLAGS='-run ^TestBuild$' make test
+
+If the output indicates "FAIL" and you see errors like this:
+
+    server.go:1302 Error: Insertion failed because database is full: database or disk is full
+
+    utils_test.go:179: Error copy: exit status 1 (cp: writing '/tmp/docker-testd5c9-[...]': No space left on device
+
+Then you likely don’t have enough memory available to the test suite. 2GB
+is recommended.
+
+## Use Docker
+
+You can run an interactive session in the newly built container:
+
+    sudo make shell
+
+    # type 'exit' or Ctrl-D to exit
+
+## Build And View The Documentation
+
+If you want to read the documentation from a local website, or are
+making changes to it, you can build the documentation and then serve it
+with:
+
+    sudo make docs
+    # when it’s done, you can point your browser to http://yourdockerhost:8000
+    # type Ctrl-C to exit
+
+**Need More Help?**
+
+If you need more help then hop on to the [\#docker-dev IRC
+channel](irc://chat.freenode.net#docker-dev) or post a message on the
+[Docker developer mailing
+list](https://groups.google.com/d/forum/docker-dev).

+ 25 - 0
docs/sources/examples.md

@@ -0,0 +1,25 @@
+
+# Examples
+
+## Introduction:
+
+Here are some examples of how to use Docker to create running processes,
+starting from a very simple *Hello World* and progressing to more
+substantial services like those which you might find in production.
+
+## Contents:
+
+- [Check your Docker install](hello_world/)
+- [Hello World](hello_world/#hello-world)
+- [Hello World Daemon](hello_world/#hello-world-daemon)
+- [Node.js Web App](nodejs_web_app/)
+- [Redis Service](running_redis_service/)
+- [SSH Daemon Service](running_ssh_service/)
+- [CouchDB Service](couchdb_data_volumes/)
+- [PostgreSQL Service](postgresql_service/)
+- [Building an Image with MongoDB](mongodb/)
+- [Riak Service](running_riak_service/)
+- [Using Supervisor with Docker](using_supervisord/)
+- [Process Management with CFEngine](cfengine_process_management/)
+- [Python Web App](python_web_app/)
+

+ 113 - 0
docs/sources/examples/apt-cacher-ng.md

@@ -0,0 +1,113 @@
+page_title: Running an apt-cacher-ng service
+page_description: Installing and running an apt-cacher-ng service
+page_keywords: docker, example, package installation, networking, debian, ubuntu
+
+# Apt-Cacher-ng Service
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](../hello_world/#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+-   **If you’re using OS X or docker via TCP** then you shouldn’t use
+    sudo
+
+When you have multiple Docker servers, or build unrelated Docker
+containers which can’t make use of the Docker build cache, it can be
+useful to have a caching proxy for your packages. This container makes
+the second download of any package almost instant.
+
+Use the following Dockerfile:
+
+    #
+    # Build: docker build -t apt-cacher .
+    # Run: docker run -d -p 3142:3142 --name apt-cacher-run apt-cacher
+    #
+    # and then you can run containers with:
+    #   docker run -t -i --rm -e http_proxy=http://dockerhost:3142/ debian bash
+    #
+    FROM        ubuntu
+    MAINTAINER  SvenDowideit@docker.com
+
+    VOLUME      ["/var/cache/apt-cacher-ng"]
+    RUN     apt-get update ; apt-get install -yq apt-cacher-ng
+
+    EXPOSE      3142
+    CMD     chmod 777 /var/cache/apt-cacher-ng ; /etc/init.d/apt-cacher-ng start ; tail -f /var/log/apt-cacher-ng/*
+
+Build the image using:
+
+    $ sudo docker build -t eg_apt_cacher_ng .
+
+Then run it, mapping the exposed port to one on the host:
+
+    $ sudo docker run -d -p 3142:3142 --name test_apt_cacher_ng eg_apt_cacher_ng
+
+To see the logfiles that are ‘tailed’ in the default command, you can
+use:
+
+    $ sudo docker logs -f test_apt_cacher_ng
+
+To get your Debian-based containers to use the proxy, you can do one of
+three things:
+
+1.  Add an apt Proxy setting
+    `echo 'Acquire::http { Proxy "http://dockerhost:3142"; };' >> /etc/apt/apt.conf.d/01proxy`
+
+2.  Set an environment variable:
+    `http_proxy=http://dockerhost:3142/`
+3.  Change your `sources.list` entries to start with
+    `http://dockerhost:3142/`
+
+**Option 1** injects the settings safely into your apt configuration in
+a local version of a common base:
+
+    FROM ubuntu
+    RUN  echo 'Acquire::http { Proxy "http://dockerhost:3142"; };' >> /etc/apt/apt.conf.d/01proxy
+    RUN apt-get update ; apt-get install vim git
+
+    # docker build -t my_ubuntu .
+
+**Option 2** is good for testing, but will break other HTTP clients
+which obey `http_proxy`, such as `curl`,
+`wget` and others:
+
+    $ sudo docker run --rm -t -i -e http_proxy=http://dockerhost:3142/ debian bash
+
+**Option 3** is the least portable, but there will be times when you
+might need to do it and you can do it from your `Dockerfile`
+too.
+
+Apt-cacher-ng has some tools that allow you to manage the repository,
+and they can be used by leveraging the `VOLUME`
+instruction, and the image we built to run the service:
+
+    $ sudo docker run --rm -t -i --volumes-from test_apt_cacher_ng eg_apt_cacher_ng bash
+
+    $$ /usr/lib/apt-cacher-ng/distkill.pl
+    Scanning /var/cache/apt-cacher-ng, please wait...
+    Found distributions:
+    bla, taggedcount: 0
+         1. precise-security (36 index files)
+         2. wheezy (25 index files)
+         3. precise-updates (36 index files)
+         4. precise (36 index files)
+         5. wheezy-updates (18 index files)
+
+    Found architectures:
+         6. amd64 (36 index files)
+         7. i386 (24 index files)
+
+    WARNING: The removal action may wipe out whole directories containing
+             index files. Select d to see detailed list.
+
+    (Number nn: tag distribution or architecture nn; 0: exit; d: show details; r: remove tagged; q: quit): q
+
+Finally, clean up after your test by stopping and removing the
+container, and then removing the image.
+
+    $ sudo docker stop test_apt_cacher_ng
+    $ sudo docker rm test_apt_cacher_ng
+    $ sudo docker rmi eg_apt_cacher_ng

+ 152 - 0
docs/sources/examples/cfengine_process_management.md

@@ -0,0 +1,152 @@
+page_title: Process Management with CFEngine
+page_description: Managing containerized processes with CFEngine
+page_keywords: cfengine, process, management, usage, docker, documentation
+
+# Process Management with CFEngine
+
+Create Docker containers with managed processes.
+
+Docker monitors one process in each running container and the container
+lives or dies with that process. By introducing CFEngine inside Docker
+containers, we can alleviate a few of the issues that may arise:
+
+-   It is possible to easily start multiple processes within a
+    container, all of which will be managed automatically, with the
+    normal `docker run` command.
+-   If a managed process dies or crashes, CFEngine will start it again
+    within 1 minute.
+-   The container itself will live as long as the CFEngine scheduling
+    daemon (cf-execd) lives. With CFEngine, we are able to decouple the
+    life of the container from the uptime of the service it provides.
+
+## How it works
+
+CFEngine, together with the cfe-docker integration policies, are
+installed as part of the Dockerfile. This builds CFEngine into our
+Docker image.
+
+The Dockerfile’s `ENTRYPOINT` takes an arbitrary
+amount of commands (with any desired arguments) as parameters. When we
+run the Docker container these parameters get written to CFEngine
+policies and CFEngine takes over to ensure that the desired processes
+are running in the container.
+
+CFEngine scans the process table for the `basename`
+of the commands given to the `ENTRYPOINT` and runs
+the command to start the process if the `basename`
+is not found. For example, if we start the container with
+`docker run "/path/to/my/application parameters"`,
+CFEngine will look for a process named `application`
+and run the command. If an entry for `application`
+is not found in the process table at any point in time, CFEngine will
+execute `/path/to/my/application parameters` to
+start the application once again. The check on the process table happens
+every minute.
+
+Note that it is therefore important that the command to start your
+application leaves a process with the basename of the command. This can
+be made more flexible by making some minor adjustments to the CFEngine
+policies, if desired.
+
+## Usage
+
+This example assumes you have Docker installed and working. We will
+install and manage `apache2` and `sshd`
+in a single container.
+
+There are three steps:
+
+1.  Install CFEngine into the container.
+2.  Copy the CFEngine Docker process management policy into the
+    containerized CFEngine installation.
+3.  Start your application processes as part of the
+    `docker run` command.
+
+### Building the container image
+
+The first two steps can be done as part of a Dockerfile, as follows.
+
+    FROM ubuntu
+    MAINTAINER Eystein Måløy Stenberg <eytein.stenberg@gmail.com>
+
+    RUN apt-get -y install wget lsb-release unzip ca-certificates
+
+    # install latest CFEngine
+    RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -
+    RUN echo "deb http://cfengine.com/pub/apt $(lsb_release -cs) main" > /etc/apt/sources.list.d/cfengine-community.list
+    RUN apt-get update
+    RUN apt-get -y install cfengine-community
+
+    # install cfe-docker process management policy
+    RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/
+    RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/
+    RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/
+    RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip
+
+    # apache2 and openssh are just for testing purposes, install your own apps here
+    RUN apt-get -y install openssh-server apache2
+    RUN mkdir -p /var/run/sshd
+    RUN echo "root:password" | chpasswd  # need a password for ssh
+
+    ENTRYPOINT ["/var/cfengine/bin/docker_processes_run.sh"]
+
+By saving this file as `Dockerfile` to a working
+directory, you can then build your container with the docker build
+command, e.g. `docker build -t managed_image`.
+
+### Testing the container
+
+Start the container with `apache2` and
+`sshd` running and managed, forwarding a port to our
+SSH instance:
+
+    docker run -p 127.0.0.1:222:22 -d managed_image "/usr/sbin/sshd" "/etc/init.d/apache2 start"
+
+We now clearly see one of the benefits of the cfe-docker integration: it
+allows us to start several processes as part of a normal
+`docker run` command.
+
+We can now log in to our new container and see that both
+`apache2` and `sshd` are
+running. We have set the root password to "password" in the Dockerfile
+above and can use that to log in with ssh:
+
+    ssh -p222 root@127.0.0.1
+
+    ps -ef
+    UID        PID  PPID  C STIME TTY          TIME CMD
+    root         1     0  0 07:48 ?        00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start
+    root        18     1  0 07:48 ?        00:00:00 /var/cfengine/bin/cf-execd -F
+    root        20     1  0 07:48 ?        00:00:00 /usr/sbin/sshd
+    root        32     1  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
+    www-data    34    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
+    www-data    35    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
+    www-data    36    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
+    root        93    20  0 07:48 ?        00:00:00 sshd: root@pts/0
+    root       105    93  0 07:48 pts/0    00:00:00 -bash
+    root       112   105  0 07:49 pts/0    00:00:00 ps -ef
+
+If we stop apache2, it will be started again within a minute by
+CFEngine.
+
+    service apache2 status
+     Apache2 is running (pid 32).
+    service apache2 stop
+             * Stopping web server apache2 ... waiting    [ OK ]
+    service apache2 status
+     Apache2 is NOT running.
+    # ... wait up to 1 minute...
+    service apache2 status
+     Apache2 is running (pid 173).
+
+## Adapting to your applications
+
+To make sure your applications get managed in the same manner, there are
+just two things you need to adjust from the above example:
+
+-   In the Dockerfile used above, install your applications instead of
+    `apache2` and `sshd`.
+-   When you start the container with `docker run`,
+    specify the command line arguments to your applications rather than
+    `apache2` and `sshd`.
+

+ 50 - 0
docs/sources/examples/couchdb_data_volumes.md

@@ -0,0 +1,50 @@
+page_title: Sharing data between 2 couchdb databases
+page_description: Sharing data between 2 couchdb databases
+page_keywords: docker, example, package installation, networking, couchdb, data volumes
+
+# CouchDB Service
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](../hello_world/#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+Here’s an example of using data volumes to share the same data between
+two CouchDB containers. This could be used for hot upgrades, testing
+different versions of CouchDB on the same data, etc.
+
+## Create first database
+
+Note that we’re marking `/var/lib/couchdb` as a data
+volume.
+
+    COUCH1=$(sudo docker run -d -p 5984 -v /var/lib/couchdb shykes/couchdb:2013-05-03)
+
+## Add data to the first database
+
+We’re assuming your Docker host is reachable at `localhost`.
+If not, replace `localhost` with the
+public IP of your Docker host.
+
+    HOST=localhost
+    URL="http://$HOST:$(sudo docker port $COUCH1 5984 | grep -Po '\d+$')/_utils/"
+    echo "Navigate to $URL in your browser, and use the couch interface to add data"
+
+## Create second database
+
+This time, we’re requesting shared access to `$COUCH1`’s
+volumes.
+
+    COUCH2=$(sudo docker run -d -p 5984 --volumes-from $COUCH1 shykes/couchdb:2013-05-03)
+
+## Browse data on the second database
+
+    HOST=localhost
+    URL="http://$HOST:$(sudo docker port $COUCH2 5984 | grep -Po '\d+$')/_utils/"
+    echo "Navigate to $URL in your browser. You should see the same data as in the first database"'!'
+
+Congratulations, you are now running two CouchDB containers, completely
+isolated from each other *except* for their data.

+ 166 - 0
docs/sources/examples/hello_world.md

@@ -0,0 +1,166 @@
+page_title: Hello world example
+page_description: A simple hello world example with Docker
+page_keywords: docker, example, hello world
+
+# Check your Docker installation
+
+This guide assumes you have a working installation of Docker. To check
+your Docker install, run the following command:
+
+    # Check that you have a working install
+    $ sudo docker info
+
+If you get `docker: command not found` or something
+like `/var/lib/docker/repositories: permission denied`,
+you may have an incomplete Docker installation or insufficient
+privileges to access Docker on your machine.
+
+Please refer to [*Installation*](../../installation/#installation-list)
+for installation instructions.
+
+## Hello World
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+This is the most basic example available for using Docker.
+
+Download the small base image named `busybox`:
+
+    # Download a busybox image
+    $ sudo docker pull busybox
+
+The `busybox` image is a minimal Linux system. You
+can do the same with any number of other images, such as
+`debian`, `ubuntu` or
+`centos`. The images can be found and retrieved
+using the [Docker index](http://index.docker.io).
+
+    $ sudo docker run busybox /bin/echo hello world
+
+This command will run a simple `echo` command that
+will echo `hello world` back to the console over
+standard out.
+
+**Explanation:**
+
+-   **"sudo"** execute the following commands as user *root*
+-   **"docker run"** run a command in a new container
+-   **"busybox"** is the image we are running the command in.
+-   **"/bin/echo"** is the command we want to run in the container
+-   **"hello world"** is the input for the echo command
+
+**Video:**
+
+See the example in action
+
+<iframe width="640" height="480" frameborder="0" sandbox="allow-same-origin allow-scripts" srcdoc="<body><script type=&quot;text/javascript&quot;src=&quot;https://asciinema.org/a/7658.js&quot;id=&quot;asciicast-7658&quot; async></script></body>"></iframe>
+
+
+## Hello World Daemon
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+And now for the most boring daemon ever written!
+
+We will use the Ubuntu image to run a simple hello world daemon that
+will just print hello world to standard out every second. It will
+continue to do this until we stop it.
+
+**Steps:**
+
+    container_id=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done")
+
+We are going to run a simple hello world daemon in a new container made
+from the `ubuntu` image.
+
+-   **"sudo docker run -d"** run a command in a new container. We pass
+    "-d" so it runs as a daemon.
+-   **"ubuntu"** is the image we want to run the command inside of.
+-   **"/bin/sh -c"** is the command we want to run in the container
+-   **"while true; do echo hello world; sleep 1; done"** is the mini
+    script we want to run, that will just print hello world once a
+    second until we stop it.
+-   **$container_id** the run command returns a container id, which we
+    can use in future commands to see what is going on with this
+    process.
+
+<!-- -->
+
+    sudo docker logs $container_id
+
+Check the logs to make sure it is working correctly.
+
+-   **"docker logs**" This will return the logs for a container
+-   **$container_id** The Id of the container we want the logs for.
+
+<!-- -->
+
+    sudo docker attach --sig-proxy=false $container_id
+
+Attach to the container to see the results in real-time.
+
+-   **"docker attach**" This will allow us to attach to a background
+    process to see what is going on.
+-   **"–sig-proxy=false"** Do not forward signals to the container;
+    allows us to exit the attachment using Control-C without stopping
+    the container.
+-   **$container_id** The Id of the container we want to attach to.
+
+Exit from the container attachment by pressing Control-C.
+
+    sudo docker ps
+
+Check the process list to make sure it is running.
+
+-   **"docker ps"** this shows all running processes managed by docker.
+
+<!-- -->
+
+    sudo docker stop $container_id
+
+Stop the container, since we don’t need it anymore.
+
+-   **"docker stop"** This stops a container
+-   **$container_id** The Id of the container we want to stop.
+
+<!-- -->
+
+    sudo docker ps
+
+Make sure it is really stopped.
+
+**Video:**
+
+See the example in action
+
+<iframe width="640" height="480" frameborder="0" sandbox="allow-same-origin allow-scripts" srcdoc="<body><script type=&quot;text/javascript&quot;src=&quot;https://asciinema.org/a/2562.js&quot;id=&quot;asciicast-2562&quot; async></script></body>"></iframe>
+
+The next example in the series is a [*Node.js Web
+App*](../nodejs_web_app/#nodejs-web-app) example, or you could skip to
+any of the other examples:
+
+-   [*Node.js Web App*](../nodejs_web_app/#nodejs-web-app)
+-   [*Redis Service*](../running_redis_service/#running-redis-service)
+-   [*SSH Daemon Service*](../running_ssh_service/#running-ssh-service)
+-   [*CouchDB
+    Service*](../couchdb_data_volumes/#running-couchdb-service)
+-   [*PostgreSQL Service*](../postgresql_service/#postgresql-service)
+-   [*Building an Image with MongoDB*](../mongodb/#mongodb-image)
+-   [*Python Web App*](../python_web_app/#python-web-app)
+

+ 109 - 0
docs/sources/examples/https.md

@@ -0,0 +1,109 @@
+page_title: Docker HTTPS Setup
+page_description: How to setup docker with https
+page_keywords: docker, example, https, daemon
+
+# Running Docker with https
+
+By default, Docker runs via a non-networked Unix socket. It can also
+optionally communicate using an HTTP socket.
+
+If you need Docker reachable via the network in a safe manner, you can
+enable TLS by specifying the tlsverify flag and pointing Docker’s
+tlscacert flag to a trusted CA certificate.
+
+In daemon mode, it will only allow connections from clients
+authenticated by a certificate signed by that CA. In client mode, it
+will only connect to servers with a certificate signed by that CA.
+
+Warning
+
+Using TLS and managing a CA is an advanced topic. Please make yourself
+familiar with OpenSSL, x509 and TLS before using it in production.
+
+## Create a CA, server and client keys with OpenSSL
+
+First, initialize the CA serial file and generate CA private and public
+keys:
+
+    $ echo 01 > ca.srl
+    $ openssl genrsa -des3 -out ca-key.pem
+    $ openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem
+
+Now that we have a CA, you can create a server key and certificate
+signing request. Make sure that "Common Name (e.g. server FQDN or YOUR
+name)" matches the hostname you will use to connect to Docker or just
+use ‘\*’ for a certificate valid for any hostname:
+
+    $ openssl genrsa -des3 -out server-key.pem
+    $ openssl req -new -key server-key.pem -out server.csr
+
+Next we’re going to sign the key with our CA:
+
+    $ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \
+      -out server-cert.pem
+
+For client authentication, create a client key and certificate signing
+request:
+
+    $ openssl genrsa -des3 -out client-key.pem
+    $ openssl req -new -key client-key.pem -out client.csr
+
+To make the key suitable for client authentication, create an
+extensions config file:
+
+    $ echo extendedKeyUsage = clientAuth > extfile.cnf
+
+Now sign the key:
+
+    $ openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \
+      -out client-cert.pem -extfile extfile.cnf
+
+Finally you need to remove the passphrase from the client and server
+keys:
+
+    $ openssl rsa -in server-key.pem -out server-key.pem
+    $ openssl rsa -in client-key.pem -out client-key.pem
+
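+For testing, the interactive `openssl req` prompts above can be
+scripted with `-subj`. A minimal non-interactive sketch of the CA and
+server steps (the CN values are hypothetical, and the keys are left
+unencrypted for brevity; use passphrases in production):

```shell
# Sketch: the CA + server steps above, made non-interactive via -subj.
# CN values here are hypothetical; keys are left unencrypted for brevity.
echo 01 > ca.srl
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -subj "/CN=test-ca" -out ca.pem
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -subj "/CN=docker.example.com" -out server.csr
openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -out server-cert.pem
# Confirm the server certificate chains to our CA
openssl verify -CAfile ca.pem server-cert.pem
```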
+Now you can make the Docker daemon only accept connections from clients
+providing a certificate trusted by our CA:
+
+    $ sudo docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
+      -H=0.0.0.0:4243
+
+To be able to connect to Docker and validate its certificate, you now
+need to provide your client keys, certificates and trusted CA:
+
+    $ docker --tlsverify --tlscacert=ca.pem --tlscert=client-cert.pem --tlskey=client-key.pem \
+      -H=dns-name-of-docker-host:4243
+
+Warning
+
+As shown in the example above, you don’t have to run the
+`docker` client with `sudo` or be in the
+`docker` group when you use certificate
+authentication. That means anyone with the keys can issue any
+instructions to your Docker daemon, giving them root access to the
+machine hosting the daemon. Guard these keys as you would a root
+password!
+
+## Other modes
+
+If you don’t want to have complete two-way authentication, you can run
+Docker in various other modes by mixing the flags.
+
+### Daemon modes
+
+-   tlsverify, tlscacert, tlscert, tlskey set: Authenticate clients
+-   tls, tlscert, tlskey: Do not authenticate clients
+
+### Client modes
+
+-   tls: Authenticate server based on public/default CA pool
+-   tlsverify, tlscacert: Authenticate server based on given CA
+-   tls, tlscert, tlskey: Authenticate with client certificate, do not
+    authenticate server based on given CA
+-   tlsverify, tlscacert, tlscert, tlskey: Authenticate with client
+    certificate, authenticate server based on given CA
+
+The client will send its client certificate if found, so you just need
+to drop your keys into `~/.docker/` as `ca.pem`, `cert.pem` and
+`key.pem`.

+ 89 - 0
docs/sources/examples/mongodb.md

@@ -0,0 +1,89 @@
+page_title: Building a Docker Image with MongoDB
+page_description: How to build a Docker image with MongoDB pre-installed
+page_keywords: docker, example, package installation, networking, mongodb
+
+# Building an Image with MongoDB
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](../hello_world/#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+The goal of this example is to show how you can build your own Docker
+images with MongoDB pre-installed. We will do that by constructing a
+`Dockerfile` that downloads a base image, adds an
+apt source and installs the database software on Ubuntu.
+
+## Creating a `Dockerfile`
+
+Create an empty file called `Dockerfile`:
+
+    touch Dockerfile
+
+Next, define the parent image you want to use to build your own image on
+top of. Here, we’ll use [Ubuntu](https://index.docker.io/_/ubuntu/)
+(tag: `latest`) available on the [docker
+index](http://index.docker.io):
+
+    FROM    ubuntu:latest
+
+Since we want to be running the latest version of MongoDB we’ll need to
+add the 10gen repo to our apt sources list.
+
+    # Add 10gen official apt source to the sources list
+    RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
+    RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
+
+Then, we don’t want Ubuntu to complain about init not being available so
+we’ll divert `/sbin/initctl` to
+`/bin/true` so it thinks everything is working.
+
+    # Hack for initctl not being available in Ubuntu
+    RUN dpkg-divert --local --rename --add /sbin/initctl
+    RUN ln -s /bin/true /sbin/initctl
+
+Afterwards we’ll be able to update our apt repositories and install
+MongoDB
+
+    # Install MongoDB
+    RUN apt-get update
+    RUN apt-get install -y mongodb-10gen
+
+To run MongoDB we’ll have to create the default data directory (because
+we want it to run without needing to provide a special configuration
+file)
+
+    # Create the MongoDB data directory
+    RUN mkdir -p /data/db
+
+Finally, we’ll expose the standard port that MongoDB runs on, 27017, as
+well as define an `ENTRYPOINT` instruction for the
+container.
+
+    EXPOSE 27017
+    ENTRYPOINT ["/usr/bin/mongod"]
+
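+Putting those pieces together, the complete `Dockerfile` reads as
+follows (with `-y` added so `apt-get` does not prompt during the
+build, and the `mongod` path given as absolute):

```
FROM    ubuntu:latest

# Add 10gen official apt source to the sources list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list

# Hack for initctl not being available in Ubuntu
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -s /bin/true /sbin/initctl

# Install MongoDB
RUN apt-get update
RUN apt-get install -y mongodb-10gen

# Create the MongoDB data directory
RUN mkdir -p /data/db

EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
```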
+Now, let's build the image, which will go through the
+`Dockerfile` we made and run all of the commands.
+
+    sudo docker build -t <yourname>/mongodb .
+
+Now you should be able to run `mongod` as a daemon
+and be able to connect on the local port!
+
+    # Regular style
+    MONGO_ID=$(sudo docker run -d <yourname>/mongodb)
+
+    # Lean and mean
+    MONGO_ID=$(sudo docker run -d <yourname>/mongodb --noprealloc --smallfiles)
+
+    # Check the logs out
+    sudo docker logs $MONGO_ID
+
+    # Connect and play around
+    mongo --port <port you get from `docker ps`>
+
+Sweet!

+ 204 - 0
docs/sources/examples/nodejs_web_app.md

@@ -0,0 +1,204 @@
+page_title: Running a Node.js app on CentOS
+page_description: Installing and running a Node.js app on CentOS
+page_keywords: docker, example, package installation, node, centos
+
+# Node.js Web App
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](../hello_world/#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+The goal of this example is to show you how you can build your own
+Docker images from a parent image using a `Dockerfile`. We will do
+that by making a simple Node.js hello world web application running on
+CentOS. You can get the full source code at
+[https://github.com/gasi/docker-node-hello](https://github.com/gasi/docker-node-hello).
+
+## Create Node.js app
+
+First, create a directory `src` where all the files
+will live. Then create a `package.json` file that
+describes your app and its dependencies:
+
+    {
+      "name": "docker-centos-hello",
+      "private": true,
+      "version": "0.0.1",
+      "description": "Node.js Hello World app on CentOS using docker",
+      "author": "Daniel Gasienica <daniel@gasienica.ch>",
+      "dependencies": {
+        "express": "3.2.4"
+      }
+    }
+
+Then, create an `index.js` file that defines a web
+app using the [Express.js](http://expressjs.com/) framework:
+
+    var express = require('express');
+
+    // Constants
+    var PORT = 8080;
+
+    // App
+    var app = express();
+    app.get('/', function (req, res) {
+      res.send('Hello World\n');
+    });
+
+    app.listen(PORT);
+    console.log('Running on http://localhost:' + PORT);
+
+In the next steps, we’ll look at how you can run this app inside a
+CentOS container using Docker. First, you’ll need to build a Docker
+image of your app.
+
+## Creating a `Dockerfile`
+
+Create an empty file called `Dockerfile`:
+
+    touch Dockerfile
+
+Open the `Dockerfile` in your favorite text editor
+and add the following line that defines the version of Docker the image
+requires to build (this example uses Docker 0.3.4):
+
+    # DOCKER-VERSION 0.3.4
+
+Next, define the parent image you want to use to build your own image on
+top of. Here, we’ll use [CentOS](https://index.docker.io/_/centos/)
+(tag: `6.4`) available on the [Docker
+index](https://index.docker.io/):
+
+    FROM    centos:6.4
+
+Since we’re building a Node.js app, you’ll have to install Node.js as
+well as npm on your CentOS image. Node.js is required to run your app
+and npm to install your app’s dependencies defined in
+`package.json`. To install the right package for
+CentOS, we’ll use the instructions from the [Node.js
+wiki](https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager#rhelcentosscientific-linux-6):
+
+    # Enable EPEL for Node.js
+    RUN     rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
+    # Install Node.js and npm
+    RUN     yum install -y npm
+
+To bundle your app’s source code inside the Docker image, use the
+`ADD` instruction:
+
+    # Bundle app source
+    ADD . /src
+
+Install your app dependencies using the `npm`
+binary:
+
+    # Install app dependencies
+    RUN cd /src; npm install
+
+Your app binds to port `8080` so you’ll use the
+`EXPOSE` instruction to have it mapped by the
+`docker` daemon:
+
+    EXPOSE  8080
+
+Last but not least, define the command to run your app using
+`CMD` which defines your runtime, i.e.
+`node`, and the path to our app, i.e.
+`src/index.js` (see the step where we added the
+source to the container):
+
+    CMD ["node", "/src/index.js"]
+
+Your `Dockerfile` should now look like this:
+
+    # DOCKER-VERSION 0.3.4
+    FROM    centos:6.4
+
+    # Enable EPEL for Node.js
+    RUN     rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
+    # Install Node.js and npm
+    RUN     yum install -y npm
+
+    # Bundle app source
+    ADD . /src
+    # Install app dependencies
+    RUN cd /src; npm install
+
+    EXPOSE  8080
+    CMD ["node", "/src/index.js"]
+
+## Building your image
+
+Go to the directory that has your `Dockerfile` and
+run the following command to build a Docker image. The `-t`
+flag lets you tag your image so it’s easier to find later
+using the `docker images` command:
+
+    sudo docker build -t <your username>/centos-node-hello .
+
+Your image will now be listed by Docker:
+
+    sudo docker images
+
+    > # Example
+    > REPOSITORY                 TAG       ID              CREATED
+    > centos                     6.4       539c0211cd76    8 weeks ago
+    > gasi/centos-node-hello     latest    d64d3505b0d2    2 hours ago
+
+## Run the image
+
+Running your image with `-d` runs the container in
+detached mode, leaving the container running in the background. The
+`-p` flag redirects a public port to a private port
+in the container. Run the image you previously built:
+
+    sudo docker run -p 49160:8080 -d <your username>/centos-node-hello
+
+Print the output of your app:
+
+    # Get container ID
+    sudo docker ps
+
+    # Print app output
+    sudo docker logs <container id>
+
+    > # Example
+    > Running on http://localhost:8080
+
+## Test
+
+To test your app, get the port of your app that Docker mapped:
+
+    sudo docker ps
+
+    > # Example
+    > ID            IMAGE                          COMMAND              ...   PORTS
+    > ecce33b30ebf  gasi/centos-node-hello:latest  node /src/index.js         49160->8080
+
+In the example above, Docker mapped the `8080` port
+of the container to `49160`.
+
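+As a sketch, the host port can be pulled out of a `PORTS` mapping like
+the one above with plain shell parameter expansion (the mapping string
+below is a stand-in for the real `docker ps` output):

```shell
# Extract the host port from a "host->container" mapping string.
# "49160->8080" is a stand-in for the PORTS column shown above.
mapping="49160->8080"
HOST_PORT=${mapping%%-*}   # strip everything from "->" onwards
echo "$HOST_PORT"
```

+You could then use `$HOST_PORT` in the `curl` call that follows.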
+Now you can call your app using `curl` (install if
+needed via: `sudo apt-get install curl`):
+
+    curl -i localhost:49160
+
+    > HTTP/1.1 200 OK
+    > X-Powered-By: Express
+    > Content-Type: text/html; charset=utf-8
+    > Content-Length: 12
+    > Date: Sun, 02 Jun 2013 03:53:22 GMT
+    > Connection: keep-alive
+    >
+    > Hello World
+
+We hope this tutorial helped you get up and running with Node.js and
+CentOS on Docker. You can get the full source code at
+[https://github.com/gasi/docker-node-hello](https://github.com/gasi/docker-node-hello).
+
+Continue to [*Redis
+Service*](../running_redis_service/#running-redis-service).

+ 157 - 0
docs/sources/examples/postgresql_service.md

@@ -0,0 +1,157 @@
+page_title: PostgreSQL service How-To
+page_description: Running and installing a PostgreSQL service
+page_keywords: docker, example, package installation, postgresql
+
+# PostgreSQL Service
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](../hello_world/#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+## Installing PostgreSQL on Docker
+
+Assuming there is no Docker image that suits your needs in [the
+index](http://index.docker.io), you can create one yourself.
+
+Start by creating a new Dockerfile:
+
+Note
+
+This PostgreSQL setup is for development purposes only. Refer to the
+PostgreSQL documentation to fine-tune these settings so that it is
+suitably secure.
+
+    #
+    # example Dockerfile for http://docs.docker.io/en/latest/examples/postgresql_service/
+    #
+
+    FROM ubuntu
+    MAINTAINER SvenDowideit@docker.com
+
+    # Add the PostgreSQL PGP key to verify their Debian packages.
+    # It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc 
+    RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
+
+    # Add PostgreSQL's repository. It contains the most recent stable release
+    #     of PostgreSQL, ``9.3``.
+    RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
+
+    # Update the Ubuntu and PostgreSQL repository indexes
+    RUN apt-get update
+
+    # Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
+    #  There are some warnings (in red) that show up during the build. You can hide
+    #  them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
+    RUN apt-get -y -q install python-software-properties software-properties-common
+    RUN apt-get -y -q install postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
+
+    # Note: The official Debian and Ubuntu images automatically ``apt-get clean``
+    # after each ``apt-get`` 
+
+    # Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
+    USER postgres
+
+    # Create a PostgreSQL role named ``docker`` with ``docker`` as the password and
+    # then create a database `docker` owned by the ``docker`` role.
+    # Note: here we use ``&&\`` to run commands one after the other - the ``\``
+    #       allows the RUN command to span multiple lines.
+    RUN    /etc/init.d/postgresql start &&\
+        psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
+        createdb -O docker docker
+
+    # Adjust PostgreSQL configuration so that remote connections to the
+    # database are possible. 
+    RUN echo "host all  all    0.0.0.0/0  md5" >> /etc/postgresql/9.3/main/pg_hba.conf
+
+    # And add ``listen_addresses`` to ``/etc/postgresql/9.3/main/postgresql.conf``
+    RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
+
+    # Expose the PostgreSQL port
+    EXPOSE 5432
+
+    # Add VOLUMEs to allow backup of config, logs and databases
+    VOLUME  ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
+
+    # Set the default command to run when starting the container
+    CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
+
+Build an image from the Dockerfile and assign it a name.
+
+    $ sudo docker build -t eg_postgresql .
+
+And run the PostgreSQL server container (in the foreground):
+
+    $ sudo docker run -rm -P -name pg_test eg_postgresql
+
+There are 2 ways to connect to the PostgreSQL server. We can use [*Link
+Containers*](../../use/working_with_links_names/#working-with-links-names),
+or we can access it from our host (or the network).
+
+Note
+
+The `-rm` flag removes the container when
+the container exits successfully.
+
+### Using container linking
+
+Containers can be linked to another container’s ports directly using
+`-link remote_name:local_alias` in the client’s
+`docker run`. This will set a number of environment
+variables that can then be used to connect:
+
+    $ sudo docker run -rm -t -i -link pg_test:pg eg_postgresql bash
+
+    postgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password
+
+### Connecting from your host system
+
+Assuming you have the postgresql-client installed, you can use the
+host-mapped port to test as well. You need to use `docker ps`
+to find out what local host port the container is mapped to
+first:
+
+    $ docker ps
+    CONTAINER ID        IMAGE                  COMMAND                CREATED             STATUS              PORTS                                      NAMES
+    5e24362f27f6        eg_postgresql:latest   /usr/lib/postgresql/   About an hour ago   Up About an hour    0.0.0.0:49153->5432/tcp                    pg_test
+    $ psql -h localhost -p 49153 -d docker -U docker --password
+
+### Testing the database
+
+Once you have authenticated and have a `docker=#`
+prompt, you can create a table and populate it.
+
+    psql (9.3.1)
+    Type "help" for help.
+
+    docker=# CREATE TABLE cities (
+    docker(#     name            varchar(80),
+    docker(#     location        point
+    docker(# );
+    CREATE TABLE
+    docker=# INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
+    INSERT 0 1
+    docker=# select * from cities;
+         name      | location
+    ---------------+-----------
+     San Francisco | (-194,53)
+    (1 row)
+
+### Using the container volumes
+
+You can use the defined volumes to inspect the PostgreSQL log files and
+to backup your configuration and data:
+
+    docker run -rm --volumes-from pg_test -t -i busybox sh
+
+    / # ls
+    bin      etc      lib      linuxrc  mnt      proc     run      sys      usr
+    dev      home     lib64    media    opt      root     sbin     tmp      var
+    / # ls /etc/postgresql/9.3/main/
+    environment      pg_hba.conf      postgresql.conf
+    pg_ctl.conf      pg_ident.conf    start.conf
+    /tmp # ls /var/log
+    ldconfig    postgresql

+ 130 - 0
docs/sources/examples/python_web_app.md

@@ -0,0 +1,130 @@
+page_title: Python Web app example
+page_description: Building your own python web app using docker
+page_keywords: docker, example, python, web app
+
+# Python Web App
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](../hello_world/#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+While using Dockerfiles is the preferred way to create maintainable and
+repeatable images, it’s useful to know how you can try things out and
+then commit your live changes to an image.
+
+The goal of this example is to show you how you can modify your own
+Docker images by making changes to a running container, and then saving
+the results as a new image. We will do that by making a simple ‘hello
+world’ Flask web application image.
+
+## Download the initial image
+
+Download the `shykes/pybuilder` Docker image from
+the `http://index.docker.io` registry.
+
+This image contains a `buildapp` script to download
+the web app and then `pip install` any required
+modules, and a `runapp` script that finds the
+`app.py` and runs it.
+
+    $ sudo docker pull shykes/pybuilder
+
+Note
+
+This container was built with a very old version of docker (May 2013 -
+see [shykes/pybuilder](https://github.com/shykes/pybuilder) ), when the
+`Dockerfile` format was different, but the image can
+still be used now.
+
+## Interactively make some modifications
+
+We then start a new container running interactively using the image.
+First, we set a `URL` variable that points to a
+tarball of a simple helloflask web app, and then we run a command
+contained in the image called `buildapp`, passing it
+the `$URL` variable. The container is given a name
+`pybuilder_run` which we will use in the next steps.
+
+While this example is simple, you could run any number of interactive
+commands, try things out, and then exit when you’re done.
+
+    $ sudo docker run -i -t -name pybuilder_run shykes/pybuilder bash
+
+    $$ URL=http://github.com/shykes/helloflask/archive/master.tar.gz
+    $$ /usr/local/bin/buildapp $URL
+    [...]
+    $$ exit
+
+## Commit the container to create a new image
+
+Save the changes we just made in the container to a new image called
+`/builds/github.com/shykes/helloflask/master`. You
+now have 3 different ways to refer to the container: the name
+`pybuilder_run`, the short-id `c8b2e8228f11`, or the long-id
+`c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9`.
+
+    $ sudo docker commit pybuilder_run /builds/github.com/shykes/helloflask/master
+    c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9
+
+## Run the new image to start the web worker
+
+Use the new image to create a new container with network port 5000
+mapped to a local port.
+
+    $ sudo docker run -d -p 5000 --name web_worker /builds/github.com/shykes/helloflask/master /usr/local/bin/runapp
+
+-   **"docker run -d "** runs a command in a new container. We pass
+    "-d" so it runs as a daemon.
+-   **"-p 5000"** the web app is going to listen on this port, so it
+    must be mapped from the container to the host system.
+-   **/usr/local/bin/runapp** is the command which starts the web app.
+
+## View the container logs
+
+View the logs for the new `web_worker` container and
+if everything worked as planned you should see the line
+`Running on http://0.0.0.0:5000/` in the log output.
+
+To exit the view without stopping the container, hit Ctrl-C, or open
+another terminal and continue with the example while watching the result
+in the logs.
+
+    $ sudo docker logs -f web_worker
+     * Running on http://0.0.0.0:5000/
+
+## See the webapp output
+
+Look up the public-facing port which is NAT-ed. Find the private port
+used by the container and store it inside of the `WEB_PORT`
+variable.
+
+Access the web app using the `curl` binary. If
+everything worked as planned you should see the line
+`Hello world!` inside of your console.
+
+    $ WEB_PORT=$(sudo docker port web_worker 5000 | awk -F: '{ print $2 }')
+
+    # install curl if necessary, then ...
+    $ curl http://127.0.0.1:$WEB_PORT
+    Hello world!
+
+## Clean up example containers and images
+
+    $ sudo docker ps --all
+
+List all (`--all`) the Docker containers. If this
+container had already finished running, it will still be listed here
+with a status of ‘Exit 0’.
+
+    $ sudo docker stop web_worker
+    $ sudo docker rm web_worker pybuilder_run
+    $ sudo docker rmi /builds/github.com/shykes/helloflask/master shykes/pybuilder:latest
+
+And now stop the running web worker, and delete the containers, so that
+we can then delete the images that we used.

+ 95 - 0
docs/sources/examples/running_redis_service.md

@@ -0,0 +1,95 @@
+page_title: Running a Redis service
+page_description: Installing and running a Redis service
+page_keywords: docker, example, package installation, networking, redis
+
+# Redis Service
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](../hello_world/#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+Very simple, no frills, Redis service attached to a web application
+using a link.
+
+## Create a docker container for Redis
+
+Firstly, we create a `Dockerfile` for our new Redis
+image.
+
+    FROM        ubuntu:12.10
+    RUN         apt-get update
+    RUN         apt-get -y install redis-server
+    EXPOSE      6379
+    ENTRYPOINT  ["/usr/bin/redis-server"]
+
+Next we build an image from our `Dockerfile`.
+Replace `<your username>` with your own user name.
+
+    sudo docker build -t <your username>/redis .
+
+## Run the service
+
+Use the image we’ve just created and name your container
+`redis`.
+
+Running the service with `-d` runs the container in
+detached mode, leaving the container running in the background.
+
+Importantly, we’re not exposing any ports on our container. Instead
+we’re going to use a container link to provide access to our Redis
+database.
+
+    sudo docker run --name redis -d <your username>/redis
+
+## Create your web application container
+
+Next we can create a container for our application. We’re going to use
+the `--link` flag to create a link to the
+`redis` container we’ve just created with an alias
+of `db`. This will create a secure tunnel to the
+`redis` container and expose the Redis instance
+running inside that container to only this container.
+
+    sudo docker run --link redis:db -i -t ubuntu:12.10 /bin/bash
+
+Once inside our freshly created container we need to install Redis to
+get the `redis-cli` binary to test our connection.
+
+    apt-get update
+    apt-get -y install redis-server
+    service redis-server stop
+
+As we’ve used the `--link redis:db` option, Docker
+has created some environment variables in our web application container.
+
+    env | grep DB_
+
+    # Should return something similar to this with your values
+    DB_NAME=/violet_wolf/db
+    DB_PORT_6379_TCP_PORT=6379
+    DB_PORT=tcp://172.17.0.33:6379
+    DB_PORT_6379_TCP=tcp://172.17.0.33:6379
+    DB_PORT_6379_TCP_ADDR=172.17.0.33
+    DB_PORT_6379_TCP_PROTO=tcp
+
+We can see that we’ve got a small list of environment variables prefixed
+with `DB`. The `DB` comes from
+the link alias specified when we launched the container. Let’s use the
+`DB_PORT_6379_TCP_ADDR` variable to connect to our
+Redis container.
+
+    redis-cli -h $DB_PORT_6379_TCP_ADDR
+    redis 172.17.0.33:6379>
+    redis 172.17.0.33:6379> set docker awesome
+    OK
+    redis 172.17.0.33:6379> get docker
+    "awesome"
+    redis 172.17.0.33:6379> exit
+
+We could easily use this or other environment variables in our web
+application to make a connection to our `redis`
+container.
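+As a sketch, the combined `DB_PORT` variable can be split into host
+and port with plain shell parameter expansion (the value below is a
+stand-in for what Docker injects at run time):

```shell
# Split a link variable like DB_PORT into host and port pieces.
# The value here is a stand-in for what Docker injects at run time.
DB_PORT="tcp://172.17.0.33:6379"
hostport=${DB_PORT#tcp://}     # -> 172.17.0.33:6379
DB_HOST=${hostport%%:*}        # -> 172.17.0.33
DB_PORT_NUM=${hostport##*:}    # -> 6379
echo "$DB_HOST $DB_PORT_NUM"
```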

+ 138 - 0
docs/sources/examples/running_riak_service.md

@@ -0,0 +1,138 @@
+page_title: Running a Riak service
+page_description: Build a Docker image with Riak pre-installed
+page_keywords: docker, example, package installation, networking, riak
+
+# Riak Service
+
+Note
+
+-   This example assumes you have Docker running in daemon mode. For
+    more information please see [*Check your Docker
+    install*](../hello_world/#running-examples).
+-   **If you don’t like sudo** then see [*Giving non-root
+    access*](../../installation/binaries/#dockergroup)
+
+The goal of this example is to show you how to build a Docker image with
+Riak pre-installed.
+
+## Creating a `Dockerfile`
+
+Create an empty file called `Dockerfile`:
+
+    touch Dockerfile
+
+Next, define the parent image you want to use to build your image on top
+of. We’ll use [Ubuntu](https://index.docker.io/_/ubuntu/) (tag:
+`latest`), which is available on the [docker
+index](http://index.docker.io):
+
+    # Riak
+    #
+    # VERSION       0.1.0
+
+    # Use the Ubuntu base image provided by dotCloud
+    FROM ubuntu:latest
+    MAINTAINER Hector Castro hector@basho.com
+
+Next, we update the APT cache and apply any updates:
+
+    # Update the APT cache
+    RUN sed -i.bak 's/main$/main universe/' /etc/apt/sources.list
+    RUN apt-get update
+    RUN apt-get upgrade -y
+
+After that, we install and setup a few dependencies:
+
+-   `curl` is used to download Basho’s APT
+    repository key
+-   `lsb-release` helps us derive the Ubuntu release
+    codename
+-   `openssh-server` allows us to login to
+    containers remotely and join Riak nodes to form a cluster
+-   `supervisor` is used to manage the OpenSSH and Riak
+    processes
+
+<!-- -->
+
+    # Install and setup project dependencies
+    RUN apt-get install -y curl lsb-release supervisor openssh-server
+
+    RUN mkdir -p /var/run/sshd
+    RUN mkdir -p /var/log/supervisor
+
+    RUN locale-gen en_US en_US.UTF-8
+
+    ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
+
+    RUN echo 'root:basho' | chpasswd
+
+Next, we add Basho’s APT repository:
+
+    RUN curl -s http://apt.basho.com/gpg/basho.apt.key | apt-key add --
+    RUN echo "deb http://apt.basho.com $(lsb_release -cs) main" > /etc/apt/sources.list.d/basho.list
+    RUN apt-get update
+
+After that, we install Riak and alter a few defaults:
+
+    # Install Riak and prepare it to run
+    RUN apt-get install -y riak
+    RUN sed -i.bak 's/127.0.0.1/0.0.0.0/' /etc/riak/app.config
+    RUN echo "ulimit -n 4096" >> /etc/default/riak
+
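The first `sed` swaps Riak's bind address from loopback to all interfaces, so the node is reachable from outside the container. A throwaway sketch of the substitution on a sample `app.config` fragment (the Dockerfile applies it in place to `/etc/riak/app.config`):

```shell
# Pipe a sample app.config line through the same substitution
echo '{http, [ {"127.0.0.1", 8098 } ]},' | sed 's/127.0.0.1/0.0.0.0/'
# -> {http, [ {"0.0.0.0", 8098 } ]},
```
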
+Almost there. Next, we add a hack to work around the lack of
+`initctl`:
+
+    # Hack for initctl
+    # See: https://github.com/dotcloud/docker/issues/1024
+    RUN dpkg-divert --local --rename --add /sbin/initctl
+    RUN ln -s /bin/true /sbin/initctl
+
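The effect of the diversion is that any `initctl` invocation becomes a successful no-op, which keeps package scripts happy in an environment without Upstart. A sketch of that behaviour, using a temporary path instead of the real `/sbin/initctl`:

```shell
# Symlink a stand-in "initctl" to /bin/true: every call does nothing and exits 0
ln -sf /bin/true /tmp/fake-initctl
/tmp/fake-initctl start riak
echo "exit status: $?"
# -> exit status: 0
```
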
+Then, we expose the Riak Protocol Buffers and HTTP interfaces, along
+with SSH:
+
+    # Expose Riak Protocol Buffers and HTTP interfaces, along with SSH
+    EXPOSE 8087 8098 22
+
+Finally, run `supervisord` so that Riak and OpenSSH
+are started:
+
+    CMD ["/usr/bin/supervisord"]
+
+## Create a `supervisord` configuration file
+
+Create an empty file called `supervisord.conf`. Make
+sure it’s at the same directory level as your `Dockerfile`:
+
+    touch supervisord.conf
+
+Populate it with the following program definitions:
+
+    [supervisord]
+    nodaemon=true
+
+    [program:sshd]
+    command=/usr/sbin/sshd -D
+    stdout_logfile=/var/log/supervisor/%(program_name)s.log
+    stderr_logfile=/var/log/supervisor/%(program_name)s.log
+    autorestart=true
+
+    [program:riak]
+    command=bash -c ". /etc/default/riak && /usr/sbin/riak console"
+    pidfile=/var/log/riak/riak.pid
+    stdout_logfile=/var/log/supervisor/%(program_name)s.log
+    stderr_logfile=/var/log/supervisor/%(program_name)s.log
+
+## Build the Docker image for Riak
+
+Now you should be able to build a Docker image for Riak:
+
+    docker build -t "<yourname>/riak" .
+
+## Next steps
+
+Riak is a distributed database. Many production deployments consist of
+[at least five
+nodes](http://basho.com/why-your-riak-cluster-should-have-at-least-five-nodes/).
+See the [docker-riak](https://github.com/hectcastro/docker-riak) project
+for details on how to deploy a Riak cluster using Docker and Pipework.

+ 60 - 0
docs/sources/examples/running_ssh_service.md

@@ -0,0 +1,60 @@
+page_title: Running an SSH service
+page_description: Installing and running an sshd service
+page_keywords: docker, example, package installation, networking
+
+# SSH Daemon Service
+
+> **Note:** 
+> - This example assumes you have Docker running in daemon mode. For
+>   more information please see [*Check your Docker
+>   install*](../hello_world/#running-examples).
+> - **If you don’t like sudo** then see [*Giving non-root
+>   access*](../../installation/binaries/#dockergroup)
+
+The following Dockerfile sets up an sshd service in a container that you
+can use to connect to and inspect other containers’ volumes, or to get
+quick access to a test container.
+
+    # sshd
+    #
+    # VERSION               0.0.1
+
+    FROM     ubuntu
+    MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
+
+    # make sure the package repository is up to date
+    RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
+    RUN apt-get update
+
+    RUN apt-get install -y openssh-server
+    RUN mkdir /var/run/sshd 
+    RUN echo 'root:screencast' |chpasswd
+
+    EXPOSE 22
+    CMD    /usr/sbin/sshd -D
+
+Build the image using:
+
+    $ sudo docker build -rm -t eg_sshd .
+
+Then run it. You can then use `docker port` to find
+out what host port the container’s port 22 is mapped to:
+
+    $ sudo docker run -d -P -name test_sshd eg_sshd
+    $ sudo docker port test_sshd 22
+    0.0.0.0:49154
+
+And now you can ssh to port `49154` on the Docker
+daemon’s host IP address (`ip address` or
+`ifconfig` can tell you that):
+
+    $ ssh root@192.168.1.2 -p 49154
+    # The password is ``screencast``.
+    $$
+
+Finally, clean up after your test by stopping and removing the
+container, and then removing the image.
+
+    $ sudo docker stop test_sshd
+    $ sudo docker rm test_sshd
+    $ sudo docker rmi eg_sshd

+ 121 - 0
docs/sources/examples/using_supervisord.md

@@ -0,0 +1,121 @@
+page_title: Using Supervisor with Docker
+page_description: How to use Supervisor process management with Docker
+page_keywords: docker, supervisor, process management
+
+# Using Supervisor with Docker
+
+> **Note:** 
+> - This example assumes you have Docker running in daemon mode. For
+>   more information please see [*Check your Docker
+>   install*](../hello_world/#running-examples).
+> - **If you don’t like sudo** then see [*Giving non-root
+>   access*](../../installation/binaries/#dockergroup)
+
+Traditionally a Docker container runs a single process when it is
+launched, for example an Apache daemon or an SSH server daemon. Often
+though you want to run more than one process in a container. There are a
+number of ways you can achieve this ranging from using a simple Bash
+script as the value of your container’s `CMD`
+instruction to installing a process management tool.
+
+In this example we’re going to make use of the process management tool,
+[Supervisor](http://supervisord.org/), to manage multiple processes in
+our container. Using Supervisor allows us to better control, manage, and
+restart the processes we want to run. To demonstrate this we’re going to
+install and manage both an SSH daemon and an Apache daemon.
+
+## Creating a Dockerfile
+
+Let’s start by creating a basic `Dockerfile` for our
+new image.
+
+    FROM ubuntu:latest
+    MAINTAINER examples@docker.io
+    RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
+    RUN apt-get update
+    RUN apt-get upgrade -y
+
+## Installing Supervisor
+
+We can now install our SSH and Apache daemons as well as Supervisor in
+our container.
+
+    RUN apt-get install -y openssh-server apache2 supervisor
+    RUN mkdir -p /var/run/sshd
+    RUN mkdir -p /var/log/supervisor
+
+Here we’re installing the `openssh-server`,
+`apache2` and `supervisor`
+(which provides the Supervisor daemon) packages. We’re also creating two
+new directories that are needed to run our SSH daemon and Supervisor.
+
+## Adding Supervisor’s configuration file
+
+Now let’s add a configuration file for Supervisor. The default file is
+called `supervisord.conf` and is located in
+`/etc/supervisor/conf.d/`.
+
+    ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
+
+Let’s see what is inside our `supervisord.conf`
+file.
+
+    [supervisord]
+    nodaemon=true
+
+    [program:sshd]
+    command=/usr/sbin/sshd -D
+
+    [program:apache2]
+    command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
+
+The `supervisord.conf` configuration file contains
+directives that configure Supervisor and the processes it manages. The
+first block `[supervisord]` provides configuration
+for Supervisor itself. We’re using one directive, `nodaemon`,
+which tells Supervisor to run interactively rather than
+daemonize.
+
+The next two blocks manage the services we wish to control. Each block
+controls a separate process. The blocks contain a single directive,
+`command`, which specifies what command to run to
+start each process.
+
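Note that the `apache2` entry wraps its invocation in `bash -c "… && exec …"`. The `exec` matters: it replaces the wrapper shell with Apache in the same process, so Supervisor supervises Apache directly rather than an intermediate bash. A minimal sketch of that PID hand-off:

```shell
# The exec'd command inherits the shell's PID instead of running as a child
bash -c 'parent=$$; exec sh -c "[ \$\$ -eq $parent ] && echo same-pid"'
# -> same-pid
```
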
+## Exposing ports and running Supervisor
+
+Now let’s finish our `Dockerfile` by exposing some
+required ports and specifying the `CMD` instruction
+to start Supervisor when our container launches.
+
+    EXPOSE 22 80
+    CMD ["/usr/bin/supervisord"]
+
+Here we’ve exposed ports 22 and 80 on the container and we’re running
+the `/usr/bin/supervisord` binary when the container
+launches.
+
+## Building our container
+
+We can now build our new container.
+
+    sudo docker build -t <yourname>/supervisord .
+
+## Running our Supervisor container
+
+Once we’ve got a built image we can launch a container from it.
+
+    sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
+    2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
+    2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
+    2013-11-25 18:53:22,342 INFO supervisord started with pid 1
+    2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6
+    2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7
+    . . .
+
+We’ve launched a new container interactively using the
+`docker run` command. That container has run
+Supervisor and launched the SSH and Apache daemons with it. We’ve
+specified the `-p` flag to expose ports 22 and 80.
+From here we can now identify the exposed ports and connect to one or
+both of the SSH and Apache daemons.

+ 218 - 0
docs/sources/faq.md

@@ -0,0 +1,218 @@
+page_title: FAQ
+page_description: Most frequently asked questions.
+page_keywords: faq, questions, documentation, docker
+
+# FAQ
+
+## Most frequently asked questions.
+
+### How much does Docker cost?
+
+> Docker is 100% free: it is open source, so you can use it without
+> paying.
+
+### What open source license are you using?
+
+> We are using the Apache License Version 2.0, see it here:
+> [https://github.com/dotcloud/docker/blob/master/LICENSE](https://github.com/dotcloud/docker/blob/master/LICENSE)
+
+### Does Docker run on Mac OS X or Windows?
+
+> Not at this time, Docker currently only runs on Linux, but you can use
+> VirtualBox to run Docker in a virtual machine on your box, and get the
+> best of both worlds. Check out the [*Mac OS
+> X*](../installation/mac/#macosx) and [*Microsoft
+> Windows*](../installation/windows/#windows) installation guides. The
+> small Linux distribution boot2docker can be run inside virtual
+> machines on these two operating systems.
+
+### How do containers compare to virtual machines?
+
+> They are complementary. VMs are best used to allocate chunks of
+> hardware resources. Containers operate at the process level, which
+> makes them very lightweight and perfect as a unit of software
+> delivery.
+
+### What does Docker add to just plain LXC?
+
+> Docker is not a replacement for LXC. "LXC" refers to capabilities of
+> the Linux kernel (specifically namespaces and control groups) which
+> allow sandboxing processes from one another, and controlling their
+> resource allocations. On top of this low-level foundation of kernel
+> features, Docker offers a high-level tool with several powerful
+> functionalities:
+>
+> -   *Portable deployment across machines.*
+>     :   Docker defines a format for bundling an application and all
+>         its dependencies into a single object which can be transferred
+>         to any Docker-enabled machine, and executed there with the
+>         guarantee that the execution environment exposed to the
+>         application will be the same. LXC implements process
+>         sandboxing, which is an important pre-requisite for portable
+>         deployment, but that alone is not enough for portable
+>         deployment. If you sent me a copy of your application
+>         installed in a custom LXC configuration, it would almost
+>         certainly not run on my machine the way it does on yours,
+>         because it is tied to your machine’s specific configuration:
+>         networking, storage, logging, distro, etc. Docker defines an
+>         abstraction for these machine-specific settings, so that the
+>         exact same Docker container can run - unchanged - on many
+>         different machines, with many different configurations.
+>
+> -   *Application-centric.*
+>     :   Docker is optimized for the deployment of applications, as
+>         opposed to machines. This is reflected in its API, user
+>         interface, design philosophy and documentation. By contrast,
+>         the `lxc` helper scripts focus on
+>         containers as lightweight machines - basically servers that
+>         boot faster and need less RAM. We think there’s more to
+>         containers than just that.
+>
+> -   *Automatic build.*
+>     :   Docker includes [*a tool for developers to automatically
+>         assemble a container from their source
+>         code*](../reference/builder/#dockerbuilder), with full control
+>         over application dependencies, build tools, packaging etc.
+>         They are free to use
+>         `make, maven, chef, puppet, salt,` Debian
+>         packages, RPMs, source tarballs, or any combination of the
+>         above, regardless of the configuration of the machines.
+>
+> -   *Versioning.*
+>     :   Docker includes git-like capabilities for tracking successive
+>         versions of a container, inspecting the diff between versions,
+>         committing new versions, rolling back etc. The history also
+>         includes how a container was assembled and by whom, so you get
+>         full traceability from the production server all the way back
+>         to the upstream developer. Docker also implements incremental
+>         uploads and downloads, similar to `git pull`, so new
+>         versions of a container can be transferred by only
+>         sending diffs.
+>
+> -   *Component re-use.*
+>     :   Any container can be used as a [*"base
+>         image"*](../terms/image/#base-image-def) to create more
+>         specialized components. This can be done manually or as part
+>         of an automated build. For example you can prepare the ideal
+>         Python environment, and use it as a base for 10 different
+>         applications. Your ideal Postgresql setup can be re-used for
+>         all your future projects. And so on.
+>
+> -   *Sharing.*
+>     :   Docker has access to a [public
+>         registry](http://index.docker.io) where thousands of people
+>         have uploaded useful containers: anything from Redis, CouchDB,
+>         Postgres to IRC bouncers to Rails app servers to Hadoop to
+>         base images for various Linux distros. The
+>         [*registry*](../reference/api/registry_index_spec/#registryindexspec)
+>         also includes an official "standard library" of useful
+>         containers maintained by the Docker team. The registry itself
+>         is open-source, so anyone can deploy their own registry to
+>         store and transfer private containers, for internal server
+>         deployments for example.
+>
+> -   *Tool ecosystem.*
+>     :   Docker defines an API for automating and customizing the
+>         creation and deployment of containers. There are a huge number
+>         of tools integrating with Docker to extend its capabilities.
+>         PaaS-like deployment (Dokku, Deis, Flynn), multi-node
+>         orchestration (Maestro, Salt, Mesos, Openstack Nova),
+>         management dashboards (docker-ui, Openstack Horizon,
+>         Shipyard), configuration management (Chef, Puppet), continuous
+>         integration (Jenkins, Strider, Travis), etc. Docker is rapidly
+>         establishing itself as the standard for container-based
+>         tooling.
+>
+### What is different between a Docker container and a VM?
+
+There’s a great StackOverflow answer [showing the
+differences](http://stackoverflow.com/questions/16047306/how-is-docker-io-different-from-a-normal-virtual-machine).
+
+### Do I lose my data when the container exits?
+
+Not at all! Any data that your application writes to disk gets preserved
+in its container until you explicitly delete the container. The file
+system for the container persists even after the container halts.
+
+### How far do Docker containers scale?
+
+Some of the largest server farms in the world today are based on
+containers. Large web deployments like Google and Twitter, and platform
+providers such as Heroku and dotCloud all run on container technology,
+at a scale of hundreds of thousands or even millions of containers
+running in parallel.
+
+### How do I connect Docker containers?
+
+Currently the recommended way to link containers is via the link
+primitive. You can see details of how to [work with links
+here](http://docs.docker.io/en/latest/use/working_with_links_names/).
+
+Also useful when enabling more flexible service portability is the
+[Ambassador linking
+pattern](http://docs.docker.io/en/latest/use/ambassador_pattern_linking/).
+
+### How do I run more than one process in a Docker container?
+
+Any capable process supervisor such as
+[Supervisor](http://supervisord.org/), runit, s6, or
+daemontools can do the trick. Docker will start up the process
+management daemon, which will then fork to run additional processes. As
+long as the process manager daemon continues to run, the container
+will continue to as well. You can see a more substantial example [that
+uses supervisord
+here](http://docs.docker.io/en/latest/examples/using_supervisord/).
+
+### What platforms does Docker run on?
+
+Linux:
+
+-   Ubuntu 12.04, 13.04 et al
+-   Fedora 19/20+
+-   RHEL 6.5+
+-   CentOS 6+
+-   Gentoo
+-   ArchLinux
+-   openSUSE 12.3+
+-   CRUX 3.0+
+
+Cloud:
+
+-   Amazon EC2
+-   Google Compute Engine
+-   Rackspace
+
+### How do I report a security issue with Docker?
+
+You can learn about the project’s security policy
+[here](http://www.docker.io/security/) and report security issues to
+this [mailbox](mailto:security%40docker.com).
+
+### Why do I need to sign my commits to Docker with the DCO?
+
+Please read [our blog
+post](http://blog.docker.io/2014/01/docker-code-contributions-require-developer-certificate-of-origin/)
+on the introduction of the DCO.
+
+### Can I help by adding some questions and answers?
+
+Definitely! You can fork [the
+repo](http://www.github.com/dotcloud/docker) and edit the documentation
+sources.
+
+### Where can I find more answers?
+
+> You can find more answers on:
+>
+> -   [Docker user
+>     mailinglist](https://groups.google.com/d/forum/docker-user)
+> -   [Docker developer
+>     mailinglist](https://groups.google.com/d/forum/docker-dev)
+> -   [IRC, docker on freenode](irc://chat.freenode.net#docker)
+> -   [GitHub](http://www.github.com/dotcloud/docker)
+> -   [Ask questions on
+>     Stackoverflow](http://stackoverflow.com/search?q=docker)
+> -   [Join the conversation on Twitter](http://twitter.com/docker)
+
+Looking for something else to read? Check out the [*Hello
+World*](../examples/hello_world/#hello-world) example.

+ 1 - 0
docs/sources/genindex.md

@@ -0,0 +1 @@
+# Index

+ 104 - 0
docs/sources/http-routingtable.md

@@ -0,0 +1,104 @@
+# HTTP Routing Table
+
+[**/api**](#cap-/api) | [**/auth**](#cap-/auth) |
+[**/build**](#cap-/build) | [**/commit**](#cap-/commit) |
+[**/containers**](#cap-/containers) | [**/events**](#cap-/events) |
+[**/events:**](#cap-/events:) | [**/images**](#cap-/images) |
+[**/info**](#cap-/info) | [**/v1**](#cap-/v1) |
+[**/version**](#cap-/version)
+
+## /api
+
+-   [`GET /api/v1.1/o/authorize/`](../reference/api/docker_io_oauth_api/#get--api-v1.1-o-authorize-)
+-   [`POST /api/v1.1/o/token/`](../reference/api/docker_io_oauth_api/#post--api-v1.1-o-token-)
+-   [`GET /api/v1.1/users/:username/`](../reference/api/docker_io_accounts_api/#get--api-v1.1-users--username-)
+-   [`PATCH /api/v1.1/users/:username/`](../reference/api/docker_io_accounts_api/#patch--api-v1.1-users--username-)
+-   [`GET /api/v1.1/users/:username/emails/`](../reference/api/docker_io_accounts_api/#get--api-v1.1-users--username-emails-)
+-   [`PATCH /api/v1.1/users/:username/emails/`](../reference/api/docker_io_accounts_api/#patch--api-v1.1-users--username-emails-)
+-   [`POST /api/v1.1/users/:username/emails/`](../reference/api/docker_io_accounts_api/#post--api-v1.1-users--username-emails-)
+-   [`DELETE /api/v1.1/users/:username/emails/`](../reference/api/docker_io_accounts_api/#delete--api-v1.1-users--username-emails-)
+
+## /auth
+
+-   [`GET /auth`](../reference/api/docker_remote_api/#get--auth)
+-   [`POST /auth`](../reference/api/docker_remote_api_v1.9/#post--auth)
+
+## /build
+
+-   [`POST /build`](../reference/api/docker_remote_api_v1.9/#post--build)
+
+## /commit
+
+-   [`POST /commit`](../reference/api/docker_remote_api_v1.9/#post--commit)
+
+## /containers
+
+-   [`DELETE /containers/(id)`](../reference/api/docker_remote_api_v1.9/#delete--containers-(id))
+-   [`POST /containers/(id)/attach`](../reference/api/docker_remote_api_v1.9/#post--containers-(id)-attach)
+-   [`GET /containers/(id)/changes`](../reference/api/docker_remote_api_v1.9/#get--containers-(id)-changes)
+-   [`POST /containers/(id)/copy`](../reference/api/docker_remote_api_v1.9/#post--containers-(id)-copy)
+-   [`GET /containers/(id)/export`](../reference/api/docker_remote_api_v1.9/#get--containers-(id)-export)
+-   [`GET /containers/(id)/json`](../reference/api/docker_remote_api_v1.9/#get--containers-(id)-json)
+-   [`POST /containers/(id)/kill`](../reference/api/docker_remote_api_v1.9/#post--containers-(id)-kill)
+-   [`POST /containers/(id)/restart`](../reference/api/docker_remote_api_v1.9/#post--containers-(id)-restart)
+-   [`POST /containers/(id)/start`](../reference/api/docker_remote_api_v1.9/#post--containers-(id)-start)
+-   [`POST /containers/(id)/stop`](../reference/api/docker_remote_api_v1.9/#post--containers-(id)-stop)
+-   [`GET /containers/(id)/top`](../reference/api/docker_remote_api_v1.9/#get--containers-(id)-top)
+-   [`POST /containers/(id)/wait`](../reference/api/docker_remote_api_v1.9/#post--containers-(id)-wait)
+-   [`POST /containers/create`](../reference/api/docker_remote_api_v1.9/#post--containers-create)
+-   [`GET /containers/json`](../reference/api/docker_remote_api_v1.9/#get--containers-json)
+
+## /events
+
+-   [`GET /events`](../reference/api/docker_remote_api_v1.9/#get--events)
+
+## /events:
+
+-   [`GET /events:`](../reference/api/docker_remote_api/#get--events-)
+
+## /images
+
+-   [`GET /images/(format)`](../reference/api/archive/docker_remote_api_v1.6/#get--images-(format))
+-   [`DELETE /images/(name)`](../reference/api/docker_remote_api_v1.9/#delete--images-(name))
+-   [`GET /images/(name)/get`](../reference/api/docker_remote_api_v1.9/#get--images-(name)-get)
+-   [`GET /images/(name)/history`](../reference/api/docker_remote_api_v1.9/#get--images-(name)-history)
+-   [`POST /images/(name)/insert`](../reference/api/docker_remote_api_v1.9/#post--images-(name)-insert)
+-   [`GET /images/(name)/json`](../reference/api/docker_remote_api_v1.9/#get--images-(name)-json)
+-   [`POST /images/(name)/push`](../reference/api/docker_remote_api_v1.9/#post--images-(name)-push)
+-   [`POST /images/(name)/tag`](../reference/api/docker_remote_api_v1.9/#post--images-(name)-tag)
+-   [`POST /images/<name>/delete`](../reference/api/docker_remote_api/#post--images--name--delete)
+-   [`POST /images/create`](../reference/api/docker_remote_api_v1.9/#post--images-create)
+-   [`GET /images/json`](../reference/api/docker_remote_api_v1.9/#get--images-json)
+-   [`POST /images/load`](../reference/api/docker_remote_api_v1.9/#post--images-load)
+-   [`GET /images/search`](../reference/api/docker_remote_api_v1.9/#get--images-search)
+-   [`GET /images/viz`](../reference/api/docker_remote_api/#get--images-viz)
+
+## /info
+
+-   [`GET /info`](../reference/api/docker_remote_api_v1.9/#get--info)
+
+## /v1
+
+-   [`GET /v1/_ping`](../reference/api/registry_api/#get--v1-_ping)
+-   [`GET /v1/images/(image_id)/ancestry`](../reference/api/registry_api/#get--v1-images-(image_id)-ancestry)
+-   [`GET /v1/images/(image_id)/json`](../reference/api/registry_api/#get--v1-images-(image_id)-json)
+-   [`PUT /v1/images/(image_id)/json`](../reference/api/registry_api/#put--v1-images-(image_id)-json)
+-   [`GET /v1/images/(image_id)/layer`](../reference/api/registry_api/#get--v1-images-(image_id)-layer)
+-   [`PUT /v1/images/(image_id)/layer`](../reference/api/registry_api/#put--v1-images-(image_id)-layer)
+-   [`PUT /v1/repositories/(namespace)/(repo_name)/`](../reference/api/index_api/#put--v1-repositories-(namespace)-(repo_name)-)
+-   [`DELETE /v1/repositories/(namespace)/(repo_name)/`](../reference/api/index_api/#delete--v1-repositories-(namespace)-(repo_name)-)
+-   [`PUT /v1/repositories/(namespace)/(repo_name)/auth`](../reference/api/index_api/#put--v1-repositories-(namespace)-(repo_name)-auth)
+-   [`GET /v1/repositories/(namespace)/(repo_name)/images`](../reference/api/index_api/#get--v1-repositories-(namespace)-(repo_name)-images)
+-   [`PUT /v1/repositories/(namespace)/(repo_name)/images`](../reference/api/index_api/#put--v1-repositories-(namespace)-(repo_name)-images)
+-   [`DELETE /v1/repositories/(namespace)/(repository)/`](../reference/api/registry_api/#delete--v1-repositories-(namespace)-(repository)-)
+-   [`GET /v1/repositories/(namespace)/(repository)/tags`](../reference/api/registry_api/#get--v1-repositories-(namespace)-(repository)-tags)
+-   [`GET /v1/repositories/(namespace)/(repository)/tags/(tag)`](../reference/api/registry_api/#get--v1-repositories-(namespace)-(repository)-tags-(tag))
+-   [`PUT /v1/repositories/(namespace)/(repository)/tags/(tag)`](../reference/api/registry_api/#put--v1-repositories-(namespace)-(repository)-tags-(tag))
+-   [`DELETE /v1/repositories/(namespace)/(repository)/tags/(tag)`](../reference/api/registry_api/#delete--v1-repositories-(namespace)-(repository)-tags-(tag))
+-   [`PUT /v1/repositories/(repo_name)/`](../reference/api/index_api/#put--v1-repositories-(repo_name)-)
+-   [`DELETE /v1/repositories/(repo_name)/`](../reference/api/index_api/#delete--v1-repositories-(repo_name)-)
+-   [`PUT /v1/repositories/(repo_name)/auth`](../reference/api/index_api/#put--v1-repositories-(repo_name)-auth)
+-   [`GET /v1/repositories/(repo_name)/images`](../reference/api/index_api/#get--v1-repositories-(repo_name)-images)
+-   [`PUT /v1/repositories/(repo_name)/images`](../reference/api/index_api/#put--v1-repositories-(repo_name)-images)
+-   [`GET /v1/search`](../reference/api/index_api/#get--v1-search)
+-   [`GET /v1/users`](../reference/api/index_api/#get--v1-users)
+-   [`POST /v1/users`](../reference/api/index_api/#post--v1-users)
+-   [`PUT /v1/users/(username)/`](../reference/api/index_api/#put--v1-users-(username)-)
+
+## /version
+
+-   [`GET /version`](../reference/api/docker_remote_api_v1.9/#get--version)
+  -- -------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----
+
+

+ 25 - 0
docs/sources/installation.md

@@ -0,0 +1,25 @@
+# Installation
+
+## Introduction
+
+There are a number of ways to install Docker, depending on where you
+want to run the daemon. The [*Ubuntu*](ubuntulinux/#ubuntu-linux)
+installation is the officially-tested version. The community adds more
+techniques for installing Docker all the time.
+
+## Contents:
+
+- [Ubuntu](ubuntulinux/)
+- [Red Hat Enterprise Linux](rhel/)
+- [Fedora](fedora/)
+- [Arch Linux](archlinux/)
+- [CRUX Linux](cruxlinux/)
+- [Gentoo](gentoolinux/)
+- [openSUSE](openSUSE/)
+- [FrugalWare](frugalware/)
+- [Mac OS X](mac/)
+- [Windows](windows/)
+- [Amazon EC2](amazon/)
+- [Rackspace Cloud](rackspace/)
+- [Google Cloud Platform](google/)
+- [Binaries](binaries/)

+ 106 - 0
docs/sources/installation/amazon.md

@@ -0,0 +1,106 @@
+page_title: Installation on Amazon EC2
+page_description: Please note this project is currently under heavy development. It should not be used in production. 
+page_keywords: amazon ec2, virtualization, cloud, docker, documentation, installation
+
+# Amazon EC2
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+There are several ways to install Docker on AWS EC2:
+
+-   [*Amazon QuickStart (Release Candidate - March
+    2014)*](#amazonquickstart-new) or
+-   [*Amazon QuickStart*](#amazonquickstart) or
+-   [*Standard Ubuntu Installation*](#amazonstandard)
+
+**You’ll need an** [AWS account](http://aws.amazon.com/) **first, of
+course.**
+
+## Amazon QuickStart
+
+1.  **Choose an image:**
+    -   Launch the [Create Instance
+        Wizard](https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:)
+        menu on your AWS Console.
+    -   Click the `Select` button for a 64Bit Ubuntu
+        image. For example: Ubuntu Server 12.04.3 LTS
+    -   For testing you can use the default (possibly free)
+        `t1.micro` instance (more info on
+        [pricing](http://aws.amazon.com/en/ec2/pricing/)).
+    -   Click the `Next: Configure Instance Details`
+        button at the bottom right.
+
+2.  **Tell CloudInit to install Docker:**
+    -   When you’re on the "Configure Instance Details" step, expand the
+        "Advanced Details" section.
+    -   Under "User data", select "As text".
+    -   Enter `#include https://get.docker.io` into
+        the instance *User Data*.
+        [CloudInit](https://help.ubuntu.com/community/CloudInit) is part
+        of the Ubuntu image you chose; it will bootstrap Docker by
+        running the shell script located at this URL.
+
+3.  After a few more standard choices where defaults are probably ok,
+    your AWS Ubuntu instance with Docker should be running!
+
+**If this is your first AWS instance, you may need to set up your
+Security Group to allow SSH.** By default all incoming ports to your new
+instance will be blocked by the AWS Security Group, so you might just
+get timeouts when you try to connect.
+
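If you need to open SSH yourself, the rule can also be added from the AWS
CLI. A minimal sketch, assuming the AWS CLI is installed and configured;
`my-sg` is a placeholder group name. Build the command first and review
it before running:

```shell
# Placeholder security-group name; substitute your instance's group.
SG_NAME="my-sg"

# Allow inbound SSH (TCP 22) from anywhere; tighten the CIDR if you can.
CMD="aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 22 --cidr 0.0.0.0/0"
echo "$CMD"   # review the rule, then run it with your configured AWS CLI
```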
+Installing with `get.docker.io` (as above) will
+create a service named `lxc-docker`. It will also
+set up a [*docker group*](../binaries/#dockergroup) and you may want to
+add the *ubuntu* user to it so that you don’t have to use
+`sudo` for every Docker command.
+
+Once you’ve got Docker installed, you’re ready to try it out – head on
+over to the [*First steps with Docker*](../../use/basics/) or
+[*Examples*](../../examples/) section.
+
+## Amazon QuickStart (Release Candidate - March 2014)
+
+Amazon just published new Docker-ready AMIs (2014.03 Release Candidate).
+Docker packages can now be installed from Amazon’s provided Software
+Repository.
+
+1.  **Choose an image:**
+    -   Launch the [Create Instance
+        Wizard](https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:)
+        menu on your AWS Console.
+    -   Click the `Community AMI` menu option on the
+        left side
+    -   Search for ‘2014.03’ and select one of the Amazon-provided AMIs,
+        for example `amzn-ami-pv-2014.03.rc-0.x86_64-ebs`
+    -   For testing you can use the default (possibly free)
+        `t1.micro` instance (more info on
+        [pricing](http://aws.amazon.com/en/ec2/pricing/)).
+    -   Click the `Next: Configure Instance Details`
+        button at the bottom right.
+
+2.  After a few more standard choices where defaults are probably ok,
+    your Amazon Linux instance should be running!
+3.  SSH to your instance to install Docker:
+    `ssh -i <path to your private key> ec2-user@<your public IP address>`
+
+4.  Once connected to the instance, type
+    `sudo yum install -y docker ; sudo service docker start`
+    to install and start Docker.
+
+## Standard Ubuntu Installation
+
+If you want a more hands-on installation, then you can follow the
+[*Ubuntu*](../ubuntulinux/#ubuntu-linux) instructions installing Docker
+on any EC2 instance running Ubuntu. Just follow Step 1 from [*Amazon
+QuickStart*](#amazonquickstart) to pick an image (or use one of your
+own) and skip the step with the *User Data*. Then continue with the
+[*Ubuntu*](../ubuntulinux/#ubuntu-linux) instructions.
+
+Continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.

+ 69 - 0
docs/sources/installation/archlinux.md

@@ -0,0 +1,69 @@
+page_title: Installation on Arch Linux
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: arch linux, virtualization, docker, documentation, installation
+
+# Arch Linux
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Note
+
+This is a community contributed installation path. The only ‘official’
+installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+installation path. This version may be out of date because it depends on
+some binaries to be updated and published.
+
+Installing on Arch Linux can be handled via the package in community:
+
+-   [docker](https://www.archlinux.org/packages/community/x86_64/docker/)
+
+or the following AUR package:
+
+-   [docker-git](https://aur.archlinux.org/packages/docker-git/)
+
+The docker package will install the latest tagged version of docker. The
+docker-git package will build from the current master branch.
+
+## Dependencies
+
+Docker depends on several packages which are specified as dependencies
+in the packages. The core dependencies are:
+
+-   bridge-utils
+-   device-mapper
+-   iproute2
+-   lxc
+-   sqlite
+
+## Installation
+
+For the normal package a simple
+
+    pacman -S docker
+
+is all that is needed.
+
+For the AUR package execute:
+
+    yaourt -S docker-git
+
+The instructions here assume **yaourt** is installed. See [Arch User
+Repository](https://wiki.archlinux.org/index.php/Arch_User_Repository#Installing_packages)
+for information on building and installing packages from the AUR if you
+have not done so before.
+
+## Starting Docker
+
+There is a systemd service unit created for docker. To start the docker
+service:
+
+    sudo systemctl start docker
+
+To start on system boot:
+
+    sudo systemctl enable docker

+ 104 - 0
docs/sources/installation/binaries.md

@@ -0,0 +1,104 @@
+page_title: Installation from Binaries
+page_description: This instruction set is meant for hackers who want to try out Docker on a variety of environments.
+page_keywords: binaries, installation, docker, documentation, linux
+
+# Binaries
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+**This instruction set is meant for hackers who want to try out Docker
+on a variety of environments.**
+
+Before following these directions, you should really check if a packaged
+version of Docker is already available for your distribution. We have
+packages for many distributions, and more keep showing up all the time!
+
+## Check runtime dependencies
+
+To run properly, docker needs the following software to be installed at
+runtime:
+
+-   iptables version 1.4 or later
+-   Git version 1.7 or later
+-   XZ Utils 4.9 or later
+-   a [properly
+    mounted](https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount)
+    cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount
+    point [is](https://github.com/dotcloud/docker/issues/2683)
+    [not](https://github.com/dotcloud/docker/issues/3485)
+    [sufficient](https://github.com/dotcloud/docker/issues/4568))
+
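As a quick sanity check for the cgroupfs requirement, you can list the
cgroup mounts the kernel currently exposes (a sketch; mount points vary
by distribution):

```shell
# List mounted cgroup hierarchies; an empty result means the cgroupfs
# mounts Docker needs are not set up yet.
grep cgroup /proc/mounts || echo "no cgroup mounts found"
```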
+## Check kernel dependencies
+
+Docker in daemon mode has specific kernel requirements. For details,
+check your distribution in [*Installation*](../#installation-list).
+
+In general, a 3.8 Linux kernel (or higher) is preferred, as some of the
+prior versions have known issues that are triggered by Docker.
+
+Note that Docker also has a client mode, which can run on virtually any
+Linux kernel (it even builds on OSX!).
+
+## Get the docker binary:
+
+    wget https://get.docker.io/builds/Linux/x86_64/docker-latest -O docker
+    chmod +x docker
+
+Note
+
+If you have trouble downloading the binary, you can also get the smaller
+compressed release file:
+[https://get.docker.io/builds/Linux/x86\_64/docker-latest.tgz](https://get.docker.io/builds/Linux/x86_64/docker-latest.tgz)
+
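The compressed release unpacks to `usr/local/bin/docker` relative to
where you extract it. The layout can be sketched with a simulated
archive (the real file comes from the URL above):

```shell
# Simulate the release layout locally, then list the archive contents.
mkdir -p demo/usr/local/bin && touch demo/usr/local/bin/docker
tar -C demo -czf docker-latest.tgz usr

tar -tzf docker-latest.tgz | grep docker   # usr/local/bin/docker
```

So after `tar -xzf docker-latest.tgz` on the real download, you would
move `usr/local/bin/docker` somewhere on your `PATH`.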
+## Run the docker daemon
+
+    # start the docker in daemon mode from the directory you unpacked
+    sudo ./docker -d &
+
+## Giving non-root access
+
+The `docker` daemon always runs as the root user,
+and since Docker version 0.5.2, the `docker` daemon
+binds to a Unix socket instead of a TCP port. By default that Unix
+socket is owned by the user *root*, and so, by default, you can access
+it with `sudo`.
+
+Starting in version 0.5.3, if you (or your Docker installer) create a
+Unix group called *docker* and add users to it, then the
+`docker` daemon will make the Unix socket readable and
+writable by the *docker* group when the daemon starts. The
+`docker` daemon must always run as the root user,
+but if you run the `docker` client as a user in the
+*docker* group then you don’t need to add `sudo` to
+all the client commands.
+
+Warning
+
+The *docker* group (or the group specified with the `-G`
+flag) is root-equivalent; see [*Docker Daemon Attack
+Surface*](../../articles/security/#dockersecurity-daemon) for details.
+
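A sketch of the check you can run after creating the group and logging
back in (the `groupadd`/`usermod` step itself needs root):

```shell
# After: sudo groupadd docker && sudo usermod -aG docker <your-user>
# and a fresh login, confirm the membership took effect:
if id -nG "$(id -un)" | grep -qw docker; then
    echo "docker group membership active"
else
    echo "not in the docker group yet (or re-login still needed)"
fi
```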
+## Upgrades
+
+To upgrade your manual installation of Docker, first kill the docker
+daemon:
+
+    killall docker
+
+Then follow the regular installation steps.
+
+## Run your first container!
+
+    # check your docker version
+    sudo ./docker version
+
+    # run a container and open an interactive shell in the container
+    sudo ./docker run -i -t ubuntu /bin/bash
+
+Continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.

+ 95 - 0
docs/sources/installation/cruxlinux.md

@@ -0,0 +1,95 @@
+page_title: Installation on CRUX Linux
+page_description: Docker installation on CRUX Linux.
+page_keywords: crux linux, virtualization, Docker, documentation, installation
+
+# CRUX Linux
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Note
+
+This is a community contributed installation path. The only ‘official’
+installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+installation path. This version may be out of date because it depends on
+some binaries to be updated and published.
+
+Installing on CRUX Linux can be handled via the ports from [James
+Mills](http://prologic.shortcircuit.net.au/):
+
+-   [docker](https://bitbucket.org/prologic/ports/src/tip/docker/)
+-   [docker-bin](https://bitbucket.org/prologic/ports/src/tip/docker-bin/)
+-   [docker-git](https://bitbucket.org/prologic/ports/src/tip/docker-git/)
+
+The `docker` port will install the latest tagged
+version of Docker. The `docker-bin` port will
+install the latest tagged version of Docker from upstream-built binaries.
+The `docker-git` package will build from the current
+master branch.
+
+## Installation
+
+For the time being (*until the CRUX Docker port(s) get into the official
+contrib repository*) you will need to install [James
+Mills’](https://bitbucket.org/prologic/ports) ports repository. You can
+do so via:
+
+Download the `httpup` file to
+`/etc/ports/`:
+
+    curl -q -o - "http://crux.nu/portdb/?a=getup&q=prologic" > /etc/ports/prologic.httpup
+
+Add `prtdir /usr/ports/prologic` to
+`/etc/prt-get.conf`:
+
+    vim /etc/prt-get.conf
+
+    # or:
+    echo "prtdir /usr/ports/prologic" >> /etc/prt-get.conf
+
+Update ports and prt-get cache:
+
+    ports -u
+    prt-get cache
+
+To install Docker (*and its dependencies*):
+
+    prt-get depinst docker
+
+Use `docker-bin` for the upstream binary or
+`docker-git` to build and install from the master
+branch from git.
+
+## Kernel Requirements
+
+To have a working **CRUX+Docker** host you must ensure your kernel has
+the necessary modules enabled for both LXC containers and the Docker
+daemon to function correctly.
+
+Please read the `README.rst`:
+
+    prt-get readme docker
+
+There is a `test_kernel_config.sh` script in the
+above ports which you can use to test your Kernel configuration:
+
+    cd /usr/ports/prologic/docker
+    ./test_kernel_config.sh /usr/src/linux/.config
+
+## Starting Docker
+
+There is a rc script created for Docker. To start the Docker service:
+
+    sudo su -
+    /etc/rc.d/docker start
+
+To start on system boot:
+
+-   Edit `/etc/rc.conf`
+-   Put `docker` into the `SERVICES=(...)`
+    array after `net`.
+

+ 67 - 0
docs/sources/installation/fedora.md

@@ -0,0 +1,67 @@
+page_title: Installation on Fedora
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: Docker, Docker documentation, Fedora, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux
+
+# Fedora
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Note
+
+This is a community contributed installation path. The only ‘official’
+installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+installation path. This version may be out of date because it depends on
+some binaries to be updated and published.
+
+Docker is available in **Fedora 19 and later**. Please note that, due to
+current Docker limitations, Docker can run only on the **64-bit**
+architecture.
+
+## Installation
+
+The `docker-io` package provides Docker on Fedora.
+
+If you have the (unrelated) `docker` package
+installed already, it will conflict with `docker-io`.
+There’s a [bug
+report](https://bugzilla.redhat.com/show_bug.cgi?id=1043676) filed for
+it. To proceed with `docker-io` installation on
+Fedora 19, please remove `docker` first.
+
+    sudo yum -y remove docker
+
+For Fedora 20 and later, the `wmdocker` package will
+provide the same functionality as `docker` and will
+also not conflict with `docker-io`.
+
+    sudo yum -y install wmdocker
+    sudo yum -y remove docker
+
+Install the `docker-io` package, which will install
+Docker on your host.
+
+    sudo yum -y install docker-io
+
+To update the `docker-io` package:
+
+    sudo yum -y update docker-io
+
+Now that it’s installed, let’s start the Docker daemon.
+
+    sudo systemctl start docker
+
+If we want Docker to start at boot, we should also:
+
+    sudo systemctl enable docker
+
+Now let’s verify that Docker is working.
+
+    sudo docker run -i -t fedora /bin/bash
+
+**Done!** Now continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.

+ 58 - 0
docs/sources/installation/frugalware.md

@@ -0,0 +1,58 @@
+page_title: Installation on FrugalWare
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: frugalware linux, virtualization, docker, documentation, installation
+
+# FrugalWare
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Note
+
+This is a community contributed installation path. The only ‘official’
+installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+installation path. This version may be out of date because it depends on
+some binaries to be updated and published.
+
+Installing on FrugalWare is handled via the official packages:
+
+-   [lxc-docker i686](http://www.frugalware.org/packages/200141)
+-   [lxc-docker x86\_64](http://www.frugalware.org/packages/200130)
+
+The lxc-docker package will install the latest tagged version of Docker.
+
+## Dependencies
+
+Docker depends on several packages which are specified as dependencies
+in the packages. The core dependencies are:
+
+-   systemd
+-   lvm2
+-   sqlite3
+-   libguestfs
+-   lxc
+-   iproute2
+-   bridge-utils
+
+## Installation
+
+A simple
+
+    pacman -S lxc-docker
+
+is all that is needed.
+
+## Starting Docker
+
+There is a systemd service unit created for Docker. To start Docker as
+a service:
+
+    sudo systemctl start lxc-docker
+
+To start on system boot:
+
+    sudo systemctl enable lxc-docker

+ 80 - 0
docs/sources/installation/gentoolinux.md

@@ -0,0 +1,80 @@
+page_title: Installation on Gentoo
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: gentoo linux, virtualization, docker, documentation, installation
+
+# Gentoo
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Note
+
+This is a community contributed installation path. The only ‘official’
+installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+installation path. This version may be out of date because it depends on
+some binaries to be updated and published.
+
+Installing Docker on Gentoo Linux can be accomplished using one of two
+methods. The first and best way if you’re looking for a stable
+experience is to use the official app-emulation/docker package directly
+in the portage tree.
+
+If you’re looking for a `-bin` ebuild, a live
+ebuild, or bleeding edge ebuild changes/fixes, the second installation
+method is to use the overlay provided at
+[https://github.com/tianon/docker-overlay](https://github.com/tianon/docker-overlay)
+which can be added using `app-portage/layman`. The
+most accurate and up-to-date documentation for properly installing and
+using the overlay can be found in [the overlay
+README](https://github.com/tianon/docker-overlay/blob/master/README.md#using-this-overlay).
+
+Note that sometimes there is a disparity between the latest version and
+what’s in the overlay, and between the latest version in the overlay and
+what’s in the portage tree. Please be patient, and the latest version
+should propagate shortly.
+
+## Installation
+
+The package should properly pull in all the necessary dependencies and
+prompt for all necessary kernel options. The ebuilds for 0.7+ include
+use flags to pull in the proper dependencies of the major storage
+drivers, with the "device-mapper" use flag being enabled by default,
+since that is the simplest installation path.
+
+    sudo emerge -av app-emulation/docker
+
+If any issues arise from this ebuild or the resulting binary, including
+and especially missing kernel configuration flags and/or dependencies,
+[open an issue on the docker-overlay
+repository](https://github.com/tianon/docker-overlay/issues) or ping
+tianon directly in the \#docker IRC channel on the freenode network.
+
+## Starting Docker
+
+Ensure that you are running a kernel that includes all the necessary
+modules and/or configuration for LXC (and optionally for device-mapper
+and/or AUFS, depending on the storage driver you’ve decided to use).
+
+### OpenRC
+
+To start the docker daemon:
+
+    sudo /etc/init.d/docker start
+
+To start on system boot:
+
+    sudo rc-update add docker default
+
+### systemd
+
+To start the docker daemon:
+
+    sudo systemctl start docker.service
+
+To start on system boot:
+
+    sudo systemctl enable docker.service

+ 64 - 0
docs/sources/installation/google.md

@@ -0,0 +1,64 @@
+page_title: Installation on Google Cloud Platform
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: Docker, Docker documentation, installation, google, Google Compute Engine, Google Cloud Platform
+
+# [Google Cloud Platform](https://cloud.google.com/)
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+## [Compute Engine](https://developers.google.com/compute) QuickStart for [Debian](https://www.debian.org)
+
+1.  Go to [Google Cloud Console](https://cloud.google.com/console) and
+    create a new Cloud Project with [Compute Engine
+    enabled](https://developers.google.com/compute/docs/signup).
+2.  Download and configure the [Google Cloud
+    SDK](https://developers.google.com/cloud/sdk/) to use your project
+    with the following commands:
+
+<!-- -->
+
+    $ curl https://dl.google.com/dl/cloudsdk/release/install_google_cloud_sdk.bash | bash
+    $ gcloud auth login
+    Enter a cloud project id (or leave blank to not set): <google-cloud-project-id>
+
+3.  Start a new instance, select a zone close to you and the desired
+    instance size:
+
+<!-- -->
+
+    $ gcutil addinstance docker-playground --image=backports-debian-7
+    1: europe-west1-a
+    ...
+    4: us-central1-b
+    >>> <zone-index>
+    1: machineTypes/n1-standard-1
+    ...
+    12: machineTypes/g1-small
+    >>> <machine-type-index>
+
+4.  Connect to the instance using SSH:
+
+<!-- -->
+
+    $ gcutil ssh docker-playground
+    docker-playground:~$
+
+5.  Install the latest Docker release and configure it to start when the
+    instance boots:
+
+<!-- -->
+
+    docker-playground:~$ curl get.docker.io | bash
+    docker-playground:~$ sudo update-rc.d docker defaults
+
+6.  Start a new container:
+
+<!-- -->
+
+    docker-playground:~$ sudo docker run busybox echo 'docker on GCE \o/'
+    docker on GCE \o/

+ 180 - 0
docs/sources/installation/mac.md

@@ -0,0 +1,180 @@
+page_title: Installation on Mac OS X 10.6 Snow Leopard
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: Docker, Docker documentation, requirements, virtualbox, ssh, linux, os x, osx, mac
+
+# Mac OS X
+
+Note
+
+These instructions are available with the new release of Docker (version
+0.8). However, they are subject to change.
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Docker is supported on Mac OS X 10.6 "Snow Leopard" or newer.
+
+## How To Install Docker On Mac OS X
+
+### VirtualBox
+
+Docker on OS X needs VirtualBox to run. To begin with, head over to
+[VirtualBox Download Page](https://www.virtualbox.org/wiki/Downloads)
+and get the tool for `OS X hosts x86/amd64`.
+
+Once the download is complete, open the disk image, run the setup file
+(i.e. `VirtualBox.pkg`) and install VirtualBox. Do
+not simply copy the package without running the installer.
+
+### boot2docker
+
+[boot2docker](https://github.com/boot2docker/boot2docker) provides a
+handy script to easily manage the VM running the `docker`
+daemon. It also takes care of downloading the OS
+image that the VM uses.
+
+#### With Homebrew
+
+If you are using Homebrew on your machine, simply run the following
+command to install `boot2docker`:
+
+    brew install boot2docker
+
+#### Manual installation
+
+Open up a new terminal window, if you have not already.
+
+Run the following commands to get boot2docker:
+
+    # Enter the installation directory
+    cd ~/bin
+
+    # Get the file
+    curl https://raw.github.com/boot2docker/boot2docker/master/boot2docker > boot2docker
+
+    # Mark it executable
+    chmod +x boot2docker
+
+### Docker OS X Client
+
+The `docker` daemon is accessed using the
+`docker` client.
+
+#### With Homebrew
+
+Run the following command to install the `docker`
+client:
+
+    brew install docker
+
+#### Manual installation
+
+Run the following commands to get it downloaded and set up:
+
+    # Get the docker client file
+    DIR=$(mktemp -d ${TMPDIR:-/tmp}/dockerdl.XXXXXXX) && \
+    curl -f -o $DIR/ld.tgz https://get.docker.io/builds/Darwin/x86_64/docker-latest.tgz && \
+    gunzip $DIR/ld.tgz && \
+    tar xvf $DIR/ld.tar -C $DIR/ && \
+    cp $DIR/usr/local/bin/docker ./docker
+
+    # Set the environment variable for the docker daemon
+    export DOCKER_HOST=tcp://127.0.0.1:4243
+
+    # Copy the executable file
+    sudo cp docker /usr/local/bin/
+
+And that’s it! Let’s check out how to use it.
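The `DOCKER_HOST` variable above is how the client finds the boot2docker
daemon. A sketch for making it survive new terminal sessions (the
profile path is an assumption; adjust for your shell):

```shell
# Point the client at the boot2docker VM's forwarded daemon port.
export DOCKER_HOST=tcp://127.0.0.1:4243
echo "$DOCKER_HOST"   # tcp://127.0.0.1:4243

# To persist it, append the export to your shell profile, e.g.:
#   echo 'export DOCKER_HOST=tcp://127.0.0.1:4243' >> ~/.profile
```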
+
+## How To Use Docker On Mac OS X
+
+### The `docker` daemon (via boot2docker)
+
+Inside the `~/bin` directory, run the following
+commands:
+
+    # Initiate the VM
+    ./boot2docker init
+
+    # Run the VM (the docker daemon)
+    ./boot2docker up
+
+    # To see all available commands:
+    ./boot2docker
+
+    # Usage ./boot2docker {init|start|up|pause|stop|restart|status|info|delete|ssh|download}
+
+### The `docker` client
+
+Once the VM with the `docker` daemon is up, you can
+use the `docker` client just like any other
+application.
+
+    docker version
+    # Client version: 0.7.6
+    # Go version (client): go1.2
+    # Git commit (client): bc3b2ec
+    # Server version: 0.7.5
+    # Git commit (server): c348c04
+    # Go version (server): go1.2
+
+### Forwarding VM Port Range to Host
+
+If we take the port range that Docker uses by default with the -P option
+(49000-49900), and forward the same range from the host to the VM, we’ll
+be able to interact with our containers as if they were running locally:
+
+    # vm must be powered off
+    for i in {49000..49900}; do
+     VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port$i,tcp,,$i,,$i";
+     VBoxManage modifyvm "boot2docker-vm" --natpf1 "udp-port$i,udp,,$i,,$i";
+    done
+
+### SSH-ing Into The VM
+
+If you feel the need to connect to the VM, you can simply run:
+
+    ./boot2docker ssh
+
+    # User: docker
+    # Pwd:  tcuser
+
+You can now continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.
+
+## Learn More
+
+### boot2docker:
+
+See the GitHub page for
+[boot2docker](https://github.com/boot2docker/boot2docker).
+
+### If SSH complains about keys:
+
+    ssh-keygen -R '[localhost]:2022'
+
+### Upgrading to a newer release of boot2docker
+
+To upgrade an initialised VM, you can use the following three commands.
+Your persistent disk will not be changed, so you won’t lose your images
+and containers:
+
+    ./boot2docker stop
+    ./boot2docker download
+    ./boot2docker start
+
+### About the way Docker works on Mac OS X:
+
+Docker has two key components: the `docker` daemon
+and the `docker` client. The client works by sending
+commands to the daemon. To do its magic, the daemon
+makes use of some Linux kernel features (e.g. LXC, namespaces, etc.),
+which are not supported by OS X. Therefore, the solution for getting
+Docker to run on OS X is to run the daemon inside a lightweight
+virtual machine. To simplify things, Docker comes with a bash
+script (boot2docker) that makes this whole process as easy as
+possible.

+ 65 - 0
docs/sources/installation/openSUSE.md

@@ -0,0 +1,65 @@
+page_title: Installation on openSUSE
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: openSUSE, virtualbox, docker, documentation, installation
+
+# openSUSE
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Note
+
+This is a community contributed installation path. The only ‘official’
+installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+installation path. This version may be out of date because it depends on
+some binaries to be updated and published.
+
+Docker is available in **openSUSE 12.3 and later**. Please note that,
+due to current Docker limitations, Docker can run only on the **64-bit**
+architecture.
+
+## Installation
+
+The `docker` package from the [Virtualization
+project](https://build.opensuse.org/project/show/Virtualization) on
+[OBS](https://build.opensuse.org/) provides Docker on openSUSE.
+
+To proceed with Docker installation please add the right Virtualization
+repository.
+
+    # openSUSE 12.3
+    sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_12.3/ Virtualization
+
+    # openSUSE 13.1
+    sudo zypper ar -f http://download.opensuse.org/repositories/Virtualization/openSUSE_13.1/ Virtualization
+
+Install the Docker package.
+
+    sudo zypper in docker
+
+It’s also possible to install Docker using openSUSE’s 1-click install.
+Just visit [this page](http://software.opensuse.org/package/docker),
+select your openSUSE version and click on the installation link. This
+will add the right repository to your system and install
+the docker package.
+
+Now that it’s installed, let’s start the Docker daemon.
+
+    sudo systemctl start docker
+
+If we want Docker to start at boot, we should also:
+
+    sudo systemctl enable docker
+
+The docker package creates a new group named docker. Users other than
+root need to be members of this group in order to interact with the
+Docker daemon.
+
+    sudo usermod -aG docker <username>
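+
+To confirm the change took effect, a quick sketch (log out and back in
+first, or the new group membership won't show up):
+
+    # List the current user's groups; "docker" should appear.
+    id -nG | grep -w docker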
+
+**Done!** Now continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.

+ 88 - 0
docs/sources/installation/rackspace.md

@@ -0,0 +1,88 @@
+page_title: Installation on Rackspace Cloud
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: Rackspace Cloud, installation, docker, linux, ubuntu
+
+# Rackspace Cloud
+
+Note
+
+This is a community contributed installation path. The only ‘official’
+installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+installation path. This version may be out of date because it depends on
+some binaries to be updated and published.
+
+Installing Docker on Ubuntu provided by Rackspace is pretty
+straightforward, and you should mostly be able to follow the
+[*Ubuntu*](../ubuntulinux/#ubuntu-linux) installation guide.
+
+**However, there is one caveat:**
+
+If you are using a Linux distribution that does not already ship with
+the 3.8 kernel, you will need to install it, and this is a little more
+difficult on Rackspace.
+
+Rackspace boots their servers using grub’s `menu.lst`
+and does not like non-‘virtual’ kernel packages (e.g. Xen
+compatible) there, although they do work. This results in
+`update-grub` not having the expected result, so
+you will need to set the kernel manually.
+
+**Do not attempt this on a production machine!**
+
+    # update apt
+    apt-get update
+
+    # install the new kernel
+    apt-get install linux-generic-lts-raring
+
+Great, now you have the kernel installed in `/boot/`,
+next you need to make it boot next time.
+
+    # find the exact names
+    find /boot/ -name '*3.8*'
+
+    # this should return some results
+
+Now you need to manually edit `/boot/grub/menu.lst`.
+You will find a section at the bottom with the existing options. Copy
+the top one and substitute the new kernel into it. Make sure the new
+kernel is on top, and double check that the kernel and initrd lines
+point to the right files.
+
+    # now edit /boot/grub/menu.lst
+    vi /boot/grub/menu.lst
+
+It will probably look something like this:
+
+    ## ## End Default Options ##
+
+    title              Ubuntu 12.04.2 LTS, kernel 3.8.x generic
+    root               (hd0)
+    kernel             /boot/vmlinuz-3.8.0-19-generic root=/dev/xvda1 ro quiet splash console=hvc0
+    initrd             /boot/initrd.img-3.8.0-19-generic
+
+    title              Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual
+    root               (hd0)
+    kernel             /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash console=hvc0
+    initrd             /boot/initrd.img-3.2.0-38-virtual
+
+    title              Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual (recovery mode)
+    root               (hd0)
+    kernel             /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash  single
+    initrd             /boot/initrd.img-3.2.0-38-virtual
+
+Reboot the server (either via command line or console)
+
+    # reboot
+
+Verify the kernel was updated
+
+    uname -a
+    # Linux docker-12-04 3.8.0-19-generic #30~precise1-Ubuntu SMP Wed May 1 22:26:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
+
+    # nice! 3.8.
+
+Now you can finish with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+instructions.

+ 80 - 0
docs/sources/installation/rhel.md

@@ -0,0 +1,80 @@
+page_title: Installation on Red Hat Enterprise Linux
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: Docker, Docker documentation, requirements, linux, rhel, centos
+
+# Red Hat Enterprise Linux
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Note
+
+This is a community contributed installation path. The only ‘official’
+installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+installation path. This version may be out of date because it depends on
+some binaries to be updated and published.
+
+Docker is available for **RHEL** on EPEL. These instructions should work
+for both RHEL and CentOS. They will likely work for other binary
+compatible EL6 distributions as well, but they haven’t been tested.
+
+Please note that this package is part of [Extra Packages for Enterprise
+Linux (EPEL)](https://fedoraproject.org/wiki/EPEL), a community effort
+to create and maintain additional packages for the RHEL distribution.
+
+Also note that, due to current limitations, Docker runs
+only on the **64 bit** architecture.
+
+You will need [RHEL
+6.5](https://access.redhat.com/site/articles/3078#RHEL6) or higher, with
+a RHEL 6 kernel version of 2.6.32-431 or higher, as it includes specific
+kernel fixes that allow Docker to work.
+
+## Installation
+
+Firstly, you need to install the EPEL repository. Please follow the
+[EPEL installation
+instructions](https://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F).
+
+The `docker-io` package provides Docker on EPEL.
+
+If you already have the (unrelated) `docker` package
+installed, it will conflict with `docker-io`.
+There’s a [bug
+report](https://bugzilla.redhat.com/show_bug.cgi?id=1043676) filed for
+it. To proceed with `docker-io` installation, please
+remove `docker` first.
+
+Next, let’s install the `docker-io` package which
+will install Docker on our host.
+
+    sudo yum -y install docker-io
+
+To update the `docker-io` package:
+
+    sudo yum -y update docker-io
+
+Now that it’s installed, let’s start the Docker daemon.
+
+    sudo service docker start
+
+If we want Docker to start at boot, we should also:
+
+    sudo chkconfig docker on
+
+Now let’s verify that Docker is working.
+
+    sudo docker run -i -t fedora /bin/bash
+
+**Done!** Now continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.
+
+## Issues?
+
+If you have any issues - please report them directly in the [Red Hat
+Bugzilla for docker-io
+component](https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora%20EPEL&component=docker-io).

+ 37 - 0
docs/sources/installation/softlayer.md

@@ -0,0 +1,37 @@
+page_title: Installation on IBM SoftLayer 
+page_description: Please note this project is currently under heavy development. It should not be used in production. 
+page_keywords: IBM SoftLayer, virtualization, cloud, docker, documentation, installation
+
+# IBM SoftLayer
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+## IBM SoftLayer QuickStart
+
+1.  Create an [IBM SoftLayer
+    account](https://www.softlayer.com/cloudlayer/).
+2.  Log in to the [SoftLayer
+    Console](https://control.softlayer.com/devices/).
+3.  Go to [Order Hourly Computing Instance
+    Wizard](https://manage.softlayer.com/Sales/orderHourlyComputingInstance)
+    on your SoftLayer Console.
+4.  Create a new *CloudLayer Computing Instance* (CCI) using the default
+    values for all the fields and choose:
+
+    -   *First Available* as `Datacenter` and
+    -   *Ubuntu Linux 12.04 LTS Precise Pangolin - Minimal Install (64 bit)*
+        as `Operating System`.
+
+5.  Click the *Continue Your Order* button at the bottom right and
+    select *Go to checkout*.
+6.  Insert the required *User Metadata* and place the order.
+7.  Then continue with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux)
+    instructions.
+
+Continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.

+ 330 - 0
docs/sources/installation/ubuntulinux.md

@@ -0,0 +1,330 @@
+page_title: Installation on Ubuntu
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: Docker, Docker documentation, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux
+
+# Ubuntu
+
+Warning
+
+These instructions have changed for 0.6. If you are upgrading from an
+earlier version, you will need to follow them again.
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+Docker is supported on the following versions of Ubuntu:
+
+-   [*Ubuntu Precise 12.04 (LTS) (64-bit)*](#ubuntu-precise)
+-   [*Ubuntu Raring 13.04 and Saucy 13.10 (64
+    bit)*](#ubuntu-raring-saucy)
+
+Please read [*Docker and UFW*](#ufw), if you plan to use [UFW
+(Uncomplicated Firewall)](https://help.ubuntu.com/community/UFW)
+
+## Ubuntu Precise 12.04 (LTS) (64-bit)
+
+This installation path should work at all times.
+
+### Dependencies
+
+**Linux kernel 3.8**
+
+Due to a bug in LXC, Docker works best on the 3.8 kernel. Precise comes
+with a 3.2 kernel, so we need to upgrade it. The kernel you’ll install
+when following these steps comes with AUFS built in. We also include the
+generic headers to enable packages that depend on them, like ZFS and the
+VirtualBox guest additions. If you didn’t install the headers for your
+"precise" kernel, then you can skip these headers for the "raring"
+kernel. But it is safer to include them if you’re not sure.
+
+    # install the backported kernel
+    sudo apt-get update
+    sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
+
+    # reboot
+    sudo reboot
+
+### Installation
+
+Warning
+
+These instructions have changed for 0.6. If you are upgrading from an
+earlier version, you will need to follow them again.
+
+Docker is available as a Debian package, which makes installation easy.
+**See the** [*Mirrors*](#installmirrors) **section below if you are not
+in the United States.** Other sources of the Debian packages may be
+faster for you to install.
+
+First, check that your APT system can deal with `https`
+URLs: the file `/usr/lib/apt/methods/https`
+should exist. If it doesn’t, you need to install the package
+`apt-transport-https`.
+
+    [ -e /usr/lib/apt/methods/https ] || {
+      apt-get update
+      apt-get install apt-transport-https
+    }
+
+Then, add the Docker repository key to your local keychain.
+
+    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
+
+Add the Docker repository to your apt sources list, update and install
+the `lxc-docker` package.
+
+*You may receive a warning that the package isn’t trusted. Answer yes to
+continue installation.*
+
+    sudo sh -c "echo deb https://get.docker.io/ubuntu docker main\
+    > /etc/apt/sources.list.d/docker.list"
+    sudo apt-get update
+    sudo apt-get install lxc-docker
+
+Note
+
+There is also a simple `curl` script available to
+help with this process.
+
+    curl -s https://get.docker.io/ubuntu/ | sudo sh
+
+Now verify that the installation has worked by downloading the
+`ubuntu` image and launching a container.
+
+    sudo docker run -i -t ubuntu /bin/bash
+
+Type `exit` to exit the container.
+
+**Done!** Now continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.
+
+## Ubuntu Raring 13.04 and Saucy 13.10 (64 bit)
+
+These instructions cover both Ubuntu Raring 13.04 and Saucy 13.10.
+
+### Dependencies
+
+**Optional AUFS filesystem support**
+
+Ubuntu Raring already comes with the 3.8 kernel, so we don’t need to
+install it. However, not all systems have AUFS filesystem support
+enabled. AUFS support is optional as of version 0.7, but it’s still
+available as a driver and we recommend using it if you can.
+
+To make sure AUFS is installed, run the following commands:
+
+    sudo apt-get update
+    sudo apt-get install linux-image-extra-`uname -r`
+
+### Installation
+
+Docker is available as a Debian package, which makes installation easy.
+
+Warning
+
+Please note that these instructions have changed for 0.6. If you are
+upgrading from an earlier version, you will need to follow them again.
+
+First add the Docker repository key to your local keychain.
+
+    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
+
+Add the Docker repository to your apt sources list, update and install
+the `lxc-docker` package.
+
+    sudo sh -c "echo deb http://get.docker.io/ubuntu docker main\
+    > /etc/apt/sources.list.d/docker.list"
+    sudo apt-get update
+    sudo apt-get install lxc-docker
+
+Now verify that the installation has worked by downloading the
+`ubuntu` image and launching a container.
+
+    sudo docker run -i -t ubuntu /bin/bash
+
+Type `exit` to exit the container.
+
+**Done!** Now continue with the [*Hello
+World*](../../examples/hello_world/#hello-world) example.
+
+### Giving non-root access
+
+The `docker` daemon always runs as the root user,
+and since Docker version 0.5.2, the `docker` daemon
+binds to a Unix socket instead of a TCP port. By default that Unix
+socket is owned by the user *root*, and so, by default, you can access
+it with `sudo`.
+
+Starting in version 0.5.3, if you (or your Docker installer) create a
+Unix group called *docker* and add users to it, then the
+`docker` daemon will make the ownership of the Unix
+socket read/writable by the *docker* group when the daemon starts. The
+`docker` daemon must always run as the root user,
+but if you run the `docker` client as a user in the
+*docker* group then you don’t need to add `sudo` to
+all the client commands. As of 0.9.0, you can specify that a group other
+than `docker` should own the Unix socket with the
+`-G` option.
+
+Warning
+
+The *docker* group (or the group specified with `-G`) is
+root-equivalent; see [*Docker Daemon Attack
+Surface*](../../articles/security/#dockersecurity-daemon) for details.
+
+**Example:**
+
+    # Add the docker group if it doesn't already exist.
+    sudo groupadd docker
+
+    # Add the connected user "${USER}" to the docker group.
+    # Change the user name to match your preferred user.
+    # You may have to logout and log back in again for
+    # this to take effect.
+    sudo gpasswd -a ${USER} docker
+
+    # Restart the Docker daemon.
+    sudo service docker restart
+
+### Upgrade
+
+To install the latest version of Docker, use the standard
+`apt-get` method:
+
+    # update your sources list
+    sudo apt-get update
+
+    # install the latest
+    sudo apt-get install lxc-docker
+
+## Memory and Swap Accounting
+
+If you want to enable memory and swap accounting, you must add the
+following command-line parameters to your kernel:
+
+    cgroup_enable=memory swapaccount=1
+
+On systems using GRUB (which is the default for Ubuntu), you can add
+those parameters by editing `/etc/default/grub` and
+extending `GRUB_CMDLINE_LINUX`. Look for the
+following line:
+
+    GRUB_CMDLINE_LINUX=""
+
+And replace it by the following one:
+
+    GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
+
+Then run `sudo update-grub`, and reboot.
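+
+The same edit can be scripted; a sketch using `sed`, assuming the stock
+empty `GRUB_CMDLINE_LINUX=""` line (keep a backup of the file):
+
+    # Back up, then add the cgroup parameters in place.
+    sudo cp /etc/default/grub /etc/default/grub.bak
+    sudo sed -i 's/^GRUB_CMDLINE_LINUX=""$/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"/' /etc/default/grub
+
+As before, finish with `sudo update-grub` and a reboot.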
+
+These parameters will help you get rid of the following warnings:
+
+    WARNING: Your kernel does not support cgroup swap limit.
+    WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.
+
+## Troubleshooting
+
+On Linux Mint, the `cgroup-lite` package is not
+installed by default. Before Docker will work correctly, you will need
+to install this via:
+
+    sudo apt-get update && sudo apt-get install cgroup-lite
+
+## Docker and UFW
+
+Docker uses a bridge to manage container networking. By default, UFW
+drops all forwarding traffic. As a result you will need to enable UFW
+forwarding:
+
+    sudo nano /etc/default/ufw
+    ----
+    # Change:
+    # DEFAULT_FORWARD_POLICY="DROP"
+    # to
+    DEFAULT_FORWARD_POLICY="ACCEPT"
+
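+The same change can be made non-interactively; a sketch using `sed`,
+assuming the stock `DEFAULT_FORWARD_POLICY="DROP"` line (keep a backup
+of the file):
+
+    # Back up, then switch the forward policy in place.
+    sudo cp /etc/default/ufw /etc/default/ufw.bak
+    sudo sed -i 's/^DEFAULT_FORWARD_POLICY="DROP"$/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
+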
+Then reload UFW:
+
+    sudo ufw reload
+
+UFW’s default set of rules denies all incoming traffic. If you want to
+be able to reach your containers from another host then you should allow
+incoming connections on the Docker port (default 4243):
+
+    sudo ufw allow 4243/tcp
+
+## Docker and local DNS server warnings
+
+Systems running Ubuntu or an Ubuntu derivative on the desktop
+use 127.0.0.1 as the default nameserver in `/etc/resolv.conf`.
+NetworkManager sets up dnsmasq to use the real DNS servers of the
+connection and sets `nameserver 127.0.0.1` in `/etc/resolv.conf`.
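+
+A quick way to check whether your machine is affected:
+
+    # On affected desktops this prints the loopback resolver,
+    # which containers cannot reach.
+    grep '^nameserver' /etc/resolv.conf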
+
+When starting containers on these desktop machines, users will see a
+warning:
+
+    WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers can't use it. Using default external servers : [8.8.8.8 8.8.4.4]
+
+This warning is shown because the containers can’t use the local DNS
+nameserver and Docker will default to using an external nameserver.
+
+This can be worked around by specifying a DNS server to be used by the
+Docker daemon for the containers:
+
+    sudo nano /etc/default/docker
+    ---
+    # Add:
+    DOCKER_OPTS="--dns 8.8.8.8"
+    # 8.8.8.8 could be replaced with a local DNS server, such as 192.168.1.1
+    # multiple DNS servers can be specified: --dns 8.8.8.8 --dns 192.168.1.1
+
+The Docker daemon has to be restarted:
+
+    sudo restart docker
+
+Warning
+
+If you’re doing this on a laptop which connects to various networks,
+make sure to choose a public DNS server.
+
+An alternative solution involves disabling dnsmasq in NetworkManager by
+following these steps:
+
+    sudo nano /etc/NetworkManager/NetworkManager.conf
+    ----
+    # Change:
+    dns=dnsmasq
+    # to
+    #dns=dnsmasq
+
+NetworkManager and Docker need to be restarted afterwards:
+
+    sudo restart network-manager
+    sudo restart docker
+
+Warning
+
+This might make DNS resolution slower on some networks.
+
+## Mirrors
+
+You should `ping get.docker.io` and compare the
+latency to the following mirrors, and pick whichever one is best for
+you.
+
+### Yandex
+
+[Yandex](http://yandex.ru/) in Russia is mirroring the Docker Debian
+packages, updating every 6 hours. Substitute
+`http://mirror.yandex.ru/mirrors/docker/` for
+`http://get.docker.io/ubuntu` in the instructions
+above. For example:
+
+    sudo sh -c "echo deb http://mirror.yandex.ru/mirrors/docker/ docker main\
+    > /etc/apt/sources.list.d/docker.list"
+    sudo apt-get update
+    sudo apt-get install lxc-docker

+ 72 - 0
docs/sources/installation/windows.md

@@ -0,0 +1,72 @@
+page_title: Installation on Windows
+page_description: Please note this project is currently under heavy development. It should not be used in production.
+page_keywords: Docker, Docker documentation, Windows, requirements, virtualbox, boot2docker
+
+# Windows
+
+Docker can run on Windows using a virtualization platform like
+VirtualBox. A Linux distribution is run inside a virtual machine and
+that’s where Docker will run.
+
+## Installation
+
+Note
+
+Docker is still under heavy development! We don’t recommend using it in
+production yet, but we’re getting closer with each release. Please see
+our blog post, ["Getting to Docker
+1.0"](http://blog.docker.io/2013/08/getting-to-docker-1-0/)
+
+1.  Install virtualbox from
+    [https://www.virtualbox.org](https://www.virtualbox.org) - or follow
+    this
+    [tutorial](http://www.slideshare.net/julienbarbier42/install-virtualbox-on-windows-7).
+2.  Download the latest boot2docker.iso from
+    [https://github.com/boot2docker/boot2docker/releases](https://github.com/boot2docker/boot2docker/releases).
+3.  Start VirtualBox.
+4.  Create a new Virtual machine with the following settings:
+
+    -   Name: boot2docker
+    -   Type: Linux
+    -   Version: Linux 2.6 (64 bit)
+    -   Memory size: 1024 MB
+    -   Hard drive: Do not add a virtual hard drive
+
+5.  Open the settings of the virtual machine:
+
+    5.1. Go to Storage.
+
+    5.2. Click the empty slot below Controller: IDE.
+
+    5.3. Click the disc icon on the right of IDE Secondary Master.
+
+    5.4. Click Choose a virtual CD/DVD disk file.
+
+6.  Browse to the path where you’ve saved the boot2docker.iso, select
+    the boot2docker.iso and click Open.
+
+7.  Click OK on the Settings dialog to save the changes and close the
+    window.
+
+8.  Start the virtual machine by clicking the green start button.
+
+9.  The boot2docker virtual machine should boot now.
+
+## Running Docker
+
+boot2docker will log you in automatically so you can start using Docker
+right away.
+
+Let’s try the “hello world” example. Run
+
+    docker run busybox echo hello world
+
+This will download the small busybox image and print hello world.
+
+## Observations
+
+### Persistent storage
+
+The virtual machine created above lacks any persistent data storage. All
+images and containers will be lost when shutting down or rebooting the
+VM.

+ 9 - 0
docs/sources/reference.md

@@ -0,0 +1,9 @@
+# Reference Manual
+
+## Contents:
+
+- [Commands](commandline/)
+- [Dockerfile Reference](builder/)
+- [Docker Run Reference](run/)
+- [APIs](api/)
+

+ 100 - 0
docs/sources/reference/api.md

@@ -0,0 +1,100 @@
+# APIs
+
+Your programs and scripts can access Docker’s functionality via these
+interfaces:
+
+-   [Registry & Index Spec](registry_index_spec/)
+    -   [1. The 3 roles](registry_index_spec/#the-3-roles)
+        -   [1.1 Index](registry_index_spec/#index)
+        -   [1.2 Registry](registry_index_spec/#registry)
+        -   [1.3 Docker](registry_index_spec/#docker)
+
+    -   [2. Workflow](registry_index_spec/#workflow)
+        -   [2.1 Pull](registry_index_spec/#pull)
+        -   [2.2 Push](registry_index_spec/#push)
+        -   [2.3 Delete](registry_index_spec/#delete)
+
+    -   [3. How to use the Registry in standalone
+        mode](registry_index_spec/#how-to-use-the-registry-in-standalone-mode)
+        -   [3.1 Without an
+            Index](registry_index_spec/#without-an-index)
+        -   [3.2 With an Index](registry_index_spec/#with-an-index)
+
+    -   [4. The API](registry_index_spec/#the-api)
+        -   [4.1 Images](registry_index_spec/#images)
+        -   [4.2 Users](registry_index_spec/#users)
+        -   [4.3 Tags (Registry)](registry_index_spec/#tags-registry)
+        -   [4.4 Images (Index)](registry_index_spec/#images-index)
+        -   [4.5 Repositories](registry_index_spec/#repositories)
+
+    -   [5. Chaining
+        Registries](registry_index_spec/#chaining-registries)
+    -   [6. Authentication &
+        Authorization](registry_index_spec/#authentication-authorization)
+        -   [6.1 On the Index](registry_index_spec/#on-the-index)
+        -   [6.2 On the Registry](registry_index_spec/#on-the-registry)
+
+    -   [7 Document Version](registry_index_spec/#document-version)
+
+-   [Docker Registry API](registry_api/)
+    -   [1. Brief introduction](registry_api/#brief-introduction)
+    -   [2. Endpoints](registry_api/#endpoints)
+        -   [2.1 Images](registry_api/#images)
+        -   [2.2 Tags](registry_api/#tags)
+        -   [2.3 Repositories](registry_api/#repositories)
+        -   [2.4 Status](registry_api/#status)
+
+    -   [3 Authorization](registry_api/#authorization)
+
+-   [Docker Index API](index_api/)
+    -   [1. Brief introduction](index_api/#brief-introduction)
+    -   [2. Endpoints](index_api/#endpoints)
+        -   [2.1 Repository](index_api/#repository)
+        -   [2.2 Users](index_api/#users)
+        -   [2.3 Search](index_api/#search)
+
+-   [Docker Remote API](docker_remote_api/)
+    -   [1. Brief introduction](docker_remote_api/#brief-introduction)
+    -   [2. Versions](docker_remote_api/#versions)
+        -   [v1.11](docker_remote_api/#v1-11)
+        -   [v1.10](docker_remote_api/#v1-10)
+        -   [v1.9](docker_remote_api/#v1-9)
+        -   [v1.8](docker_remote_api/#v1-8)
+        -   [v1.7](docker_remote_api/#v1-7)
+        -   [v1.6](docker_remote_api/#v1-6)
+        -   [v1.5](docker_remote_api/#v1-5)
+        -   [v1.4](docker_remote_api/#v1-4)
+        -   [v1.3](docker_remote_api/#v1-3)
+        -   [v1.2](docker_remote_api/#v1-2)
+        -   [v1.1](docker_remote_api/#v1-1)
+        -   [v1.0](docker_remote_api/#v1-0)
+
+-   [Docker Remote API Client Libraries](remote_api_client_libraries/)
+-   [docker.io OAuth API](docker_io_oauth_api/)
+    -   [1. Brief introduction](docker_io_oauth_api/#brief-introduction)
+    -   [2. Register Your
+        Application](docker_io_oauth_api/#register-your-application)
+    -   [3. Endpoints](docker_io_oauth_api/#endpoints)
+        -   [3.1 Get an Authorization
+            Code](docker_io_oauth_api/#get-an-authorization-code)
+        -   [3.2 Get an Access
+            Token](docker_io_oauth_api/#get-an-access-token)
+        -   [3.3 Refresh a Token](docker_io_oauth_api/#refresh-a-token)
+
+    -   [4. Use an Access Token with the
+        API](docker_io_oauth_api/#use-an-access-token-with-the-api)
+
+-   [docker.io Accounts API](docker_io_accounts_api/)
+    -   [1. Endpoints](docker_io_accounts_api/#endpoints)
+        -   [1.1 Get a single
+            user](docker_io_accounts_api/#get-a-single-user)
+        -   [1.2 Update a single
+            user](docker_io_accounts_api/#update-a-single-user)
+        -   [1.3 List email addresses for a
+            user](docker_io_accounts_api/#list-email-addresses-for-a-user)
+        -   [1.4 Add email address for a
+            user](docker_io_accounts_api/#add-email-address-for-a-user)
+        -   [1.5 Update an email address for a
+            user](docker_io_accounts_api/#update-an-email-address-for-a-user)
+        -   [1.6 Delete email address for a
+            user](docker_io_accounts_api/#delete-email-address-for-a-user)

+ 355 - 0
docs/sources/reference/api/docker_io_accounts_api.md

@@ -0,0 +1,355 @@
+page_title: docker.io Accounts API
+page_description: API Documentation for docker.io accounts.
+page_keywords: API, Docker, accounts, REST, documentation
+
+# docker.io Accounts API
+
+## 1. Endpoints
+
+### 1.1 Get a single user
+
+ `GET /api/v1.1/users/:username/`
+:   Get profile info for the specified user.
+
+    Parameters:
+
+    -   **username** – username of the user whose profile info is being
+        requested.
+
+    Request Headers:
+
+     
+
+    -   **Authorization** – required authentication credentials of
+        either type HTTP Basic or OAuth Bearer Token.
+
+    Status Codes:
+
+    -   **200** – success, user data returned.
+    -   **401** – authentication error.
+    -   **403** – permission error, authenticated user must be the user
+        whose data is being requested, OAuth access tokens must have
+        `profile_read` scope.
+    -   **404** – the specified username does not exist.
+
+    **Example request**:
+
+        GET /api/v1.1/users/janedoe/ HTTP/1.1
+        Host: www.docker.io
+        Accept: application/json
+        Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "id": 2,
+            "username": "janedoe",
+            "url": "https://www.docker.io/api/v1.1/users/janedoe/",
+            "date_joined": "2014-02-12T17:58:01.431312Z",
+            "type": "User",
+            "full_name": "Jane Doe",
+            "location": "San Francisco, CA",
+            "company": "Success, Inc.",
+            "profile_url": "https://docker.io/",
+            "gravatar_url": "https://secure.gravatar.com/avatar/0212b397124be4acd4e7dea9aa357.jpg?s=80&r=g&d=mm",
+            "email": "jane.doe@example.com",
+            "is_active": true
+        }
+
+### 1.2 Update a single user
+
+ `PATCH /api/v1.1/users/:username/`
+:   Update profile info for the specified user.
+
+    Parameters:
+
+    -   **username** – username of the user whose profile info is being
+        updated.
+
+    Json Parameters:
+
+     
+
+    -   **full\_name** (*string*) – (optional) the new name of the user.
+    -   **location** (*string*) – (optional) the new location.
+    -   **company** (*string*) – (optional) the new company of the user.
+    -   **profile\_url** (*string*) – (optional) the new profile url.
+    -   **gravatar\_email** (*string*) – (optional) the new Gravatar
+        email address.
+
+    Request Headers:
+
+     
+
+    -   **Authorization** – required authentication credentials of
+        either type HTTP Basic or OAuth Bearer Token.
+    -   **Content-Type** – MIME Type of post data. JSON, url-encoded
+        form data, etc.
+
+    Status Codes:
+
+    -   **200** – success, user data updated.
+    -   **400** – post data validation error.
+    -   **401** – authentication error.
+    -   **403** – permission error, authenticated user must be the user
+        whose data is being updated, OAuth access tokens must have
+        `profile_write` scope.
+    -   **404** – the specified username does not exist.
+
+    **Example request**:
+
+        PATCH /api/v1.1/users/janedoe/ HTTP/1.1
+        Host: www.docker.io
+        Accept: application/json
+        Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
+
+        {
+            "location": "Private Island",
+            "profile_url": "http://janedoe.com/",
+            "company": "Retired"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "id": 2,
+            "username": "janedoe",
+            "url": "https://www.docker.io/api/v1.1/users/janedoe/",
+            "date_joined": "2014-02-12T17:58:01.431312Z",
+            "type": "User",
+            "full_name": "Jane Doe",
+            "location": "Private Island",
+            "company": "Retired",
+            "profile_url": "http://janedoe.com/",
+            "gravatar_url": "https://secure.gravatar.com/avatar/0212b397124be4acd4e7dea9aa357.jpg?s=80&r=g&d=mm",
+            "email": "jane.doe@example.com",
+            "is_active": true
+        }
+
+### 1.3 List email addresses for a user
+
+ `GET /api/v1.1/users/:username/emails/`
+:   List email info for the specified user.
+
+    Parameters:
+
+    -   **username** – username of the user whose email info is being
+        requested.
+
+    Request Headers:
+
+     
+
+    -   **Authorization** – required authentication credentials of
+        either type HTTP Basic or OAuth Bearer Token
+
+    Status Codes:
+
+    -   **200** – success, email list returned.
+    -   **401** – authentication error.
+    -   **403** – permission error, authenticated user must be the user
+        whose data is being requested, OAuth access tokens must have
+        `email_read` scope.
+    -   **404** – the specified username does not exist.
+
+    **Example request**:
+
+        GET /api/v1.1/users/janedoe/emails/ HTTP/1.1
+        Host: www.docker.io
+        Accept: application/json
+        Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+            {
+                "email": "jane.doe@example.com",
+                "verified": true,
+                "primary": true
+            }
+        ]
+
+### 1.4 Add email address for a user
+
+ `POST /api/v1.1/users/:username/emails/`
+:   Add a new email address to the specified user’s account. The email
+    address must be verified separately; a confirmation email is not
+    sent automatically.
+
+    Json Parameters:
+
+     
+
+    -   **email** (*string*) – email address to be added.
+
+    Request Headers:
+
+     
+
+    -   **Authorization** – required authentication credentials of
+        either type HTTP Basic or OAuth Bearer Token.
+    -   **Content-Type** – MIME Type of post data. JSON, url-encoded
+        form data, etc.
+
+    Status Codes:
+
+    -   **201** – success, new email added.
+    -   **400** – data validation error.
+    -   **401** – authentication error.
+    -   **403** – permission error, authenticated user must be the user
+        whose data is being requested, OAuth access tokens must have
+        `email_write` scope.
+    -   **404** – the specified username does not exist.
+
+    **Example request**:
+
+        POST /api/v1.1/users/janedoe/emails/ HTTP/1.1
+        Host: www.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM
+
+        {
+            "email": "jane.doe+other@example.com"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+        Content-Type: application/json
+
+        {
+            "email": "jane.doe+other@example.com",
+            "verified": false,
+            "primary": false
+        }
+
+### 1.5 Update an email address for a user
+
+ `PATCH /api/v1.1/users/:username/emails/`
+:   Update an email address for the specified user to either verify an
+    email address or set it as the primary email for the user. You
+    cannot use this endpoint to un-verify an email address. You cannot
+    use this endpoint to unset the primary email, only set another as
+    the primary.
+
+    Parameters:
+
+    -   **username** – username of the user whose email info is being
+        updated.
+
+    Json Parameters:
+
+     
+
+    -   **email** (*string*) – the email address to be updated.
+    -   **verified** (*boolean*) – (optional) whether the email address
+        is verified, must be `true` or absent.
+    -   **primary** (*boolean*) – (optional) whether to set the email
+        address as the primary email, must be `true`
+        or absent.
+
+    Request Headers:
+
+     
+
+    -   **Authorization** – required authentication credentials of
+        either type HTTP Basic or OAuth Bearer Token.
+    -   **Content-Type** – MIME Type of post data. JSON, url-encoded
+        form data, etc.
+
+    Status Codes:
+
+    -   **200** – success, user’s email updated.
+    -   **400** – data validation error.
+    -   **401** – authentication error.
+    -   **403** – permission error, authenticated user must be the user
+        whose data is being updated, OAuth access tokens must have
+        `email_write` scope.
+    -   **404** – the specified username or email address does not
+        exist.
+
+    **Example request**:
+
+    Once you have independently verified an email address.
+
+        PATCH /api/v1.1/users/janedoe/emails/ HTTP/1.1
+        Host: www.docker.io
+        Accept: application/json
+        Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
+
+        {
+            "email": "jane.doe+other@example.com",
+            "verified": true
+        }
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "email": "jane.doe+other@example.com",
+            "verified": true,
+            "primary": false
+        }
+
+### 1.6 Delete email address for a user
+
+ `DELETE /api/v1.1/users/:username/emails/`
+:   Delete an email address from the specified user’s account. You
+    cannot delete a user’s primary email address.
+
+    Json Parameters:
+
+     
+
+    -   **email** (*string*) – email address to be deleted.
+
+    Request Headers:
+
+     
+
+    -   **Authorization** – required authentication credentials of
+        either type HTTP Basic or OAuth Bearer Token.
+    -   **Content-Type** – MIME Type of post data. JSON, url-encoded
+        form data, etc.
+
+    Status Codes:
+
+    -   **204** – success, email address removed.
+    -   **400** – validation error.
+    -   **401** – authentication error.
+    -   **403** – permission error, authenticated user must be the user
+        whose data is being requested, OAuth access tokens must have
+        `email_write` scope.
+    -   **404** – the specified username or email address does not
+        exist.
+
+    **Example request**:
+
+        DELETE /api/v1.1/users/janedoe/emails/ HTTP/1.1
+        Host: www.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM
+
+        {
+            "email": "jane.doe+other@example.com"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+        Content-Length: 0
+
+

+ 256 - 0
docs/sources/reference/api/docker_io_oauth_api.md

@@ -0,0 +1,256 @@
+page_title: docker.io OAuth API
+page_description: API Documentation for docker.io's OAuth flow.
+page_keywords: API, Docker, oauth, REST, documentation
+
+# docker.io OAuth API
+
+## 1. Brief introduction
+
+Some docker.io API requests will require an access token to
+authenticate. To get an access token for a user, that user must first
+grant your application access to their docker.io account. In order for
+them to grant your application access you must first register your
+application.
+
+Before continuing, we encourage you to familiarize yourself with [The
+OAuth 2.0 Authorization Framework](http://tools.ietf.org/html/rfc6749).
+
+*Also note that all OAuth interactions must take place over https
+connections*
+
+## 2. Register Your Application
+
+You will need to register your application with docker.io before users
+will be able to grant your application access to their account
+information. We are currently only allowing applications selectively. To
+request registration of your application send an email to
+[support-accounts@docker.com](mailto:support-accounts%40docker.com) with
+the following information:
+
+-   The name of your application
+-   A description of your application and the service it will provide to
+    docker.io users.
+-   A callback URI that we will use for redirecting authorization
+    requests to your application. These are used in the step of getting
+    an Authorization Code. The domain name of the callback URI will be
+    visible to the user when they are requested to authorize your
+    application.
+
+When your application is approved you will receive a response from the
+docker.io team with your `client_id` and
+`client_secret` which your application will use in
+the steps of getting an Authorization Code and getting an Access Token.
+
+## 3. Endpoints
+
+### 3.1 Get an Authorization Code
+
+Once you have registered, you are ready to start integrating docker.io
+accounts into your application! The process is usually started by a user
+following a link in your application to an OAuth Authorization endpoint.
+
+ `GET /api/v1.1/o/authorize/`
+:   Request that a docker.io user authorize your application. If the
+    user is not already logged in, they will be prompted to login. The
+    user is then presented with a form to authorize your application for
+    the requested access scope. On submission, the user will be
+    redirected to the specified `redirect_uri` with
+    an Authorization Code.
+
+    Query Parameters:
+
+     
+
+    -   **client\_id** – The `client_id` given to
+        your application at registration.
+    -   **response\_type** – MUST be set to `code`.
+        This specifies that you would like an Authorization Code
+        returned.
+    -   **redirect\_uri** – The URI to redirect back to after the user
+        has authorized your application. If omitted, the first of your
+        registered `response_uris` is used. If
+        included, it must be one of the URIs which were submitted when
+        registering your application.
+    -   **scope** – The extent of access permissions you are requesting.
+        Currently, the scope options are `profile_read`, `profile_write`,
+        `email_read`, and `email_write`. Scopes must be separated by a
+        space. If omitted, the default scopes `profile_read email_read`
+        are used.
+    -   **state** – (Recommended) Used by your application to maintain
+        state between the authorization request and callback to protect
+        against CSRF attacks.
+
+    **Example Request**
+
+    Asking the user for authorization.
+
+        GET /api/v1.1/o/authorize/?client_id=TestClientID&response_type=code&redirect_uri=https%3A//my.app/auth_complete/&scope=profile_read%20email_read&state=abc123 HTTP/1.1
+        Host: www.docker.io
+
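The query string in the example above can be assembled with a small helper (a sketch; note that `urlencode` escapes spaces as `+` where the example uses `%20` — both encodings are accepted):

```python
from urllib.parse import urlencode

def authorize_url(client_id, redirect_uri, scope, state):
    # Build the GET /api/v1.1/o/authorize/ request URL
    params = {
        "client_id": client_id,
        "response_type": "code",  # must be "code" to get an Authorization Code
        "redirect_uri": redirect_uri,
        "scope": scope,           # space-separated scope names
        "state": state,           # recommended, protects against CSRF
    }
    return "https://www.docker.io/api/v1.1/o/authorize/?" + urlencode(params)

url = authorize_url("TestClientID", "https://my.app/auth_complete/",
                    "profile_read email_read", "abc123")
```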
+    **Authorization Page**
+
+    When the user follows a link, making the above GET request, they
+    will be asked to login to their docker.io account if they are not
+    already and then be presented with the following authorization
+    prompt which asks the user to authorize your application with a
+    description of the requested scopes.
+
+    ![](../../../_images/io_oauth_authorization_page.png)
+
+    Once the user allows or denies your Authorization Request the user
+    will be redirected back to your application. Included in that
+    request will be the following query parameters:
+
+    `code`
+    :   The Authorization code generated by the docker.io authorization
+        server. Present it again to request an Access Token. This code
+        expires in 60 seconds.
+    `state`
+    :   If the `state` parameter was present in the
+        authorization request this will be the exact value received from
+        that request.
+    `error`
+    :   An error message in the event of the user denying the
+        authorization or some other kind of error with the request.
+
+### 3.2 Get an Access Token
+
+Once the user has authorized your application, a request will be made to
+your application’s specified `redirect_uri` which
+includes a `code` parameter that you must then use
+to get an Access Token.
+
+ `POST /api/v1.1/o/token/`
+:   Submit your newly granted Authorization Code and your application’s
+    credentials to receive an Access Token and Refresh Token. The code
+    is valid for 60 seconds and cannot be used more than once.
+
+    Request Headers:
+
+     
+
+    -   **Authorization** – HTTP basic authentication using your
+        application’s `client_id` and
+        `client_secret`
+
+    Form Parameters:
+
+     
+
+    -   **grant\_type** – MUST be set to `authorization_code`
+    -   **code** – The authorization code received from the user’s
+        redirect request.
+    -   **redirect\_uri** – The same `redirect_uri`
+        used in the authentication request.
+
+    **Example Request**
+
+    Using an authorization code to get an access token.
+
+        POST /api/v1.1/o/token/ HTTP/1.1
+        Host: www.docker.io
+        Authorization: Basic VGVzdENsaWVudElEOlRlc3RDbGllbnRTZWNyZXQ=
+        Accept: application/json
+        Content-Type: application/json
+
+        {
+            "grant_type": "authorization_code",
+            "code": "YXV0aG9yaXphdGlvbl9jb2Rl",
+            "redirect_uri": "https://my.app/auth_complete/"
+        }
+
+    **Example Response**
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json;charset=UTF-8
+
+        {
+            "username": "janedoe",
+            "user_id": 42,
+            "access_token": "t6k2BqgRw59hphQBsbBoPPWLqu6FmS",
+            "expires_in": 15552000,
+            "token_type": "Bearer",
+            "scope": "profile_read email_read",
+            "refresh_token": "hJDhLH3cfsUrQlT4MxA6s8xAFEqdgc"
+        }
+
+    In the case of an error, the response will have a non-200 HTTP
+    status code and a body detailing the error.
+
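The `Authorization: Basic` value in the token request is the Base64 encoding of `client_id:client_secret`. A sketch using the placeholder credentials from the example above (yours are issued at registration):

```python
import base64

client_id = "TestClientID"          # placeholder from the example above
client_secret = "TestClientSecret"  # placeholder; issued at registration

# HTTP Basic auth over the application credentials
credentials = f"{client_id}:{client_secret}".encode("utf-8")
auth_header = "Basic " + base64.b64encode(credentials).decode("ascii")
```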
+### 3.3 Refresh a Token
+
+Once the Access Token expires you can use your `refresh_token`
+to have docker.io issue your application a new Access Token,
+if the user has not revoked access from your application.
+
+ `POST /api/v1.1/o/token/`
+:   Submit your `refresh_token` and application’s
+    credentials to receive a new Access Token and Refresh Token. The
+    `refresh_token` can be used only once.
+
+    Request Headers:
+
+     
+
+    -   **Authorization** – HTTP basic authentication using your
+        application’s `client_id` and
+        `client_secret`
+
+    Form Parameters:
+
+     
+
+    -   **grant\_type** – MUST be set to `refresh_token`
+    -   **refresh\_token** – The `refresh_token`
+        which was issued to your application.
+    -   **scope** – (optional) The scope of the access token to be
+        returned. Must not include any scope not originally granted by
+        the user and if omitted is treated as equal to the scope
+        originally granted.
+
+    **Example Request**
+
+    Refreshing an access token.
+
+        POST /api/v1.1/o/token/ HTTP/1.1
+        Host: www.docker.io
+        Authorization: Basic VGVzdENsaWVudElEOlRlc3RDbGllbnRTZWNyZXQ=
+        Accept: application/json
+        Content-Type: application/json
+
+        {
+            "grant_type": "refresh_token",
+            "refresh_token": "hJDhLH3cfsUrQlT4MxA6s8xAFEqdgc"
+        }
+
+    **Example Response**
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json;charset=UTF-8
+
+        {
+            "username": "janedoe",
+            "user_id": 42,
+            "access_token": "t6k2BqgRw59hphQBsbBoPPWLqu6FmS",
+            "expires_in": 15552000,
+            "token_type": "Bearer",
+            "scope": "profile_read email_read",
+            "refresh_token": "hJDhLH3cfsUrQlT4MxA6s8xAFEqdgc"
+        }
+
+    In the case of an error, the response will have a non-200 HTTP
+    status code and a body detailing the error.
+
+## 4. Use an Access Token with the API
+
+Many of the docker.io API requests will require an Authorization
+request header field. Add this header with the value "Bearer
+\<`access_token`\>":
+
+    GET /api/v1.1/resource HTTP/1.1
+    Host: docker.io
+    Authorization: Bearer 2YotnFZFEjr1zCsicMWpAA
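
In Python, attaching the Bearer token might look like this (a sketch with the placeholder token from the example; `urlopen` would perform the actual request):

```python
import urllib.request

access_token = "2YotnFZFEjr1zCsicMWpAA"  # placeholder token from the example
req = urllib.request.Request("https://www.docker.io/api/v1.1/resource")
req.add_header("Authorization", "Bearer " + access_token)
# urllib.request.urlopen(req) would now send the authenticated request
```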

+ 348 - 0
docs/sources/reference/api/docker_remote_api.md

@@ -0,0 +1,348 @@
+page_title: Remote API
+page_description: API Documentation for Docker
+page_keywords: API, Docker, rcli, REST, documentation
+
+# Docker Remote API
+
+## 1. Brief introduction
+
+-   The Remote API is replacing rcli
+-   By default the Docker daemon listens on unix:///var/run/docker.sock
+    and the client must have root access to interact with the daemon
+-   If a group named *docker* exists on your system, docker will apply
+    ownership of the socket to the group
+-   The API tends to be REST, but for some complex commands, like attach
+    or pull, the HTTP connection is hijacked to transport stdout stdin
+    and stderr
+-   Since API version 1.2, the auth configuration is now handled client
+    side, so the client has to send the authConfig as POST in
+    /images/(name)/push
+-   authConfig, set as the `X-Registry-Auth` header,
+    is currently a Base64 encoded (json) string with credentials:
+    `{'username': string, 'password': string, 'email': string, 'serveraddress' : string}`
+
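The `X-Registry-Auth` value described above can be produced like this (a sketch with placeholder credentials; the server address shown is an assumption — substitute your registry):

```python
import base64
import json

def registry_auth_header(username, password, email, serveraddress):
    # X-Registry-Auth: Base64-encoded JSON credentials
    auth_config = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,
    }
    return base64.b64encode(json.dumps(auth_config).encode("utf-8")).decode("ascii")

header = registry_auth_header("janedoe", "secret", "jane.doe@example.com",
                              "https://index.docker.io/v1/")
```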
+
+## 2. Versions
+
+The current version of the API is 1.11
+
+Calling /images/\<name\>/insert is the same as calling
+/v1.11/images/\<name\>/insert
+
+You can still call an old version of the API using
+/v1.0/images/\<name\>/insert
+
+### v1.11
+
+#### Full Documentation
+
+[*Docker Remote API v1.11*](../docker_remote_api_v1.11/)
+
+#### What’s new
+
+ `GET /events`
+:   **New!** You can now use the `until` parameter
+    to close the connection after a given timestamp.
+
+### v1.10
+
+#### Full Documentation
+
+[*Docker Remote API v1.10*](../docker_remote_api_v1.10/)
+
+#### What’s new
+
+ `DELETE /images/`(*name*)
+:   **New!** You can now use the force parameter to force the deletion
+    of an image, even if it’s tagged in multiple repositories. **New!**
+    You can now use the noprune parameter to prevent the deletion of
+    parent images
+
+ `DELETE /containers/`(*id*)
+:   **New!** You can now use the force parameter to force delete a
+    container, even if it is currently running
+
+### v1.9
+
+#### Full Documentation
+
+[*Docker Remote API v1.9*](../docker_remote_api_v1.9/)
+
+#### What’s new
+
+ `POST /build`
+:   **New!** This endpoint now takes a serialized ConfigFile which it
+    uses to resolve the proper registry auth credentials for pulling the
+    base image. Clients which previously implemented the version
+    accepting an AuthConfig object must be updated.
+
+### v1.8
+
+#### Full Documentation
+
+#### What’s new
+
+ `POST /build`
+:   **New!** This endpoint now returns build status as json stream. In
+    case of a build error, it returns the exit status of the failed
+    command.
+
+ `GET /containers/`(*id*)`/json`
+:   **New!** This endpoint now returns the host config for the
+    container.
+
+ `POST /images/create`
+:   
+
+ `POST /images/`(*name*)`/insert`
+:   
+
+ `POST /images/`(*name*)`/push`
+:   **New!** progressDetail object was added in the JSON. It’s now
+    possible to get the current value and the total of the progress
+    without having to parse the string.
+
+### v1.7
+
+#### Full Documentation
+
+#### What’s new
+
+ `GET /images/json`
+:   The format of the json returned from this uri changed. Instead of an
+    entry for each repo/tag on an image, each image is only represented
+    once, with a nested attribute indicating the repo/tags that apply to
+    that image.
+
+    Instead of:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+          {
+            "VirtualSize": 131506275,
+            "Size": 131506275,
+            "Created": 1365714795,
+            "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c",
+            "Tag": "12.04",
+            "Repository": "ubuntu"
+          },
+          {
+            "VirtualSize": 131506275,
+            "Size": 131506275,
+            "Created": 1365714795,
+            "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c",
+            "Tag": "latest",
+            "Repository": "ubuntu"
+          },
+          {
+            "VirtualSize": 131506275,
+            "Size": 131506275,
+            "Created": 1365714795,
+            "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c",
+            "Tag": "precise",
+            "Repository": "ubuntu"
+          },
+          {
+            "VirtualSize": 180116135,
+            "Size": 24653,
+            "Created": 1364102658,
+            "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+            "Tag": "12.10",
+            "Repository": "ubuntu"
+          },
+          {
+            "VirtualSize": 180116135,
+            "Size": 24653,
+            "Created": 1364102658,
+            "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+            "Tag": "quantal",
+            "Repository": "ubuntu"
+          }
+        ]
+
+    The returned json looks like this:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+          {
+             "RepoTags": [
+               "ubuntu:12.04",
+               "ubuntu:precise",
+               "ubuntu:latest"
+             ],
+             "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c",
+             "Created": 1365714795,
+             "Size": 131506275,
+             "VirtualSize": 131506275
+          },
+          {
+             "RepoTags": [
+               "ubuntu:12.10",
+               "ubuntu:quantal"
+             ],
+             "ParentId": "27cf784147099545",
+             "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+             "Created": 1364102658,
+             "Size": 24653,
+             "VirtualSize": 180116135
+          }
+        ]
+
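The regrouping from per-tag entries to per-image entries can be sketched as a small client-side transformation (hypothetical helper; field names follow the examples above):

```python
def group_by_image(old_entries):
    # Collapse one-entry-per-repo/tag (pre-v1.7) into one-entry-per-image
    # with a RepoTags list (v1.7 format).
    images = {}
    for entry in old_entries:
        img = images.setdefault(entry["Id"], {
            "RepoTags": [],
            "Id": entry["Id"],
            "Created": entry["Created"],
            "Size": entry["Size"],
            "VirtualSize": entry["VirtualSize"],
        })
        img["RepoTags"].append(entry["Repository"] + ":" + entry["Tag"])
    return list(images.values())
```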
+ `GET /images/viz`
+:   This URI no longer exists. The `images --viz`
+    output is now generated in the client, using the
+    `/images/json` data.
+
+### v1.6
+
+#### Full Documentation
+
+#### What’s new
+
+ `POST /containers/`(*id*)`/attach`
+:   **New!** You can now split stderr from stdout. This is done by
+    prefixing a header to each transmission. See
+    [`POST /containers/(id)/attach`
+](../docker_remote_api_v1.9/#post--containers-(id)-attach "POST /containers/(id)/attach").
+    The WebSocket attach is unchanged. Note that attach calls on the
+    previous API version didn’t change. Stdout and stderr are merged.
+
+### v1.5
+
+#### Full Documentation
+
+#### What’s new
+
+ `POST /images/create`
+:   **New!** You can now pass registry credentials (via an AuthConfig
+    object) through the X-Registry-Auth header
+
+ `POST /images/`(*name*)`/push`
+:   **New!** The AuthConfig object now needs to be passed through the
+    X-Registry-Auth header
+
+ `GET /containers/json`
+:   **New!** The format of the Ports entry has been changed to a list of
+    dicts each containing PublicPort, PrivatePort and Type describing a
+    port mapping.
+
+### v1.4
+
+#### Full Documentation
+
+#### What’s new
+
+ `POST /images/create`
+:   **New!** When pulling a repo, all images are now downloaded in
+    parallel.
+
+ `GET /containers/`(*id*)`/top`
+:   **New!** You can now use ps args with docker top, like
+    `docker top <container_id> aux`
+
+ `GET /events:`
+:   **New!** Image’s name added in the events
+
+### v1.3
+
+docker v0.5.0
+[51f6c4a](https://github.com/dotcloud/docker/commit/51f6c4a7372450d164c61e0054daf0223ddbd909)
+
+#### Full Documentation
+
+#### What’s new
+
+ `GET /containers/`(*id*)`/top`
+:   List the processes running inside a container.
+
+ `GET /events:`
+:   **New!** Monitor docker’s events via streaming or via polling
+
+Builder (/build):
+
+-   Simplify the upload of the build context
+-   Simply stream a tarball instead of multipart upload with 4
+    intermediary buffers
+-   Simpler, less memory usage, less disk usage and faster
+
+Warning
+
+The /build improvements are not reverse-compatible. Pre 1.3 clients will
+break on /build.
+
+List containers (/containers/json):
+
+-   You can use size=1 to get the size of the containers
+
+Start containers (/containers/\<id\>/start):
+
+-   You can now pass host-specific configuration (e.g. bind mounts) in
+    the POST body for start calls
+
+### v1.2
+
+docker v0.4.2
+[2e7649b](https://github.com/dotcloud/docker/commit/2e7649beda7c820793bd46766cbc2cfeace7b168)
+
+#### Full Documentation
+
+#### What’s new
+
+The auth configuration is now handled by the client.
+
+The client should send its authConfig as POST on each call of
+/images/(name)/push
+
+ `GET /auth`
+:   **Deprecated.**
+
+ `POST /auth`
+:   Only checks the configuration but doesn’t store it on the server
+
+    Deleting an image is now improved: it will only untag the image if
+    it has children, and will remove all untagged parents if it has any.
+
+ `POST /images/<name>/delete`
+:   Now returns a JSON structure with the list of images
+    deleted/untagged.
+
+### v1.1
+
+docker v0.4.0
+[a8ae398](https://github.com/dotcloud/docker/commit/a8ae398bf52e97148ee7bd0d5868de2e15bd297f)
+
+#### Full Documentation
+
+#### What’s new
+
+ `POST /images/create`
+:   
+
+ `POST /images/`(*name*)`/insert`
+:   
+
+ `POST /images/`(*name*)`/push`
+:   Uses json stream instead of HTML hijack, it looks like this:
+
+    >     HTTP/1.1 200 OK
+    >     Content-Type: application/json
+    >
+    >     {"status":"Pushing..."}
+    >     {"status":"Pushing", "progress":"1/? (n/a)"}
+    >     {"error":"Invalid..."}
+    >     ...
+
+### v1.0
+
+docker v0.3.4
+[8d73740](https://github.com/dotcloud/docker/commit/8d73740343778651c09160cde9661f5f387b36f4)
+
+#### Full Documentation
+
+#### What’s new
+
+Initial version

+ 1238 - 0
docs/sources/reference/api/docker_remote_api_v1.10.md

@@ -0,0 +1,1238 @@
+page_title: Remote API v1.10
+page_description: API Documentation for Docker
+page_keywords: API, Docker, rcli, REST, documentation
+
+# Docker Remote API v1.10
+
+## 1. Brief introduction
+
+-   The Remote API has replaced rcli
+-   The daemon listens on `unix:///var/run/docker.sock`, but you can
+    [*Bind Docker to another host/port or a Unix
+    socket*](../../../use/basics/#bind-docker).
+-   The API tends to be REST, but for some complex commands, like
+    `attach` or `pull`, the HTTP connection is hijacked to transport
+    `stdout`, `stdin` and `stderr`
+
+## 2. Endpoints
+
+### 2.1 Containers
+
+#### List containers
+
+ `GET /containers/json`
+:   List containers
+
+    **Example request**:
+
+        GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Id": "8dfafdbc3a40",
+                     "Image": "base:latest",
+                     "Command": "echo 1",
+                     "Created": 1367854155,
+                     "Status": "Exit 0",
+                     "Ports":[{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "9cd87474be90",
+                     "Image": "base:latest",
+                     "Command": "echo 222222",
+                     "Created": 1367854155,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "3176a2479c92",
+                     "Image": "base:latest",
+                     "Command": "echo 3333333333333333",
+                     "Created": 1367854154,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "4cb07b47f9fb",
+                     "Image": "base:latest",
+                     "Command": "echo 444444444444444444444444444444444",
+                     "Created": 1367854152,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             }
+        ]
+
+    Query Parameters:
+
+     
+
+    -   **all** – 1/True/true or 0/False/false, Show all containers.
+        Only running containers are shown by default
+    -   **limit** – Show `limit` last created
+        containers, include non-running ones.
+    -   **since** – Show only containers created since Id, include
+        non-running ones.
+    -   **before** – Show only containers created before Id, include
+        non-running ones.
+    -   **size** – 1/True/true or 0/False/false, Show the containers
+        sizes
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **400** – bad parameter
+    -   **500** – server error
+
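The query parameters above combine into a request path like the one in the example request; a sketch (`containers_query` is a hypothetical helper name):

```python
from urllib.parse import urlencode

def containers_query(show_all=False, limit=None, since=None, before=None,
                     size=False):
    # Assemble the query string for GET /containers/json,
    # omitting parameters left at their defaults.
    params = {}
    if show_all:
        params["all"] = 1
    if limit is not None:
        params["limit"] = limit
    if since is not None:
        params["since"] = since
    if before is not None:
        params["before"] = before
    if size:
        params["size"] = 1
    return "/containers/json?" + urlencode(params)

path = containers_query(show_all=True, before="8dfafdbc3a40", size=True)
```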
+#### Create a container
+
+ `POST /containers/create`
+:   Create a container
+
+    **Example request**:
+
+        POST /containers/create HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Hostname":"",
+             "User":"",
+             "Memory":0,
+             "MemorySwap":0,
+             "AttachStdin":false,
+             "AttachStdout":true,
+             "AttachStderr":true,
+             "PortSpecs":null,
+             "Tty":false,
+             "OpenStdin":false,
+             "StdinOnce":false,
+             "Env":null,
+             "Cmd":[
+                     "date"
+             ],
+             "Image":"base",
+             "Volumes":{
+                     "/tmp": {}
+             },
+             "WorkingDir":"",
+             "DisableNetwork": false,
+             "ExposedPorts":{
+                     "22/tcp": {}
+             }
+        }
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+        Content-Type: application/json
+
+        {
+             "Id":"e90e34656806"
+             "Warnings":[]
+        }
+
+    Json Parameters:
+
+     
+
+    -   **config** – the container’s configuration
+
+    Query Parameters:
+
+     
+
+    -   **name** – Assign the specified name to the container. Must
+        match `/?[a-zA-Z0-9_-]+`.
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **404** – no such container
+    -   **406** – impossible to attach (container not running)
+    -   **500** – server error
+
+#### Inspect a container
+
+ `GET /containers/`(*id*)`/json`
+:   Return low-level information on the container `id`
+
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/json HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+                     "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2",
+                     "Created": "2013-05-07T14:51:42.041847+02:00",
+                     "Path": "date",
+                     "Args": [],
+                     "Config": {
+                             "Hostname": "4fa6e0f0c678",
+                             "User": "",
+                             "Memory": 0,
+                             "MemorySwap": 0,
+                             "AttachStdin": false,
+                             "AttachStdout": true,
+                             "AttachStderr": true,
+                             "PortSpecs": null,
+                             "Tty": false,
+                             "OpenStdin": false,
+                             "StdinOnce": false,
+                             "Env": null,
+                             "Cmd": [
+                                     "date"
+                             ],
+                             "Image": "base",
+                             "Volumes": {},
+                             "WorkingDir":""
+
+                     },
+                     "State": {
+                             "Running": false,
+                             "Pid": 0,
+                             "ExitCode": 0,
+                             "StartedAt": "2013-05-07T14:51:42.087658+02:00",
+                             "Ghost": false
+                     },
+                     "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+                     "NetworkSettings": {
+                             "IpAddress": "",
+                             "IpPrefixLen": 0,
+                             "Gateway": "",
+                             "Bridge": "",
+                             "PortMapping": null
+                     },
+                     "SysInitPath": "/home/kitty/go/src/github.com/dotcloud/docker/bin/docker",
+                     "ResolvConfPath": "/etc/resolv.conf",
+                     "Volumes": {},
+                     "HostConfig": {
+                         "Binds": null,
+                         "ContainerIDFile": "",
+                         "LxcConf": [],
+                         "Privileged": false,
+                         "PortBindings": {
+                            "80/tcp": [
+                                {
+                                    "HostIp": "0.0.0.0",
+                                    "HostPort": "49153"
+                                }
+                            ]
+                         },
+                         "Links": null,
+                         "PublishAllPorts": false
+                     }
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### List processes running inside a container
+
+ `GET /containers/`(*id*)`/top`
+:   List processes running inside the container `id`
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/top HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Titles":[
+                     "USER",
+                     "PID",
+                     "%CPU",
+                     "%MEM",
+                     "VSZ",
+                     "RSS",
+                     "TTY",
+                     "STAT",
+                     "START",
+                     "TIME",
+                     "COMMAND"
+                     ],
+             "Processes":[
+                     ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"],
+                     ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"]
+             ]
+        }
+
+    Query Parameters:
+
+
+    -   **ps\_args** – ps arguments to use (eg. aux)
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
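For reference, the `Titles`/`Processes` payload returned by `/top` can be joined into per-process records. A minimal sketch (the `top_rows` helper name is ours; note that a command line such as `sleep 10` is split across several trailing fields):

```python
def top_rows(top):
    """Zip the Titles header with each Processes row into a dict.

    The last title (COMMAND) absorbs any extra trailing fields, since
    a command line such as "sleep 10" arrives split across entries.
    """
    titles = top["Titles"]
    rows = []
    for proc in top["Processes"]:
        row = dict(zip(titles[:-1], proc))
        row[titles[-1]] = " ".join(proc[len(titles) - 1:])
        rows.append(row)
    return rows
```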
+#### Inspect changes on a container’s filesystem
+
+ `GET /containers/`(*id*)`/changes`
+:   Inspect changes on container `id`’s filesystem
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/changes HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Path":"/dev",
+                     "Kind":0
+             },
+             {
+                     "Path":"/dev/kmsg",
+                     "Kind":1
+             },
+             {
+                     "Path":"/test",
+                     "Kind":1
+             }
+        ]
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Export a container
+
+ `GET /containers/`(*id*)`/export`
+:   Export the contents of container `id`
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/export HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/octet-stream
+
+        {{ STREAM }}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Start a container
+
+ `POST /containers/`(*id*)`/start`
+:   Start the container `id`
+
+    **Example request**:
+
+        POST /containers/(id)/start HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Binds":["/tmp:/tmp"],
+             "LxcConf":{"lxc.utsname":"docker"},
+             "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] },
+             "PublishAllPorts":false,
+             "Privileged":false,
+             "Dns": ["8.8.8.8"],
+             "VolumesFrom": ["parent", "other:ro"]
+        }
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+        Content-Type: text/plain
+
+    Json Parameters:
+
+
+    -   **hostConfig** – the container’s host configuration (optional)
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Stop a container
+
+ `POST /containers/`(*id*)`/stop`
+:   Stop the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/stop?t=5 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Query Parameters:
+
+
+    -   **t** – number of seconds to wait before killing the container
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Restart a container
+
+ `POST /containers/`(*id*)`/restart`
+:   Restart the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/restart?t=5 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Query Parameters:
+
+
+    -   **t** – number of seconds to wait before killing the container
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Kill a container
+
+ `POST /containers/`(*id*)`/kill`
+:   Kill the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/kill HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Attach to a container
+
+ `POST /containers/`(*id*)`/attach`
+:   Attach to the container `id`
+
+    **Example request**:
+
+        POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/vnd.docker.raw-stream
+
+        {{ STREAM }}
+
+    Query Parameters:
+
+
+    -   **logs** – 1/True/true or 0/False/false, return logs. Default
+        false
+    -   **stream** – 1/True/true or 0/False/false, return stream.
+        Default false
+    -   **stdin** – 1/True/true or 0/False/false, if stream=true, attach
+        to stdin. Default false
+    -   **stdout** – 1/True/true or 0/False/false, if logs=true, return
+        stdout log, if stream=true, attach to stdout. Default false
+    -   **stderr** – 1/True/true or 0/False/false, if logs=true, return
+        stderr log, if stream=true, attach to stderr. Default false
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **400** – bad parameter
+    -   **404** – no such container
+    -   **500** – server error
+
+    **Stream details**:
+
+    When the TTY setting is enabled in
+    [`POST /containers/create`](../docker_remote_api_v1.9/#post--containers-create "POST /containers/create"),
+    the stream is the raw data from the process PTY and client’s stdin.
+    When the TTY is disabled, the stream is multiplexed to separate
+    stdout and stderr.
+
+    The format is a **Header** and a **Payload** (frame).
+
+    **HEADER**
+
+    The header contains information about which stream the frame
+    belongs to (stdout or stderr), as well as the size of the
+    associated frame, encoded in the last 4 bytes (uint32).
+
+    It is encoded on the first 8 bytes like this:
+
+        header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
+
+    `STREAM_TYPE` can be:
+
+    -   0: stdin (will be written on stdout)
+    -   1: stdout
+    -   2: stderr
+
+    `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of
+    the uint32 size encoded as big endian.
+
+    **PAYLOAD**
+
+    The payload is the raw stream.
+
+    **IMPLEMENTATION**
+
+    The simplest way to implement the Attach protocol is the following:
+
+    1.  Read 8 bytes
+    2.  Choose stdout or stderr depending on the first byte
+    3.  Extract the frame size from the last 4 bytes
+    4.  Read the extracted size and output it on the correct output
+    5.  Go to step 1
+
+#### Wait for a container
+
+ `POST /containers/`(*id*)`/wait`
+:   Block until container `id` stops, then return
+    the exit code
+
+    **Example request**:
+
+        POST /containers/16253994b7c4/wait HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"StatusCode":0}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Remove a container
+
+ `DELETE /containers/`(*id*)
+:   Remove the container `id` from the filesystem
+
+    **Example request**:
+
+        DELETE /containers/16253994b7c4?v=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Query Parameters:
+
+
+    -   **v** – 1/True/true or 0/False/false, Remove the volumes
+        associated with the container. Default false
+    -   **force** – 1/True/true or 0/False/false, Removes the container
+        even if it was running. Default false
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **400** – bad parameter
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Copy files or folders from a container
+
+ `POST /containers/`(*id*)`/copy`
+:   Copy files or folders of container `id`
+
+    **Example request**:
+
+        POST /containers/4fa6e0f0c678/copy HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Resource":"test.txt"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/octet-stream
+
+        {{ STREAM }}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+### 2.2 Images
+
+#### List Images
+
+ `GET /images/json`
+:   **Example request**:
+
+        GET /images/json?all=0 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+          {
+             "RepoTags": [
+               "ubuntu:12.04",
+               "ubuntu:precise",
+               "ubuntu:latest"
+             ],
+             "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c",
+             "Created": 1365714795,
+             "Size": 131506275,
+             "VirtualSize": 131506275
+          },
+          {
+             "RepoTags": [
+               "ubuntu:12.10",
+               "ubuntu:quantal"
+             ],
+             "ParentId": "27cf784147099545",
+             "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+             "Created": 1364102658,
+             "Size": 24653,
+             "VirtualSize": 180116135
+          }
+        ]
+
+#### Create an image
+
+ `POST /images/create`
+:   Create an image, either by pulling it from the registry or by
+    importing it
+
+    **Example request**:
+
+        POST /images/create?fromImage=base HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Pulling..."}
+        {"status":"Pulling", "progress":"1 B/ 100 B", "progressDetail":{"current":1, "total":100}}
+        {"error":"Invalid..."}
+        ...
+
+    When using this endpoint to pull an image from the registry, the
+    `X-Registry-Auth` header can be used to include
+    a base64-encoded AuthConfig object.
+
+    Query Parameters:
+
+
+    -   **fromImage** – name of the image to pull
+    -   **fromSrc** – source to import, - means stdin
+    -   **repo** – repository
+    -   **tag** – tag
+    -   **registry** – the registry to pull from
+
+    Request Headers:
+
+
+    -   **X-Registry-Auth** – base64-encoded AuthConfig object
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
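The `X-Registry-Auth` value can be built as follows. A minimal sketch (the `registry_auth_header` helper name is ours; the field names follow the AuthConfig object used by `POST /auth`):

```python
import base64
import json

def registry_auth_header(username, password, email, serveraddress):
    """Serialize an AuthConfig object and base64-encode it for use as
    the X-Registry-Auth request header value."""
    auth_config = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,
    }
    return base64.b64encode(json.dumps(auth_config).encode("utf-8")).decode("ascii")
```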
+#### Insert a file in an image
+
+ `POST /images/`(*name*)`/insert`
+:   Insert a file from `url` in the image
+    `name` at `path`
+
+    **Example request**:
+
+        POST /images/test/insert?path=/usr&url=myurl HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Inserting..."}
+        {"status":"Inserting", "progress":"1/? (n/a)", "progressDetail":{"current":1}}
+        {"error":"Invalid..."}
+        ...
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Inspect an image
+
+ `GET /images/`(*name*)`/json`
+:   Return low-level information on the image `name`
+
+    **Example request**:
+
+        GET /images/base/json HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+             "parent":"27cf784147099545",
+             "created":"2013-03-23T22:24:18.818426-07:00",
+             "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0",
+             "container_config":
+                     {
+                             "Hostname":"",
+                             "User":"",
+                             "Memory":0,
+                             "MemorySwap":0,
+                             "AttachStdin":false,
+                             "AttachStdout":false,
+                             "AttachStderr":false,
+                             "PortSpecs":null,
+                             "Tty":true,
+                             "OpenStdin":true,
+                             "StdinOnce":false,
+                             "Env":null,
+                             "Cmd": ["/bin/bash"],
+                             "Image":"base",
+                             "Volumes":null,
+                             "WorkingDir":""
+                     },
+             "Size": 6824592
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Get the history of an image
+
+ `GET /images/`(*name*)`/history`
+:   Return the history of the image `name`
+
+    **Example request**:
+
+        GET /images/base/history HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Id":"b750fe79269d",
+                     "Created":1364102658,
+                     "CreatedBy":"/bin/bash"
+             },
+             {
+                     "Id":"27cf78414709",
+                     "Created":1364068391,
+                     "CreatedBy":""
+             }
+        ]
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Push an image on the registry
+
+ `POST /images/`(*name*)`/push`
+:   Push the image `name` on the registry
+
+    **Example request**:
+
+        POST /images/test/push HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Pushing..."}
+        {"status":"Pushing", "progress":"1/? (n/a)", "progressDetail":{"current":1}}
+        {"error":"Invalid..."}
+        ...
+
+    Query Parameters:
+
+
+    -   **registry** – the registry you want to push to, optional
+
+    Request Headers:
+
+
+    -   **X-Registry-Auth** – include a base64-encoded AuthConfig
+        object.
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Tag an image into a repository
+
+ `POST /images/`(*name*)`/tag`
+:   Tag the image `name` into a repository
+
+    **Example request**:
+
+        POST /images/test/tag?repo=myrepo&force=0 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+
+    Query Parameters:
+
+
+    -   **repo** – The repository to tag in
+    -   **force** – 1/True/true or 0/False/false, default false
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **400** – bad parameter
+    -   **404** – no such image
+    -   **409** – conflict
+    -   **500** – server error
+
+#### Remove an image
+
+ `DELETE /images/`(*name*)
+:   Remove the image `name` from the filesystem
+
+    **Example request**:
+
+        DELETE /images/test HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-type: application/json
+
+        [
+         {"Untagged":"3e2f21a89f"},
+         {"Deleted":"3e2f21a89f"},
+         {"Deleted":"53b4f83ac9"}
+        ]
+
+    Query Parameters:
+
+
+    -   **force** – 1/True/true or 0/False/false, default false
+    -   **noprune** – 1/True/true or 0/False/false, default false
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **409** – conflict
+    -   **500** – server error
+
+#### Search images
+
+ `GET /images/search`
+:   Search for an image in the docker index.
+
+    Note
+
+    The response keys have changed from API v1.6 to reflect the JSON
+    sent by the registry server to the docker daemon’s request.
+
+    **Example request**:
+
+        GET /images/search?term=sshd HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "wma55/u1210sshd",
+                    "star_count": 0
+                },
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "jdswinbank/sshd",
+                    "star_count": 0
+                },
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "vgauthier/sshd",
+                    "star_count": 0
+                }
+        ...
+        ]
+
+    Query Parameters:
+
+
+    -   **term** – term to search
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+### 2.3 Misc
+
+#### Build an image from Dockerfile via stdin
+
+ `POST /build`
+:   Build an image from Dockerfile via stdin
+
+    **Example request**:
+
+        POST /build HTTP/1.1
+
+        {{ STREAM }}
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"stream":"Step 1..."}
+        {"stream":"..."}
+        {"error":"Error...", "errorDetail":{"code": 123, "message": "Error..."}}
+
+    The stream must be a tar archive compressed with one of the
+    following algorithms: identity (no compression), gzip, bzip2, xz.
+
+    The archive must include a file called `Dockerfile` at its root. It
+    may include any number of other files, which will be accessible in
+    the build context (see the [*ADD build
+    command*](../../builder/#dockerbuilder)).
+
+    Query Parameters:
+
+
+    -   **t** – repository name (and optionally a tag) to be applied to
+        the resulting image in case of success
+    -   **q** – suppress verbose build output
+    -   **nocache** – do not use the cache when building the image
+
+    Request Headers:
+
+
+    -   **Content-type** – should be set to
+        `"application/tar"`.
+    -   **X-Registry-Config** – base64-encoded ConfigFile object
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
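A build context meeting these requirements can be produced in memory. A minimal sketch (the `make_build_context` helper name is ours; it emits an uncompressed, i.e. identity, tar archive):

```python
import io
import tarfile

def make_build_context(dockerfile, extra_files=None):
    """Return an uncompressed tar archive (as bytes) with a Dockerfile
    at its root, suitable as the body of POST /build.

    extra_files maps archive paths to text content for the build context.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        files = {"Dockerfile": dockerfile}
        files.update(extra_files or {})
        for name, content in files.items():
            data = content.encode("utf-8")
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()
```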
+#### Check auth configuration
+
+ `POST /auth`
+:   Get the default username and email
+
+    **Example request**:
+
+        POST /auth HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "username":"hannibal",
+             "password":"xxxx",
+             "email":"hannibal@a-team.com",
+             "serveraddress":"https://index.docker.io/v1/"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **204** – no error
+    -   **500** – server error
+
+#### Display system-wide information
+
+ `GET /info`
+:   Display system-wide information
+
+    **Example request**:
+
+        GET /info HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Containers":11,
+             "Images":16,
+             "Debug":false,
+             "NFd": 11,
+             "NGoroutines":21,
+             "MemoryLimit":true,
+             "SwapLimit":false,
+             "IPv4Forwarding":true
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Show the docker version information
+
+ `GET /version`
+:   Show the docker version information
+
+    **Example request**:
+
+        GET /version HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Version":"0.2.2",
+             "GitCommit":"5a2a5cc+CHANGES",
+             "GoVersion":"go1.0.3"
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Create a new image from a container’s changes
+
+ `POST /commit`
+:   Create a new image from a container’s changes
+
+    **Example request**:
+
+        POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+        Content-Type: application/vnd.docker.raw-stream
+
+        {"Id":"596069db4bf5"}
+
+    Query Parameters:
+
+
+    -   **container** – source container
+    -   **repo** – repository
+    -   **tag** – tag
+    -   **m** – commit message
+    -   **author** – author (eg. "John Hannibal Smith
+        \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>")
+    -   **run** – config automatically applied when the image is run.
+        (ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]})
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Monitor Docker’s events
+
+ `GET /events`
+:   Get events from docker, either in real time via streaming, or via
+    polling (using `since`)
+
+    **Example request**:
+
+        GET /events?since=1374067924
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"create","id":"dfdf82bd3881","from":"base:latest","time":1374067924}
+        {"status":"start","id":"dfdf82bd3881","from":"base:latest","time":1374067924}
+        {"status":"stop","id":"dfdf82bd3881","from":"base:latest","time":1374067966}
+        {"status":"destroy","id":"dfdf82bd3881","from":"base:latest","time":1374067970}
+
+    Query Parameters:
+
+
+    -   **since** – timestamp used for polling
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
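The response body is a sequence of newline-delimited JSON objects, as in the example above. A minimal parsing sketch (the `parse_events` helper name is ours):

```python
import json

def parse_events(body):
    """Split a GET /events response body into a list of event dicts,
    skipping blank lines between the newline-delimited JSON objects."""
    return [json.loads(line) for line in body.splitlines() if line.strip()]
```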
+#### Get a tarball containing all images and tags in a repository
+
+ `GET /images/`(*name*)`/get`
+:   Get a tarball containing all images and metadata for the repository
+    specified by `name`.
+
+    **Example request**
+
+        GET /images/ubuntu/get
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/x-tar
+
+        Binary data stream
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Load a tarball with a set of images and tags into docker
+
+ `POST /images/load`
+:   Load a set of images and tags into the docker repository.
+
+    **Example request**
+
+        POST /images/load
+
+        Tarball in body
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+## 3. Going further
+
+### 3.1 Inside ‘docker run’
+
+Here are the steps of `docker run`:
+
+-   Create the container
+
+-   If the status code is 404, it means the image doesn’t exist:
+    :   -   Try to pull it
+        -   Then retry to create the container
+
+-   Start the container
+
+-   If you are not in detached mode:
+    :   -   Attach to the container, using logs=1 (to have stdout and
+            stderr from the container’s start) and stream=1
+
+-   If in detached mode or only stdin is attached:
+    :   -   Display the container’s id
+
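The sequence above can be sketched as a pure function returning the ordered API calls. This is illustrative only; the `docker_run_calls` helper and its flags are ours:

```python
def docker_run_calls(image="base", image_missing=False,
                     detached=False, only_stdin=False):
    """Return the ordered Remote API calls that 'docker run' performs."""
    calls = ["POST /containers/create"]
    if image_missing:
        # create returned 404: pull the image, then retry the create
        calls.append("POST /images/create?fromImage=" + image)
        calls.append("POST /containers/create")
    calls.append("POST /containers/(id)/start")
    if not detached and not only_stdin:
        # attach with logs=1 (stdout/stderr from the start) and stream=1
        calls.append("POST /containers/(id)/attach?logs=1&stream=1")
    # in detached mode, or with only stdin attached, just display the id
    return calls
```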
+### 3.2 Hijacking
+
+In this version of the API, `/attach` uses hijacking to transport stdin,
+stdout and stderr on the same socket. This might change in the future.
+
+### 3.3 CORS Requests
+
+To enable cross-origin requests to the remote API, add the flag
+`--api-enable-cors` when running docker in daemon mode.
+
+    docker -d -H="192.168.1.9:4243" --api-enable-cors

+ 1242 - 0
docs/sources/reference/api/docker_remote_api_v1.11.md

@@ -0,0 +1,1242 @@
+page_title: Remote API v1.11
+page_description: API Documentation for Docker
+page_keywords: API, Docker, rcli, REST, documentation
+
+# Docker Remote API v1.11
+
+## 1. Brief introduction
+
+-   The Remote API has replaced rcli
+-   The daemon listens on `unix:///var/run/docker.sock`, but you can
+    [*Bind Docker to another host/port or a Unix
+    socket*](../../../use/basics/#bind-docker).
+-   The API tends to be REST, but for some complex commands, like
+    `attach` or `pull`, the HTTP connection is hijacked to transport
+    `stdout`, `stdin` and `stderr`.
+
+## 2. Endpoints
+
+### 2.1 Containers
+
+#### List containers
+
+ `GET /containers/json`
+:   List containers
+
+    **Example request**:
+
+        GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Id": "8dfafdbc3a40",
+                     "Image": "base:latest",
+                     "Command": "echo 1",
+                     "Created": 1367854155,
+                     "Status": "Exit 0",
+                     "Ports":[{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "9cd87474be90",
+                     "Image": "base:latest",
+                     "Command": "echo 222222",
+                     "Created": 1367854155,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "3176a2479c92",
+                     "Image": "base:latest",
+                     "Command": "echo 3333333333333333",
+                     "Created": 1367854154,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "4cb07b47f9fb",
+                     "Image": "base:latest",
+                     "Command": "echo 444444444444444444444444444444444",
+                     "Created": 1367854152,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             }
+        ]
+
+    Query Parameters:
+
+
+    -   **all** – 1/True/true or 0/False/false, Show all containers.
+        Only running containers are shown by default
+    -   **limit** – Show `limit` last created
+        containers, include non-running ones.
+    -   **since** – Show only containers created since Id, include
+        non-running ones.
+    -   **before** – Show only containers created before Id, include
+        non-running ones.
+    -   **size** – 1/True/true or 0/False/false, Show the containers
+        sizes
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **400** – bad parameter
+    -   **500** – server error
+
+#### Create a container
+
+ `POST /containers/create`
+:   Create a container
+
+    **Example request**:
+
+        POST /containers/create HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Hostname":"",
+             "User":"",
+             "Memory":0,
+             "MemorySwap":0,
+             "AttachStdin":false,
+             "AttachStdout":true,
+             "AttachStderr":true,
+             "PortSpecs":null,
+             "Tty":false,
+             "OpenStdin":false,
+             "StdinOnce":false,
+             "Env":null,
+             "Cmd":[
+                     "date"
+             ],
+             "Dns":null,
+             "Image":"base",
+             "Volumes":{
+                     "/tmp": {}
+             },
+             "VolumesFrom":"",
+             "WorkingDir":"",
+             "DisableNetwork": false,
+             "ExposedPorts":{
+                     "22/tcp": {}
+             }
+        }
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+        Content-Type: application/json
+
+        {
+             "Id":"e90e34656806",
+             "Warnings":[]
+        }
+
+    Json Parameters:
+
+
+    -   **config** – the container’s configuration
+
+    Query Parameters:
+
+
+    -   **name** – Assign the specified name to the container. Must
+        match `/?[a-zA-Z0-9_-]+`.
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **404** – no such container
+    -   **406** – impossible to attach (container not running)
+    -   **500** – server error
+
+#### Inspect a container
+
+ `GET /containers/`(*id*)`/json`
+:   Return low-level information on the container `id`
+
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/json HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+                     "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2",
+                     "Created": "2013-05-07T14:51:42.041847+02:00",
+                     "Path": "date",
+                     "Args": [],
+                     "Config": {
+                             "Hostname": "4fa6e0f0c678",
+                             "User": "",
+                             "Memory": 0,
+                             "MemorySwap": 0,
+                             "AttachStdin": false,
+                             "AttachStdout": true,
+                             "AttachStderr": true,
+                             "PortSpecs": null,
+                             "Tty": false,
+                             "OpenStdin": false,
+                             "StdinOnce": false,
+                             "Env": null,
+                             "Cmd": [
+                                     "date"
+                             ],
+                             "Dns": null,
+                             "Image": "base",
+                             "Volumes": {},
+                             "VolumesFrom": "",
+                             "WorkingDir":""
+
+                     },
+                     "State": {
+                             "Running": false,
+                             "Pid": 0,
+                             "ExitCode": 0,
+                             "StartedAt": "2013-05-07T14:51:42.087658+02:00",
+                             "Ghost": false
+                     },
+                     "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+                     "NetworkSettings": {
+                             "IpAddress": "",
+                             "IpPrefixLen": 0,
+                             "Gateway": "",
+                             "Bridge": "",
+                             "PortMapping": null
+                     },
+                     "SysInitPath": "/home/kitty/go/src/github.com/dotcloud/docker/bin/docker",
+                     "ResolvConfPath": "/etc/resolv.conf",
+                     "Volumes": {},
+                     "HostConfig": {
+                         "Binds": null,
+                         "ContainerIDFile": "",
+                         "LxcConf": [],
+                         "Privileged": false,
+                         "PortBindings": {
+                            "80/tcp": [
+                                {
+                                    "HostIp": "0.0.0.0",
+                                    "HostPort": "49153"
+                                }
+                            ]
+                         },
+                         "Links": null,
+                         "PublishAllPorts": false
+                     }
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### List processes running inside a container
+
+ `GET /containers/`(*id*)`/top`
+:   List processes running inside the container `id`
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/top HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Titles":[
+                     "USER",
+                     "PID",
+                     "%CPU",
+                     "%MEM",
+                     "VSZ",
+                     "RSS",
+                     "TTY",
+                     "STAT",
+                     "START",
+                     "TIME",
+                     "COMMAND"
+                     ],
+             "Processes":[
+                     ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"],
+                     ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"]
+             ]
+        }
+
+    Query Parameters:
+
+
+    -   **ps\_args** – ps arguments to use (eg. aux)
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
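
The `Titles` and `Processes` arrays line up column by column, so a client can zip them into one dictionary per process. A minimal sketch (the sample data is abridged from the response above; folding trailing fields into `COMMAND` is an assumption about how commands containing spaces are split):

```python
def parse_top(body):
    """Pair each row of the `/top` response with the column titles."""
    titles = body["Titles"]
    processes = []
    for row in body["Processes"]:
        # A row can hold more fields than there are titles when the
        # command itself contains spaces (e.g. "sleep 10"); fold the
        # extra fields back into the COMMAND column.
        head, tail = row[:len(titles) - 1], row[len(titles) - 1:]
        processes.append(dict(zip(titles, head + [" ".join(tail)])))
    return processes

sample = {
    "Titles": ["USER", "PID", "COMMAND"],
    "Processes": [["root", "20147", "bash"],
                  ["root", "20271", "sleep", "10"]],
}
```
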
+
+#### Inspect changes on a container’s filesystem
+
+ `GET /containers/`(*id*)`/changes`
+:   Inspect changes on container `id`’s filesystem
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/changes HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Path":"/dev",
+                     "Kind":0
+             },
+             {
+                     "Path":"/dev/kmsg",
+                     "Kind":1
+             },
+             {
+                     "Path":"/test",
+                     "Kind":1
+             }
+        ]
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Export a container
+
+ `GET /containers/`(*id*)`/export`
+:   Export the contents of container `id`
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/export HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/octet-stream
+
+        {{ STREAM }}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Start a container
+
+ `POST /containers/`(*id*)`/start`
+:   Start the container `id`
+
+    **Example request**:
+
+        POST /containers/(id)/start HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Binds":["/tmp:/tmp"],
+             "LxcConf":{"lxc.utsname":"docker"},
+             "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] },
+             "PublishAllPorts":false,
+             "Privileged":false
+        }
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+        Content-Type: text/plain
+
+    Json Parameters:
+
+
+    -   **hostConfig** – the container’s host configuration (optional)
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
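
Assembling and posting the host configuration shown above can be sketched with only the Python standard library. This is a hypothetical helper, not part of the API; the daemon address used in the docstring is an assumption matching the TCP binding shown in section 3.3:

```python
import http.client
import json

def build_host_config(binds=None, port_bindings=None, privileged=False):
    """Build the JSON body for POST /containers/(id)/start."""
    return {
        "Binds": binds or [],
        "LxcConf": {},
        "PortBindings": port_bindings or {},
        "PublishAllPorts": False,
        "Privileged": privileged,
    }

def start_container(host, container_id, **kwargs):
    """POST the host config; returns the HTTP status (204 on success).

    `host` is e.g. "192.168.1.9:4243" when the daemon listens on TCP.
    """
    conn = http.client.HTTPConnection(host)
    conn.request("POST", "/containers/%s/start" % container_id,
                 json.dumps(build_host_config(**kwargs)),
                 {"Content-Type": "application/json"})
    status = conn.getresponse().status
    conn.close()
    return status
```
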
+
+#### Stop a container
+
+ `POST /containers/`(*id*)`/stop`
+:   Stop the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/stop?t=5 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Query Parameters:
+
+
+    -   **t** – number of seconds to wait before killing the container
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Restart a container
+
+ `POST /containers/`(*id*)`/restart`
+:   Restart the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/restart?t=5 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Query Parameters:
+
+
+    -   **t** – number of seconds to wait before killing the container
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Kill a container
+
+ `POST /containers/`(*id*)`/kill`
+:   Kill the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/kill HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Attach to a container
+
+ `POST /containers/`(*id*)`/attach`
+:   Attach to the container `id`
+
+    **Example request**:
+
+        POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/vnd.docker.raw-stream
+
+        {{ STREAM }}
+
+    Query Parameters:
+
+
+    -   **logs** – 1/True/true or 0/False/false, return logs. Default
+        false
+    -   **stream** – 1/True/true or 0/False/false, return stream.
+        Default false
+    -   **stdin** – 1/True/true or 0/False/false, if stream=true, attach
+        to stdin. Default false
+    -   **stdout** – 1/True/true or 0/False/false, if logs=true, return
+        stdout log, if stream=true, attach to stdout. Default false
+    -   **stderr** – 1/True/true or 0/False/false, if logs=true, return
+        stderr log, if stream=true, attach to stderr. Default false
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **400** – bad parameter
+    -   **404** – no such container
+    -   **500** – server error
+
+    **Stream details**:
+
+    When the TTY setting is enabled in
+    [`POST /containers/create`
+](../docker_remote_api_v1.9/#post--containers-create "POST /containers/create"),
+    the stream is the raw data from the process PTY and client’s stdin.
+    When the TTY is disabled, the stream is multiplexed to separate
+    stdout and stderr.
+
+    The format is a **Header** and a **Payload** (frame).
+
+    **HEADER**
+
+    The header contains the information on which stream the payload
+    belongs to (stdout or stderr). It also contains the size of the
+    associated frame, encoded as a uint32 in the last 4 bytes.
+
+    It is encoded on the first 8 bytes like this:
+
+        header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
+
+    `STREAM_TYPE` can be:
+
+    -   0: stdin (will be written on stdout)
+    -   1: stdout
+    -   2: stderr
+
+    `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of
+    the uint32 size encoded as big endian.
+
+    **PAYLOAD**
+
+    The payload is the raw stream.
+
+    **IMPLEMENTATION**
+
+    The simplest way to implement the Attach protocol is the following:
+
+    1.  Read 8 bytes
+    2.  Choose stdout or stderr depending on the first byte
+    3.  Extract the frame size from the last 4 bytes
+    4.  Read the extracted size and output it on the correct output
+    5.  Goto 1)
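
The five steps above can be sketched as a small demultiplexer. This hedged example assumes the whole multiplexed stream has already been read into memory; a real client would read from the hijacked socket incrementally:

```python
import struct

def demux_attach_stream(data):
    """Split a multiplexed attach stream into (stdout, stderr) bytes.

    Each frame is an 8-byte header -- stream type, 3 zero bytes, then a
    big-endian uint32 payload size -- followed by the payload itself.
    """
    outputs = {1: b"", 2: b""}  # 1: stdout, 2: stderr
    offset = 0
    while offset + 8 <= len(data):
        stream_type, size = struct.unpack(">B3xI", data[offset:offset + 8])
        payload = data[offset + 8:offset + 8 + size]
        # Per the protocol, stdin (type 0) is written on stdout.
        outputs[1 if stream_type in (0, 1) else 2] += payload
        offset += 8 + size
    return outputs[1], outputs[2]
```
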
+
+#### Wait a container
+
+ `POST /containers/`(*id*)`/wait`
+:   Block until container `id` stops, then return
+    the exit code
+
+    **Example request**:
+
+        POST /containers/16253994b7c4/wait HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"StatusCode":0}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Remove a container
+
+ `DELETE /containers/`(*id*)
+:   Remove the container `id` from the filesystem
+
+    **Example request**:
+
+        DELETE /containers/16253994b7c4?v=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Query Parameters:
+
+
+    -   **v** – 1/True/true or 0/False/false, Remove the volumes
+        associated with the container. Default false
+    -   **force** – 1/True/true or 0/False/false, Removes the container
+        even if it was running. Default false
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **400** – bad parameter
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Copy files or folders from a container
+
+ `POST /containers/`(*id*)`/copy`
+:   Copy files or folders of container `id`
+
+    **Example request**:
+
+        POST /containers/4fa6e0f0c678/copy HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Resource":"test.txt"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/octet-stream
+
+        {{ STREAM }}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+### 2.2 Images
+
+#### List Images
+
+ `GET /images/json`
+:   **Example request**:
+
+        GET /images/json?all=0 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+          {
+             "RepoTags": [
+               "ubuntu:12.04",
+               "ubuntu:precise",
+               "ubuntu:latest"
+             ],
+             "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c",
+             "Created": 1365714795,
+             "Size": 131506275,
+             "VirtualSize": 131506275
+          },
+          {
+             "RepoTags": [
+               "ubuntu:12.10",
+               "ubuntu:quantal"
+             ],
+             "ParentId": "27cf784147099545",
+             "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+             "Created": 1364102658,
+             "Size": 24653,
+             "VirtualSize": 180116135
+          }
+        ]
+
+#### Create an image
+
+ `POST /images/create`
+:   Create an image, either by pulling it from the registry or by importing
+    it
+
+    **Example request**:
+
+        POST /images/create?fromImage=base HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Pulling..."}
+        {"status":"Pulling", "progress":"1 B/ 100 B", "progressDetail":{"current":1, "total":100}}
+        {"error":"Invalid..."}
+        ...
+
+    When using this endpoint to pull an image from the registry, the
+    `X-Registry-Auth` header can be used to include
+    a base64-encoded AuthConfig object.
+
+    Query Parameters:
+
+
+    -   **fromImage** – name of the image to pull
+    -   **fromSrc** – source to import, - means stdin
+    -   **repo** – repository
+    -   **tag** – tag
+    -   **registry** – the registry to pull from
+
+    Request Headers:
+
+
+    -   **X-Registry-Auth** – base64-encoded AuthConfig object
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
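
Building the base64-encoded AuthConfig for the `X-Registry-Auth` header can be sketched as follows; the field names mirror the body of `POST /auth` shown later in this document, and `registry_auth_header` is a hypothetical helper:

```python
import base64
import json

def registry_auth_header(username, password, email,
                         serveraddress="https://index.docker.io/v1/"):
    """Encode an AuthConfig object for the X-Registry-Auth header."""
    auth_config = {
        "username": username,
        "password": password,
        "email": email,
        "serveraddress": serveraddress,
    }
    return base64.b64encode(json.dumps(auth_config).encode("utf-8"))
```
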
+
+#### Insert a file in an image
+
+ `POST /images/`(*name*)`/insert`
+:   Insert a file from `url` in the image
+    `name` at `path`
+
+    **Example request**:
+
+        POST /images/test/insert?path=/usr&url=myurl HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Inserting..."}
+        {"status":"Inserting", "progress":"1/? (n/a)", "progressDetail":{"current":1}}
+        {"error":"Invalid..."}
+        ...
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Inspect an image
+
+ `GET /images/`(*name*)`/json`
+:   Return low-level information on the image `name`
+
+    **Example request**:
+
+        GET /images/base/json HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+             "parent":"27cf784147099545",
+             "created":"2013-03-23T22:24:18.818426-07:00",
+             "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0",
+             "container_config":
+                     {
+                             "Hostname":"",
+                             "User":"",
+                             "Memory":0,
+                             "MemorySwap":0,
+                             "AttachStdin":false,
+                             "AttachStdout":false,
+                             "AttachStderr":false,
+                             "PortSpecs":null,
+                             "Tty":true,
+                             "OpenStdin":true,
+                             "StdinOnce":false,
+                             "Env":null,
+                             "Cmd": ["/bin/bash"]
+                             ,"Dns":null,
+                             "Image":"base",
+                             "Volumes":null,
+                             "VolumesFrom":"",
+                             "WorkingDir":""
+                     },
+             "Size": 6824592
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Get the history of an image
+
+ `GET /images/`(*name*)`/history`
+:   Return the history of the image `name`
+
+    **Example request**:
+
+        GET /images/base/history HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Id":"b750fe79269d",
+                     "Created":1364102658,
+                     "CreatedBy":"/bin/bash"
+             },
+             {
+                     "Id":"27cf78414709",
+                     "Created":1364068391,
+                     "CreatedBy":""
+             }
+        ]
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Push an image on the registry
+
+ `POST /images/`(*name*)`/push`
+:   Push the image `name` on the registry
+
+    **Example request**:
+
+        POST /images/test/push HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Pushing..."}
+        {"status":"Pushing", "progress":"1/? (n/a)", "progressDetail":{"current":1}}
+        {"error":"Invalid..."}
+        ...
+
+    Query Parameters:
+
+
+    -   **registry** – the registry you want to push to, optional
+
+    Request Headers:
+
+
+    -   **X-Registry-Auth** – include a base64-encoded AuthConfig
+        object.
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Tag an image into a repository
+
+ `POST /images/`(*name*)`/tag`
+:   Tag the image `name` into a repository
+
+    **Example request**:
+
+        POST /images/test/tag?repo=myrepo&force=0 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+
+    Query Parameters:
+
+
+    -   **repo** – The repository to tag in
+    -   **force** – 1/True/true or 0/False/false, default false
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **400** – bad parameter
+    -   **404** – no such image
+    -   **409** – conflict
+    -   **500** – server error
+
+#### Remove an image
+
+ `DELETE /images/`(*name*)
+:   Remove the image `name` from the filesystem
+
+    **Example request**:
+
+        DELETE /images/test HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-type: application/json
+
+        [
+         {"Untagged":"3e2f21a89f"},
+         {"Deleted":"3e2f21a89f"},
+         {"Deleted":"53b4f83ac9"}
+        ]
+
+    Query Parameters:
+     
+
+    -   **force** – 1/True/true or 0/False/false, default false
+    -   **noprune** – 1/True/true or 0/False/false, default false
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **409** – conflict
+    -   **500** – server error
+
+#### Search images
+
+ `GET /images/search`
+:   Search for an image in the docker index.
+
+    > **Note**: The response keys have changed from API v1.6 to reflect
+    > the JSON sent by the registry server to the docker daemon’s
+    > request.
+
+    **Example request**:
+
+        GET /images/search?term=sshd HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "wma55/u1210sshd",
+                    "star_count": 0
+                },
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "jdswinbank/sshd",
+                    "star_count": 0
+                },
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "vgauthier/sshd",
+                    "star_count": 0
+                }
+        ...
+        ]
+
+    Query Parameters:
+
+
+    -   **term** – term to search
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+### 2.3 Misc
+
+#### Build an image from Dockerfile via stdin
+
+ `POST /build`
+:   Build an image from Dockerfile via stdin
+
+    **Example request**:
+
+        POST /build HTTP/1.1
+
+        {{ STREAM }}
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"stream":"Step 1..."}
+        {"stream":"..."}
+        {"error":"Error...", "errorDetail":{"code": 123, "message": "Error..."}}
+
+    The stream must be a tar archive compressed with one of the
+    following algorithms: identity (no compression), gzip, bzip2, xz.
+
+    The archive must include a file called `Dockerfile`
+ at its root. It may include any number of other files,
+    which will be accessible in the build context (See the [*ADD build
+    command*](../../builder/#dockerbuilder)).
+
+    Query Parameters:
+
+
+    -   **t** – repository name (and optionally a tag) to be applied to
+        the resulting image in case of success
+    -   **q** – suppress verbose build output
+    -   **nocache** – do not use the cache when building the image
+
+    Request Headers:
+
+
+    -   **Content-type** – should be set to
+        `"application/tar"`.
+    -   **X-Registry-Config** – base64-encoded ConfigFile object
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
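
A client can assemble such a tar archive in memory before posting it. A minimal sketch using identity (no) compression; `make_build_context` is a hypothetical helper, not part of the API:

```python
import io
import tarfile

def make_build_context(files):
    """Create an uncompressed in-memory tar archive for POST /build.

    `files` maps archive paths to byte contents and must include a
    `Dockerfile` at the root of the archive.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, content in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(content)
            tar.addfile(info, io.BytesIO(content))
    return buf.getvalue()
```
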
+
+#### Check auth configuration
+
+ `POST /auth`
+:   Get the default username and email
+
+    **Example request**:
+
+        POST /auth HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "username":"hannibal",
+             "password":"xxxx",
+             "email":"hannibal@a-team.com",
+             "serveraddress":"https://index.docker.io/v1/"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **204** – no error
+    -   **500** – server error
+
+#### Display system-wide information
+
+ `GET /info`
+:   Display system-wide information
+
+    **Example request**:
+
+        GET /info HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Containers":11,
+             "Images":16,
+             "Debug":false,
+             "NFd": 11,
+             "NGoroutines":21,
+             "MemoryLimit":true,
+             "SwapLimit":false,
+             "IPv4Forwarding":true
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Show the docker version information
+
+ `GET /version`
+:   Show the docker version information
+
+    **Example request**:
+
+        GET /version HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Version":"0.2.2",
+             "GitCommit":"5a2a5cc+CHANGES",
+             "GoVersion":"go1.0.3"
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Create a new image from a container’s changes
+
+ `POST /commit`
+:   Create a new image from a container’s changes
+
+    **Example request**:
+
+        POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+        Content-Type: application/vnd.docker.raw-stream
+
+        {"Id":"596069db4bf5"}
+
+    Query Parameters:
+
+
+    -   **container** – source container
+    -   **repo** – repository
+    -   **tag** – tag
+    -   **m** – commit message
+    -   **author** – author (eg. "John Hannibal Smith
+        \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>")
+    -   **run** – config automatically applied when the image is run.
+        (ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]})
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Monitor Docker’s events
+
+ `GET /events`
+:   Get events from docker, either in real time via streaming, or via
+    polling (using since)
+
+    **Example request**:
+
+        GET /events?since=1374067924
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"create","id":"dfdf82bd3881","from":"base:latest","time":1374067924}
+        {"status":"start","id":"dfdf82bd3881","from":"base:latest","time":1374067924}
+        {"status":"stop","id":"dfdf82bd3881","from":"base:latest","time":1374067966}
+        {"status":"destroy","id":"dfdf82bd3881","from":"base:latest","time":1374067970}
+
+    Query Parameters:
+
+
+    -   **since** – timestamp used for polling
+    -   **until** – timestamp used for polling
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
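
When polling, the response body is a sequence of newline-delimited JSON objects, so a client can split and decode it line by line; a minimal sketch:

```python
import json

def parse_event_stream(raw):
    """Parse the newline-delimited JSON objects from GET /events."""
    return [json.loads(line) for line in raw.splitlines() if line.strip()]
```
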
+
+#### Get a tarball containing all images and tags in a repository
+
+ `GET /images/`(*name*)`/get`
+:   Get a tarball containing all images and metadata for the repository
+    specified by `name`.
+
+    **Example request**
+
+        GET /images/ubuntu/get
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/x-tar
+
+        Binary data stream
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Load a tarball with a set of images and tags into docker
+
+ `POST /images/load`
+:   Load a set of images and tags into the docker repository.
+
+    **Example request**
+
+        POST /images/load
+
+        Tarball in body
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+## 3. Going further
+
+### 3.1 Inside ‘docker run’
+
+Here are the steps of ‘docker run’:
+
+-   Create the container
+
+-   If the status code is 404, it means the image doesn’t exist:
+    -   Try to pull it
+    -   Then retry to create the container
+
+-   Start the container
+
+-   If you are not in detached mode:
+    -   Attach to the container, using logs=1 (to have stdout and
+        stderr from the container’s start) and stream=1
+
+-   If in detached mode or only stdin is attached:
+    -   Display the container’s id
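
The sequence above can be outlined as a client function. This is a hypothetical sketch: `request(method, path, body)` stands in for an HTTP helper returning a `(status, decoded_body)` pair, and is not part of the API:

```python
def run_container(request, config, detach=False):
    """Sketch of the 'docker run' sequence against the Remote API."""
    status, body = request("POST", "/containers/create", config)
    if status == 404:
        # Image missing: pull it, then retry the create.
        request("POST", "/images/create?fromImage=%s" % config["Image"], None)
        status, body = request("POST", "/containers/create", config)
    container_id = body["Id"]
    request("POST", "/containers/%s/start" % container_id, None)
    if not detach:
        # Attach with logs=1 and stream=1 to get output from the start.
        request("POST",
                "/containers/%s/attach?logs=1&stream=1&stdout=1&stderr=1"
                % container_id, None)
    return container_id
```
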
+
+### 3.2 Hijacking
+
+In this version of the API, `/attach` uses hijacking to transport stdin,
+stdout and stderr on the same socket. This might change in the future.
+
+### 3.3 CORS Requests
+
+To enable cross origin requests to the remote API, add the flag
+`--api-enable-cors` when running docker in daemon mode.
+
+    docker -d -H="192.168.1.9:4243" --api-enable-cors

+ 1255 - 0
docs/sources/reference/api/docker_remote_api_v1.9.md

@@ -0,0 +1,1255 @@
+page_title: Remote API v1.9
+page_description: API Documentation for Docker
+page_keywords: API, Docker, rcli, REST, documentation
+
+# Docker Remote API v1.9
+
+## 1. Brief introduction
+
+-   The Remote API has replaced rcli
+-   The daemon listens on `unix:///var/run/docker.sock`
+, but you can [*Bind Docker to another host/port or a Unix
+    socket*](../../../use/basics/#bind-docker).
+-   The API tends to be REST, but for some complex commands, like
+    `attach` or `pull`, the HTTP
+    connection is hijacked to transport `stdout, stdin`
+ and `stderr`
+
+## 2. Endpoints
+
+### 2.1 Containers
+
+#### List containers
+
+ `GET /containers/json`
+:   List containers
+
+    **Example request**:
+
+        GET /containers/json?all=1&before=8dfafdbc3a40&size=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Id": "8dfafdbc3a40",
+                     "Image": "base:latest",
+                     "Command": "echo 1",
+                     "Created": 1367854155,
+                     "Status": "Exit 0",
+                     "Ports":[{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "9cd87474be90",
+                     "Image": "base:latest",
+                     "Command": "echo 222222",
+                     "Created": 1367854155,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "3176a2479c92",
+                     "Image": "base:latest",
+                     "Command": "echo 3333333333333333",
+                     "Created": 1367854154,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             },
+             {
+                     "Id": "4cb07b47f9fb",
+                     "Image": "base:latest",
+                     "Command": "echo 444444444444444444444444444444444",
+                     "Created": 1367854152,
+                     "Status": "Exit 0",
+                     "Ports":[],
+                     "SizeRw":12288,
+                     "SizeRootFs":0
+             }
+        ]
+
+    Query Parameters:
+
+
+    -   **all** – 1/True/true or 0/False/false, Show all containers.
+        Only running containers are shown by default
+    -   **limit** – Show `limit` last created
+        containers, include non-running ones.
+    -   **since** – Show only containers created since Id, include
+        non-running ones.
+    -   **before** – Show only containers created before Id, include
+        non-running ones.
+    -   **size** – 1/True/true or 0/False/false, Show the containers
+        sizes
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **400** – bad parameter
+    -   **500** – server error
+
+#### Create a container
+
+ `POST /containers/create`
+:   Create a container
+
+    **Example request**:
+
+        POST /containers/create HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Hostname":"",
+             "User":"",
+             "Memory":0,
+             "MemorySwap":0,
+             "CpuShares":0,
+             "AttachStdin":false,
+             "AttachStdout":true,
+             "AttachStderr":true,
+             "PortSpecs":null,
+             "Tty":false,
+             "OpenStdin":false,
+             "StdinOnce":false,
+             "Env":null,
+             "Cmd":[
+                     "date"
+             ],
+             "Dns":null,
+             "Image":"base",
+             "Volumes":{
+                     "/tmp": {}
+             },
+             "VolumesFrom":"",
+             "WorkingDir":"",
+             "ExposedPorts":{
+                     "22/tcp": {}
+             }
+        }
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+        Content-Type: application/json
+
+        {
+             "Id":"e90e34656806",
+             "Warnings":[]
+        }
+
+    Json Parameters:
+
+
+    -   **Hostname** – Container host name
+    -   **User** – Username or UID
+    -   **Memory** – Memory Limit in bytes
+    -   **CpuShares** – CPU shares (relative weight)
+    -   **AttachStdin** – 1/True/true or 0/False/false, attach to
+        standard input. Default false
+    -   **AttachStdout** – 1/True/true or 0/False/false, attach to
+        standard output. Default false
+    -   **AttachStderr** – 1/True/true or 0/False/false, attach to
+        standard error. Default false
+    -   **Tty** – 1/True/true or 0/False/false, allocate a pseudo-tty.
+        Default false
+    -   **OpenStdin** – 1/True/true or 0/False/false, keep stdin open
+        even if not attached. Default false
+
+    Query Parameters:
+
+
+    -   **name** – Assign the specified name to the container. Must
+        match `/?[a-zA-Z0-9_-]+`.
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **404** – no such container
+    -   **406** – impossible to attach (container not running)
+    -   **500** – server error
+
+#### Inspect a container
+
+ `GET /containers/`(*id*)`/json`
+:   Return low-level information on the container `id`
+
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/json HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+                     "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2",
+                     "Created": "2013-05-07T14:51:42.041847+02:00",
+                     "Path": "date",
+                     "Args": [],
+                     "Config": {
+                             "Hostname": "4fa6e0f0c678",
+                             "User": "",
+                             "Memory": 0,
+                             "MemorySwap": 0,
+                             "AttachStdin": false,
+                             "AttachStdout": true,
+                             "AttachStderr": true,
+                             "PortSpecs": null,
+                             "Tty": false,
+                             "OpenStdin": false,
+                             "StdinOnce": false,
+                             "Env": null,
+                             "Cmd": [
+                                     "date"
+                             ],
+                             "Dns": null,
+                             "Image": "base",
+                             "Volumes": {},
+                             "VolumesFrom": "",
+                             "WorkingDir":""
+
+                     },
+                     "State": {
+                             "Running": false,
+                             "Pid": 0,
+                             "ExitCode": 0,
+                             "StartedAt": "2013-05-07T14:51:42.087658+02:00",
+                             "Ghost": false
+                     },
+                     "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+                     "NetworkSettings": {
+                             "IpAddress": "",
+                             "IpPrefixLen": 0,
+                             "Gateway": "",
+                             "Bridge": "",
+                             "PortMapping": null
+                     },
+                     "SysInitPath": "/home/kitty/go/src/github.com/dotcloud/docker/bin/docker",
+                     "ResolvConfPath": "/etc/resolv.conf",
+                     "Volumes": {},
+                     "HostConfig": {
+                         "Binds": null,
+                         "ContainerIDFile": "",
+                         "LxcConf": [],
+                         "Privileged": false,
+                         "PortBindings": {
+                            "80/tcp": [
+                                {
+                                    "HostIp": "0.0.0.0",
+                                    "HostPort": "49153"
+                                }
+                            ]
+                         },
+                         "Links": null,
+                         "PublishAllPorts": false
+                     }
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### List processes running inside a container
+
+ `GET /containers/`(*id*)`/top`
+:   List processes running inside the container `id`
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/top HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Titles":[
+                     "USER",
+                     "PID",
+                     "%CPU",
+                     "%MEM",
+                     "VSZ",
+                     "RSS",
+                     "TTY",
+                     "STAT",
+                     "START",
+                     "TIME",
+                     "COMMAND"
+                     ],
+             "Processes":[
+                     ["root","20147","0.0","0.1","18060","1864","pts/4","S","10:06","0:00","bash"],
+                     ["root","20271","0.0","0.0","4312","352","pts/4","S+","10:07","0:00","sleep","10"]
+             ]
+        }
+
+    Query Parameters:
+
+     
+
+    -   **ps\_args** – ps arguments to use (eg. aux)
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Inspect changes on a container’s filesystem
+
+ `GET /containers/`(*id*)`/changes`
+:   Inspect changes on container `id`’s filesystem
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/changes HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Path":"/dev",
+                     "Kind":0
+             },
+             {
+                     "Path":"/dev/kmsg",
+                     "Kind":1
+             },
+             {
+                     "Path":"/test",
+                     "Kind":1
+             }
+        ]
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Export a container
+
+ `GET /containers/`(*id*)`/export`
+:   Export the contents of container `id`
+
+    **Example request**:
+
+        GET /containers/4fa6e0f0c678/export HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/octet-stream
+
+        {{ STREAM }}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Start a container
+
+ `POST /containers/`(*id*)`/start`
+:   Start the container `id`
+
+    **Example request**:
+
+        POST /containers/(id)/start HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Binds":["/tmp:/tmp"],
+             "LxcConf":{"lxc.utsname":"docker"},
+             "PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] },
+             "PublishAllPorts":false,
+             "Privileged":false
+        }
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+        Content-Type: text/plain
+
+    Json Parameters:
+
+     
+
+    -   **Binds** – Create a bind mount to a directory or file with
+        [host-path]:[container-path]:[rw|ro]. If a directory
+        "container-path" is missing, then docker creates a new volume.
+    -   **LxcConf** – Map of custom lxc options
+    -   **PortBindings** – Expose ports from the container, optionally
+        publishing them via the HostPort flag
+    -   **PublishAllPorts** – 1/True/true or 0/False/false, publish all
+        exposed ports to the host interfaces. Default false
+    -   **Privileged** – 1/True/true or 0/False/false, give extended
+        privileges to this container. Default false
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Stop a container
+
+ `POST /containers/`(*id*)`/stop`
+:   Stop the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/stop?t=5 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Query Parameters:
+
+     
+
+    -   **t** – number of seconds to wait before killing the container
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Restart a container
+
+ `POST /containers/`(*id*)`/restart`
+:   Restart the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/restart?t=5 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Query Parameters:
+
+     
+
+    -   **t** – number of seconds to wait before killing the container
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Kill a container
+
+ `POST /containers/`(*id*)`/kill`
+:   Kill the container `id`
+
+    **Example request**:
+
+        POST /containers/e90e34656806/kill HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 No Content
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Attach to a container
+
+ `POST /containers/`(*id*)`/attach`
+:   Attach to the container `id`
+
+    **Example request**:
+
+        POST /containers/16253994b7c4/attach?logs=1&stream=0&stdout=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/vnd.docker.raw-stream
+
+        {{ STREAM }}
+
+    Query Parameters:
+
+     
+
+    -   **logs** – 1/True/true or 0/False/false, return logs. Default
+        false
+    -   **stream** – 1/True/true or 0/False/false, return stream.
+        Default false
+    -   **stdin** – 1/True/true or 0/False/false, if stream=true, attach
+        to stdin. Default false
+    -   **stdout** – 1/True/true or 0/False/false, if logs=true, return
+        stdout log, if stream=true, attach to stdout. Default false
+    -   **stderr** – 1/True/true or 0/False/false, if logs=true, return
+        stderr log, if stream=true, attach to stderr. Default false
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **400** – bad parameter
+    -   **404** – no such container
+    -   **500** – server error
+
+    **Stream details**:
+
+    When the TTY setting is enabled in
+    [`POST /containers/create`
+](#post--containers-create "POST /containers/create"), the
+    stream is the raw data from the process PTY and client’s stdin. When
+    the TTY is disabled, the stream is multiplexed to separate
+    stdout and stderr.
+
+    The format is a **Header** and a **Payload** (frame).
+
+    **HEADER**
+
+    The header contains the information about the stream over which the
+    payload will be written (stdout or stderr). It also contains the size
+    of the associated frame, encoded in the last 4 bytes (uint32).
+
+    It is encoded on the first 8 bytes like this:
+
+        header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
+
+    `STREAM_TYPE` can be:
+
+    -   0: stdin (will be written on stdout)
+    -   1: stdout
+    -   2: stderr
+
+    `SIZE1, SIZE2, SIZE3, SIZE4` are the 4 bytes of
+    the uint32 size encoded as big endian.
+
+    **PAYLOAD**
+
+    The payload is the raw stream.
+
+    **IMPLEMENTATION**
+
+    The simplest way to implement the Attach protocol is the following:
+
+    1.  Read 8 bytes
+    2.  Choose stdout or stderr depending on the first byte
+    3.  Extract the frame size from the last 4 bytes
+    4.  Read the extracted size and output it on the correct output
+    5.  Goto 1)
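    That loop can be sketched in Python (a minimal demultiplexer run over an
    in-memory buffer; the `demux` helper and the sample frames are
    illustrative, not part of the API):

```python
import struct
from io import BytesIO

def demux(stream):
    """Split a multiplexed attach stream into (stdout, stderr) bytes."""
    out, err = bytearray(), bytearray()
    while True:
        header = stream.read(8)                      # 1. read 8 bytes
        if len(header) < 8:                          # end of stream
            break
        stream_type = header[0]                      # 2. first byte picks the output
        size = struct.unpack(">I", header[4:8])[0]   # 3. big-endian uint32 frame size
        payload = stream.read(size)                  # 4. read `size` payload bytes
        (err if stream_type == 2 else out).extend(payload)
    return bytes(out), bytes(err)                    # 5. loop until the stream ends

# Two illustrative frames: "hi\n" on stdout, "oops\n" on stderr.
frames = (b"\x01\x00\x00\x00\x00\x00\x00\x03hi\n"
          b"\x02\x00\x00\x00\x00\x00\x00\x05oops\n")
stdout, stderr = demux(BytesIO(frames))
```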
+
+#### Wait a container
+
+ `POST /containers/`(*id*)`/wait`
+:   Block until container `id` stops, then return
+    the exit code
+
+    **Example request**:
+
+        POST /containers/16253994b7c4/wait HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"StatusCode":0}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Remove a container
+
+ `DELETE /containers/`(*id*)
+:   Remove the container `id` from the filesystem
+
+    **Example request**:
+
+        DELETE /containers/16253994b7c4?v=1 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 204 OK
+
+    Query Parameters:
+
+     
+
+    -   **v** – 1/True/true or 0/False/false, Remove the volumes
+        associated to the container. Default false
+
+    Status Codes:
+
+    -   **204** – no error
+    -   **400** – bad parameter
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Copy files or folders from a container
+
+ `POST /containers/`(*id*)`/copy`
+:   Copy files or folders of container `id`
+
+    **Example request**:
+
+        POST /containers/4fa6e0f0c678/copy HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "Resource":"test.txt"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/octet-stream
+
+        {{ STREAM }}
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+### 2.2 Images
+
+#### List Images
+
+ `GET /images/json`
+:   **Example request**:
+
+        GET /images/json?all=0 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+          {
+             "RepoTags": [
+               "ubuntu:12.04",
+               "ubuntu:precise",
+               "ubuntu:latest"
+             ],
+             "Id": "8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c",
+             "Created": 1365714795,
+             "Size": 131506275,
+             "VirtualSize": 131506275
+          },
+          {
+             "RepoTags": [
+               "ubuntu:12.10",
+               "ubuntu:quantal"
+             ],
+             "ParentId": "27cf784147099545",
+             "Id": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+             "Created": 1364102658,
+             "Size": 24653,
+             "VirtualSize": 180116135
+          }
+        ]
+
+#### Create an image
+
+ `POST /images/create`
+:   Create an image, either by pulling it from the registry or by
+    importing it
+
+    **Example request**:
+
+        POST /images/create?fromImage=base HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Pulling..."}
+        {"status":"Pulling", "progress":"1 B/ 100 B", "progressDetail":{"current":1, "total":100}}
+        {"error":"Invalid..."}
+        ...
+
+    When using this endpoint to pull an image from the registry, the
+    `X-Registry-Auth` header can be used to include
+    a base64-encoded AuthConfig object.
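
    For instance, the header value can be produced as follows (a hedged
    sketch using Python's standard library; the credential values are
    placeholders):

```python
import base64
import json

# Placeholder credentials -- substitute real values.
auth_config = {
    "username": "hannibal",
    "password": "xxxx",
    "email": "hannibal@a-team.com",
    "serveraddress": "https://index.docker.io/v1/",
}

# X-Registry-Auth carries the AuthConfig JSON, base64-encoded.
x_registry_auth = base64.b64encode(
    json.dumps(auth_config).encode("utf-8")
).decode("ascii")
```

    The resulting string would then be sent as the `X-Registry-Auth`
    request header.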
+
+    Query Parameters:
+
+     
+
+    -   **fromImage** – name of the image to pull
+    -   **fromSrc** – source to import, - means stdin
+    -   **repo** – repository
+    -   **tag** – tag
+    -   **registry** – the registry to pull from
+
+    Request Headers:
+
+     
+
+    -   **X-Registry-Auth** – base64-encoded AuthConfig object
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Insert a file in an image
+
+ `POST /images/`(*name*)`/insert`
+:   Insert a file from `url` in the image
+    `name` at `path`
+
+    **Example request**:
+
+        POST /images/test/insert?path=/usr&url=myurl HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Inserting..."}
+        {"status":"Inserting", "progress":"1/? (n/a)", "progressDetail":{"current":1}}
+        {"error":"Invalid..."}
+        ...
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Inspect an image
+
+ `GET /images/`(*name*)`/json`
+:   Return low-level information on the image `name`
+
+    **Example request**:
+
+        GET /images/base/json HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+             "parent":"27cf784147099545",
+             "created":"2013-03-23T22:24:18.818426-07:00",
+             "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0",
+             "container_config":
+                     {
+                             "Hostname":"",
+                             "User":"",
+                             "Memory":0,
+                             "MemorySwap":0,
+                             "AttachStdin":false,
+                             "AttachStdout":false,
+                             "AttachStderr":false,
+                             "PortSpecs":null,
+                             "Tty":true,
+                             "OpenStdin":true,
+                             "StdinOnce":false,
+                             "Env":null,
+                             "Cmd": ["/bin/bash"]
+                             ,"Dns":null,
+                             "Image":"base",
+                             "Volumes":null,
+                             "VolumesFrom":"",
+                             "WorkingDir":""
+                     },
+             "Size": 6824592
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Get the history of an image
+
+ `GET /images/`(*name*)`/history`
+:   Return the history of the image `name`
+
+    **Example request**:
+
+        GET /images/base/history HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+             {
+                     "Id":"b750fe79269d",
+                     "Created":1364102658,
+                     "CreatedBy":"/bin/bash"
+             },
+             {
+                     "Id":"27cf78414709",
+                     "Created":1364068391,
+                     "CreatedBy":""
+             }
+        ]
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Push an image on the registry
+
+ `POST /images/`(*name*)`/push`
+:   Push the image `name` on the registry
+
+    **Example request**:
+
+        POST /images/test/push HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"Pushing..."}
+        {"status":"Pushing", "progress":"1/? (n/a)", "progressDetail":{"current":1}}
+        {"error":"Invalid..."}
+        ...
+
+    Query Parameters:
+
+     
+
+    -   **registry** – the registry you want to push to, optional
+
+    Request Headers:
+
+     
+
+    -   **X-Registry-Auth** – include a base64-encoded AuthConfig
+        object.
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **500** – server error
+
+#### Tag an image into a repository
+
+ `POST /images/`(*name*)`/tag`
+:   Tag the image `name` into a repository
+
+    **Example request**:
+
+        POST /images/test/tag?repo=myrepo&force=0 HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+
+    Query Parameters:
+
+     
+
+    -   **repo** – The repository to tag in
+    -   **force** – 1/True/true or 0/False/false, default false
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **400** – bad parameter
+    -   **404** – no such image
+    -   **409** – conflict
+    -   **500** – server error
+
+#### Remove an image
+
+ `DELETE /images/`(*name*)
+:   Remove the image `name` from the filesystem
+
+    **Example request**:
+
+        DELETE /images/test HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-type: application/json
+
+        [
+         {"Untagged":"3e2f21a89f"},
+         {"Deleted":"3e2f21a89f"},
+         {"Deleted":"53b4f83ac9"}
+        ]
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **404** – no such image
+    -   **409** – conflict
+    -   **500** – server error
+
+#### Search images
+
+ `GET /images/search`
+:   Search for an image in the docker index.
+
+    Note
+
+    The response keys have changed from API v1.6 to reflect the JSON
+    sent by the registry server to the docker daemon’s request.
+
+    **Example request**:
+
+        GET /images/search?term=sshd HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        [
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "wma55/u1210sshd",
+                    "star_count": 0
+                },
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "jdswinbank/sshd",
+                    "star_count": 0
+                },
+                {
+                    "description": "",
+                    "is_official": false,
+                    "is_trusted": false,
+                    "name": "vgauthier/sshd",
+                    "star_count": 0
+                }
+        ...
+        ]
+
+    Query Parameters:
+
+     
+
+    -   **term** – term to search
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+### 2.3 Misc
+
+#### Build an image from Dockerfile
+
+ `POST /build`
+:   Build an image from a Dockerfile using a POST body.
+
+    **Example request**:
+
+        POST /build HTTP/1.1
+
+        {{ STREAM }}
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"stream":"Step 1..."}
+        {"stream":"..."}
+        {"error":"Error...", "errorDetail":{"code": 123, "message": "Error..."}}
+
+    The stream must be a tar archive compressed with one of the
+    following algorithms: identity (no compression), gzip, bzip2, xz.
+
+    The archive must include a file called `Dockerfile`
+ at its root. It may include any number of other files,
+    which will be accessible in the build context (See the [*ADD build
+    command*](../../builder/#dockerbuilder)).
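
    For example, a minimal build context could be assembled in memory like
    this (a sketch using Python's `tarfile`; the Dockerfile content is
    illustrative):

```python
import io
import tarfile

dockerfile = b"FROM base\nRUN echo hello\n"   # illustrative contents

buf = io.BytesIO()
# gzip is one of the accepted compression algorithms.
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="Dockerfile")  # must live at the archive root
    info.size = len(dockerfile)
    tar.addfile(info, io.BytesIO(dockerfile))

context = buf.getvalue()  # POST this as the request body
```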
+
+    Query Parameters:
+
+     
+
+    -   **t** – repository name (and optionally a tag) to be applied to
+        the resulting image in case of success
+    -   **q** – suppress verbose build output
+    -   **nocache** – do not use the cache when building the image
+    -   **rm** – Remove intermediate containers after a successful build
+
+    Request Headers:
+
+     
+
+    -   **Content-type** – should be set to
+        `"application/tar"`.
+    -   **X-Registry-Config** – base64-encoded ConfigFile object
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Check auth configuration
+
+ `POST /auth`
+:   Get the default username and email
+
+    **Example request**:
+
+        POST /auth HTTP/1.1
+        Content-Type: application/json
+
+        {
+             "username":"hannibal",
+             "password":"xxxx",
+             "email":"hannibal@a-team.com",
+             "serveraddress":"https://index.docker.io/v1/"
+        }
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **204** – no error
+    -   **500** – server error
+
+#### Display system-wide information
+
+ `GET /info`
+:   Display system-wide information
+
+    **Example request**:
+
+        GET /info HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Containers":11,
+             "Images":16,
+             "Debug":false,
+             "NFd": 11,
+             "NGoroutines":21,
+             "MemoryLimit":true,
+             "SwapLimit":false,
+             "IPv4Forwarding":true
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Show the docker version information
+
+ `GET /version`
+:   Show the docker version information
+
+    **Example request**:
+
+        GET /version HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+             "Version":"0.2.2",
+             "GitCommit":"5a2a5cc+CHANGES",
+             "GoVersion":"go1.0.3"
+        }
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Create a new image from a container’s changes
+
+ `POST /commit`
+:   Create a new image from a container’s changes
+
+    **Example request**:
+
+        POST /commit?container=44c004db4b17&m=message&repo=myrepo HTTP/1.1
+
+    **Example response**:
+
+        HTTP/1.1 201 Created
+        Content-Type: application/vnd.docker.raw-stream
+
+        {"Id":"596069db4bf5"}
+
+    Query Parameters:
+
+     
+
+    -   **container** – source container
+    -   **repo** – repository
+    -   **tag** – tag
+    -   **m** – commit message
+    -   **author** – author (eg. "John Hannibal Smith
+        \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>")
+    -   **run** – config automatically applied when the image is run.
+        (ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]})
+
+    Status Codes:
+
+    -   **201** – no error
+    -   **404** – no such container
+    -   **500** – server error
+
+#### Monitor Docker’s events
+
+ `GET /events`
+:   Get events from docker, either in real time via streaming, or via
+    polling (using since)
+
+    **Example request**:
+
+        GET /events?since=1374067924
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {"status":"create","id":"dfdf82bd3881","from":"base:latest","time":1374067924}
+        {"status":"start","id":"dfdf82bd3881","from":"base:latest","time":1374067924}
+        {"status":"stop","id":"dfdf82bd3881","from":"base:latest","time":1374067966}
+        {"status":"destroy","id":"dfdf82bd3881","from":"base:latest","time":1374067970}
+
+    Query Parameters:
+
+     
+
+    -   **since** – timestamp used for polling
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
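
    Because each line of the body is a self-contained JSON object, a client
    can decode the stream line by line (a minimal sketch over the sample
    lines above):

```python
import json

# Two of the sample event lines from the response above.
raw = (
    '{"status":"create","id":"dfdf82bd3881","from":"base:latest","time":1374067924}\n'
    '{"status":"start","id":"dfdf82bd3881","from":"base:latest","time":1374067924}\n'
)

events = [json.loads(line) for line in raw.splitlines() if line.strip()]
statuses = [event["status"] for event in events]
```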
+
+#### Get a tarball containing all images and tags in a repository
+
+ `GET /images/`(*name*)`/get`
+:   Get a tarball containing all images and metadata for the repository
+    specified by `name`.
+
+    **Example request**
+
+        GET /images/ubuntu/get
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Content-Type: application/x-tar
+
+        Binary data stream
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+#### Load a tarball with a set of images and tags into docker
+
+ `POST /images/load`
+:   Load a set of images and tags into the docker repository.
+
+    **Example request**
+
+        POST /images/load
+
+        Tarball in body
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+
+    Status Codes:
+
+    -   **200** – no error
+    -   **500** – server error
+
+## 3. Going further
+
+### 3.1 Inside ‘docker run’
+
+Here are the steps of ‘docker run’:
+
+-   Create the container
+
+-   If the status code is 404, it means the image doesn’t exist:
+    :   -   Try to pull it
+        -   Then retry to create the container
+
+-   Start the container
+
+-   If you are not in detached mode:
+    :   -   Attach to the container, using logs=1 (to have stdout and
+            stderr from the container’s start) and stream=1
+
+-   If in detached mode or only stdin is attached:
+    :   -   Display the container’s id
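
The decision flow above can be sketched as a small function over an
abstract client (the `client` callable and its `(status, id)` return shape
are assumptions for illustration, not part of the API):

```python
def run(client, image):
    """Mimic 'docker run': create, pull on 404, retry, then start."""
    status, container_id = client("create", image)
    if status == 404:                       # the image doesn't exist yet
        client("pull", image)               # try to pull it ...
        status, container_id = client("create", image)  # ... then retry
    client("start", container_id)
    return container_id

# A fake client that 404s the first create, to exercise pull-and-retry.
calls = []
def fake_client(op, arg):
    calls.append(op)
    if op == "create" and "pull" not in calls:
        return 404, None
    return 201, "dfdf82bd3881"

cid = run(fake_client, "base")
```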
+
+### 3.2 Hijacking
+
+In this version of the API, /attach uses hijacking to transport stdin,
+stdout and stderr on the same socket. This might change in the future.
+
+### 3.3 CORS Requests
+
+To enable cross-origin requests to the Remote API, add the flag
+`--api-enable-cors` when running docker in daemon mode.
+
+    docker -d -H="192.168.1.9:4243" --api-enable-cors

+ 525 - 0
docs/sources/reference/api/index_api.md

@@ -0,0 +1,525 @@
+page_title: Index API
+page_description: API Documentation for Docker Index
+page_keywords: API, Docker, index, REST, documentation
+
+# Docker Index API
+
+## Introduction
+
+- This is the REST API for the Docker index
+- Authorization is done with basic auth over SSL
+- Not all commands require authentication, only those noted as such.
+
+## Repository
+
+### Repositories
+
+### User Repo
+
+ `PUT /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/`
+:   Create a user repository with the given `namespace`
+ and `repo_name`.
+
+    **Example Request**:
+
+        PUT /v1/repositories/foo/bar/ HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Basic akmklmasadalkm==
+        X-Docker-Token: true
+
+        [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
+
+    Parameters:
+
+    - **namespace** – the namespace for the repo
+    - **repo\_name** – the name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=write
+        X-Docker-Token: signature=123abc,repository="foo/bar",access=write
+        X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
+
+        ""
+
+    Status Codes:
+
+    - **200** – Created
+    - **400** – Errors (invalid json, missing or invalid fields, etc)
+    - **401** – Unauthorized
+    - **403** – Account is not Active
+
+ `DELETE /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/`
+:   Delete a user repository with the given `namespace`
+ and `repo_name`.
+
+    **Example Request**:
+
+        DELETE /v1/repositories/foo/bar/ HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Basic akmklmasadalkm==
+        X-Docker-Token: true
+
+        ""
+
+    Parameters:
+
+    - **namespace** – the namespace for the repo
+    - **repo\_name** – the name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 202
+        Vary: Accept
+        Content-Type: application/json
+        WWW-Authenticate: Token signature=123abc,repository="foo/bar",access=delete
+        X-Docker-Token: signature=123abc,repository="foo/bar",access=delete
+        X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
+
+        ""
+
+    Status Codes:
+
+    - **200** – Deleted
+    - **202** – Accepted
+    - **400** – Errors (invalid json, missing or invalid fields, etc)
+    - **401** – Unauthorized
+    - **403** – Account is not Active
+
+### Library Repo
+
+ `PUT /v1/repositories/`(*repo\_name*)`/`
+:   Create a library repository with the given `repo_name`
+. This is a restricted feature only available to docker
+    admins.
+
+    When namespace is missing, it is assumed to be `library`
+
+
+    **Example Request**:
+
+        PUT /v1/repositories/foobar/ HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Basic akmklmasadalkm==
+        X-Docker-Token: true
+
+        [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
+
+    Parameters:
+
+    - **repo\_name** – the library name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        WWW-Authenticate: Token signature=123abc,repository="library/foobar",access=write
+        X-Docker-Token: signature=123abc,repository="foo/bar",access=write
+        X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
+
+        ""
+
+    Status Codes:
+
+    - **200** – Created
+    - **400** – Errors (invalid json, missing or invalid fields, etc)
+    - **401** – Unauthorized
+    - **403** – Account is not Active
+
+ `DELETE /v1/repositories/`(*repo\_name*)`/`
+:   Delete a library repository with the given `repo_name`
+. This is a restricted feature only available to docker
+    admins.
+
+    When namespace is missing, it is assumed to be `library`
+
+
+    **Example Request**:
+
+        DELETE /v1/repositories/foobar/ HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Basic akmklmasadalkm==
+        X-Docker-Token: true
+
+        ""
+
+    Parameters:
+
+    - **repo\_name** – the library name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 202
+        Vary: Accept
+        Content-Type: application/json
+        WWW-Authenticate: Token signature=123abc,repository="library/foobar",access=delete
+        X-Docker-Token: signature=123abc,repository="foo/bar",access=delete
+        X-Docker-Endpoints: registry-1.docker.io [, registry-2.docker.io]
+
+        ""
+
+    Status Codes:
+
+    - **200** – Deleted
+    - **202** – Accepted
+    - **400** – Errors (invalid json, missing or invalid fields, etc)
+    - **401** – Unauthorized
+    - **403** – Account is not Active
+
+### Repository Images
+
+### User Repo Images
+
+ `PUT /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/images`
+:   Update the images for a user repo.
+
+    **Example Request**:
+
+        PUT /v1/repositories/foo/bar/images HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Basic akmklmasadalkm==
+
+        [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+        "checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
+
+    Parameters:
+
+    - **namespace** – the namespace for the repo
+    - **repo\_name** – the name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 204
+        Vary: Accept
+        Content-Type: application/json
+
+        ""
+
+    Status Codes:
+
+    - **204** – Created
+    - **400** – Errors (invalid json, missing or invalid fields, etc)
+    - **401** – Unauthorized
+    - **403** – Account is not Active or permission denied
+
+ `GET /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/images`
+:   Get the images for a user repo.
+
+    **Example Request**:
+
+        GET /v1/repositories/foo/bar/images HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+
+    Parameters:
+
+    - **namespace** – the namespace for the repo
+    - **repo\_name** – the name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+
+        [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+        "checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"},
+        {"id": "ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds",
+        "checksum": "34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew"}]
+
+    Status Codes:
+
+    - **200** – OK
+    - **404** – Not found
+
+### Library Repo Images
+
+ `PUT /v1/repositories/`(*repo\_name*)`/images`
+:   Update the images for a library repo.
+
+    **Example Request**:
+
+        PUT /v1/repositories/foobar/images HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Basic akmklmasadalkm==
+
+        [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+        "checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
+
+    Parameters:
+
+    - **repo\_name** – the library name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 204
+        Vary: Accept
+        Content-Type: application/json
+
+        ""
+
+    Status Codes:
+
+    - **204** – Created
+    - **400** – Errors (invalid json, missing or invalid fields, etc)
+    - **401** – Unauthorized
+    - **403** – Account is not Active or permission denied
+
+ `GET /v1/repositories/`(*repo\_name*)`/images`
+:   get the images for a library repo.
+
+    **Example Request**:
+
+        GET /v1/repositories/foobar/images HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+
+    Parameters:
+
+    - **repo\_name** – the library name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+
+        [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+        "checksum": "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"},
+        {"id": "ertwetewtwe38722009fe6857087b486531f9a779a0c1dfddgfgsdgdsgds",
+        "checksum": "34t23f23fc17e3ed29dae8f12c4f9e89cc6f0bsdfgfsdgdsgdsgerwgew"}]
+
+    Status Codes:
+
+    - **200** – OK
+    - **404** – Not found
+
+### Repository Authorization
+
+### Library Repo
+
+ `PUT /v1/repositories/`(*repo\_name*)`/auth`
+:   authorize a token for a library repo
+
+    **Example Request**:
+
+        PUT /v1/repositories/foobar/auth HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Authorization: Token signature=123abc,repository="library/foobar",access=write
+
+    Parameters:
+
+    - **repo\_name** – the library name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+
+        "OK"
+
+    Status Codes:
+
+    - **200** – OK
+    - **403** – Permission denied
+    - **404** – Not found
+
+### User Repo
+
+ `PUT /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/auth`
+:   authorize a token for a user repo
+
+    **Example Request**:
+
+        PUT /v1/repositories/foo/bar/auth HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Authorization: Token signature=123abc,repository="foo/bar",access=write
+
+    Parameters:
+
+    - **namespace** – the namespace for the repo
+    - **repo\_name** – the name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+
+        "OK"
+
+    Status Codes:
+
+    - **200** – OK
+    - **403** – Permission denied
+    - **404** – Not found
+
+### Users
+
+### User Login
+
+ `GET /v1/users`
+:   If you want to check your login, you can try this endpoint
+
+    **Example Request**:
+
+        GET /v1/users HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Authorization: Basic akmklmasadalkm==
+
+    **Example Response**:
+
+        HTTP/1.1 200 OK
+        Vary: Accept
+        Content-Type: application/json
+
+        OK
+
+    Status Codes:
+
+    - **200** – no error
+    - **401** – Unauthorized
+    - **403** – Account is not Active
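+
+The `Authorization: Basic` value in the examples above is just a
+base64-encoded `username:password` pair. A minimal sketch of building
+that header (the credentials here are made up):
+
+```python
+import base64
+
+def basic_auth_header(username, password):
+    # HTTP Basic auth: base64-encode "username:password" and prefix
+    # the scheme name, as in "Authorization: Basic QWxhZGRpbjo...".
+    credentials = "%s:%s" % (username, password)
+    token = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
+    return "Basic " + token
+
+print(basic_auth_header("foobar", "toto42"))
+```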
+
+### User Register
+
+ `POST /v1/users`
+:   Registering a new account.
+
+    **Example request**:
+
+        POST /v1/users HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Content-Type: application/json
+
+        {"email": "sam@dotcloud.com",
+         "password": "toto42",
+         "username": "foobar"'}
+
+    JSON Parameters:
+
+    - **email** – valid email address, which needs to be confirmed
+    - **username** – min 4 characters, max 30 characters, must match
+        the regular expression [a-z0-9\_]
+    - **password** – min 5 characters
+
+    **Example Response**:
+
+        HTTP/1.1 201 Created
+        Vary: Accept
+        Content-Type: application/json
+
+        "User Created"
+
+    Status Codes:
+
+    - **201** – User Created
+    - **400** – Errors (invalid json, missing or invalid fields, etc)
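+
+The constraints above can be checked client-side before POSTing; the
+Index remains authoritative. A sketch (the helper name is ours, not part
+of the API):
+
+```python
+import json
+import re
+
+USERNAME_RE = re.compile(r"^[a-z0-9_]{4,30}$")
+
+def build_registration(email, username, password):
+    # Mirror the documented constraints: username 4-30 characters from
+    # [a-z0-9_], password at least 5 characters. Email confirmation
+    # happens server-side after registration.
+    if not USERNAME_RE.match(username):
+        raise ValueError("username: 4-30 characters from [a-z0-9_]")
+    if len(password) < 5:
+        raise ValueError("password: minimum 5 characters")
+    return json.dumps({"email": email, "username": username,
+                       "password": password})
+
+print(build_registration("sam@dotcloud.com", "foobar", "toto42"))
+```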
+
+### Update User
+
+ `PUT /v1/users/`(*username*)`/`
+:   Change the password or email address for a given user. If you pass
+    in an email, it will be added to your account; it will not remove
+    the old one. Passwords will be updated.
+
+    It is up to the client to verify that the password sent is the one
+    the user intended. A common approach is to have them type it twice.
+
+    **Example Request**:
+
+        PUT /v1/users/fakeuser/ HTTP/1.1
+        Host: index.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Basic akmklmasadalkm==
+
+        {"email": "sam@dotcloud.com",
+         "password": "toto42"}
+
+    Parameters:
+
+    - **username** – username for the person you want to update
+
+    **Example Response**:
+
+        HTTP/1.1 204
+        Vary: Accept
+        Content-Type: application/json
+
+        ""
+
+    Status Codes:
+
+    - **204** – User Updated
+    - **400** – Errors (invalid json, missing or invalid fields, etc)
+    - **401** – Unauthorized
+    - **403** – Account is not Active
+    - **404** – User not found
+
+## Search
+
+If you need to search the index, this is the endpoint you would use.
+
+### Search
+
+ `GET /v1/search`
+:   Search the Index given a search term. It accepts
+    [GET](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.3)
+    only.
+
+    **Example request**:
+
+        GET /v1/search?q=search_term HTTP/1.1
+        Host: example.com
+        Accept: application/json
+
+    **Example response**:
+
+        HTTP/1.1 200 OK
+        Vary: Accept
+        Content-Type: application/json
+
+        {"query":"search_term",
+          "num_results": 3,
+          "results" : [
+             {"name": "ubuntu", "description": "An ubuntu image..."},
+             {"name": "centos", "description": "A centos image..."},
+             {"name": "fedora", "description": "A fedora image..."}
+           ]
+         }
+
+    Query Parameters:
+
+    - **q** – what you want to search for
+
+    Status Codes:
+
+    - **200** – no error
+    - **500** – server error
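+
+Consuming the response is straightforward JSON handling; a sketch using
+the sample payload above:
+
+```python
+import json
+
+# Sample payload copied from the example response above.
+raw = """{"query": "search_term",
+  "num_results": 3,
+  "results": [
+     {"name": "ubuntu", "description": "An ubuntu image..."},
+     {"name": "centos", "description": "A centos image..."},
+     {"name": "fedora", "description": "A fedora image..."}
+   ]}"""
+
+data = json.loads(raw)
+assert data["num_results"] == len(data["results"])
+names = [result["name"] for result in data["results"]]
+print(names)  # ['ubuntu', 'centos', 'fedora']
+```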
+
+

+ 501 - 0
docs/sources/reference/api/registry_api.md

@@ -0,0 +1,501 @@
+page_title: Registry API
+page_description: API Documentation for Docker Registry
+page_keywords: API, Docker, index, registry, REST, documentation
+
+# Docker Registry API
+
+## Introduction
+
+- This is the REST API for the Docker Registry
+- It stores the images and the graph for a set of repositories
+- It does not have user accounts data
+- It has no notion of user accounts or authorization
+- It delegates authentication and authorization to the Index Auth
+    service using tokens
+- It supports different storage backends (S3, cloud files, local FS)
+- It doesn’t have a local database
+- It will be open-sourced at some point
+
+We expect that there will be multiple registries out there. To help
+grasp the context, here are some examples of registries:
+
+- **sponsor registry**: such a registry is provided by a third-party
+    hosting infrastructure as a convenience for their customers and the
+    docker community as a whole. Its costs are supported by the third
+    party, but the management and operation of the registry are
+    supported by dotCloud. It features read/write access, and delegates
+    authentication and authorization to the Index.
+- **mirror registry**: such a registry is provided by a third-party
+    hosting infrastructure but is targeted at their customers only. Some
+    mechanism (unspecified to date) ensures that public images are
+    pulled from a sponsor registry to the mirror registry, to make sure
+    that the customers of the third-party provider can “docker pull”
+    those images locally.
+- **vendor registry**: such a registry is provided by a software
+    vendor, who wants to distribute docker images. It would be operated
+    and managed by the vendor. Only users authorized by the vendor would
+    be able to get write access. Some images would be public (accessible
+    for anyone), others private (accessible only for authorized users).
+    Authentication and authorization would be delegated to the Index.
+    The goal of vendor registries is to let someone do “docker pull
+    basho/riak1.3” and automatically pull from the vendor registry
+    (instead of a sponsor registry); i.e. get all the convenience of a
+    sponsor registry, while retaining control over the asset distribution.
+- **private registry**: such a registry is located behind a firewall,
+    or protected by an additional security layer (HTTP authorization,
+    SSL client-side certificates, IP address authorization...). The
+    registry is operated by a private entity, outside of dotCloud’s
+    control. It can optionally delegate additional authorization to the
+    Index, but it is not mandatory.
+
+> **Note:** Mirror registries and private registries which do not use
+> the Index don’t even need to run the registry code. They can be
+> implemented by any kind of transport implementing HTTP GET and PUT.
+> Read-only registries can be powered by a simple static HTTP server.
+
+> **Note:** The latter implies that while HTTP is the protocol of choice
+> for a registry, multiple schemes are possible (and in some cases,
+> trivial):
+>
+> - HTTP with GET (and PUT for read-write registries);
+> - local mount point;
+> - remote docker addressed through SSH.
+
+The latter would only require two new commands in docker, e.g.
+`registryget` and `registryput`,
+wrapping access to the local filesystem (and optionally doing
+consistency checks). Authentication and authorization are then delegated
+to SSH (e.g. with public keys).
+
+## Endpoints
+
+### Images
+
+### Layer
+
+ `GET /v1/images/`(*image\_id*)`/layer`
+:   get image layer for a given `image_id`
+
+    **Example Request**:
+
+        GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Authorization: Token signature=123abc,repository="foo/bar",access=read
+
+    Parameters:
+
+    - **image\_id** – the id for the layer you want to get
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        X-Docker-Registry-Version: 0.6.0
+        Cookie: (Cookie provided by the Registry)
+
+        {layer binary data stream}
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+    - **404** – Image not found
+
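+The `Authorization: Token ...` value shown above is the token string the
+Index hands back, echoed verbatim. A sketch of formatting it from its
+parts (illustrative only; docker itself simply forwards the header):
+
+```python
+def token_auth_header(signature, repository, access):
+    # Matches the shape used in the examples, e.g.
+    #   Token signature=123abc,repository="foo/bar",access=read
+    return 'Token signature=%s,repository="%s",access=%s' % (
+        signature, repository, access)
+
+print(token_auth_header("123abc", "foo/bar", "read"))
+```
+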
+ `PUT /v1/images/`(*image\_id*)`/layer`
+:   put image layer for a given `image_id`
+
+    **Example Request**:
+
+        PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/layer HTTP/1.1
+        Host: registry-1.docker.io
+        Transfer-Encoding: chunked
+        Authorization: Token signature=123abc,repository="foo/bar",access=write
+
+        {layer binary data stream}
+
+    Parameters:
+
+    - **image\_id** – the id for the layer you want to get
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        ""
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+    - **404** – Image not found
+
+### Image
+
+ `PUT /v1/images/`(*image\_id*)`/json`
+:   put image for a given `image_id`
+
+    **Example Request**:
+
+        PUT /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Cookie: (Cookie provided by the Registry)
+
+        {
+            id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
+            parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
+            created: "2013-04-30T17:46:10.843673+03:00",
+            container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7",
+            container_config: {
+                Hostname: "host-test",
+                User: "",
+                Memory: 0,
+                MemorySwap: 0,
+                AttachStdin: false,
+                AttachStdout: false,
+                AttachStderr: false,
+                PortSpecs: null,
+                Tty: false,
+                OpenStdin: false,
+                StdinOnce: false,
+                Env: null,
+                Cmd: [
+                "/bin/bash",
+                "-c",
+                "apt-get -q -yy -f install libevent-dev"
+                ],
+                Dns: null,
+                Image: "imagename/blah",
+                Volumes: { },
+                VolumesFrom: ""
+            },
+            docker_version: "0.1.7"
+        }
+
+    Parameters:
+
+    - **image\_id** – the id for the layer you want to get
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        ""
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+
+ `GET /v1/images/`(*image\_id*)`/json`
+:   get image for a given `image_id`
+
+    **Example Request**:
+
+        GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/json HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Cookie: (Cookie provided by the Registry)
+
+    Parameters:
+
+    - **image\_id** – the id for the layer you want to get
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+        X-Docker-Size: 456789
+        X-Docker-Checksum: b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087
+
+        {
+            id: "088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c",
+            parent: "aeee6396d62273d180a49c96c62e45438d87c7da4a5cf5d2be6bee4e21bc226f",
+            created: "2013-04-30T17:46:10.843673+03:00",
+            container: "8305672a76cc5e3d168f97221106ced35a76ec7ddbb03209b0f0d96bf74f6ef7",
+            container_config: {
+                Hostname: "host-test",
+                User: "",
+                Memory: 0,
+                MemorySwap: 0,
+                AttachStdin: false,
+                AttachStdout: false,
+                AttachStderr: false,
+                PortSpecs: null,
+                Tty: false,
+                OpenStdin: false,
+                StdinOnce: false,
+                Env: null,
+                Cmd: [
+                "/bin/bash",
+                "-c",
+                "apt-get -q -yy -f install libevent-dev"
+                ],
+                Dns: null,
+                Image: "imagename/blah",
+                Volumes: { },
+                VolumesFrom: ""
+            },
+            docker_version: "0.1.7"
+        }
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+    - **404** – Image not found
+
+### Ancestry
+
+ `GET /v1/images/`(*image\_id*)`/ancestry`
+:   get ancestry for an image given an `image_id`
+
+    **Example Request**:
+
+        GET /v1/images/088b4505aa3adc3d35e79c031fa126b403200f02f51920fbd9b7c503e87c7a2c/ancestry HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Cookie: (Cookie provided by the Registry)
+
+    Parameters:
+
+    - **image\_id** – the id for the layer you want to get
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        ["088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f",
+         "aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543",
+         "bfa4c5326bc764280b0863b46a4b20d940bc1897ef9c1dfec060604bdc383280",
+         "6ab5893c6927c15a15665191f2c6cf751f5056d8b95ceee32e43c5e8a3648544"]
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+    - **404** – Image not found
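+
+A client walks the returned list in order (image first, base last),
+fetching `/json` and `/layer` for each id. A sketch of building those
+request paths (the base URL is an assumption for illustration):
+
+```python
+def layer_requests(image_ids, base="https://registry-1.docker.io"):
+    # For each ancestor, fetch the metadata first, then the layer blob.
+    for image_id in image_ids:
+        yield "%s/v1/images/%s/json" % (base, image_id)
+        yield "%s/v1/images/%s/layer" % (base, image_id)
+
+# Ancestry ids taken from the example response, ordered image -> base.
+ancestry = [
+    "088b4502f51920fbd9b7c503e87c7a2c05aa3adc3d35e79c031fa126b403200f",
+    "aeee63968d87c7da4a5cf5d2be6bee4e21bc226fd62273d180a49c96c62e4543",
+]
+for url in layer_requests(ancestry):
+    print(url)
+```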
+
+### Tags
+
+ `GET /v1/repositories/`(*namespace*)`/`(*repository*)`/tags`
+:   get all of the tags for the given repo.
+
+    **Example Request**:
+
+        GET /v1/repositories/foo/bar/tags HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+        Cookie: (Cookie provided by the Registry)
+
+    Parameters:
+
+    - **namespace** – namespace for the repo
+    - **repository** – name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        {
+            "latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+            "0.1.1":  "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"
+        }
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+    - **404** – Repository not found
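+
+The response is a plain tag-to-image-id mapping, so resolving a tag is a
+dictionary lookup; a sketch using the sample response above:
+
+```python
+import json
+
+# Tag mapping copied from the example response above.
+raw = """{
+    "latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+    "0.1.1":  "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"
+}"""
+
+tags = json.loads(raw)
+image_id = tags["latest"]
+print(image_id[:12])  # short id: 9e89cc6f0bc3
+```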
+
+ `GET /v1/repositories/`(*namespace*)`/`(*repository*)`/tags/`(*tag*)
+:   get a tag for the given repo.
+
+    **Example Request**:
+
+        GET /v1/repositories/foo/bar/tags/latest HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+        Cookie: (Cookie provided by the Registry)
+
+    Parameters:
+
+    - **namespace** – namespace for the repo
+    - **repository** – name for the repo
+    - **tag** – name of tag you want to get
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+    - **404** – Tag not found
+
+ `DELETE /v1/repositories/`(*namespace*)`/`(*repository*)`/tags/`(*tag*)
+:   delete the tag for the repo
+
+    **Example Request**:
+
+        DELETE /v1/repositories/foo/bar/tags/latest HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Cookie: (Cookie provided by the Registry)
+
+    Parameters:
+
+    - **namespace** – namespace for the repo
+    - **repository** – name for the repo
+    - **tag** – name of tag you want to delete
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        ""
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+    - **404** – Tag not found
+
+ `PUT /v1/repositories/`(*namespace*)`/`(*repository*)`/tags/`(*tag*)
+:   put a tag for the given repo.
+
+    **Example Request**:
+
+        PUT /v1/repositories/foo/bar/tags/latest HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Cookie: (Cookie provided by the Registry)
+
+        "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
+
+    Parameters:
+
+    - **namespace** – namespace for the repo
+    - **repository** – name for the repo
+    - **tag** – name of tag you want to add
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        ""
+
+    Status Codes:
+
+    - **200** – OK
+    - **400** – Invalid data
+    - **401** – Requires authorization
+    - **404** – Image not found
+
+### Repositories
+
+ `DELETE /v1/repositories/`(*namespace*)`/`(*repository*)`/`
+:   delete a repository
+
+    **Example Request**:
+
+        DELETE /v1/repositories/foo/bar/ HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+        Cookie: (Cookie provided by the Registry)
+
+        ""
+
+    Parameters:
+
+    - **namespace** – namespace for the repo
+    - **repository** – name for the repo
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        ""
+
+    Status Codes:
+
+    - **200** – OK
+    - **401** – Requires authorization
+    - **404** – Repository not found
+
+### Status
+
+ `GET /v1/_ping`
+:   Check status of the registry. This endpoint is also used to
+    determine if the registry supports SSL.
+
+    **Example Request**:
+
+        GET /v1/_ping HTTP/1.1
+        Host: registry-1.docker.io
+        Accept: application/json
+        Content-Type: application/json
+
+        ""
+
+    **Example Response**:
+
+        HTTP/1.1 200
+        Vary: Accept
+        Content-Type: application/json
+        X-Docker-Registry-Version: 0.6.0
+
+        ""
+
+    Status Codes:
+
+    - **200** – OK
+
+## Authorization
+
+This is where we describe the authorization process, including the
+tokens and cookies.
+
+TODO: add more info.

+ 691 - 0
docs/sources/reference/api/registry_index_spec.md

@@ -0,0 +1,691 @@
+page_title: Registry Documentation
+page_description: Documentation for docker Registry and Registry API
+page_keywords: docker, registry, api, index
+
+# Registry & Index Spec
+
+## The 3 roles
+
+### Index
+
+The Index is responsible for centralizing information about:
+
+- User accounts
+- Checksums of the images
+- Public namespaces
+
+The Index has different components:
+
+- Web UI
+- Meta-data store (comments, stars, list of public repositories)
+- Authentication service
+- Tokenization
+
+The Index is authoritative for this information.
+
+We expect that there will be only one instance of the index, run and
+managed by Docker Inc.
+
+### Registry
+
+- It stores the images and the graph for a set of repositories
+- It does not have user accounts data
+- It has no notion of user accounts or authorization
+- It delegates authentication and authorization to the Index Auth
+    service using tokens
+- It supports different storage backends (S3, cloud files, local FS)
+- It doesn’t have a local database
+- [Source Code](https://github.com/dotcloud/docker-registry)
+
+We expect that there will be multiple registries out there. To help
+grasp the context, here are some examples of registries:
+
+- **sponsor registry**: such a registry is provided by a third-party
+    hosting infrastructure as a convenience for their customers and the
+    docker community as a whole. Its costs are supported by the third
+    party, but the management and operation of the registry are
+    supported by dotCloud. It features read/write access, and delegates
+    authentication and authorization to the Index.
+- **mirror registry**: such a registry is provided by a third-party
+    hosting infrastructure but is targeted at their customers only. Some
+    mechanism (unspecified to date) ensures that public images are
+    pulled from a sponsor registry to the mirror registry, to make sure
+    that the customers of the third-party provider can “docker pull”
+    those images locally.
+- **vendor registry**: such a registry is provided by a software
+    vendor, who wants to distribute docker images. It would be operated
+    and managed by the vendor. Only users authorized by the vendor would
+    be able to get write access. Some images would be public (accessible
+    for anyone), others private (accessible only for authorized users).
+    Authentication and authorization would be delegated to the Index.
+    The goal of vendor registries is to let someone do “docker pull
+    basho/riak1.3” and automatically pull from the vendor registry
+    (instead of a sponsor registry); i.e. get all the convenience of a
+    sponsor registry, while retaining control over the asset distribution.
+- **private registry**: such a registry is located behind a firewall,
+    or protected by an additional security layer (HTTP authorization,
+    SSL client-side certificates, IP address authorization...). The
+    registry is operated by a private entity, outside of dotCloud’s
+    control. It can optionally delegate additional authorization to the
+    Index, but it is not mandatory.
+
+> **Note:** The latter implies that while HTTP is the protocol
+> of choice for a registry, multiple schemes are possible (and
+> in some cases, trivial):
+>
+> - HTTP with GET (and PUT for read-write registries);
+> - local mount point;
+> - remote docker addressed through SSH.
+
+The latter would only require two new commands in docker, e.g.
+`registryget` and `registryput`,
+wrapping access to the local filesystem (and optionally doing
+consistency checks). Authentication and authorization are then delegated
+to SSH (e.g. with public keys).
+
+### Docker
+
+On top of being a runtime for LXC, Docker is the Registry client. It
+supports:
+
+- Push / Pull on the registry
+- Client authentication on the Index
+
+## Workflow
+
+### Pull
+
+![](../../../_images/docker_pull_chart.png)
+
+1.  Contact the Index to know where I should download “samalba/busybox”
+2.  Index replies:
+    a.  `samalba/busybox` is on Registry A
+    b.  here are the checksums for `samalba/busybox` (for all layers)
+    c.  token
+3.  Contact Registry A to receive the layers for `samalba/busybox`
+    (all of them, down to the base image). Registry A is authoritative
+    for “samalba/busybox” but keeps a copy of all inherited layers and
+    serves them all from the same location.
+4.  The registry contacts the index to verify if the token/user is
+    allowed to download images
+5.  Index returns true/false, letting the registry know if it should
+    proceed or error out
+6.  Get the payload for all layers
+
+It’s possible to run:
+
+    docker pull https://<registry>/repositories/samalba/busybox
+
+In this case, Docker bypasses the Index. However, security is not
+guaranteed (in case Registry A is corrupted) because there won’t be any
+checksum checks.
+
+Currently the registry redirects to S3 URLs for downloads; going
+forward, all downloads need to be streamed through the registry. The
+Registry will then abstract the calls to S3 behind a top-level class
+with sub-classes for S3 and local storage.
+
+The token is only returned when the `X-Docker-Token`
+header is sent with the request.
+
+Basic Auth is required to pull private repos. Basic Auth isn’t required
+for pulling public repos, but if credentials are provided, they need to
+be valid and for an active account.
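+
+A sketch of the headers for the first Index request, and of splitting
+the `X-Docker-Endpoints` header it returns (header values taken from the
+examples below; the network calls themselves are omitted):
+
+```python
+def pull_headers(basic_credentials):
+    # Ask the Index for a token explicitly by sending X-Docker-Token.
+    return {
+        "Authorization": "Basic " + basic_credentials,
+        "X-Docker-Token": "true",
+    }
+
+def parse_endpoints(header_value):
+    # X-Docker-Endpoints: registry.docker.io [, registry2.docker.io]
+    return [h.strip() for h in header_value.split(",") if h.strip()]
+
+print(parse_endpoints("registry.docker.io, registry2.docker.io"))
+```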
+
+#### API (pulling repository foo/bar):
+
+1.  (Docker -\> Index) GET /v1/repositories/foo/bar/images
+    :   **Headers**:
+        :   Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
+            X-Docker-Token: true
+
+        **Action**:
+        :   (looking up the foo/bar in db and gets images and checksums
+            for that repo (all if no tag is specified, if tag, only
+            checksums for those tags) see part 4.4.1)
+
+2.  (Index -\> Docker) HTTP 200 OK
+
+    > **Headers**:
+    > :   - Authorization: Token
+    >         signature=123abc,repository="foo/bar",access=write
+    >     - X-Docker-Endpoints: registry.docker.io [,
+    >         registry2.docker.io]
+    >
+    > **Body**:
+    > :   Jsonified checksums (see part 4.4.1)
+    >
+3.  (Docker -\> Registry) GET /v1/repositories/foo/bar/tags/latest
+    :   **Headers**:
+        :   Authorization: Token
+            signature=123abc,repository="foo/bar",access=write
+
+4.  (Registry -\> Index) GET /v1/repositories/foo/bar/images
+
+    > **Headers**:
+    > :   Authorization: Token
+    >     signature=123abc,repository="foo/bar",access=read
+    >
+    > **Body**:
+    > :   \<ids and checksums in payload\>
+    >
+    > **Action**:
+    > :   ( Lookup token see if they have access to pull.)
+    >
+    >     If good:
+    >     :   HTTP 200 OK Index will invalidate the token
+    >
+    >     If bad:
+    >     :   HTTP 401 Unauthorized
+    >
+5.  (Docker -\> Registry) GET /v1/images/928374982374/ancestry
+    :   **Action**:
+        :   (for each image id returned in the registry, fetch /json +
+            /layer)
+
+> **Note:** If someone makes a second request, then we will always give
+> a new token, never reuse tokens.
+
+### Push
+
+![](../../../_images/docker_push_chart.png)
+
+1.  Contact the index to allocate the repository name “samalba/busybox”
+    (authentication required with user credentials)
+2.  If authentication works and namespace available, “samalba/busybox”
+    is allocated and a temporary token is returned (namespace is marked
+    as initialized in index)
+3.  Push the image on the registry (along with the token)
+4.  Registry A contacts the Index to verify the token (the token must
+    correspond to the repository name)
+5.  Index validates the token. Registry A starts reading the stream
+    pushed by docker and stores the repository (with its images)
+6.  docker contacts the index to give checksums for the uploaded images
+
+> **Note:**
+> **It’s possible not to use the Index at all!** In this case, a
+> standalone version of the Registry is deployed to store and serve
+> images. Those images are not authenticated and the security is not
+> guaranteed.
+
+> **Note:**
+> **Index can be replaced!** For a private Registry deployed, a custom
+> Index can be used to serve and validate token according to different
+> policies.
+
+Docker computes the checksums and submits them to the Index at the end
+of the push. When a repository name does not have checksums on the
+Index, it means that the push is in progress (since checksums are
+submitted at the end).
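+
+A simplified sketch of such a checksum, rendered in the `sha256:<hex>`
+form the `X-Docker-Checksum` header uses (illustrative only: the
+checksum docker actually submits covers the image JSON as well as the
+layer bytes):
+
+```python
+import hashlib
+
+def layer_checksum(blob):
+    # sha256 over the blob, formatted like the X-Docker-Checksum header.
+    return "sha256:" + hashlib.sha256(blob).hexdigest()
+
+print(layer_checksum(b"layer binary data stream"))
+```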
+
+#### API (pushing repos foo/bar):
+
+1.  (Docker -\> Index) PUT /v1/repositories/foo/bar/
+    :   **Headers**:
+        :   Authorization: Basic sdkjfskdjfhsdkjfh==
+            X-Docker-Token: true
+
+        **Action**:
+        :   - in the index, we allocate a new repository and set it to
+                initialized
+
+        **Body**:
+        :   (The body contains the list of images that are going to be
+            pushed, with empty checksums. The checksums will be set at
+            the end of the push):
+
+                [{"id": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"}]
+
+2.  (Index -\> Docker) 200 Created
+    :   **Headers**:
+        :   - WWW-Authenticate: Token
+                signature=123abc,repository="foo/bar",access=write
+            - X-Docker-Endpoints: registry.docker.io [,
+                registry2.docker.io]
+
+3.  (Docker -\> Registry) PUT /v1/images/98765432\_parent/json
+    :   **Headers**:
+        :   Authorization: Token
+            signature=123abc,repository="foo/bar",access=write
+
+4.  (Registry-\>Index) GET /v1/repositories/foo/bar/images
+    :   **Headers**:
+        :   Authorization: Token
+            signature=123abc,repository="foo/bar",access=write
+
+        **Action**:
+        :   - Index:
+                :   will invalidate the token.
+
+            - Registry:
+                :   grants a session (if token is approved) and fetches
+                    the images id
+
+5.  (Docker -\> Registry) PUT /v1/images/98765432\_parent/json
+    :   **Headers**:
+        :   - Authorization: Token
+                signature=123abc,repository="foo/bar",access=write
+            - Cookie: (Cookie provided by the Registry)
+
+6.  (Docker -\> Registry) PUT /v1/images/98765432/json
+    :   **Headers**:
+        :   Cookie: (Cookie provided by the Registry)
+
+7.  (Docker -\> Registry) PUT /v1/images/98765432\_parent/layer
+    :   **Headers**:
+        :   Cookie: (Cookie provided by the Registry)
+
+8.  (Docker -\> Registry) PUT /v1/images/98765432/layer
+    :   **Headers**:
+        :   X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh
+
+9.  (Docker -\> Registry) PUT /v1/repositories/foo/bar/tags/latest
+    :   **Headers**:
+        :   Cookie: (Cookie provided by the Registry)
+
+        **Body**:
+        :   "98765432"
+
+10. (Docker -\> Index) PUT /v1/repositories/foo/bar/images
+
+    **Headers**:
+    :   Authorization: Basic 123oislifjsldfj==
+        X-Docker-Endpoints: registry1.docker.io (no validation on this
+        right now)
+
+    **Body**:
+    :   (The image ids, tags and checksums)
+
+        [{"id":
+        "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+        "checksum":
+        "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
+
+    **Return** HTTP 204
+
+> **Note:** If the push fails and the client needs to start again, there
+> will already be a record for the namespace/name in the index, but it
+> will only be initialized. Should we allow this, or mark the name as
+> already used? One edge case could be two different shells pushing the
+> same thing at the same time.
+
+If it’s a retry on the Registry, Docker has a cookie (provided by the
+registry after token validation). So the Index won’t have to provide a
+new token.
+
+### Delete
+
+If you need to delete something from the index or registry, we need a
+nice clean way to do that. Here is the workflow.
+
+1.  Docker contacts the index to request a delete of a repository
+    `samalba/busybox` (authentication required with
+    user credentials)
+2.  If authentication works and repository is valid,
+    `samalba/busybox` is marked as deleted and a
+    temporary token is returned
+3.  Send a delete request to the registry for the repository (along with
+    the token)
+4.  Registry A contacts the Index to verify the token (the token must
+    correspond to the repository name)
+5.  Index validates the token. Registry A deletes the repository and
+    everything associated to it.
+6.  Docker contacts the Index to let it know the repository was removed
+    from the registry; the Index then removes all records from the
+    database.
+
+> **Note:** The Docker client should present an "Are you sure?" prompt
+> to confirm the deletion before starting the process. Once it starts it
+> can’t be undone.
+
+#### API (deleting repository foo/bar):
+
+1.  (Docker -\> Index) DELETE /v1/repositories/foo/bar/
+    :   **Headers**:
+        :   - Authorization: Basic sdkjfskdjfhsdkjfh==
+            - X-Docker-Token: true
+
+        **Action**:
+        :   - in the index, we make sure it is a valid repository, and
+                set it to deleted (logically)
+
+        **Body**:
+        :   Empty
+
+2.  (Index -\> Docker) 202 Accepted
+    :   **Headers**:
+        :   - WWW-Authenticate: Token
+                signature=123abc,repository="foo/bar",access=delete
+            - X-Docker-Endpoints: registry.docker.io [,
+                registry2.docker.io] \# list of endpoints where this
+                repo lives.
+
+3.  (Docker -\> Registry) DELETE /v1/repositories/foo/bar/
+    :   **Headers**:
+        :   Authorization: Token
+            signature=123abc,repository="foo/bar",access=delete
+
+4.  (Registry-\>Index) PUT /v1/repositories/foo/bar/auth
+    :   **Headers**:
+        :   Authorization: Token
+            signature=123abc,repository="foo/bar",access=delete
+
+        **Action**:
+        :   - Index:
+                :   will invalidate the token.
+
+            - Registry:
+                :   deletes the repository (if token is approved)
+
+5.  (Registry -\> Docker) 200 OK
+    :   200 if success, 403 if forbidden, 400 if bad request, 404 if
+        the repository isn’t found
+
+6.  (Docker -\> Index) DELETE /v1/repositories/foo/bar/
+
+    > **Headers**:
+    > :   - Authorization: Basic 123oislifjsldfj==
+    >     - X-Docker-Endpoints: registry-1.docker.io (no validation on
+    >         this right now)
+    >
+    > **Body**:
+    > :   Empty
+    >
+    > **Return** HTTP 200
+
+## How to use the Registry in standalone mode
+
+The Index has two main purposes (along with its fancy social features):
+
+- Resolve short names (to avoid passing absolute URLs all the time)
+    :   - username/projectname -\>
+            https://registry.docker.io/users/\<username\>/repositories/\<projectname\>/
+        - team/projectname -\>
+            https://registry.docker.io/team/\<team\>/repositories/\<projectname\>/
+
+- Authenticate a user as a repos owner (for a central referenced
+    repository)
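
As a sketch of the short-name resolution above, here is a hypothetical helper (the function name and the hard-coded URL pattern are illustrative only, not part of any official client):

```python
def resolve_short_name(name, index="https://registry.docker.io"):
    # Map "username/projectname" to the Index URL for that repository,
    # following the mapping described above.
    user, project = name.split("/", 1)
    return "%s/users/%s/repositories/%s/" % (index, user, project)

print(resolve_short_name("samalba/busybox"))
# https://registry.docker.io/users/samalba/repositories/busybox/
```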
+
+### Without an Index
+
+Using the Registry without the Index can be useful to store the images
+on a private network without having to rely on an external entity
+controlled by Docker Inc.
+
+In this case, the registry will be launched in a special mode
+(`--standalone`? `--no-index`?). In this mode, the only thing which
+changes is that the Registry will never contact the Index to verify a
+token. It will be the Registry owner’s responsibility to authenticate
+the user who pushes (or even pulls) an image using any mechanism (HTTP
+auth, IP based, etc.).
+
+In this scenario, the Registry is responsible for the security in case
+of data corruption since the checksums are not delivered by a trusted
+entity.
+
+As hinted previously, a standalone registry can also be implemented by
+any HTTP server handling GET/PUT requests (or even only GET requests if
+no write access is necessary).
+
+### With an Index
+
+The Index data needed by the Registry are simple:
+
+- Serve the checksums
+- Provide and authorize a Token
+
+In the scenario of a Registry running on a private network with the need
+of centralizing and authorizing, it’s easy to use a custom Index.
+
+The only challenge will be to tell Docker to contact (and trust) this
+custom Index. Docker will be configurable at some point to use a
+specific Index; it’ll be the private entity’s responsibility (basically
+the organization that uses Docker in a private environment) to maintain
+the Index and Docker’s configuration among its consumers.
+
+## The API
+
+The first version of the api is available here:
+[https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md](https://github.com/jpetazzo/docker/blob/acd51ecea8f5d3c02b00a08176171c59442df8b3/docs/images-repositories-push-pull.md)
+
+### Images
+
+The format returned for the images is not defined here (for layer and
+JSON), basically because the Registry stores exactly the same kind of
+information as Docker uses to manage them.
+
+The format of ancestry is a line-separated list of image ids, in age
+order, i.e. the image’s parent is on the last line, the parent of the
+parent on the next-to-last line, etc.; if the image has no parent, the
+file is empty.
+
+    GET /v1/images/<image_id>/layer
+    PUT /v1/images/<image_id>/layer
+    GET /v1/images/<image_id>/json
+    PUT /v1/images/<image_id>/json
+    GET /v1/images/<image_id>/ancestry
+    PUT /v1/images/<image_id>/ancestry
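
A minimal sketch of parsing the ancestry format described above (the image ids here are made up):

```python
def parse_ancestry(text):
    # One image id per line; per the format above, the image's direct
    # parent is on the last line, with older ancestors before it.
    # An empty file means the image has no parent.
    return [line.strip() for line in text.splitlines() if line.strip()]

print(parse_ancestry("grandparent_id\nparent_id\n"))  # two ancestors
print(parse_ancestry(""))                             # base image: []
```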
+
+### Users
+
+### Create a user (Index)
+
+POST /v1/users
+
+**Body**:
+:   {"email": "[sam@dotcloud.com](mailto:sam%40dotcloud.com)",
+    "password": "toto42", "username": "foobar"}
+**Validation**:
+:   - **username**: min 4 characters, max 30 characters, must match the
+        regular expression `[a-z0-9_]`
+    - **password**: min 5 characters
+
+**Valid**: return HTTP 200
+
+**Errors**: HTTP 400 (we should create error codes for possible
+errors):
+
+-   invalid json
+-   missing field
+-   wrong format (username, password, email, etc.)
+-   forbidden name
+-   name already exists
+
+> **Note:** A user account will be valid only if the email has been
+> validated (a validation link is sent to the email address).
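
A sketch of the validation rules stated above (the error strings are illustrative; real error codes are not yet defined):

```python
import re

# 4-30 characters drawn from [a-z0-9_], per the rules above.
USERNAME_RE = re.compile(r"^[a-z0-9_]{4,30}$")

def validate_signup(username, password):
    # Return a list of validation errors; an empty list means valid.
    errors = []
    if not USERNAME_RE.match(username):
        errors.append("wrong format (username)")
    if len(password) < 5:
        errors.append("password too short (min 5 characters)")
    return errors

print(validate_signup("foobar", "toto42"))  # []
```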
+
+### Update a user (Index)
+
+PUT /v1/users/\<username\>
+
+**Body**:
+:   {"password": "toto"}
+
+> **Note:** We can also update the email address; if a user does, they
+> will need to re-verify their new email address.
+
+### Login (Index)
+
+Does nothing but ask for user authentication. Can be used to validate
+credentials. HTTP Basic Auth for now; this may change in the future.
+
+GET /v1/users
+
+**Return**:
+:   - Valid: HTTP 200
+    - Invalid login: HTTP 401
+    - Account inactive: HTTP 403 Account is not Active
+
+### Tags (Registry)
+
+The Registry does not know anything about users. Even though
+repositories are under usernames, that’s just a namespace for the
+registry, allowing us to implement organizations or different namespaces
+per user later without modifying the Registry’s API.
+
+The following naming restrictions apply:
+
+- Namespaces must match the same regular expression as usernames (see
+    4.2.1.)
+- Repository names must match the regular expression `[a-zA-Z0-9-_.]`
+
+### Get all tags:
+
+GET /v1/repositories/\<namespace\>/\<repository\_name\>/tags
+
+**Return**: HTTP 200
+:   { "latest":
+    "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+    "0.1.1":
+    "b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087" }
+
+#### 4.3.2 Read the content of a tag (resolve the image id)
+
+GET /v1/repositories/\<namespace\>/\<repo\_name\>/tags/\<tag\>
+
+**Return**:
+:   "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f"
+
+#### 4.3.3 Delete a tag (registry)
+
+DELETE /v1/repositories/\<namespace\>/\<repo\_name\>/tags/\<tag\>
+
+### 4.4 Images (Index)
+
+For the Index to "resolve" the repository name to a Registry location,
+it uses the `X-Docker-Endpoints` header. In other words, these requests
+always add an `X-Docker-Endpoints` header to indicate the location of
+the registry which hosts this repository.
+
+#### 4.4.1 Get the images
+
+GET /v1/repositories/\<namespace\>/\<repo\_name\>/images
+
+**Return**: HTTP 200
+:   [{"id":
+    "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+    "checksum":
+    "md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}]
+
+### Add/update the images:
+
+You always add images; you never remove them.
+
+PUT /v1/repositories/\<namespace\>/\<repo\_name\>/images
+
+**Body**:
+:   [ {"id":
+    "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f",
+    "checksum":
+    "sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087"}
+    ]
+
+**Return** 204
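
The checksums in the payload above use the `sha256:<hexdigest>` form. This document does not specify exactly which bytes are checksummed, so the following only illustrates the format:

```python
import hashlib

def format_checksum(data):
    # Produce a checksum string in the "sha256:<hexdigest>" form used
    # in the image list payload above.
    return "sha256:" + hashlib.sha256(data).hexdigest()

print(format_checksum(b"example layer bytes"))
```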
+
+### Repositories
+
+### Remove a Repository (Registry)
+
+DELETE /v1/repositories/\<namespace\>/\<repo\_name\>
+
+Return 200 OK
+
+### Remove a Repository (Index)
+
+This starts the delete process. See 2.3 for more details.
+
+DELETE /v1/repositories/\<namespace\>/\<repo\_name\>
+
+Return 202 OK
+
+## Chaining Registries
+
+It’s possible to chain Registry servers for several reasons:
+
+- Load balancing
+- Delegate the next request to another server
+
+When a Registry is a reference for a repository, it should host the
+entire image chain in order to avoid breaking the chain during the
+download.
+
+The Index and Registry use this mechanism to redirect on one or the
+other.
+
+Example with an image download:
+
+On every request, a special header can be returned:
+
+    X-Docker-Endpoints: server1,server2
+
+On the next request, the client will always pick a server from this
+list.
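
A sketch of how a client might handle the `X-Docker-Endpoints` header above (the picking strategy is unspecified here; random choice is just one option):

```python
import random

def pick_endpoint(header_value):
    # Split "server1,server2" into candidates and pick one of them
    # for the next request, as described above.
    endpoints = [e.strip() for e in header_value.split(",") if e.strip()]
    return random.choice(endpoints)

print(pick_endpoint("server1,server2"))
```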
+
+## Authentication & Authorization
+
+### On the Index
+
+The Index supports both "Basic" and "Token" challenges. Usually, when
+there is a `401 Unauthorized`, the Index replies
+with:
+
+    401 Unauthorized
+    WWW-Authenticate: Basic realm="auth required",Token
+
+You have 3 options:
+
+1.  Provide user credentials and ask for a token
+
+    > **Header**:
+    > :   - Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
+    >     - X-Docker-Token: true
+    >
+    > In this case, along with the 200 response, you’ll get a new token
+    > (if user auth is ok). If authorization isn’t correct you get a 401
+    > response. If the account isn’t active you will get a 403 response.
+    >
+    > **Response**:
+    > :   - 200 OK
+    >     - X-Docker-Token: Token
+    >         signature=123abc,repository="foo/bar",access=read
+    >
+2.  Provide user credentials only
+
+    > **Header**:
+    > :   Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
+    >
+3.  Provide Token
+
+    > **Header**:
+    > :   Authorization: Token
+    >     signature=123abc,repository="foo/bar",access=read
+    >
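
The headers for option 1 above can be sketched as follows (the username and password are the standard HTTP Basic example values; the helper name is illustrative):

```python
import base64

def login_headers(username, password):
    # Option 1: Basic credentials plus "X-Docker-Token: true" to ask
    # the Index to hand back a signed access token in the response.
    creds = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    return {
        "Authorization": "Basic " + creds,
        "X-Docker-Token": "true",
    }

print(login_headers("Aladdin", "open sesame"))
# {'Authorization': 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==', 'X-Docker-Token': 'true'}
```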
+### 6.2 On the Registry
+
+The Registry only supports the Token challenge:
+
+    401 Unauthorized
+    WWW-Authenticate: Token
+
+The only way is to provide a token on `401 Unauthorized`
+responses:
+
+    Authorization: Token signature=123abc,repository="foo/bar",access=read
+
+Usually, the Registry provides a Cookie when a Token verification
+succeeds. Every time the Registry passes you a Cookie, you have to pass
+it back on subsequent requests:
+
+    200 OK
+    Set-Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="; Path=/; HttpOnly
+
+Next request:
+
+    GET /(...)
+    Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="
+
+## Document Version
+
+- 1.0 : May 6th 2013 : initial release
+- 1.1 : June 1st 2013 : Added Delete Repository and way to handle new
+    source namespace.
+

+ 89 - 0
docs/sources/reference/api/remote_api_client_libraries.md

@@ -0,0 +1,89 @@
+page_title: Remote API Client Libraries
+page_description: Various client libraries available to use with the Docker remote API
+page_keywords: API, Docker, index, registry, REST, documentation, clients, Python, Ruby, JavaScript, Erlang, Go
+
+# Docker Remote API Client Libraries
+
+These libraries have not been tested by the Docker Maintainers for
+compatibility. Please file issues with the library owners. If you find
+more library implementations, please list them in Docker doc bugs and we
+will add the libraries here.
+
+| Language/Framework | Name | Repository | Status |
+|--------------------|------|------------|--------|
+| Python | docker-py | [https://github.com/dotcloud/docker-py](https://github.com/dotcloud/docker-py) | Active |
+| Ruby | docker-client | [https://github.com/geku/docker-client](https://github.com/geku/docker-client) | Outdated |
+| Ruby | docker-api | [https://github.com/swipely/docker-api](https://github.com/swipely/docker-api) | Active |
+| JavaScript (NodeJS) | dockerode | [https://github.com/apocas/dockerode](https://github.com/apocas/dockerode) Install via NPM: `npm install dockerode` | Active |
+| JavaScript (NodeJS) | docker.io | [https://github.com/appersonlabs/docker.io](https://github.com/appersonlabs/docker.io) Install via NPM: `npm install docker.io` | Active |
+| JavaScript | docker-js | [https://github.com/dgoujard/docker-js](https://github.com/dgoujard/docker-js) | Outdated |
+| JavaScript (Angular) **WebUI** | docker-cp | [https://github.com/13W/docker-cp](https://github.com/13W/docker-cp) | Active |
+| JavaScript (Angular) **WebUI** | dockerui | [https://github.com/crosbymichael/dockerui](https://github.com/crosbymichael/dockerui) | Active |
+| Java | docker-java | [https://github.com/kpelykh/docker-java](https://github.com/kpelykh/docker-java) | Active |
+| Erlang | erldocker | [https://github.com/proger/erldocker](https://github.com/proger/erldocker) | Active |
+| Go | go-dockerclient | [https://github.com/fsouza/go-dockerclient](https://github.com/fsouza/go-dockerclient) | Active |
+| Go | dockerclient | [https://github.com/samalba/dockerclient](https://github.com/samalba/dockerclient) | Active |
+| PHP | Alvine | [http://pear.alvine.io/](http://pear.alvine.io/) (alpha) | Active |
+| PHP | Docker-PHP | [http://stage1.github.io/docker-php/](http://stage1.github.io/docker-php/) | Active |
+| Perl | Net::Docker | [https://metacpan.org/pod/Net::Docker](https://metacpan.org/pod/Net::Docker) | Active |
+| Perl | Eixo::Docker | [https://github.com/alambike/eixo-docker](https://github.com/alambike/eixo-docker) | Active |
+| Scala | reactive-docker | [https://github.com/almoehi/reactive-docker](https://github.com/almoehi/reactive-docker) | Active |
+
+

+ 510 - 0
docs/sources/reference/builder.md

@@ -0,0 +1,510 @@
+page_title: Dockerfile Reference
+page_description: Dockerfiles use a simple DSL which allows you to automate the steps you would normally manually take to create an image.
+page_keywords: builder, docker, Dockerfile, automation, image creation
+
+# Dockerfile Reference
+
+**Docker can act as a builder** and read instructions from a text
+`Dockerfile` to automate the steps you would
+otherwise take manually to create an image. Executing
+`docker build` will run your steps and commit them
+along the way, giving you a final image.
+
+## Usage
+
+To [*build*](../commandline/cli/#cli-build) an image from a source
+repository, create a description file called `Dockerfile`
+at the root of your repository. This file will describe the
+steps to assemble the image.
+
+Then call `docker build` with the path of your
+source repository as argument (for example, `.`):
+
+> `sudo docker build .`
+
+The path to the source repository defines where to find the *context* of
+the build. The build is run by the Docker daemon, not by the CLI, so the
+whole context must be transferred to the daemon. The Docker CLI reports
+"Uploading context" when the context is sent to the daemon.
+
+You can specify a repository and tag at which to save the new image if
+the build succeeds:
+
+> `sudo docker build -t shykes/myapp .`
+
+The Docker daemon will run your steps one-by-one, committing the result
+to a new image if necessary, before finally outputting the ID of your
+new image. The Docker daemon will automatically clean up the context you
+sent.
+
+Note that each instruction is run independently, and causes a new image
+to be created - so `RUN cd /tmp` will not have any
+effect on the next instructions.
+
+Whenever possible, Docker will re-use the intermediate images,
+accelerating `docker build` significantly (indicated
+by `Using cache`):
+
+    $ docker build -t SvenDowideit/ambassador .
+    Uploading context 10.24 kB
+    Uploading context
+    Step 1 : FROM docker-ut
+     ---> cbba202fe96b
+    Step 2 : MAINTAINER SvenDowideit@home.org.au
+     ---> Using cache
+     ---> 51182097be13
+    Step 3 : CMD env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'  | sh && top
+     ---> Using cache
+     ---> 1a5ffc17324d
+    Successfully built 1a5ffc17324d
+
+When you’re done with your build, you’re ready to look into [*Pushing a
+repository to its
+registry*](../../use/workingwithrepository/#image-push).
+
+## Format
+
+Here is the format of the Dockerfile:
+
+    # Comment
+    INSTRUCTION arguments
+
+Instructions are not case-sensitive; however, the convention is for them
+to be UPPERCASE in order to distinguish them from arguments more easily.
+
+Docker evaluates the instructions in a Dockerfile in order. **The first
+instruction must be \`FROM\`** in order to specify the [*Base
+Image*](../../terms/image/#base-image-def) from which you are building.
+
+Docker will treat lines that *begin* with `#` as a
+comment. A `#` marker anywhere else in the line will
+be treated as an argument. This allows statements like:
+
+    # Comment
+    RUN echo 'we are running some # of cool things'
+
+Here is the set of instructions you can use in a `Dockerfile`
+for building images.
+
+## `FROM`
+
+> `FROM <image>`
+
+Or
+
+> `FROM <image>:<tag>`
+
+The `FROM` instruction sets the [*Base
+Image*](../../terms/image/#base-image-def) for subsequent instructions.
+As such, a valid Dockerfile must have `FROM` as its
+first instruction. The image can be any valid image – it is especially
+easy to start by **pulling an image** from the [*Public
+Repositories*](../../use/workingwithrepository/#using-public-repositories).
+
+`FROM` must be the first non-comment instruction in
+the `Dockerfile`.
+
+`FROM` can appear multiple times within a single
+Dockerfile in order to create multiple images. Simply make a note of the
+last image id output by the commit before each new `FROM`
+command.
+
+If no `tag` is given to the `FROM`
+instruction, `latest` is assumed. If the
+tag used does not exist, an error will be returned.
+
+## `MAINTAINER`
+
+> `MAINTAINER <name>`
+
+The `MAINTAINER` instruction allows you to set the
+*Author* field of the generated images.
+
+## `RUN`
+
+RUN has 2 forms:
+
+-   `RUN <command>` (the command is run in a shell -
+    `/bin/sh -c`)
+-   `RUN ["executable", "param1", "param2"]` (*exec*
+    form)
+
+The `RUN` instruction will execute any commands in a
+new layer on top of the current image and commit the results. The
+resulting committed image will be used for the next step in the
+Dockerfile.
+
+Layering `RUN` instructions and generating commits
+conforms to the core concepts of Docker where commits are cheap and
+containers can be created from any point in an image’s history, much
+like source control.
+
+The *exec* form makes it possible to avoid shell string munging, and to
+`RUN` commands using a base image that does not
+contain `/bin/sh`.
+
+### Known Issues (RUN)
+
+-   [Issue 783](https://github.com/dotcloud/docker/issues/783) is about
+    file permissions problems that can occur when using the AUFS file
+    system. You might notice it during an attempt to `rm`
+    a file, for example. The issue describes a workaround.
+-   [Issue 2424](https://github.com/dotcloud/docker/issues/2424) Locale
+    will not be set automatically.
+
+## `CMD`
+
+CMD has three forms:
+
+-   `CMD ["executable","param1","param2"]` (like an
+    *exec*, preferred form)
+-   `CMD ["param1","param2"]` (as *default
+    parameters to ENTRYPOINT*)
+-   `CMD command param1 param2` (as a *shell*)
+
+There can only be one CMD in a Dockerfile. If you list more than one CMD
+then only the last CMD will take effect.
+
+**The main purpose of a CMD is to provide defaults for an executing
+container.** These defaults can include an executable, or they can omit
+the executable, in which case you must specify an ENTRYPOINT as well.
+
+When used in the shell or exec formats, the `CMD`
+instruction sets the command to be executed when running the image.
+
+If you use the *shell* form of the CMD, then the `<command>`
+will execute in `/bin/sh -c`:
+
+    FROM ubuntu
+    CMD echo "This is a test." | wc -
+
+If you want to **run your** `<command>` **without a
+shell** then you must express the command as a JSON array and give the
+full path to the executable. **This array form is the preferred format
+of CMD.** Any additional parameters must be individually expressed as
+strings in the array:
+
+    FROM ubuntu
+    CMD ["/usr/bin/wc","--help"]
+
+If you would like your container to run the same executable every time,
+then you should consider using `ENTRYPOINT` in
+combination with `CMD`. See
+[*ENTRYPOINT*](#dockerfile-entrypoint).
+
+If the user specifies arguments to `docker run` then
+they will override the default specified in CMD.
+
+Note
+
+Don’t confuse `RUN` with `CMD`.
+`RUN` actually runs a command and commits the
+result; `CMD` does not execute anything at build
+time, but specifies the intended command for the image.
+
+## `EXPOSE`
+
+> `EXPOSE <port> [<port>...]`
+
+The `EXPOSE` instruction informs Docker that the
+container will listen on the specified network ports at runtime. Docker
+uses this information to interconnect containers using links (see
+[*links*](../../use/working_with_links_names/#working-with-links-names)),
+and to setup port redirection on the host system (see [*Redirect
+Ports*](../../use/port_redirection/#port-redirection)).
+
+## `ENV`
+
+> `ENV <key> <value>`
+
+The `ENV` instruction sets the environment variable
+`<key>` to the value `<value>`.
+This value will be passed to all future `RUN`
+instructions. This is functionally equivalent to prefixing the command
+with `<key>=<value>`.
+
+The environment variables set using `ENV` will
+persist when a container is run from the resulting image. You can view
+the values using `docker inspect`, and change them
+using `docker run --env <key>=<value>`.
+
+> **Note:** One example where this can cause unexpected consequences is
+> setting `ENV DEBIAN_FRONTEND noninteractive`, which will persist when
+> the container is run interactively; for example:
+> `docker run -t -i image bash`
+
+## `ADD`
+
+> `ADD <src> <dest>`
+
+The `ADD` instruction will copy new files from
+\<src\> and add them to the container’s filesystem at path
+`<dest>`.
+
+`<src>` must be the path to a file or directory
+relative to the source directory being built (also called the *context*
+of the build) or a remote file URL.
+
+`<dest>` is the absolute path to which the source
+will be copied inside the destination container.
+
+All new files and directories are created with mode 0755, uid and gid 0.
+
+> **Note:** If you build using STDIN (`docker build - < somefile`),
+> there is no build context, so the Dockerfile can only contain a
+> URL-based ADD statement.
+
+> **Note:** If your URL files are protected using authentication, you
+> will need to use `RUN wget`, `RUN curl` or another tool from within
+> the container, as ADD does not support authentication.
+
+The copy obeys the following rules:
+
+-   The `<src>` path must be inside the *context* of
+    the build; you cannot `ADD ../something /something`,
+    because the first step of a `docker build`
+    is to send the context directory (and subdirectories) to
+    the docker daemon.
+
+-   If `<src>` is a URL and `<dest>`
+    does not end with a trailing slash, then a file is
+    downloaded from the URL and copied to `<dest>`.
+
+-   If `<src>` is a URL and `<dest>`
+    does end with a trailing slash, then the filename is
+    inferred from the URL and the file is downloaded to
+    `<dest>/<filename>`. For instance,
+    `ADD http://example.com/foobar /` would create
+    the file `/foobar`. The URL must have a
+    nontrivial path so that an appropriate filename can be discovered in
+    this case (`http://example.com` will not work).
+
+-   If `<src>` is a directory, the entire directory
+    is copied, including filesystem metadata.
+
+-   If `<src>` is a *local* tar archive in a
+    recognized compression format (identity, gzip, bzip2 or xz) then it
+    is unpacked as a directory. Resources from *remote* URLs are **not**
+    decompressed.
+
+    When a directory is copied or unpacked, it has the same behavior as
+    `tar -x`: the result is the union of
+
+    1.  whatever existed at the destination path and
+    2.  the contents of the source tree,
+
+    with conflicts resolved in favor of "2." on a file-by-file basis.
+
+-   If `<src>` is any other kind of file, it is
+    copied individually along with its metadata. In this case, if
+    `<dest>` ends with a trailing slash
+    `/`, it will be considered a directory and the
+    contents of `<src>` will be written at
+    `<dest>/base(<src>)`.
+
+-   If `<dest>` does not end with a trailing slash,
+    it will be considered a regular file and the contents of
+    `<src>` will be written at `<dest>`.
+
+-   If `<dest>` doesn’t exist, it is created along
+    with all missing directories in its path.
+
+## `ENTRYPOINT`
+
+ENTRYPOINT has two forms:
+
+-   `ENTRYPOINT ["executable", "param1", "param2"]`
+    (like an *exec*, preferred form)
+-   `ENTRYPOINT command param1 param2` (as a
+    *shell*)
+
+There can only be one `ENTRYPOINT` in a Dockerfile.
+If you have more than one `ENTRYPOINT`, then only
+the last one in the Dockerfile will have an effect.
+
+An `ENTRYPOINT` helps you to configure a container
+that you can run as an executable. That is, when you specify an
+`ENTRYPOINT`, then the whole container runs as if it
+was just that executable.
+
+The `ENTRYPOINT` instruction adds an entry command
+that will **not** be overwritten when arguments are passed to
+`docker run`, unlike the behavior of `CMD`.
+This allows arguments to be passed to the entrypoint, i.e.
+`docker run <image> -d` will pass the "-d" argument
+to the ENTRYPOINT.
+
+You can specify parameters either in the ENTRYPOINT JSON array (as in
+"like an exec" above), or by using a CMD statement. Parameters in the
+ENTRYPOINT will not be overridden by the `docker run`
+arguments, but parameters specified via CMD will be overridden
+by `docker run` arguments.
+
+Like a `CMD`, you can specify a plain string for the
+ENTRYPOINT and it will execute in `/bin/sh -c`:
+
+    FROM ubuntu
+    ENTRYPOINT wc -l -
+
+For example, that Dockerfile’s image will *always* take stdin as input
+("-") and print the number of lines ("-l"). If you wanted to make this
+optional but default, you could use a CMD:
+
+    FROM ubuntu
+    CMD ["-l", "-"]
+    ENTRYPOINT ["/usr/bin/wc"]
+
+## `VOLUME`
+
+> `VOLUME ["/data"]`
+
+The `VOLUME` instruction will create a mount point
+with the specified name and mark it as holding externally mounted
+volumes from native host or other containers. For more
+information/examples and mounting instructions via docker client, refer
+to [*Share Directories via
+Volumes*](../../use/working_with_volumes/#volume-def) documentation.
+
+## `USER`
+
+> `USER daemon`
+
+The `USER` instruction sets the username or UID to
+use when running the image.
+
+## `WORKDIR`
+
+> `WORKDIR /path/to/workdir`
+
+The `WORKDIR` instruction sets the working directory
+for the `RUN`, `CMD` and
+`ENTRYPOINT` Dockerfile commands that follow it.
+
+It can be used multiple times in the one Dockerfile. If a relative path
+is provided, it will be relative to the path of the previous
+`WORKDIR` instruction. For example:
+
+    WORKDIR /a
+    WORKDIR b
+    WORKDIR c
+    RUN pwd
+
+The output of the final `pwd` command in this
+Dockerfile would be `/a/b/c`.
+
+## `ONBUILD`
+
+> `ONBUILD [INSTRUCTION]`
+
+The `ONBUILD` instruction adds to the image a
+"trigger" instruction to be executed at a later time, when the image is
+used as the base for another build. The trigger will be executed in the
+context of the downstream build, as if it had been inserted immediately
+after the *FROM* instruction in the downstream Dockerfile.
+
+Any build instruction can be registered as a trigger.
+
+This is useful if you are building an image which will be used as a base
+to build other images, for example an application build environment or a
+daemon which may be customized with user-specific configuration.
+
+For example, if your image is a reusable python application builder, it
+will require application source code to be added in a particular
+directory, and it might require a build script to be called *after*
+that. You can’t just call *ADD* and *RUN* now, because you don’t yet
+have access to the application source code, and it will be different for
+each application build. You could simply provide application developers
+with a boilerplate Dockerfile to copy-paste into their application, but
+that is inefficient, error-prone and difficult to update because it
+mixes with application-specific code.
+
+The solution is to use *ONBUILD* to register in advance instructions to
+run later, during the next build stage.
+
+Here’s how it works:
+
+1.  When it encounters an *ONBUILD* instruction, the builder adds a
+    trigger to the metadata of the image being built. The instruction
+    does not otherwise affect the current build.
+2.  At the end of the build, a list of all triggers is stored in the
+    image manifest, under the key *OnBuild*. They can be inspected with
+    *docker inspect*.
+3.  Later the image may be used as a base for a new build, using the
+    *FROM* instruction. As part of processing the *FROM* instruction,
+    the downstream builder looks for *ONBUILD* triggers, and executes
+    them in the same order they were registered. If any of the triggers
+    fail, the *FROM* instruction is aborted which in turn causes the
+    build to fail. If all triggers succeed, the FROM instruction
+    completes and the build continues as usual.
+4.  Triggers are cleared from the final image after being executed. In
+    other words they are not inherited by "grand-children" builds.
+
+For example you might add something like this:
+
+    [...]
+    ONBUILD ADD . /app/src
+    ONBUILD RUN /usr/local/bin/python-build --dir /app/src
+    [...]
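+
+A downstream build then only needs to name that image; the registered
+triggers fire right after its `FROM` line (the image name below is
+hypothetical):
+
```
FROM my-python-builder
# At this point the registered triggers run automatically, as if written here:
#   ADD . /app/src
#   RUN /usr/local/bin/python-build --dir /app/src
```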
+
+> **Warning**: Chaining ONBUILD instructions using `ONBUILD ONBUILD`
+> isn't allowed.
+
+> **Warning**: ONBUILD may not trigger `FROM` or `MAINTAINER`
+> instructions.
+
+## Dockerfile Examples
+
+    # Nginx
+    #
+    # VERSION               0.0.1
+
+    FROM      ubuntu
+    MAINTAINER Guillaume J. Charmes <guillaume@docker.com>
+
+    # make sure the package repository is up to date
+    RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
+    RUN apt-get update
+
+    RUN apt-get install -y inotify-tools nginx apache2 openssh-server
+
+    # Firefox over VNC
+    #
+    # VERSION               0.3
+
+    FROM ubuntu
+    # make sure the package repository is up to date
+    RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
+    RUN apt-get update
+
+    # Install vnc, xvfb in order to create a 'fake' display and firefox
+    RUN apt-get install -y x11vnc xvfb firefox
+    RUN mkdir /.vnc
+    # Setup a password
+    RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
+    # Autostart firefox (might not be the best way, but it does the trick)
+    RUN bash -c 'echo "firefox" >> /.bashrc'
+
+    EXPOSE 5900
+    CMD    ["x11vnc", "-forever", "-usepw", "-create"]
+
+    # Multiple images example
+    #
+    # VERSION               0.1
+
+    FROM ubuntu
+    RUN echo foo > bar
+    # Will output something like ===> 907ad6c2736f
+
+    FROM ubuntu
+    RUN echo moo > oink
+    # Will output something like ===> 695d7793cbe4
+
+    # You'll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with
+    # /oink.

+ 7 - 0
docs/sources/reference/commandline.md

@@ -0,0 +1,7 @@
+
+# Commands
+
+## Contents:
+
+-   [Command Line](cli/)
+

+ 1170 - 0
docs/sources/reference/commandline/cli.md

@@ -0,0 +1,1170 @@
+page_title: Command Line Interface
+page_description: Docker's CLI command description and usage
+page_keywords: Docker, Docker documentation, CLI, command line
+
+# Command Line
+
+To list available commands, either run `docker` with
+no parameters or execute `docker help`:
+
+    $ sudo docker
+      Usage: docker [OPTIONS] COMMAND [arg...]
+        -H=[unix:///var/run/docker.sock]: tcp://[host]:port to bind/connect to or unix://[/path/to/socket] to use. When host=[127.0.0.1] is omitted for tcp or path=[/var/run/docker.sock] is omitted for unix sockets, default values are used.
+
+      A self-sufficient runtime for linux containers.
+
+      ...
+
+## Option types
+
+Single character commandline options can be combined, so rather than
+typing `docker run -t -i --name test busybox sh`,
+you can write `docker run -ti --name test busybox sh`.
+
+### Boolean
+
+Boolean options look like `-d=false`. The value you
+see is the default value which gets set if you do **not** use the
+boolean flag. If you do call `run -d`, that sets the
+opposite boolean value, so in this case, `true`, and
+so `docker run -d` **will** run in "detached" mode,
+in the background. Other boolean options are similar – specifying them
+will set the value to the opposite of the default value.
+
+### Multi
+
+Options like `-a=[]` indicate they can be specified
+multiple times:
+
+    docker run -a stdin -a stdout -a stderr -i -t ubuntu /bin/bash
+
+Sometimes this can use a more complex value string, as for
+`-v`:
+
+    docker run -v /host:/container example/mysql
+
+### Strings and Integers
+
+Options like `--name=""` expect a string, and they
+can only be specified once. Options like `-c=0`
+expect an integer, and they can only be specified once.
+
+## `daemon`
+
+    Usage of docker:
+      -D, --debug=false: Enable debug mode
+      -H, --host=[]: Multiple tcp://host:port or unix://path/to/socket to bind in daemon mode, single connection otherwise. systemd socket activation can be used with fd://[socketfd].
+      -G, --group="docker": Group to assign the unix socket specified by -H when running in daemon mode; use '' (the empty string) to disable setting of a group
+      --api-enable-cors=false: Enable CORS headers in the remote API
+      -b, --bridge="": Attach containers to a pre-existing network bridge; use 'none' to disable container networking
+      -bip="": Use this CIDR notation address for the network bridge's IP, not compatible with -b
+      -d, --daemon=false: Enable daemon mode
+      --dns=[]: Force docker to use specific DNS servers
+      --dns-search=[]: Force Docker to use specific DNS search domains
+      -g, --graph="/var/lib/docker": Path to use as the root of the docker runtime
+      --icc=true: Enable inter-container communication
+      --ip="0.0.0.0": Default IP address to use when binding container ports
+      --ip-forward=true: Enable net.ipv4.ip_forward
+      --iptables=true: Enable Docker's addition of iptables rules
+      -p, --pidfile="/var/run/docker.pid": Path to use for daemon PID file
+      -r, --restart=true: Restart previously running containers
+      -s, --storage-driver="": Force the docker runtime to use a specific storage driver
+      -e, --exec-driver="native": Force the docker runtime to use a specific exec driver
+      -v, --version=false: Print version information and quit
+      --tls=false: Use TLS; implied by tls-verify flags
+      --tlscacert="~/.docker/ca.pem": Trust only remotes providing a certificate signed by the CA given here
+      --tlscert="~/.docker/cert.pem": Path to TLS certificate file
+      --tlskey="~/.docker/key.pem": Path to TLS key file
+      --tlsverify=false: Use TLS and verify the remote (daemon: verify client, client: verify daemon)
+      --mtu=0: Set the containers network MTU; if no value is provided: default to the default route MTU or 1500 if no default route is available
+
+The Docker daemon is the persistent process that manages containers.
+Docker uses the same binary for both the daemon and client. To run the
+daemon you provide the `-d` flag.
+
+To force Docker to use devicemapper as the storage driver, use
+`docker -d -s devicemapper`.
+
+To set the DNS server for all Docker containers, use
+`docker -d --dns 8.8.8.8`.
+
+To set the DNS search domain for all Docker containers, use
+`docker -d --dns-search example.com`.
+
+To run the daemon with debug output, use `docker -d -D`.
+
+To use lxc as the execution driver, use `docker -d -e lxc`.
+
+The docker client will also honor the `DOCKER_HOST`
+environment variable to set the `-H` flag for the
+client.
+
+    docker -H tcp://0.0.0.0:4243 ps
+    # or
+    export DOCKER_HOST="tcp://0.0.0.0:4243"
+    docker ps
+    # both are equal
+
+To run the daemon with [systemd socket
+activation](http://0pointer.de/blog/projects/socket-activation.html),
+use `docker -d -H fd://`. Using `fd://`
+will work perfectly for most setups, but you can also specify
+individual sockets: `docker -d -H fd://3`. If the
+specified socket-activated files aren't found, docker will exit. You
+can find examples of using systemd socket activation with docker and
+systemd in the [docker source
+tree](https://github.com/dotcloud/docker/blob/master/contrib/init/systemd/socket-activation/).
+
+Docker supports softlinks for the Docker data directory
+(`/var/lib/docker`) and for `/tmp`.
+TMPDIR and the data directory can be set like this:
+
+    TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1
+    # or
+    export TMPDIR=/mnt/disk2/tmp
+    /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1
+
+## `attach`
+
+    Usage: docker attach CONTAINER
+
+    Attach to a running container.
+
+      --no-stdin=false: Do not attach stdin
+      --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
+
+The `attach` command will allow you to view or
+interact with any running container, detached (`-d`)
+or interactive (`-i`). You can attach to the same
+container at the same time (screen-sharing style), or quickly view the
+progress of your daemonized process.
+
+You can detach from the container again (and leave it running) with
+`CTRL-C` (for a quiet exit) or `CTRL-\`
+to get a stacktrace of the Docker client when it quits. When
+you detach from the container’s process the exit code will be returned
+to the client.
+
+To stop a container, use `docker stop`.
+
+To kill the container, use `docker kill`.
+
+### Examples:
+
+    $ ID=$(sudo docker run -d ubuntu /usr/bin/top -b)
+    $ sudo docker attach $ID
+    top - 02:05:52 up  3:05,  0 users,  load average: 0.01, 0.02, 0.05
+    Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
+    Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
+    Mem:    373572k total,   355560k used,    18012k free,    27872k buffers
+    Swap:   786428k total,        0k used,   786428k free,   221740k cached
+
+    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
+     1 root      20   0 17200 1116  912 R    0  0.3   0:00.03 top
+
+     top - 02:05:55 up  3:05,  0 users,  load average: 0.01, 0.02, 0.05
+     Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
+     Cpu(s):  0.0%us,  0.2%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
+     Mem:    373572k total,   355244k used,    18328k free,    27872k buffers
+     Swap:   786428k total,        0k used,   786428k free,   221776k cached
+
+       PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
+           1 root      20   0 17208 1144  932 R    0  0.3   0:00.03 top
+
+
+     top - 02:05:58 up  3:06,  0 users,  load average: 0.01, 0.02, 0.05
+     Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
+     Cpu(s):  0.2%us,  0.3%sy,  0.0%ni, 99.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
+     Mem:    373572k total,   355780k used,    17792k free,    27880k buffers
+     Swap:   786428k total,        0k used,   786428k free,   221776k cached
+
+     PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
+          1 root      20   0 17208 1144  932 R    0  0.3   0:00.03 top
+    ^C$
+    $ sudo docker stop $ID
+
+## `build`
+
+    Usage: docker build [OPTIONS] PATH | URL | -
+    Build a new container image from the source code at PATH
+      -t, --tag="": Repository name (and optionally a tag) to be applied
+             to the resulting image in case of success.
+      -q, --quiet=false: Suppress the verbose output generated by the containers.
+      --no-cache: Do not use the cache when building the image.
+      --rm=true: Remove intermediate containers after a successful build
+
+Use this command to build Docker images from a `Dockerfile`
+and a "context".
+
+The files at `PATH` or `URL` are
+called the "context" of the build. The build process may refer to any of
+the files in the context, for example when using an
+[*ADD*](../../builder/#dockerfile-add) instruction. When a single
+`Dockerfile` is given as `URL`,
+then no context is set.
+
+When a Git repository is set as `URL`, then the
+repository is used as the context. The Git repository is cloned with its
+submodules (`git clone --recursive`). A fresh git clone occurs in a
+temporary directory on your local host, and then this is sent to the
+Docker daemon as the context. This way, your local user credentials,
+VPNs, and so on can be used to access private repositories.
+
+See also: [*Dockerfile Reference*](../../builder/#dockerbuilder).
+
+### Examples:
+
+    $ sudo docker build .
+    Uploading context 10240 bytes
+    Step 1 : FROM busybox
+    Pulling repository busybox
+     ---> e9aa60c60128MB/2.284 MB (100%) endpoint: https://cdn-registry-1.docker.io/v1/
+    Step 2 : RUN ls -lh /
+     ---> Running in 9c9e81692ae9
+    total 24
+    drwxr-xr-x    2 root     root        4.0K Mar 12  2013 bin
+    drwxr-xr-x    5 root     root        4.0K Oct 19 00:19 dev
+    drwxr-xr-x    2 root     root        4.0K Oct 19 00:19 etc
+    drwxr-xr-x    2 root     root        4.0K Nov 15 23:34 lib
+    lrwxrwxrwx    1 root     root           3 Mar 12  2013 lib64 -> lib
+    dr-xr-xr-x  116 root     root           0 Nov 15 23:34 proc
+    lrwxrwxrwx    1 root     root           3 Mar 12  2013 sbin -> bin
+    dr-xr-xr-x   13 root     root           0 Nov 15 23:34 sys
+    drwxr-xr-x    2 root     root        4.0K Mar 12  2013 tmp
+    drwxr-xr-x    2 root     root        4.0K Nov 15 23:34 usr
+     ---> b35f4035db3f
+    Step 3 : CMD echo Hello World
+     ---> Running in 02071fceb21b
+     ---> f52f38b7823e
+    Successfully built f52f38b7823e
+    Removing intermediate container 9c9e81692ae9
+    Removing intermediate container 02071fceb21b
+
+This example specifies that the `PATH` is
+`.`, and so all the files in the local directory get
+tar’d and sent to the Docker daemon. The `PATH`
+specifies where to find the files for the "context" of the build on the
+Docker daemon. Remember that the daemon could be running on a remote
+machine and that no parsing of the `Dockerfile`
+happens at the client side (where you’re running
+`docker build`). That means that *all* the files at
+`PATH` get sent, not just the ones listed to
+[*ADD*](../../builder/#dockerfile-add) in the `Dockerfile`.
+
+The transfer of context from the local machine to the Docker daemon is
+what the `docker` client means when you see the
+"Uploading context" message.
+
+If you wish to keep the intermediate containers after the build is
+complete, you must use `--rm=false`. This does not
+affect the build cache.
+
+    $ sudo docker build -t vieux/apache:2.0 .
+
+This will build like the previous example, but it will then tag the
+resulting image. The repository name will be `vieux/apache`
+and the tag will be `2.0`
+
+    $ sudo docker build - < Dockerfile
+
+This will read a `Dockerfile` from *stdin* without
+context. Due to the lack of a context, no contents of any local
+directory will be sent to the `docker` daemon. Since
+there is no context, a `Dockerfile` `ADD`
+only works if it refers to a remote URL.
+
+    $ sudo docker build github.com/creack/docker-firefox
+
+This will clone the GitHub repository and use the cloned repository as
+context. The `Dockerfile` at the root of the
+repository is used as `Dockerfile`. Note that you
+can specify an arbitrary Git repository by using the `git://`
+schema.
+
+## `commit`
+
+    Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
+
+    Create a new image from a container's changes
+
+      -m, --message="": Commit message
+      -a, --author="": Author (eg. "John Hannibal Smith <hannibal@a-team.com>")
+
+It can be useful to commit a container’s file changes or settings into a
+new image. This allows you to debug a container by running an interactive
+shell, or to export a working dataset to another server. Generally, it
+is better to use Dockerfiles to manage your images in a documented and
+maintainable way.
+
+### Commit an existing container
+
+    $ sudo docker ps
+    ID                  IMAGE               COMMAND             CREATED             STATUS              PORTS
+    c3f279d17e0a        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours
+    197387f1b436        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours
+    $ docker commit c3f279d17e0a  SvenDowideit/testimage:version3
+    f5283438590d
+    $ docker images | head
+    REPOSITORY                        TAG                 ID                  CREATED             VIRTUAL SIZE
+    SvenDowideit/testimage            version3            f5283438590d        16 seconds ago      335.7 MB
+
+## `cp`
+
+    Usage: docker cp CONTAINER:PATH HOSTPATH
+
+    Copy files/folders from the containers filesystem to the host
+    path.  Paths are relative to the root of the filesystem.
+
+    $ sudo docker cp 7bb0e258aefe:/etc/debian_version .
+    $ sudo docker cp blue_frog:/etc/hosts .
+
+## `diff`
+
+    Usage: docker diff CONTAINER
+
+    List the changed files and directories in a container's filesystem
+
+There are 3 events that are listed in the `diff`:
+
+1.  `A` - Add
+2.  `D` - Delete
+3.  `C` - Change
+
+For example:
+
+    $ sudo docker diff 7bb0e258aefe
+
+    C /dev
+    A /dev/kmsg
+    C /etc
+    A /etc/mtab
+    A /go
+    A /go/src
+    A /go/src/github.com
+    A /go/src/github.com/dotcloud
+    A /go/src/github.com/dotcloud/docker
+    A /go/src/github.com/dotcloud/docker/.git
+    ....
+
+## `events`
+
+    Usage: docker events
+
+    Get real time events from the server
+
+    --since="": Show all events created since timestamp
+               (either seconds since epoch, or date string as below)
+    --until="": Show events created before timestamp
+               (either seconds since epoch, or date string as below)
+
+### Examples
+
+You’ll need two shells for this example.
+
+#### Shell 1: Listening for events
+
+    $ sudo docker events
+
+#### Shell 2: Start and Stop a Container
+
+    $ sudo docker start 4386fb97867d
+    $ sudo docker stop 4386fb97867d
+
+#### Shell 1: (Again .. now showing events)
+
+    [2013-09-03 15:49:26 +0200 CEST] 4386fb97867d: (from 12de384bfb10) start
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop
+
+#### Show events in the past from a specified time
+
+    $ sudo docker events --since 1378216169
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop
+
+    $ sudo docker events --since '2013-09-03'
+    [2013-09-03 15:49:26 +0200 CEST] 4386fb97867d: (from 12de384bfb10) start
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop
+
+    $ sudo docker events --since '2013-09-03 15:49:29 +0200 CEST'
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop
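+
+If you prefer the epoch form of `--since`, GNU `date` can convert a
+date string to seconds since the epoch (the timestamp here is UTC):
+
```shell
# seconds-since-epoch form of 2013-09-03 00:00:00 UTC, usable with --since
date -u -d '2013-09-03 00:00:00' +%s
```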
+
+## `export`
+
+    Usage: docker export CONTAINER
+
+    Export the contents of a filesystem as a tar archive to STDOUT
+
+For example:
+
+    $ sudo docker export red_panda > latest.tar
+
+## `history`
+
+    Usage: docker history [OPTIONS] IMAGE
+
+    Show the history of an image
+
+      --no-trunc=false: Don't truncate output
+      -q, --quiet=false: Only show numeric IDs
+
+To see how the `docker:latest` image was built:
+
+    $ docker history docker
+    IMAGE                                                              CREATED             CREATED BY                                                                                                                                                 SIZE
+    3e23a5875458790b7a806f95f7ec0d0b2a5c1659bfc899c89f939f6d5b8f7094   8 days ago          /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8                                                                                                                       0 B
+    8578938dd17054dce7993d21de79e96a037400e8d28e15e7290fea4f65128a36   8 days ago          /bin/sh -c dpkg-reconfigure locales &&    locale-gen C.UTF-8 &&    /usr/sbin/update-locale LANG=C.UTF-8                                                    1.245 MB
+    be51b77efb42f67a5e96437b3e102f81e0a1399038f77bf28cea0ed23a65cf60   8 days ago          /bin/sh -c apt-get update && apt-get install -y    git    libxml2-dev    python    build-essential    make    gcc    python-dev    locales    python-pip   338.3 MB
+    4b137612be55ca69776c7f30c2d2dd0aa2e7d72059820abf3e25b629f887a084   6 weeks ago         /bin/sh -c #(nop) ADD jessie.tar.xz in /                                                                                                                   121 MB
+    750d58736b4b6cc0f9a9abe8f258cef269e3e9dceced1146503522be9f985ada   6 weeks ago         /bin/sh -c #(nop) MAINTAINER Tianon Gravi <admwiggin@gmail.com> - mkimage-debootstrap.sh -t jessie.tar.xz jessie http://http.debian.net/debian             0 B
+    511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158   9 months ago                                                                                                                                                                   0 B
+
+## `images`
+
+    Usage: docker images [OPTIONS] [NAME]
+
+    List images
+
+      -a, --all=false: Show all images (by default filter out the intermediate image layers)
+      --no-trunc=false: Don't truncate output
+      -q, --quiet=false: Only show numeric IDs
+
+The default `docker images` will show all top level
+images, their repository and tags, and their virtual size.
+
+Docker images have intermediate layers that increase reusability,
+decrease disk usage, and speed up `docker build` by
+allowing each step to be cached. These intermediate layers are not shown
+by default.
+
+### Listing the most recently created images
+
+    $ sudo docker images | head
+    REPOSITORY                    TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
+    <none>                        <none>              77af4d6b9913        19 hours ago        1.089 GB
+    committest                    latest              b6fa739cedf5        19 hours ago        1.089 GB
+    <none>                        <none>              78a85c484f71        19 hours ago        1.089 GB
+    docker                        latest              30557a29d5ab        20 hours ago        1.089 GB
+    <none>                        <none>              0124422dd9f9        20 hours ago        1.089 GB
+    <none>                        <none>              18ad6fad3402        22 hours ago        1.082 GB
+    <none>                        <none>              f9f1e26352f0        23 hours ago        1.089 GB
+    tryout                        latest              2629d1fa0b81        23 hours ago        131.5 MB
+    <none>                        <none>              5ed6274db6ce        24 hours ago        1.089 GB
+
+### Listing the full length image IDs
+
+    $ sudo docker images --no-trunc | head
+    REPOSITORY                    TAG                 IMAGE ID                                                           CREATED             VIRTUAL SIZE
+    <none>                        <none>              77af4d6b9913e693e8d0b4b294fa62ade6054e6b2f1ffb617ac955dd63fb0182   19 hours ago        1.089 GB
+    committest                    latest              b6fa739cedf5ea12a620a439402b6004d057da800f91c7524b5086a5e4749c9f   19 hours ago        1.089 GB
+    <none>                        <none>              78a85c484f71509adeaace20e72e941f6bdd2b25b4c75da8693efd9f61a37921   19 hours ago        1.089 GB
+    docker                        latest              30557a29d5abc51e5f1d5b472e79b7e296f595abcf19fe6b9199dbbc809c6ff4   20 hours ago        1.089 GB
+    <none>                        <none>              0124422dd9f9cf7ef15c0617cda3931ee68346455441d66ab8bdc5b05e9fdce5   20 hours ago        1.089 GB
+    <none>                        <none>              18ad6fad340262ac2a636efd98a6d1f0ea775ae3d45240d3418466495a19a81b   22 hours ago        1.082 GB
+    <none>                        <none>              f9f1e26352f0a3ba6a0ff68167559f64f3e21ff7ada60366e2d44a04befd1d3a   23 hours ago        1.089 GB
+    tryout                        latest              2629d1fa0b81b222fca63371ca16cbf6a0772d07759ff80e8d1369b926940074   23 hours ago        131.5 MB
+    <none>                        <none>              5ed6274db6ceb2397844896966ea239290555e74ef307030ebb01ff91b1914df   24 hours ago        1.089 GB
+
+## `import`
+
+    Usage: docker import URL|- [REPOSITORY[:TAG]]
+
+    Create an empty filesystem image and import the contents of the tarball
+    (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then optionally tag it.
+
+URLs must start with `http` and point to a single
+file archive (.tar, .tar.gz, .tgz, .bzip, .tar.xz, or .txz) containing a
+root filesystem. If you would like to import from a local directory or
+archive, you can use the `-` parameter to take the
+data from *stdin*.
+
+### Examples
+
+#### Import from a remote location
+
+This will create a new untagged image.
+
+    $ sudo docker import http://example.com/exampleimage.tgz
+
+#### Import from a local file
+
+Import to docker via pipe and *stdin*.
+
+    $ cat exampleimage.tgz | sudo docker import - exampleimagelocal:new
+
+#### Import from a local directory
+
+    $ sudo tar -c . | docker import - exampleimagedir
+
+Note the `sudo` in this example – you must preserve
+the ownership of the files (especially root ownership) during the
+archiving with tar. If you do not run tar as root (or via
+`sudo`), the ownerships might not be preserved.
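+
+Before importing, you can also check what the archive will contain;
+paths are taken relative to the directory being tarred (the demo
+directory below is hypothetical):
+
```shell
# build a tiny demo tree and list the archive entries it produces
mkdir -p /tmp/rootfs-demo/etc
echo hello > /tmp/rootfs-demo/etc/motd
tar -C /tmp/rootfs-demo -c . | tar -t
```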
+
+## `info`
+
+    Usage: docker info
+
+    Display system-wide information.
+
+    $ sudo docker info
+    Containers: 292
+    Images: 194
+    Debug mode (server): false
+    Debug mode (client): false
+    Fds: 22
+    Goroutines: 67
+    LXC Version: 0.9.0
+    EventsListeners: 115
+    Kernel Version: 3.8.0-33-generic
+    WARNING: No swap limit support
+
+When sending issue reports, please include the output of
+`docker version` and `docker info` so we know how
+your setup is configured.
+
+## `inspect`
+
+    Usage: docker inspect CONTAINER|IMAGE [CONTAINER|IMAGE...]
+
+    Return low-level information on a container/image
+
+      -f, --format="": Format the output using the given go template.
+
+By default, this will render all results in a JSON array. If a format is
+specified, the given template will be executed for each result.
+
+Go’s [text/template](http://golang.org/pkg/text/template/) package
+describes all the details of the format.
+
+### Examples
+
+#### Get an instance’s IP Address
+
+For the most part, you can pick out any field from the JSON in a fairly
+straightforward manner.
+
+    $ sudo docker inspect --format='{{.NetworkSettings.IPAddress}}' $INSTANCE_ID
+
+#### List All Port Bindings
+
+One can loop over arrays and maps in the results to produce simple text
+output:
+
+    $ sudo docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID
+
+#### Find a Specific Port Mapping
+
+The `.Field` syntax doesn’t work when the field name
+begins with a number, but the template language’s `index`
+function does. The `.NetworkSettings.Ports`
+section contains a map of the internal port mappings to a list
+of external address/port objects, so to grab just the numeric public
+port, you use `index` to find the specific port map,
+and then `index` 0 to take the first object inside
+it. Then we ask for the `HostPort` field to get
+the public address.
+
+    $ sudo docker inspect --format='{{(index (index .NetworkSettings.Ports "8787/tcp") 0).HostPort}}' $INSTANCE_ID
+
+#### Get config
+
+The `.Field` syntax doesn’t work when the field
+contains JSON data, but the template language’s custom `json`
+function does. The `.config` section
+contains a complex JSON object, so to grab it as JSON, you use
+`json` to convert the config object into JSON:
+
+    $ sudo docker inspect --format='{{json .config}}' $INSTANCE_ID
+
+## `kill`
+
+    Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
+
+    Kill a running container (send SIGKILL, or specified signal)
+
+      -s, --signal="KILL": Signal to send to the container
+
+The main process inside the container will be sent SIGKILL, or any
+signal specified with option `--signal`.
+
+### Known Issues (kill)
+
+-   [Issue 197](https://github.com/dotcloud/docker/issues/197) indicates
+    that `docker kill` may leave directories behind
+    and make it difficult to remove the container.
+-   [Issue 3844](https://github.com/dotcloud/docker/issues/3844) lxc
+    1.0.0 beta3 removed `lxc-kill` which is used by
+    Docker versions before 0.8.0; see the issue for a workaround.
+
+## `load`
+
+    Usage: docker load
+
+    Load an image from a tar archive on STDIN
+
+      -i, --input="": Read from a tar archive file, instead of STDIN
+
+Loads a tarred repository from a file or the standard input stream.
+Restores both images and tags.
+
+    $ sudo docker images
+    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
+    $ sudo docker load < busybox.tar
+    $ sudo docker images
+    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
+    busybox             latest              769b9341d937        7 weeks ago         2.489 MB
+    $ sudo docker load --input fedora.tar
+    $ sudo docker images
+    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
+    busybox             latest              769b9341d937        7 weeks ago         2.489 MB
+    fedora              rawhide             0d20aec6529d        7 weeks ago         387 MB
+    fedora              20                  58394af37342        7 weeks ago         385.5 MB
+    fedora              heisenbug           58394af37342        7 weeks ago         385.5 MB
+    fedora              latest              58394af37342        7 weeks ago         385.5 MB
+
+## `login`
+
+    Usage: docker login [OPTIONS] [SERVER]
+
+    Register or Login to the docker registry server
+
+    -e, --email="": Email
+    -p, --password="": Password
+    -u, --username="": Username
+
+    If you want to login to a private registry you can
+    specify this by adding the server name.
+
+    example:
+    docker login localhost:8080
+
+## `logs`
+
+    Usage: docker logs [OPTIONS] CONTAINER
+
+    Fetch the logs of a container
+
+    -f, --follow=false: Follow log output
+
+The `docker logs` command batch-retrieves all logs
+present at the time of execution.
+
+The `docker logs --follow` command combines
+`docker logs` and `docker attach`:
+it will first return all logs from the beginning and then
+continue streaming new output from the container’s stdout and stderr.
+
+## `port`
+
+    Usage: docker port [OPTIONS] CONTAINER PRIVATE_PORT
+
+    Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
+
+## `ps`
+
+    Usage: docker ps [OPTIONS]
+
+    List containers
+
+      -a, --all=false: Show all containers. Only running containers are shown by default.
+      --before="": Show only container created before Id or Name, include non-running ones.
+      -l, --latest=false: Show only the latest created container, include non-running ones.
+      -n=-1: Show n last created containers, include non-running ones.
+      --no-trunc=false: Don't truncate output
+      -q, --quiet=false: Only display numeric IDs
+      -s, --size=false: Display sizes, not to be used with -q
+      --since="": Show only containers created since Id or Name, include non-running ones.
+
+Running `docker ps` showing 2 linked containers.
+
+    $ docker ps
+    CONTAINER ID        IMAGE                        COMMAND                CREATED              STATUS              PORTS               NAMES
+    4c01db0b339c        ubuntu:12.04                 bash                   17 seconds ago       Up 16 seconds                           webapp
+    d7886598dbe2        crosbymichael/redis:latest   /redis-server --dir    33 minutes ago       Up 33 minutes       6379/tcp            redis,webapp/db
+
+`docker ps` will show only running containers by
+default. To see all containers: `docker ps -a`
+
+## `pull`
+
+    Usage: docker pull NAME[:TAG]
+
+    Pull an image or a repository from the registry
+
+Most of your images will be created on top of a base image from the
+[Docker Index](https://index.docker.io).
+
+The Docker Index contains many pre-built images that you can
+`pull` and try without needing to define and
+configure your own.
+
+To download a particular image, or set of images (i.e., a repository),
+use `docker pull`:
+
+    $ docker pull debian
+    # will pull all the images in the debian repository
+    $ docker pull debian:testing
+    # will pull only the image named debian:testing and any intermediate layers
+    # it is based on (typically the empty `scratch` image, a MAINTAINER layer,
+    # and the untarred base image).
+
+## `push`
+
+    Usage: docker push NAME[:TAG]
+
+    Push an image or a repository to the registry
+
+Use `docker push` to share your images on public or
+private registries.
+
+## `restart`
+
+    Usage: docker restart [OPTIONS] NAME
+
+    Restart a running container
+
+       -t, --time=10: Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default=10
+
+## `rm`
+
+    Usage: docker rm [OPTIONS] CONTAINER
+
+    Remove one or more containers
+        -l, --link="": Remove the link instead of the actual container
+        -f, --force=false: Force removal of running container
+        -v, --volumes=false: Remove the volumes associated to the container
+
+### Known Issues (rm)
+
+-   [Issue 197](https://github.com/dotcloud/docker/issues/197) indicates
+    that `docker kill` may leave directories behind
+    and make it difficult to remove the container.
+
+### Examples:
+
+    $ sudo docker rm /redis
+    /redis
+
+This will remove the container referenced under the link
+`/redis`.
+
+    $ sudo docker rm --link /webapp/redis
+    /webapp/redis
+
+This will remove the underlying link between `/webapp`
+and the `/redis` containers removing all
+network communication.
+
+    $ sudo docker rm $(docker ps -a -q)
+
+This command will delete all stopped containers. The command
+`docker ps -a -q` will return all existing container
+IDs and pass them to the `rm` command which will
+delete them. Any running containers will not be deleted.
+
+## `rmi`
+
+    Usage: docker rmi IMAGE [IMAGE...]
+
+    Remove one or more images
+
+      -f, --force=false: Force
+      --no-prune=false: Do not delete untagged parents
+
+### Removing tagged images
+
+Images can be removed either by their short or long IDs, or by their
+image names. If an image has more than one name, each of them needs to
+be removed before the image itself is removed.
+
+    $ sudo docker images
+    REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
+    test1                     latest              fd484f19954f        23 seconds ago      7 B (virtual 4.964 MB)
+    test                      latest              fd484f19954f        23 seconds ago      7 B (virtual 4.964 MB)
+    test2                     latest              fd484f19954f        23 seconds ago      7 B (virtual 4.964 MB)
+
+    $ sudo docker rmi fd484f19954f
+    Error: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories
+    2013/12/11 05:47:16 Error: failed to remove one or more images
+
+    $ sudo docker rmi test1
+    Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
+    $ sudo docker rmi test2
+    Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
+
+    $ sudo docker images
+    REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
+    test                      latest              fd484f19954f        23 seconds ago      7 B (virtual 4.964 MB)
+    $ sudo docker rmi test
+    Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
+    Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
+
+## `run`
+
+    Usage: docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
+
+    Run a command in a new container
+
+      -a, --attach=map[]: Attach to stdin, stdout or stderr
+      -c, --cpu-shares=0: CPU shares (relative weight)
+      --cidfile="": Write the container ID to the file
+      -d, --detach=false: Detached mode: Run container in the background, print new container id
+      -e, --env=[]: Set environment variables
+      --env-file="": Read in a line delimited file of ENV variables
+      -h, --hostname="": Container host name
+      -i, --interactive=false: Keep stdin open even if not attached
+      --privileged=false: Give extended privileges to this container
+      -m, --memory="": Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
+      -n, --networking=true: Enable networking for this container
+      -p, --publish=[]: Map a network port to the container
+      --rm=false: Automatically remove the container when it exits (incompatible with -d)
+      -t, --tty=false: Allocate a pseudo-tty
+      -u, --user="": Username or UID
+      --dns=[]: Set custom dns servers for the container
+      --dns-search=[]: Set custom DNS search domains for the container
+      -v, --volume=[]: Create a bind mount to a directory or file with: [host-path]:[container-path]:[rw|ro]. If a directory "container-path" is missing, then docker creates a new volume.
+      --volumes-from="": Mount all volumes from the given container(s)
+      --entrypoint="": Overwrite the default entrypoint set by the image
+      -w, --workdir="": Working directory inside the container
+      --lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
+      --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
+      --expose=[]: Expose a port from the container without publishing it to your host
+      --link="": Add link to another container (name:alias)
+      --name="": Assign the specified name to the container. If no name is specific docker will generate a random name
+      -P, --publish-all=false: Publish all exposed ports to the host interfaces
+
+The `docker run` command first `creates`
+a writeable container layer over the specified image, and then
+`starts` it using the specified command. That is,
+`docker run` is equivalent to the API
+`/containers/create` then
+`/containers/(id)/start`. A stopped container can be
+restarted with all its previous changes intact using
+`docker start`. See `docker ps -a`
+to view a list of all containers.
+
+The `docker run` command can be used in combination
+with `docker commit` to [*change the command that a
+container runs*](#cli-commit-examples).
+
+See [*Redirect Ports*](../../../use/port_redirection/#port-redirection)
+for more detailed information about the `--expose`,
+`-p`, `-P` and
+`--link` parameters, and [*Link
+Containers*](../../../use/working_with_links_names/#working-with-links-names)
+for specific examples using `--link`.
+
+### Known Issues (run --volumes-from)
+
+-   [Issue 2702](https://github.com/dotcloud/docker/issues/2702):
+    "lxc-start: Permission denied - failed to mount" could indicate a
+    permissions problem with AppArmor. Please see the issue for a
+    workaround.
+
+### Examples:
+
+    $ sudo docker run --cidfile /tmp/docker_test.cid ubuntu echo "test"
+
+This will create a container and print `test` to the
+console. The `cidfile` flag makes Docker attempt to
+create a new file and write the container ID to it. If the file exists
+already, Docker will return an error. Docker will close this file when
+`docker run` exits.
+
+    $ sudo docker run -t -i --rm ubuntu bash
+    root@bc338942ef20:/# mount -t tmpfs none /mnt
+    mount: permission denied
+
+This will *not* work, because by default, most potentially dangerous
+kernel capabilities are dropped, including `cap_sys_admin`
+(which is required to mount filesystems). However, the
+`--privileged` flag will allow it to run:
+
+    $ sudo docker run --privileged ubuntu bash
+    root@50e3f57e16e6:/# mount -t tmpfs none /mnt
+    root@50e3f57e16e6:/# df -h
+    Filesystem      Size  Used Avail Use% Mounted on
+    none            1.9G     0  1.9G   0% /mnt
+
+The `--privileged` flag gives *all* capabilities to
+the container, and it also lifts all the limitations enforced by the
+`device` cgroup controller. In other words, the
+container can then do almost everything that the host can do. This flag
+exists to allow special use-cases, like running Docker within Docker.
+
+    $ sudo docker run -w /path/to/dir/ -i -t ubuntu pwd
+
+The `-w` option runs the command inside the given
+directory, here `/path/to/dir/`. If the path
+does not exist, it is created inside the container.
+
+    $ sudo docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd
+
+The `-v` flag mounts the current working directory
+into the container. The `-w` option then sets the working
+directory to the value returned by `pwd`. So this
+combination executes the command in the container, but inside the
+current working directory.
+
+    $ sudo docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash
+
+When the host directory of a bind-mounted volume doesn’t exist, Docker
+will automatically create this directory on the host for you. In the
+example above, Docker will create the `/doesnt/exist`
+folder before starting your container.
+
+    $ sudo docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v ./static-docker:/usr/bin/docker busybox sh
+
+By bind-mounting the Docker unix socket and a statically linked Docker
+binary (such as that provided by
+[https://get.docker.io](https://get.docker.io)), you give the container
+full access to create and manipulate the host’s Docker daemon.
+
+    $ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash
+
+This binds port `8080` of the container to port
+`80` on `127.0.0.1` of the host
+machine. [*Redirect
+Ports*](../../../use/port_redirection/#port-redirection) explains in
+detail how to manipulate ports in Docker.
+
+    $ sudo docker run --expose 80 ubuntu bash
+
+This exposes port `80` of the container for use
+within a link without publishing the port to the host system’s
+interfaces. [*Redirect
+Ports*](../../../use/port_redirection/#port-redirection) explains in
+detail how to manipulate ports in Docker.
+
+    $ sudo docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
+
+This sets environment variables in the container. For illustration, all
+three flags are shown here. `-e` and
+`--env` take an environment variable and value, or,
+if no "=" is provided, that variable’s current value is passed
+through (i.e. `$MYVAR1` from the host is set to `$MYVAR1` in the
+container). All three flags, `-e`, `--env`
+and `--env-file`, can be repeated.
+
+Regardless of the order of these three flags, the `--env-file`
+entries are processed first, and then the
+`-e`/`--env` flags. This way, the
+`-e` or `--env` flags override
+variables as needed.
+
+    $ cat ./env.list
+    TEST_FOO=BAR
+    $ sudo docker run --env TEST_FOO="This is a test" --env-file ./env.list busybox env | grep TEST_FOO
+    TEST_FOO=This is a test
+
+The `--env-file` flag takes a filename as an
+argument and expects each line to be in the VAR=VAL format, mimicking
+the argument passed to `--env`. Comment lines need
+only be prefixed with `#`.
+
+An example of a file passed with `--env-file`:
+
+    $ cat ./env.list
+    TEST_FOO=BAR
+
+    # this is a comment
+    TEST_APP_DEST_HOST=10.10.0.127
+    TEST_APP_DEST_PORT=8888
+
+    # pass through this variable from the caller
+    TEST_PASSTHROUGH
+    $ sudo TEST_PASSTHROUGH=howdy docker run --env-file ./env.list busybox env
+    HOME=/
+    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+    HOSTNAME=5198e0745561
+    TEST_FOO=BAR
+    TEST_APP_DEST_HOST=10.10.0.127
+    TEST_APP_DEST_PORT=8888
+    TEST_PASSTHROUGH=howdy
+
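+Because a malformed line will make `docker run` fail, it can be handy
+to sanity-check an env file before using it. This is a sketch under
+assumptions (the file name and the final, commented-out `docker run`
+line are illustrative); it accepts `VAR=VAL` pairs, bare pass-through
+names, comments and blank lines:
+
+```shell
+# Generate a sample env file (contents are illustrative).
+cat > env.list <<'EOF'
+TEST_FOO=BAR
+# this is a comment
+TEST_APP_DEST_PORT=8888
+TEST_PASSTHROUGH
+EOF
+
+# Print any line that is not a comment, a blank, a VAR=VAL pair, or a
+# bare variable name; report accordingly.
+if grep -Ev '^(#.*|[A-Za-z_][A-Za-z_0-9]*(=.*)?)?$' env.list; then
+    echo "malformed lines found"
+else
+    echo "env.list looks ok"
+fi
+
+# sudo docker run --env-file ./env.list busybox env   # needs a Docker daemon
+```
+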
+    $ sudo docker run --name console -t -i ubuntu bash
+
+This will create and run a new container with the container name being
+`console`.
+
+    $ sudo docker run --link /redis:redis --name console ubuntu bash
+
+The `--link` flag will link the container named
+`/redis` into the newly created container with the
+alias `redis`. The new container can access the
+network and environment of the redis container via environment
+variables. The `--name` flag will assign the name
+`console` to the newly created container.
+
+    $ sudo docker run --volumes-from 777f7dc92da7,ba8c0c54f0f2:ro -i -t ubuntu pwd
+
+The `--volumes-from` flag mounts all the defined
+volumes from the referenced containers. Containers can be specified by a
+comma separated list or by repetitions of the `--volumes-from`
+argument. The container ID may be optionally suffixed with
+`:ro` or `:rw` to mount the
+volumes in read-only or read-write mode, respectively. By default, the
+volumes are mounted in the same mode (read write or read only) as the
+reference container.
+
+The `-a` flag tells `docker run`
+to bind to the container’s stdin, stdout or stderr. This makes it
+possible to manipulate the output and input as needed.
+
+    $ sudo echo "test" | docker run -i -a stdin ubuntu cat -
+
+This pipes data into a container and prints the container’s ID by
+attaching only to the container’s stdin.
+
+    $ sudo docker run -a stderr ubuntu echo test
+
+This isn’t going to print anything unless there’s an error because we’ve
+only attached to the stderr of the container. The container’s logs still
+store what’s been written to stderr and stdout.
+
+    $ sudo cat somefile | docker run -i -a stdin mybuilder dobuild
+
+This is how piping a file into a container could be done for a build.
+The container’s ID will be printed after the build is done and the build
+logs could be retrieved using `docker logs`. This is
+useful if you need to pipe a file or something else into a container and
+retrieve the container’s ID once the container has finished running.
+
+#### A complete example
+
+    $ sudo docker run -d --name static static-web-files sh
+    $ sudo docker run -d --expose=8098 --name riak riakserver
+    $ sudo docker run -d -m 100m -e DEVELOPMENT=1 -e BRANCH=example-code -v $(pwd):/app/bin:ro --name app appserver
+    $ sudo docker run -d -p 1443:443 --dns=dns.dev.org --dns-search=dev.org -v /var/log/httpd --volumes-from static --link riak --link app -h www.sven.dev.org --name web webserver
+    $ sudo docker run -t -i --rm --volumes-from web -w /var/log/httpd busybox tail -f access.log
+
+This example shows 5 containers that might be set up to test a web
+application change:
+
+1.  Start a pre-prepared volume image `static-web-files` (in the
+    background) that has CSS, image and static HTML in it (with a
+    `VOLUME` instruction in the `Dockerfile` to allow the web server
+    to use those files);
+2.  Start a pre-prepared `riakserver` image, give the container the
+    name `riak` and expose port `8098` to any containers that link
+    to it;
+3.  Start the `appserver` image, restricting its memory usage to
+    100MB, setting two environment variables `DEVELOPMENT` and
+    `BRANCH`, and bind-mounting the current directory (`$(pwd)`) in
+    the container in read-only mode as `/app/bin`;
+4.  Start the `webserver`, mapping port `443` in the container to
+    port `1443` on the Docker server, setting the DNS server to
+    `dns.dev.org` and the DNS search domain to `dev.org`, creating a
+    volume to put the log files into (so we can access it from
+    another container), then importing the files from the volume
+    exposed by the `static` container, and linking to all exposed
+    ports from `riak` and `app`. Lastly, we set the hostname to
+    `www.sven.dev.org` so it is consistent with the pre-generated
+    SSL certificate;
+5.  Finally, we create a container that runs `tail -f access.log`
+    using the logs volume from the `web` container, setting the
+    workdir to `/var/log/httpd`. The `--rm` option means that when
+    the container exits, the container’s layer is removed.
+
+## `save`
+
+    Usage: docker save IMAGE
+
+    Save an image to a tar archive (streamed to stdout by default)
+
+      -o, --output="": Write to a file, instead of STDOUT
+
+Produces a tarred repository to the standard output stream. Contains all
+parent layers, and all tags + versions, or specified repo:tag.
+
+It is used to create a backup that can then be used with
+`docker load`
+
+    $ sudo docker save busybox > busybox.tar
+    $ ls -sh busybox.tar
+    2.7M busybox.tar
+    $ sudo docker save --output busybox.tar busybox
+    $ ls -sh busybox.tar
+    2.7M busybox.tar
+    $ sudo docker save -o fedora-all.tar fedora
+    $ sudo docker save -o fedora-latest.tar fedora:latest
+
+## `search`
+
+    Usage: docker search TERM
+
+    Search the docker index for images
+
+     --no-trunc=false: Don't truncate output
+     -s, --stars=0: Only displays with at least xxx stars
+     -t, --trusted=false: Only show trusted builds
+
+See [*Find Public Images on the Central
+Index*](../../../use/workingwithrepository/#searching-central-index) for
+more details on finding shared images from the commandline.
+
+## `start`
+
+    Usage: docker start [OPTIONS] CONTAINER
+
+    Start a stopped container
+
+      -a, --attach=false: Attach container's stdout/stderr and forward all signals to the process
+      -i, --interactive=false: Attach container's stdin
+
+## `stop`
+
+    Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
+
+    Stop a running container (Send SIGTERM, and then SIGKILL after grace period)
+
+      -t, --time=10: Number of seconds to wait for the container to stop before killing it.
+
+The main process inside the container will receive SIGTERM, and after a
+grace period, SIGKILL
+
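+Because the SIGKILL only arrives after the grace period, a container’s
+main process can shut down cleanly by trapping SIGTERM. A minimal local
+sketch (the script name is illustrative, and plain `kill` stands in for
+the signal `docker stop` would deliver):
+
+```shell
+# A PID-1-style entrypoint that handles SIGTERM, so `docker stop`
+# never needs to escalate to SIGKILL.
+cat > entrypoint.sh <<'EOF'
+#!/bin/sh
+trap 'echo "caught SIGTERM, shutting down"; exit 0' TERM
+while true; do sleep 1; done
+EOF
+chmod +x entrypoint.sh
+
+# Simulate the first thing `docker stop` does: deliver SIGTERM and wait.
+./entrypoint.sh &
+pid=$!
+sleep 1
+kill -TERM "$pid"
+wait "$pid"
+echo "exit status: $?"
+```
+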
+## `tag`
+
+    Usage: docker tag [OPTIONS] IMAGE [REGISTRYHOST/][USERNAME/]NAME[:TAG]
+
+    Tag an image into a repository
+
+      -f, --force=false: Force
+
+You can group your images together using names and tags, and then upload
+them to [*Share Images via
+Repositories*](../../../use/workingwithrepository/#working-with-the-repository).
+
+## `top`
+
+    Usage: docker top CONTAINER [ps OPTIONS]
+
+    Lookup the running processes of a container
+
+## `version`
+
+Show the version of the Docker client, daemon, and latest released
+version.
+
+## `wait`
+
+    Usage: docker wait [OPTIONS] NAME
+
+    Block until a container stops, then print its exit code.

+ 422 - 0
docs/sources/reference/run.md

@@ -0,0 +1,422 @@
+page_title: Docker Run Reference 
+page_description: Configure containers at runtime
+page_keywords: docker, run, configure, runtime
+
+# [Docker Run Reference](#id2)
+
+**Docker runs processes in isolated containers**. When an operator
+executes `docker run`, she starts a process with its
+own file system, its own networking, and its own isolated process tree.
+The [*Image*](../../terms/image/#image-def) which starts the process may
+define defaults related to the binary to run, the networking to expose,
+and more, but `docker run` gives final control to
+the operator who starts the container from the image. That’s the main
+reason [*run*](../commandline/cli/#cli-run) has more options than any
+other `docker` command.
+
+Every one of the [*Examples*](../../examples/#example-list) shows
+running containers, and so here we try to give more in-depth guidance.
+
+## [General Form](#id3)
+
+As you’ve seen in the [*Examples*](../../examples/#example-list), the
+basic run command takes this form:
+
+    docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
+
+To learn how to interpret the types of `[OPTIONS]`,
+see [*Option types*](../commandline/cli/#cli-options).
+
+The list of `[OPTIONS]` breaks down into two groups:
+
+1.  Settings exclusive to operators, including:
+    -   Detached or Foreground running,
+    -   Container Identification,
+    -   Network settings, and
+    -   Runtime Constraints on CPU and Memory
+    -   Privileges and LXC Configuration
+
+2.  Settings shared between operators and developers, where operators can
+    override defaults developers set in images at build time.
+
+Together, the `docker run [OPTIONS]` give complete
+control over runtime behavior to the operator, allowing them to override
+all defaults set by the developer during `docker build`
+and nearly all the defaults set by the Docker runtime itself.
+
+## [Operator Exclusive Options](#id4)
+
+Only the operator (the person executing `docker run`)
+can set the following options.
+
+-   [Detached vs Foreground](#detached-vs-foreground)
+    -   [Detached (-d)](#detached-d)
+    -   [Foreground](#foreground)
+-   [Container Identification](#container-identification)
+    -   [Name (--name)](#name-name)
+    -   [PID Equivalent](#pid-equivalent)
+-   [Network Settings](#network-settings)
+-   [Clean Up (--rm)](#clean-up-rm)
+-   [Runtime Constraints on CPU and
+    Memory](#runtime-constraints-on-cpu-and-memory)
+-   [Runtime Privilege and LXC
+    Configuration](#runtime-privilege-and-lxc-configuration)
+
+### [Detached vs Foreground](#id2)
+
+When starting a Docker container, you must first decide if you want to
+run the container in the background in a "detached" mode or in the
+default foreground mode:
+
+    -d=false: Detached mode: Run container in the background, print new container id
+
+#### [Detached (-d)](#id3)
+
+In detached mode (`-d=true` or just `-d`),
+all I/O should be done through network connections or shared
+volumes because the container is no longer listening to the commandline
+where you executed `docker run`. You can reattach to
+a detached container with `docker`
+[*attach*](../commandline/cli/#cli-attach). If you choose to run a
+container in the detached mode, then you cannot use the `--rm`
+option.
+
+#### [Foreground](#id4)
+
+In foreground mode (the default when `-d` is not
+specified), `docker run` can start the process in
+the container and attach the console to the process’s standard input,
+output, and standard error. It can even pretend to be a TTY (this is
+what most commandline executables expect) and pass along signals. All of
+that is configurable:
+
+    -a=[]           : Attach to ``stdin``, ``stdout`` and/or ``stderr``
+    -t=false        : Allocate a pseudo-tty
+    --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode)
+    -i=false        : Keep STDIN open even if not attached
+
+If you do not specify `-a` then Docker will [attach
+everything
+(stdin,stdout,stderr)](https://github.com/dotcloud/docker/blob/75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797).
+You can specify to which of the three standard streams
+(`stdin`, `stdout`,
+`stderr`) you’d like to connect instead, as in:
+
+    docker run -a stdin -a stdout -i -t ubuntu /bin/bash
+
+For interactive processes (like a shell) you will typically want a tty
+as well as persistent standard input (`stdin`), so
+you’ll use `-i -t` together in most interactive
+cases.
+
+### [Container Identification](#id5)
+
+#### [Name (--name)](#id6)
+
+The operator can identify a container in three ways:
+
+-   UUID long identifier
+    ("f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778")
+-   UUID short identifier ("f78375b1c487")
+-   Name ("evil\_ptolemy")
+
+The UUID identifiers come from the Docker daemon, and if you do not
+assign a name to the container with `--name` then
+the daemon will generate a random string name for you. The name can
+become a handy way to add meaning to a container since you can use this
+name when defining
+[*links*](../../use/working_with_links_names/#working-with-links-names)
+(or any other place you need to identify a container). This works for
+both background and foreground Docker containers.
+
+#### [PID Equivalent](#id7)
+
+And finally, to help with automation, you can have Docker write the
+container ID out to a file of your choosing. This is similar to how some
+programs might write out their process ID to a file (you’ve seen them as
+PID files):
+
+    --cidfile="": Write the container ID to the file
+
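+The pattern can be used the same way as a PID file. A sketch under
+assumptions (the paths and image are illustrative; the commented lines
+need a running Docker daemon, the rest is plain shell):
+
+```shell
+# Record the container ID at start-up, act on it later:
+# sudo docker run -d --cidfile /tmp/web.cid ubuntu sleep 600
+# sudo docker stop "$(cat /tmp/web.cid)"
+
+# The same idea with a plain background process and a PID file:
+sleep 600 &
+echo $! > /tmp/web.pid
+kill "$(cat /tmp/web.pid)"
+```
+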
+### [Network Settings](#id8)
+
+    -n=true   : Enable networking for this container
+    --dns=[]  : Set custom dns servers for the container
+
+By default, all containers have networking enabled and they can make any
+outgoing connections. The operator can completely disable networking
+with `docker run -n=false`, which disables all incoming and
+outgoing networking. In cases like this, you would perform I/O through
+files or STDIN/STDOUT only.
+
+Your container will use the same DNS servers as the host by default, but
+you can override this with `--dns`.
+
+### [Clean Up (--rm)](#id9)
+
+By default a container’s file system persists even after the container
+exits. This makes debugging a lot easier (since you can inspect the
+final state) and you retain all your data by default. But if you are
+running short-term **foreground** processes, these container file
+systems can really pile up. If instead you’d like Docker to
+**automatically clean up the container and remove the file system when
+the container exits**, you can add the `--rm` flag:
+
+    --rm=false: Automatically remove the container when it exits (incompatible with -d)
+
+### [Runtime Constraints on CPU and Memory](#id10)
+
+The operator can also adjust the performance parameters of the
+container:
+
+    -m="": Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
+    -c=0 : CPU shares (relative weight)
+
+The operator can constrain the memory available to a container easily
+with `docker run -m`. If the host supports swap
+memory, then the `-m` memory setting can be larger
+than physical RAM.
+
+Similarly the operator can increase the priority of this container with
+the `-c` option. By default, all containers run at
+the same priority and get the same proportion of CPU cycles, but you can
+tell the kernel to give more shares of CPU time to one or more
+containers when you start them via Docker.
+
+### [Runtime Privilege and LXC Configuration](#id11)
+
+    --privileged=false: Give extended privileges to this container
+    --lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
+
+By default, Docker containers are "unprivileged" and cannot, for
+example, run a Docker daemon inside a Docker container. This is because
+by default a container is not allowed to access any devices, but a
+"privileged" container is given access to all devices (see
+[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go)
+and documentation on [cgroups
+devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
+
+When the operator executes `docker run --privileged`,
+Docker will enable access to all devices on the host as
+well as set some configuration in AppArmor to allow the container nearly
+all the same access to the host as processes running outside containers
+on the host. Additional information about running with
+`--privileged` is available on the [Docker
+Blog](http://blog.docker.io/2013/09/docker-can-now-run-within-docker/).
+
+If the Docker daemon was started using the `lxc`
+exec-driver (`docker -d --exec-driver=lxc`) then the
+operator can also specify LXC options using one or more
+`--lxc-conf` parameters. These can be new parameters
+or override existing parameters from the
+[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go).
+Note that in the future, a given host’s Docker daemon may not use LXC,
+so this is an implementation-specific configuration meant for operators
+already familiar with using LXC directly.
+
+## Overriding `Dockerfile` Image Defaults
+
+When a developer builds an image from a
+[*Dockerfile*](../builder/#dockerbuilder) or when she commits it, the
+developer can set a number of default parameters that take effect when
+the image starts up as a container.
+
+Four of the `Dockerfile` commands cannot be
+overridden at runtime: `FROM, MAINTAINER, RUN`, and
+`ADD`. Everything else has a corresponding override
+in `docker run`. We’ll go through what the developer
+might have set in each `Dockerfile` instruction and
+how the operator can override that setting.
+
+-   [CMD (Default Command or Options)](#cmd-default-command-or-options)
+-   [ENTRYPOINT (Default Command to Execute at
+    Runtime)](#entrypoint-default-command-to-execute-at-runtime)
+-   [EXPOSE (Incoming Ports)](#expose-incoming-ports)
+-   [ENV (Environment Variables)](#env-environment-variables)
+-   [VOLUME (Shared Filesystems)](#volume-shared-filesystems)
+-   [USER](#user)
+-   [WORKDIR](#workdir)
+
+### [CMD (Default Command or Options)](#id12)
+
+Recall the optional `COMMAND` in the Docker
+commandline:
+
+    docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
+
+This command is optional because the person who created the
+`IMAGE` may have already provided a default
+`COMMAND` using the `Dockerfile`
+`CMD`. As the operator (the person running a
+container from the image), you can override that `CMD`
+just by specifying a new `COMMAND`.
+
+If the image also specifies an `ENTRYPOINT` then the
+`CMD` or `COMMAND` get appended
+as arguments to the `ENTRYPOINT`.
+
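+As a sketch (the image name and commands are illustrative, not from the
+source), a `Dockerfile` might end with:
+
+    ENTRYPOINT ["wc"]
+    CMD ["-l"]
+
+Built as `example/wc`, running `docker run example/wc` executes
+`wc -l`, while `docker run example/wc -c` replaces the `CMD` so the
+container runs `wc -c` instead. Replacing `wc` itself requires the
+`--entrypoint` flag described next.
+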
+### [ENTRYPOINT (Default Command to Execute at Runtime)](#id13)
+
+    --entrypoint="": Overwrite the default entrypoint set by the image
+
+The ENTRYPOINT of an image is similar to a `COMMAND`
+because it specifies what executable to run when the container starts,
+but it is (purposely) more difficult to override. The
+`ENTRYPOINT` gives a container its default nature or
+behavior, so that when you set an `ENTRYPOINT` you
+can run the container *as if it were that binary*, complete with default
+options, and you can pass in more options via the `COMMAND`.
+But sometimes an operator may want to run something else
+inside the container, so you can override the default
+`ENTRYPOINT` at runtime by using a string to specify
+the new `ENTRYPOINT`. Here is an example of how to
+run a shell in a container that has been set up to automatically run
+something else (like `/usr/bin/redis-server`):
+
+    docker run -i -t --entrypoint /bin/bash example/redis
+
+or two examples of how to pass more parameters to that ENTRYPOINT:
+
+    docker run -i -t --entrypoint /bin/bash example/redis -c ls -l
+    docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help
+
+### [EXPOSE (Incoming Ports)](#id14)
+
+The `Dockerfile` doesn’t give much control over
+networking, only providing the `EXPOSE` instruction
+to give a hint to the operator about what incoming ports might provide
+services. The following options work with or override the
+`Dockerfile`‘s exposed defaults:
+
+    --expose=[]: Expose a port from the container
+                without publishing it to your host
+    -P=false   : Publish all exposed ports to the host interfaces
+    -p=[]      : Publish a container's port to the host (format:
+                 ip:hostPort:containerPort | ip::containerPort |
+                 hostPort:containerPort)
+                 (use 'docker port' to see the actual mapping)
+    --link=""  : Add link to another container (name:alias)
+
+As mentioned previously, `EXPOSE` (and
+`--expose`) makes a port available **in** a container
+for incoming connections. The port number on the inside of the container
+(where the service listens) does not need to be the same number as the
+port exposed on the outside of the container (where clients connect), so
+inside the container you might have an HTTP service listening on port 80
+(and so you `EXPOSE 80` in the
+`Dockerfile`), but outside the container the port
+might be 42800.
+
+To help a new client container reach the server container’s internal
+port, whether `--expose`’d by the operator or
+`EXPOSE`’d by the developer, the operator has three
+choices: start the server container with `-P` or
+`-p`, or start the client container with
+`--link`.
+
+If the operator uses `-P` or `-p`
+then Docker will make the exposed port accessible on the host
+and the ports will be available to any client that can reach the host.
+To find the map between the host ports and the exposed ports, use
+`docker port`.
+
+If the operator uses `--link` when starting the new
+client container, then the client container can access the exposed port
+via a private networking interface. Docker will set some environment
+variables in the client container to help indicate which interface and
+port to use.
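+
+The `-p` formats listed above can be decomposed mechanically. Here is a
+small POSIX-shell sketch (a hypothetical helper, not part of Docker)
+splitting the three forms into their ip / hostPort / containerPort parts:
+
```shell
# A hypothetical helper (not part of Docker) that splits the three
# -p formats into their ip / hostPort / containerPort parts.
parse_port_spec() {
    spec="$1"
    case "$spec" in
        *:*:*)                # ip:hostPort:containerPort (hostPort may be empty)
            ip=${spec%%:*}
            rest=${spec#*:}
            hostPort=${rest%%:*}
            containerPort=${rest#*:} ;;
        *:*)                  # hostPort:containerPort
            ip=""
            hostPort=${spec%%:*}
            containerPort=${spec#*:} ;;
        *)                    # containerPort only (as with --expose)
            ip=""
            hostPort=""
            containerPort=$spec ;;
    esac
    echo "ip='$ip' hostPort='$hostPort' containerPort='$containerPort'"
}

parse_port_spec "127.0.0.1:8080:80"   # ip='127.0.0.1' hostPort='8080' containerPort='80'
parse_port_spec "8080:80"             # ip='' hostPort='8080' containerPort='80'
parse_port_spec "127.0.0.1::80"       # ip='127.0.0.1' hostPort='' containerPort='80'
```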
+
+### [ENV (Environment Variables)](#id15)
+
+The operator can **set any environment variable** in the container by
+using one or more `-e` flags, even overriding those
+already defined by the developer with a Dockerfile `ENV`:
+
+    $ docker run -e "deep=purple" --rm ubuntu /bin/bash -c export
+    declare -x HOME="/"
+    declare -x HOSTNAME="85bc26a0e200"
+    declare -x OLDPWD
+    declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+    declare -x PWD="/"
+    declare -x SHLVL="1"
+    declare -x container="lxc"
+    declare -x deep="purple"
+
+Similarly, the operator can set the **hostname** with `-h`.
+
+`--link name:alias` also sets environment variables,
+using the *alias* string to define environment variables within the
+container that give the IP and PORT information for connecting to the
+service container. Let’s imagine we have a container running Redis:
+
+    # Start the service container, named redis-name
+    $ docker run -d --name redis-name dockerfiles/redis
+    4241164edf6f5aca5b0e9e4c9eccd899b0b8080c64c0cd26efe02166c73208f3
+
+    # The redis-name container exposed port 6379
+    $ docker ps
+    CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS               NAMES
+    4241164edf6f        dockerfiles/redis:latest   /redis-stable/src/re   5 seconds ago       Up 4 seconds        6379/tcp            redis-name
+
+    # Note that there are no public ports exposed since we didn't use -p or -P
+    $ docker port 4241164edf6f 6379
+    2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f
+
+Yet we can get information about the Redis container’s exposed ports
+with `--link`. Choose an alias that will form a
+valid environment variable!
+
+    $ docker run --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c export
+    declare -x HOME="/"
+    declare -x HOSTNAME="acda7f7b1cdc"
+    declare -x OLDPWD
+    declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+    declare -x PWD="/"
+    declare -x REDIS_ALIAS_NAME="/distracted_wright/redis"
+    declare -x REDIS_ALIAS_PORT="tcp://172.17.0.32:6379"
+    declare -x REDIS_ALIAS_PORT_6379_TCP="tcp://172.17.0.32:6379"
+    declare -x REDIS_ALIAS_PORT_6379_TCP_ADDR="172.17.0.32"
+    declare -x REDIS_ALIAS_PORT_6379_TCP_PORT="6379"
+    declare -x REDIS_ALIAS_PORT_6379_TCP_PROTO="tcp"
+    declare -x SHLVL="1"
+    declare -x container="lxc"
+
+And we can use that information to connect from another container as a
+client:
+
+    $ docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'
+    172.17.0.32:6379>
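+
+The aggregate `REDIS_ALIAS_PORT_6379_TCP` variable carries the same
+information as the pre-split `_ADDR` and `_PORT` variables, so a client
+script could also derive them itself. An illustrative sketch using plain
+shell parameter expansion (the value below is copied from the example
+output above):
+
```shell
# Split tcp://addr:port into its pieces with POSIX parameter expansion.
REDIS_ALIAS_PORT_6379_TCP="tcp://172.17.0.32:6379"

hostport=${REDIS_ALIAS_PORT_6379_TCP#tcp://}   # strip the scheme
addr=${hostport%:*}                            # everything before the last ':'
port=${hostport##*:}                           # everything after the last ':'

echo "$addr $port"    # 172.17.0.32 6379
```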
+
+### [VOLUME (Shared Filesystems)](#id16)
+
+    -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
+           If "container-dir" is missing, then docker creates a new volume.
+    --volumes-from="": Mount all volumes from the given container(s)
+
+The volumes commands are complex enough to have their own documentation
+in section [*Share Directories via
+Volumes*](../../use/working_with_volumes/#volume-def). A developer can
+define one or more `VOLUME`s associated with an
+image, but only the operator can give access from one container to
+another (or from a container to a volume mounted on the host).
+
+### [USER](#id17)
+
+The default user within a container is `root` (id =
+0), but if the developer created additional users, those are accessible
+too. The developer can set a default user to run the first process with
+the `Dockerfile USER` command, but the operator can
+override it:
+
+    -u="": Username or UID
+
+### [WORKDIR](#id18)
+
+The default working directory for running binaries within a container is
+the root directory (`/`), but the developer can set
+a different default with the `Dockerfile WORKDIR`
+command. The operator can override this with:
+
+    -w="": Working directory inside the container

+ 10 - 0
docs/sources/search.md

@@ -0,0 +1,10 @@
+# Search
+
+*Please activate JavaScript to enable the search functionality.*
+
+## How To Search
+
+From here you can search these documents. Enter your search words into
+the box below and click "search". Note that the search function will
+automatically search for all of the words. Pages containing fewer words
+won't appear in the result list.

+ 13 - 0
docs/sources/terms.md

@@ -0,0 +1,13 @@
+# Glossary
+
+*Definitions of terms used in Docker documentation.*
+
+## Contents:
+
+- [File System](filesystem/)
+- [Layers](layer/)
+- [Image](image/)
+- [Container](container/)
+- [Registry](registry/)
+- [Repository](repository/)
+

+ 46 - 0
docs/sources/terms/container.md

@@ -0,0 +1,46 @@
+page_title: Container
+page_description: Definitions of a container
+page_keywords: containers, lxc, concepts, explanation, image, container
+
+# Container
+
+## Introduction
+
+![](../../_images/docker-filesystems-busyboxrw.png)
+
+Once you start a process in Docker from an
+[*Image*](../image/#image-def), Docker fetches the image and its
+[*Parent Image*](../image/#parent-image-def), and repeats the process
+until it reaches the [*Base Image*](../image/#base-image-def). Then the
+[*Union File System*](../layer/#ufs-def) adds a read-write layer on top.
+That read-write layer, plus the information about its [*Parent
+Image*](../image/#parent-image-def) and some additional information like
+its unique id, networking configuration, and resource limits is called a
+**container**.
+
+## Container State
+
+Containers can change, and so they have state. A container may be
+**running** or **exited**.
+
+When a container is running, the idea of a "container" also includes a
+tree of processes running on the CPU, isolated from the other processes
+running on the host.
+
+When the container is exited, the state of the file system and its exit
+value is preserved. You can start, stop, and restart a container. The
+processes restart from scratch (their memory state is **not** preserved
+in a container), but the file system is just as it was when the
+container was stopped.
+
+You can promote a container to an [*Image*](../image/#image-def) with
+`docker commit`. Once a container is an image, you
+can use it as a parent for new containers.
+
+## Container IDs
+
+All containers are identified by a 64 hexadecimal digit string
+(internally a 256-bit value). To simplify their use, a short ID of the
+first 12 characters can be used on the command line. There is a small
+possibility of short ID collisions, so the docker server will always
+return the long ID.
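+
+The short form is simply the first 12 characters of the full ID, as this
+sketch shows (the ID below is a hypothetical example):
+
```shell
# Deriving the 12-character short form from a (hypothetical) full ID.
long_id="4241164edf6f5aca5b0e9e4c9eccd899b0b8080c64c0cd26efe02166c73208f3"
short_id=$(printf '%s' "$long_id" | cut -c1-12)
echo "$short_id"   # 4241164edf6f
```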

+ 36 - 0
docs/sources/terms/filesystem.md

@@ -0,0 +1,36 @@
+page_title: File Systems
+page_description: How Linux organizes its persistent storage
+page_keywords: containers, files, linux
+
+# File System
+
+## Introduction
+
+![](../../_images/docker-filesystems-generic.png)
+
+In order for a Linux system to run, it typically needs two [file
+systems](http://en.wikipedia.org/wiki/Filesystem):
+
+1.  boot file system (bootfs)
+2.  root file system (rootfs)
+
+The **boot file system** contains the bootloader and the kernel. The
+user never makes any changes to the boot file system. In fact, soon
+after the boot process is complete, the entire kernel is in memory, and
+the boot file system is unmounted to free up the RAM associated with the
+initrd disk image.
+
+The **root file system** includes the typical directory structure we
+associate with Unix-like operating systems:
+`/dev`, `/proc`, `/bin`, `/etc`, `/lib`, `/usr`, and
+`/tmp`, plus all the configuration files, binaries
+and libraries required to run user applications (like bash, ls, and so
+forth).
+
+While there can be important kernel differences between different Linux
+distributions, the contents and organization of the root file system are
+usually what make your software packages dependent on one distribution
+versus another. Docker can help solve this problem by running multiple
+distributions at the same time.
+
+![](../../_images/docker-filesystems-multiroot.png)

+ 40 - 0
docs/sources/terms/image.md

@@ -0,0 +1,40 @@
+page_title: Images
+page_description: Definition of an image
+page_keywords: containers, lxc, concepts, explanation, image, container
+
+# Image
+
+## Introduction
+
+![](../../_images/docker-filesystems-debian.png)
+
+In Docker terminology, a read-only [*Layer*](../layer/#layer-def) is
+called an **image**. An image never changes.
+
+Since Docker uses a [*Union File System*](../layer/#ufs-def), the
+processes think the whole file system is mounted read-write. But all the
+changes go to the top-most writeable layer, and underneath, the original
+file in the read-only image is unchanged. Since images don’t change,
+images do not have state.
+
+![](../../_images/docker-filesystems-debianrw.png)
+
+## Parent Image
+
+![](../../_images/docker-filesystems-multilayer.png)
+
+Each image may depend on one other image, which forms the layer beneath
+it. We sometimes say that the lower image is the **parent** of the upper
+image.
+
+## Base Image
+
+An image that has no parent is a **base image**.
+
+## Image IDs
+
+All images are identified by a 64 hexadecimal digit string (internally a
+256-bit value). To simplify their use, a short ID of the first 12
+characters can be used on the command line. There is a small possibility
+of short ID collisions, so the docker server will always return the long
+ID.

+ 35 - 0
docs/sources/terms/layer.md

@@ -0,0 +1,35 @@
+page_title: Layers
+page_description: Organizing the Docker Root File System
+page_keywords: containers, lxc, concepts, explanation, image, container
+
+# Layers
+
+## Introduction
+
+In a traditional Linux boot, the kernel first mounts the root [*File
+System*](../filesystem/#filesystem-def) as read-only, checks its
+integrity, and then switches the whole rootfs volume to read-write mode.
+
+## Layer
+
+When Docker mounts the rootfs, it starts read-only, as in a traditional
+Linux boot, but then, instead of changing the file system to read-write
+mode, it takes advantage of a [union
+mount](http://en.wikipedia.org/wiki/Union_mount) to add a read-write
+file system *over* the read-only file system. In fact there may be
+multiple read-only file systems stacked on top of each other. We think
+of each one of these file systems as a **layer**.
+
+![](../../_images/docker-filesystems-multilayer.png)
+
+At first, the top read-write layer has nothing in it, but any time a
+process creates a file, this happens in the top layer. And if something
+needs to update an existing file in a lower layer, then the file gets
+copied to the upper layer and changes go into the copy. The version of
+the file on the lower layer cannot be seen by the applications anymore,
+but it is there, unchanged.
+
+## Union File System
+
+We call the union of the read-write layer and all the read-only layers a
+**union file system**.

+ 20 - 0
docs/sources/terms/registry.md

@@ -0,0 +1,20 @@
+page_title: Registry
+page_description: Definition of a Registry
+page_keywords: containers, lxc, concepts, explanation, image, repository, container
+
+# Registry
+
+## Introduction
+
+A Registry is a hosted service containing
+[*repositories*](../repository/#repository-def) of
+[*images*](../image/#image-def) which responds to the Registry API.
+
+The default registry can be accessed using a browser at
+[http://images.docker.io](http://images.docker.io) or using the
+`sudo docker search` command.
+
+## Further Reading
+
+For more information see [*Working with
+Repositories*](../../use/workingwithrepository/#working-with-the-repository)

+ 39 - 0
docs/sources/terms/repository.md

@@ -0,0 +1,39 @@
+page_title: Repository
+page_description: Definition of a Repository
+page_keywords: containers, lxc, concepts, explanation, image, repository, container
+
+# Repository
+
+## Introduction
+
+A repository is a set of images, either on your local Docker server or
+shared by pushing them to a [*Registry*](../registry/#registry-def)
+server.
+
+Images can be associated with a repository (or multiple) by giving them
+an image name using one of three different commands:
+
+1.  At build time (e.g. `sudo docker build -t IMAGENAME`),
+2.  When committing a container (e.g.
+    `sudo docker commit CONTAINERID IMAGENAME`) or
+3.  When tagging an image id with an image name (e.g.
+    `sudo docker tag IMAGEID IMAGENAME`).
+
+A Fully Qualified Image Name (FQIN) can be made up of 3 parts:
+
+`[registry_hostname[:port]/][user_name/](repository_name:version_tag)`
+
+`user_name` and `registry_hostname`
+default to an empty string. When `registry_hostname`
+is an empty string, then `docker push`
+will push to `index.docker.io:80`.
+
+If you create a new repository which you want to share, you will need to
+set at least the `user_name`, as the ‘default’ blank
+`user_name` prefix is reserved for official Docker
+images.
+
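+A FQIN with all three parts present can be split mechanically. A sketch
+(not Docker's actual parser, and with hypothetical example values):
+
```shell
# Split a fully-populated FQIN into registry, user, repository, and tag.
fqin="myregistry.example.com:5000/jdoe/myapp:1.0"

registry=${fqin%%/*}     # myregistry.example.com:5000
rest=${fqin#*/}          # jdoe/myapp:1.0
user=${rest%%/*}         # jdoe
repo_tag=${rest#*/}      # myapp:1.0
repo=${repo_tag%%:*}     # myapp
tag=${repo_tag##*:}      # 1.0

echo "$registry $user $repo $tag"
```
+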
+For more information see [*Working with
+Repositories*](../../use/workingwithrepository/#working-with-the-repository)

+ 17 - 0
docs/sources/toctree.md

@@ -0,0 +1,17 @@
+page_title: Documentation
+page_description: -- todo: change me
+page_keywords: todo, docker, documentation, installation, usage, examples, contributing, faq, command line, concepts
+
+# Documentation
+
+This documentation has the following resources:
+
+-   [Installation](../installation/)
+-   [Use](../use/)
+-   [Examples](../examples/)
+-   [Reference Manual](../reference/)
+-   [Contributing](../contributing/)
+-   [Glossary](../terms/)
+-   [Articles](../articles/)
+-   [FAQ](../faq/)
+

+ 13 - 0
docs/sources/use.md

@@ -0,0 +1,13 @@
+# Use
+
+## Contents:
+
+- [First steps with Docker](basics/)
+- [Share Images via Repositories](workingwithrepository/)
+- [Redirect Ports](port_redirection/)
+- [Configure Networking](networking/)
+- [Automatically Start Containers](host_integration/)
+- [Share Directories via Volumes](working_with_volumes/)
+- [Link Containers](working_with_links_names/)
+- [Link via an Ambassador Container](ambassador_pattern_linking/)
+- [Using Puppet](puppet/)

+ 157 - 0
docs/sources/use/ambassador_pattern_linking.md

@@ -0,0 +1,157 @@
+page_title: Link via an Ambassador Container
+page_description: Using the Ambassador pattern to abstract (network) services
+page_keywords: Examples, Usage, links, docker, documentation, examples, names, name, container naming
+
+# Link via an Ambassador Container
+
+## Introduction
+
+Rather than hardcoding network links between a service consumer and
+provider, Docker encourages service portability.
+
+For example, instead of
+
+    (consumer) --> (redis)
+
+requiring you to restart the `consumer` to attach it
+to a different `redis` service, you can add
+ambassadors
+
+    (consumer) --> (redis-ambassador) --> (redis)
+
+    or
+
+    (consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)
+
+When you need to rewire your consumer to talk to a different redis
+server, you can just restart the `redis-ambassador`
+container that the consumer is connected to.
+
+This pattern also allows you to transparently move the redis server to a
+different docker host from the consumer.
+
+Using the `svendowideit/ambassador` container, the
+link wiring is controlled entirely from the `docker run`
+parameters.
+
+## Two host Example
+
+Start actual redis server on one Docker host
+
+    big-server $ docker run -d -name redis crosbymichael/redis
+
+Then add an ambassador linked to the redis server, mapping a port to the
+outside world
+
+    big-server $ docker run -d -link redis:redis -name redis_ambassador -p 6379:6379 svendowideit/ambassador
+
+On the other host, you can set up another ambassador setting environment
+variables for each remote port we want to proxy to the
+`big-server`
+
+    client-server $ docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador
+
+Then on the `client-server` host, you can use a
+redis client container to talk to the remote redis server, just by
+linking to the local redis ambassador.
+
+    client-server $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
+    redis 172.17.0.160:6379> ping
+    PONG
+
+## How it works
+
+The following example shows what the `svendowideit/ambassador`
+container does automatically (with a tiny amount of
+`sed`)
+
+On the docker host (192.168.1.52) that redis will run on:
+
+    # start actual redis server
+    $ docker run -d -name redis crosbymichael/redis
+
+    # get a redis-cli container for connection testing
+    $ docker pull relateiq/redis-cli
+
+    # test the redis server by talking to it directly
+    $ docker run -t -i -rm -link redis:redis relateiq/redis-cli
+    redis 172.17.0.136:6379> ping
+    PONG
+    ^D
+
+    # add redis ambassador
+    $ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 busybox sh
+
+In the redis\_ambassador container, you can see the linked redis
+container’s env:
+
+    $ env
+    REDIS_PORT=tcp://172.17.0.136:6379
+    REDIS_PORT_6379_TCP_ADDR=172.17.0.136
+    REDIS_NAME=/redis_ambassador/redis
+    HOSTNAME=19d7adf4705e
+    REDIS_PORT_6379_TCP_PORT=6379
+    HOME=/
+    REDIS_PORT_6379_TCP_PROTO=tcp
+    container=lxc
+    REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
+    TERM=xterm
+    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+    PWD=/
+
+This environment is used by the ambassador socat script to expose redis
+to the world (via the `-p 6379:6379` port mapping):
+
+    $ docker rm redis_ambassador
+    $ sudo ./contrib/mkimage-unittest.sh
+    $ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 docker-ut sh
+
+    $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379
+
+Then ping the redis server via the ambassador.
+
+Now go to a different server:
+
+    $ sudo ./contrib/mkimage-unittest.sh
+    $ docker run -t -i  -expose 6379 -name redis_ambassador docker-ut sh
+
+    $ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
+
+And get the redis-cli image so we can talk over the ambassador bridge:
+
+    $ docker pull relateiq/redis-cli
+    $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
+    redis 172.17.0.160:6379> ping
+    PONG
+
+## The svendowideit/ambassador Dockerfile
+
+The `svendowideit/ambassador` image is a small
+busybox image with `socat` built in. When you start
+the container, it uses a small `sed` script to parse
+out the (possibly multiple) link environment variables to set up the
+port forwarding. On the remote host, you need to set the variable using
+the `-e` command line option.
+
+`--expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379`
+will forward the local `1234` port to the
+remote IP and port - in this case `192.168.1.52:6379`.
+
+    #
+    #
+    # first you need to build the docker-ut image
+    # using ./contrib/mkimage-unittest.sh
+    # then
+    #   docker build -t SvenDowideit/ambassador .
+    #   docker tag SvenDowideit/ambassador ambassador
+    # then to run it (on the host that has the real backend on it)
+    #   docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 ambassador
+    # on the remote host, you can set up another ambassador
+    #    docker run -t -i -name redis_ambassador -expose 6379 sh
+
+    FROM    docker-ut
+    MAINTAINER      SvenDowideit@home.org.au
+
+
+    CMD     env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'  | sh && top
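+
+You can see what that `CMD` sed script generates by running the same
+transformation against a sample link variable on any machine with `sed`
+(no container needed):
+
```shell
# The ambassador's sed one-liner, applied to one sample line of `env`
# output; it rewrites the link variable into a socat forwarding command.
echo "REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379" \
  | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'
# -> socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379 &
```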

+ 180 - 0
docs/sources/use/basics.md

@@ -0,0 +1,180 @@
+page_title: First steps with Docker
+page_description: Common usage and commands
+page_keywords: Examples, Usage, basic commands, docker, documentation, examples
+
+# First steps with Docker
+
+## Check your Docker install
+
+This guide assumes you have a working installation of Docker. To check
+your Docker install, run the following command:
+
+    # Check that you have a working install
+    docker info
+
+If you get `docker: command not found` or something
+like `/var/lib/docker/repositories: permission denied`
+you may have an incomplete docker installation or insufficient
+privileges to access Docker on your machine.
+
+Please refer to [*Installation*](../../installation/#installation-list)
+for installation instructions.
+
+## Download a pre-built image
+
+    # Download an ubuntu image
+    sudo docker pull ubuntu
+
+This will find the `ubuntu` image by name in the
+[*Central Index*](../workingwithrepository/#searching-central-index) and
+download it from the top-level Central Repository to a local image
+cache.
+
+Note
+
+When the image has successfully downloaded, you will see a 12 character
+hash `539c0211cd76: Download complete` which is the
+short form of the image ID. These short image IDs are the first 12
+characters of the full image ID - which can be found using
+`docker inspect` or
+`docker images --no-trunc=true`
+
+**If you’re using OS X** then you shouldn’t use `sudo`.
+
+## Running an interactive shell
+
+    # Run an interactive shell in the ubuntu image,
+    # allocate a tty, attach stdin and stdout
+    # To detach the tty without exiting the shell,
+    # use the escape sequence Ctrl-p + Ctrl-q
+    # note: This will continue to exist in a stopped state once exited (see "docker ps -a")
+    sudo docker run -i -t ubuntu /bin/bash
+
+## Bind Docker to another host/port or a Unix socket
+
+Warning
+
+Changing the default `docker` daemon binding to a
+TCP port or Unix *docker* user group will increase your security risks
+by allowing non-root users to gain *root* access on the host. Make sure
+you control access to `docker`. If you are binding
+to a TCP port, anyone with access to that port has full Docker access;
+so it is not advisable on an open network.
+
+With `-H` it is possible to make the Docker daemon
+listen on a specific IP and port. By default, it will listen on
+`unix:///var/run/docker.sock` to allow only local
+connections by the *root* user. You *could* set it to
+`0.0.0.0:4243` or a specific host IP to give access
+to everybody, but that is **not recommended** because then it is trivial
+for someone to gain root access to the host where the daemon is running.
+
+Similarly, the Docker client can use `-H` to connect
+to a custom port.
+
+`-H` accepts host and port assignment in the
+following format: `tcp://[host][:port]` or
+`unix://path`
+
+For example:
+
+-   `tcp://host:4243` -\> tcp connection on
+    host:4243
+-   `unix://path/to/socket` -\> unix socket located
+    at `path/to/socket`
+
+`-H`, when empty, will default to the same value as
+when no `-H` was passed in.
+
+`-H` also accepts short form for TCP bindings:
+`host[:port]` or `:port`
+
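+The short-form rules above can be sketched as a normalization function
+(a hypothetical sketch, not Docker's actual parser; the fallback to
+`127.0.0.1` for a bare `:port` is an assumption here):
+
```shell
# Hypothetical normalization of -H values into a full address string.
normalize_host() {
    case "$1" in
        tcp://*|unix://*) echo "$1" ;;                 # already fully specified
        :*)               echo "tcp://127.0.0.1$1" ;;  # ":port" short form (assumed default host)
        *)                echo "tcp://$1" ;;           # "host[:port]" short form
    esac
}

normalize_host ":5555"          # tcp://127.0.0.1:5555
normalize_host "0.0.0.0:4243"   # tcp://0.0.0.0:4243
```
+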
+    # Run docker in daemon mode
+    sudo <path to>/docker -H 0.0.0.0:5555 -d &
+    # Download an ubuntu image
+    sudo docker -H :5555 pull ubuntu
+
+You can use multiple `-H`, for example, if you want
+to listen on both TCP and a Unix socket
+
+    # Run docker in daemon mode
+    sudo <path to>/docker -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -d &
+    # Download an ubuntu image, use default Unix socket
+    sudo docker pull ubuntu
+    # OR use the TCP port
+    sudo docker -H tcp://127.0.0.1:4243 pull ubuntu
+
+## Starting a long-running worker process
+
+    # Start a very useful long-running process
+    JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
+
+    # Collect the output of the job so far
+    sudo docker logs $JOB
+
+    # Kill the job
+    sudo docker kill $JOB
+
+## Listing containers
+
+    sudo docker ps # Lists only running containers
+    sudo docker ps -a # Lists all containers
+
+## Controlling containers
+
+    # Start a new container
+    JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
+
+    # Stop the container
+    docker stop $JOB
+
+    # Start the container
+    docker start $JOB
+
+    # Restart the container
+    docker restart $JOB
+
+    # SIGKILL a container
+    docker kill $JOB
+
+    # Remove a container
+    docker stop $JOB # Container must be stopped to remove it
+    docker rm $JOB
+
+## Bind a service on a TCP port
+
+    # Bind port 4444 of this container, and tell netcat to listen on it
+    JOB=$(sudo docker run -d -p 4444 ubuntu:12.10 /bin/nc -l 4444)
+
+    # Which public port is NATed to my container?
+    PORT=$(sudo docker port $JOB 4444 | awk -F: '{ print $2 }')
+
+    # Connect to the public port
+    echo hello world | nc 127.0.0.1 $PORT
+
+    # Verify that the network connection worked
+    echo "Daemon received: $(sudo docker logs $JOB)"
+
+## Committing (saving) a container state
+
+Save your container’s state to a container image, so the state can be
+re-used.
+
+When you commit your container, only the differences between the image
+the container was created from and the current state of the container
+will be stored (as a diff). See which images you already have using the
+`docker images` command.
+
+    # Commit your container to a new named image
+    sudo docker commit <container_id> <some_name>
+
+    # List your containers
+    sudo docker images
+
+You now have an image state from which you can create new instances.
+
+Read more about [*Share Images via
+Repositories*](../workingwithrepository/#working-with-the-repository) or
+continue to the complete [*Command
+Line*](../../reference/commandline/cli/#cli)

+ 75 - 0
docs/sources/use/chef.md

@@ -0,0 +1,75 @@
+page_title: Chef Usage
+page_description: Installation and using Docker via Chef
+page_keywords: chef, installation, usage, docker, documentation
+
+# Using Chef
+
+Note
+
+Please note this is a community contributed installation path. The only
+‘official’ installation is using the
+[*Ubuntu*](../../installation/ubuntulinux/#ubuntu-linux) installation
+path. This version may sometimes be out of date.
+
+## Requirements
+
+To use this guide you’ll need a working installation of
+[Chef](http://www.getchef.com/). This cookbook supports a variety of
+operating systems.
+
+## Installation
+
+The cookbook is available on the [Chef Community
+Site](http://community.opscode.com/cookbooks/docker) and can be installed
+using your favorite cookbook dependency manager.
+
+The source can be found on
+[GitHub](https://github.com/bflad/chef-docker).
+
+## Usage
+
+The cookbook provides recipes for installing Docker, configuring init
+for Docker, and resources for managing images and containers. It
+supports almost all Docker functionality.
+
+### Installation
+
+    include_recipe 'docker'
+
+### Images
+
+The next step is to pull a Docker image. For this, we have a resource:
+
+    docker_image 'samalba/docker-registry'
+
+This is equivalent to running:
+
+    docker pull samalba/docker-registry
+
+There are attributes available to control how long the cookbook will
+allow for downloading (5 minute default).
+
+To remove images you no longer need:
+
+    docker_image 'samalba/docker-registry' do
+      action :remove
+    end
+
+### Containers
+
+Now you have an image where you can run commands within a container
+managed by Docker.
+
+    docker_container 'samalba/docker-registry' do
+      detach true
+      port '5000:5000'
+      env 'SETTINGS_FLAVOR=local'
+      volume '/mnt/docker:/docker-storage'
+    end
+
+This is equivalent to running the following command, but under upstart:
+
+    docker run --detach=true --publish='5000:5000' --env='SETTINGS_FLAVOR=local' --volume='/mnt/docker:/docker-storage' samalba/docker-registry
+
+The resources will accept a single string or an array of values for any
+docker flags that allow multiple values.

+ 63 - 0
docs/sources/use/host_integration.md

@@ -0,0 +1,63 @@
+page_title: Automatically Start Containers
+page_description: How to generate scripts for upstart, systemd, etc.
+page_keywords: systemd, upstart, supervisor, docker, documentation, host integration
+
+# Automatically Start Containers
+
+You can use your Docker containers with process managers like
+`upstart`, `systemd` and
+`supervisor`.
+
+## Introduction
+
+If you want a process manager to manage your containers you will need to
+run the docker daemon with the `-r=false` option so that
+docker will not automatically restart your containers when the host is
+restarted.
+
+When you have finished setting up your image and are happy with your
+running container, you can then attach a process manager to manage it.
+When you run `docker start -a`, docker will
+automatically attach to the running container, or start it if needed, and
+forward all signals so that the process manager can detect when a
+container stops and correctly restart it.
+
+Here are a few sample scripts for systemd and upstart to integrate with
+docker.
+
+## Sample Upstart Script
+
+In this example we’ve already created a container to run Redis with
+`--name redis_server`. To create an upstart script
+for our container, we create a file named
+`/etc/init/redis.conf` and place the following into
+it:
+
+    description "Redis container"
+    author "Me"
+    start on filesystem and started docker
+    stop on runlevel [!2345]
+    respawn
+    script
+      /usr/bin/docker start -a redis_server
+    end script
+
+Next, we have to configure docker so that it’s run with the option
+`-r=false`. Run the following command:
+
+    $ sudo sh -c "echo 'DOCKER_OPTS=\"-r=false\"' > /etc/default/docker"
+
+## Sample systemd Script
+
+    [Unit]
+    Description=Redis container
+    Author=Me
+    After=docker.service
+
+    [Service]
+    Restart=always
+    ExecStart=/usr/bin/docker start -a redis_server
+    ExecStop=/usr/bin/docker stop -t 2 redis_server
+
+    [Install]
+    WantedBy=multi-user.target

+ 142 - 0
docs/sources/use/networking.md

@@ -0,0 +1,142 @@
+page_title: Configure Networking
+page_description: Docker networking
+page_keywords: network, networking, bridge, docker, documentation
+
+# Configure Networking
+
+## Introduction
+
+Docker uses Linux bridge capabilities to provide network connectivity to
+containers. The `docker0` bridge interface is
+managed by Docker for this purpose. When the Docker daemon starts, it:
+
+- creates the `docker0` bridge if not present
+- searches for an IP address range which doesn’t overlap with an existing route
+- picks an IP in the selected range
+- assigns this IP to the `docker0` bridge
+
+<!-- -->
+
+    # List host bridges
+    $ sudo brctl show
+    bridge      name    bridge id               STP enabled     interfaces
+    docker0             8000.000000000000       no
+
+    # Show docker0 IP address
+    $ sudo ifconfig docker0
+    docker0   Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
+         inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
+
+At runtime, a [*specific kind of virtual interface*](#vethxxxx-device)
+is given to each container, which is then bonded to the
+`docker0` bridge. Each container also receives a
+dedicated IP address from the same range as `docker0`.
+The `docker0` IP address is used as the
+default gateway for the container.
+
+    # Run a container
+    $ sudo docker run -t -i -d base /bin/bash
+    52f811c5d3d69edddefc75aff5a4525fc8ba8bcfa1818132f9dc7d4f7c7e78b4
+
+    $ sudo brctl show
+    bridge      name    bridge id               STP enabled     interfaces
+    docker0             8000.fef213db5a66       no              vethQCDY1N
+
+Above, `docker0` acts as a bridge for the
+`vethQCDY1N` interface which is dedicated to the
+52f811c5d3d6 container.
+
+## How to use a specific IP address range
+
+Docker will try hard to find an IP range that is not used by the host.
+Even though it works for most cases, it’s not bullet-proof and sometimes
+you need to have more control over the IP addressing scheme.
+
+For this purpose, Docker allows you to manage the `docker0`
+bridge, or replace it with a bridge of your own, using the
+`-b=<bridgename>` parameter.
+
+In this scenario:
+
+-   ensure Docker is stopped
+-   create your own bridge (`bridge0` for example)
+-   assign a specific IP to this bridge
+-   start Docker with the `-b=bridge0` parameter
+
+<!-- -->
+
+    # Stop Docker
+    $ sudo service docker stop
+
+    # Clean docker0 bridge and
+    # add your very own bridge0
+    $ sudo ifconfig docker0 down
+    $ sudo brctl addbr bridge0
+    $ sudo ifconfig bridge0 192.168.227.1 netmask 255.255.255.0
+
+    # Edit your Docker startup file
+    $ echo "DOCKER_OPTS=\"-b=bridge0\"" >> /etc/default/docker
+
+    # Start Docker
+    $ sudo service docker start
+
+    # Ensure bridge0 IP is not changed by Docker
+    $ sudo ifconfig bridge0
+    bridge0   Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
+              inet addr:192.168.227.1  Bcast:192.168.227.255  Mask:255.255.255.0
+
+    # Run a container
+    $ docker run -i -t base /bin/bash
+
+    # Container IP in the 192.168.227/24 range
+    root@261c272cd7d5:/# ifconfig eth0
+    eth0      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
+              inet addr:192.168.227.5  Bcast:192.168.227.255  Mask:255.255.255.0
+
+    # bridge0 IP as the default gateway
+    root@261c272cd7d5:/# route -n
+    Kernel IP routing table
+    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
+    0.0.0.0         192.168.227.1   0.0.0.0         UG    0      0        0 eth0
+    192.168.227.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
+
+    # hit CTRL+P then CTRL+Q to detach
+
+    # Display bridge info
+    $ sudo brctl show
+    bridge      name    bridge id               STP enabled     interfaces
+    bridge0             8000.fe7c2e0faebd       no              vethAQI2QT
+
+## Container intercommunication
+
+The value of the Docker daemon’s `icc` parameter
+determines whether containers can communicate with each other over the
+bridge network.
+
+-   The default, `-icc=true` allows containers to
+    communicate with each other.
+-   `-icc=false` means containers are isolated from
+    each other.
+
+Docker uses `iptables` under the hood to either
+accept or drop communication between containers.
+
+## What is the vethXXXX device?
+
+Well. Things get complicated here.
+
+The `vethXXXX` interface is the host side of a
+point-to-point link between the host and the corresponding container;
+the other side of the link is the container’s `eth0`
+interface. This pair (host `vethXXX` and container
+`eth0`) are connected like a tube. Everything that
+comes in one side will come out the other side.
+
+All the plumbing is delegated to Linux network capabilities (check the
+ip link command) and the namespaces infrastructure.
+
+## I want more
+
+Jérôme Petazzoni has created `pipework` to connect
+containers together in arbitrarily complex scenarios:
+[https://github.com/jpetazzo/pipework](https://github.com/jpetazzo/pipework)

+ 140 - 0
docs/sources/use/port_redirection.md

@@ -0,0 +1,140 @@
+page_title: Redirect Ports
+page_description: usage about port redirection
+page_keywords: Usage, basic port, docker, documentation, examples
+
+# Redirect Ports
+
+## Introduction
+
+Interacting with a service is commonly done through a connection to a
+port. When this service runs inside a container, one can connect to the
+port after finding the IP address of the container as follows:
+
+    # Find IP address of container with ID <container_id>
+    docker inspect <container_id> | grep IPAddress | cut -d '"' -f 4
+
+However, this IP address is local to the host system and the container
+port is not reachable by the outside world. Furthermore, even if the
+port is used locally, e.g. by another container, this method is tedious
+as the IP address of the container changes every time it starts.
+
+Docker addresses these two problems and gives a simple and robust way to
+access services running inside containers.
+
+To allow non-local clients to reach the service running inside the
+container, Docker provides ways to bind the container port to an
+interface of the host system. To simplify communication between
+containers, Docker provides the linking mechanism.
+
+## Auto map all exposed ports on the host
+
+To bind all the exposed container ports to the host automatically, use
+`docker run -P <imageid>`. The mapped host ports
+will be auto-selected from a pool of unused ports (49000..49900), and
+you will need to use `docker ps`,
+`docker inspect <container_id>` or
+`docker port <container_id> <port>` to determine
+what they are.
+
+## Binding a port to a host interface
+
+To bind a port of the container to a specific interface of the host
+system, use the `-p` parameter of the
+`docker run` command:
+
+    # General syntax
+    docker run -p [([<host_interface>:[host_port]])|(<host_port>):]<container_port>[/udp] <image> <cmd>
+
+When no host interface is provided, the port is bound to all available
+interfaces of the host machine (aka INADDR\_ANY, or 0.0.0.0). When no
+host port is provided, one is dynamically allocated. The possible
+combinations of options for a TCP port are the following:
+
+    # Bind TCP port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine.
+    docker run -p 127.0.0.1:80:8080 <image> <cmd>
+
+    # Bind TCP port 8080 of the container to a dynamically allocated TCP port on 127.0.0.1 of the host machine.
+    docker run -p 127.0.0.1::8080 <image> <cmd>
+
+    # Bind TCP port 8080 of the container to TCP port 80 on all available interfaces of the host machine.
+    docker run -p 80:8080 <image> <cmd>
+
+    # Bind TCP port 8080 of the container to a dynamically allocated TCP port on all available interfaces of the host machine.
+    docker run -p 8080 <image> <cmd>
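The general `-p` syntax above can be illustrated with a small parser. `parse_port_spec` is a hypothetical helper, not part of docker; it is only a sketch of how the fields decompose:

```shell
# Sketch: split a -p port specification of the form
# [interface:[host_port]:]container_port[/udp] into its fields.
# parse_port_spec is a hypothetical helper, not a docker command.
parse_port_spec() {
    spec=${1%/udp}    # drop an optional trailing /udp
    case $spec in
        *:*:*)  iface=${spec%%:*}; rest=${spec#*:}
                host_port=${rest%%:*}; container_port=${rest#*:} ;;
        *:*)    iface=""; host_port=${spec%%:*}; container_port=${spec#*:} ;;
        *)      iface=""; host_port=""; container_port=$spec ;;
    esac
    echo "iface=${iface:-any} host=${host_port:-dynamic} container=$container_port"
}

parse_port_spec 127.0.0.1:80:8080   # iface=127.0.0.1 host=80 container=8080
parse_port_spec 127.0.0.1::8080     # iface=127.0.0.1 host=dynamic container=8080
parse_port_spec 80:8080             # iface=any host=80 container=8080
parse_port_spec 8080                # iface=any host=dynamic container=8080
```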
+
+UDP ports can also be bound by adding a trailing `/udp`.
+All the combinations described for TCP work. Here is only one
+example:
+
+    # Bind UDP port 5353 of the container to UDP port 53 on 127.0.0.1 of the host machine.
+    docker run -p 127.0.0.1:53:5353/udp <image> <cmd>
+
+The command `docker port` lists the interface and
+port on the host machine bound to a given container port. It is useful
+when using dynamically allocated ports:
+
+    # Bind to a dynamically allocated port
+    docker run -p 127.0.0.1::8080 -name dyn-bound <image> <cmd>
+
+    # Lookup the actual port
+    docker port dyn-bound 8080
+    127.0.0.1:49160
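The `interface:port` line printed by `docker port` can be split with plain shell parameter expansion. A sketch, with the mapping value hard-coded in place of a live `docker port` call:

```shell
# Sketch: split the "interface:port" line printed by `docker port`
# into its two parts using shell parameter expansion.
mapping="127.0.0.1:49160"     # e.g. $(docker port dyn-bound 8080)
host=${mapping%:*}            # strip the trailing :port
port=${mapping##*:}           # keep only what follows the last colon
echo "host=$host port=$port"  # host=127.0.0.1 port=49160
```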
+
+## Linking a container
+
+Communication between two containers can also be established in a
+docker-specific way called linking.
+
+To briefly present the concept of linking, let us consider two
+containers: `server`, containing the service, and
+`client`, accessing the service. Once
+`server` is running, `client` is
+started and links to server. Linking sets environment variables in
+`client` giving it some information about
+`server`. In this sense, linking is a method of
+service discovery.
+
+Let us now get back to our topic of interest: communication between the
+two containers. We mentioned that the tricky part about this
+communication was that the IP address of `server`
+was not fixed. Therefore, some of the environment variables are going to
+be used to inform `client` about this IP address.
+This process, called exposure, is possible because `client`
+is started after `server` has been
+started.
+
+Here is a full example. On `server`, the port of
+interest is exposed. The exposure is done either through the
+`--expose` parameter to the `docker run`
+command, or the `EXPOSE` build command in
+a Dockerfile:
+
+    # Expose port 80
+    docker run --expose 80 --name server <image> <cmd>
+
+The `client` then links to the `server`:
+
+    # Link
+    docker run --name client --link server:linked-server <image> <cmd>
+
+`client` locally refers to `server`
+as `linked-server`. The following
+environment variables, among others, are available on `client`:
+
+    # The default protocol, ip, and port of the service running in the container
+    LINKED-SERVER_PORT=tcp://172.17.0.8:80
+
+    # A specific protocol, ip, and port of various services
+    LINKED-SERVER_PORT_80_TCP=tcp://172.17.0.8:80
+    LINKED-SERVER_PORT_80_TCP_PROTO=tcp
+    LINKED-SERVER_PORT_80_TCP_ADDR=172.17.0.8
+    LINKED-SERVER_PORT_80_TCP_PORT=80
+
+This tells `client` that a service is running on
+port 80 of `server` and that `server`
+is accessible at the IP address 172.17.0.8.
+
+> *Note:* Using the `-p` parameter also exposes the
+> port.

+ 92 - 0
docs/sources/use/puppet.md

@@ -0,0 +1,92 @@
+page_title: Puppet Usage
+page_description: Installing and using Puppet
+page_keywords: puppet, installation, usage, docker, documentation
+
+# Using Puppet
+
+> *Note:* Please note this is a community contributed installation path. The only
+> ‘official’ installation is using the
+> [*Ubuntu*](../../installation/ubuntulinux/#ubuntu-linux) installation
+> path. This version may sometimes be out of date.
+
+## Requirements
+
+To use this guide you’ll need a working installation of Puppet from
+[Puppetlabs](https://www.puppetlabs.com).
+
+The module also currently uses the official PPA so only works with
+Ubuntu.
+
+## Installation
+
+The module is available on the [Puppet
+Forge](https://forge.puppetlabs.com/garethr/docker/) and can be
+installed using the built-in module tool.
+
+    puppet module install garethr/docker
+
+It can also be found on
+[GitHub](https://www.github.com/garethr/garethr-docker) if you would
+rather download the source.
+
+## Usage
+
+The module provides a puppet class for installing Docker and two defined
+types for managing images and containers.
+
+### Installation
+
+    include 'docker'
+
+### Images
+
+The next step is probably to install a Docker image. For this, we have a
+defined type which can be used like so:
+
+    docker::image { 'ubuntu': }
+
+This is equivalent to running:
+
+    docker pull ubuntu
+
+Note that the image will only be downloaded if an image of that name
+does not already exist. Since this downloads a large binary, the first
+run can take a while. For that reason this define turns off the default
+5 minute timeout for the exec type. Note that you can also remove images
+you no longer need with:
+
+    docker::image { 'ubuntu':
+      ensure => 'absent',
+    }
+
+### Containers
+
+Now that you have an image, you can run commands within a container
+managed by Docker.
+
+    docker::run { 'helloworld':
+      image   => 'ubuntu',
+      command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
+    }
+
+This is equivalent to running the following command, but under upstart:
+
+    docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
+
+Run also contains a number of optional parameters:
+
+    docker::run { 'helloworld':
+      image        => 'ubuntu',
+      command      => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
+      ports        => ['4444', '4555'],
+      volumes      => ['/var/lib/couchdb', '/var/log'],
+      volumes_from => '6446ea52fbc9',
+      memory_limit => 10485760, # bytes
+      username     => 'example',
+      hostname     => 'example.com',
+      env          => ['FOO=BAR', 'FOO2=BAR2'],
+      dns          => ['8.8.8.8', '8.8.4.4'],
+    }
+
+Note that ports, env, dns and volumes can be set with either a single
+string or as above with an array of values.

+ 121 - 0
docs/sources/use/working_with_links_names.md

@@ -0,0 +1,121 @@
+page_title: Link Containers
+page_description: How to create and use both links and names
+page_keywords: Examples, Usage, links, linking, docker, documentation, examples, names, name, container naming
+
+# Link Containers
+
+## Introduction
+
+From version 0.6.5 you are now able to `name` a
+container and `link` it to another container by
+referring to its name. This will create a parent -> child relationship
+where the parent container can see selected information about its child.
+
+## Container Naming
+
+New in version v0.6.5.
+
+You can now name your container by using the `--name`
+flag. If no name is provided, Docker will automatically
+generate a name. You can see this name using the `docker ps`
+command.
+
+    # format is "sudo docker run --name <container_name> <image_name> <command>"
+    $ sudo docker run --name test ubuntu /bin/bash
+
+    # the "-a" flag shows all containers; only running containers are shown by default
+    $ sudo docker ps -a
+    CONTAINER ID        IMAGE                            COMMAND             CREATED             STATUS              PORTS               NAMES
+    2522602a0d99        ubuntu:12.04                     /bin/bash           14 seconds ago      Exit 0                                  test
+
+## Links: service discovery for docker
+
+New in version v0.6.5.
+
+Links allow containers to discover and securely communicate with each
+other by using the flag `-link name:alias`.
+Inter-container communication can be disabled with the daemon flag
+`-icc=false`. With this flag set to
+`false`, Container A cannot access Container B
+unless explicitly allowed via a link. This is a huge win for securing
+your containers. When two containers are linked together Docker creates
+a parent child relationship between the containers. The parent container
+will be able to access information via environment variables of the
+child such as name, exposed ports, IP and other selected environment
+variables.
+
+When linking two containers Docker will use the exposed ports of the
+container to create a secure tunnel for the parent to access. If a
+database container only exposes port 8080 then the linked container will
+only be allowed to access port 8080 and nothing else if inter-container
+communication is set to false.
+
+For example, there is an image called `crosbymichael/redis`
+that exposes the port 6379 and starts the Redis server. Let’s
+name the container as `redis` based on that image
+and run it as daemon.
+
+    $ sudo docker run -d -name redis crosbymichael/redis
+
+We can issue all the commands that you would expect using the name
+`redis`: start, stop, attach, and so on. The name also allows us to
+link other containers into this one.
+
+Next, we can start a new web application that has a dependency on Redis
+and apply a link to connect both containers. Notice that when running
+our Redis server we did not use the `-p` flag to
+publish the Redis port to the host system. Redis exposed port 6379 and
+this is all we need to establish a link.
+
+    $ sudo docker run -t -i -link redis:db -name webapp ubuntu bash
+
+When you specify `-link redis:db` you are telling
+Docker to link the container named `redis` into this
+new container with the alias `db`. Environment
+variables are prefixed with the alias so that the parent container can
+access network and environment information from the containers that are
+linked into it.
+
+If we inspect the environment variables of the second container, we
+would see all the information about the child container.
+
+    $ root@4c01db0b339c:/# env
+
+    HOSTNAME=4c01db0b339c
+    DB_NAME=/webapp/db
+    TERM=xterm
+    DB_PORT=tcp://172.17.0.8:6379
+    DB_PORT_6379_TCP=tcp://172.17.0.8:6379
+    DB_PORT_6379_TCP_PROTO=tcp
+    DB_PORT_6379_TCP_ADDR=172.17.0.8
+    DB_PORT_6379_TCP_PORT=6379
+    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+    PWD=/
+    SHLVL=1
+    HOME=/
+    container=lxc
+    _=/usr/bin/env
+    root@4c01db0b339c:/#
+
+Accessing the network information along with the environment of the
+child container allows us to easily connect to the Redis service on the
+specific IP and port in the environment.
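For example, a link variable such as `DB_PORT` (format `proto://addr:port`) can be decomposed in the shell. In this sketch the value is hard-coded as a stand-in for the one docker sets when linking:

```shell
# Sketch: split a link environment variable such as DB_PORT
# (format proto://addr:port) into its components.
DB_PORT="tcp://172.17.0.8:6379"   # as set by docker when linking
proto=${DB_PORT%%://*}            # tcp
hostport=${DB_PORT#*://}          # 172.17.0.8:6379
addr=${hostport%:*}               # 172.17.0.8
port=${hostport##*:}              # 6379
echo "$proto $addr $port"
```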
+
+> *Note:* These environment variables are only set for the first process in
+> the container. Similarly, some daemons (such as `sshd`)
+> will scrub them when spawning shells for connection.
+>
+> You can work around this by storing the initial `env`
+> in a file, or by looking at `/proc/1/environ`.
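The `/proc/1/environ` workaround relies on that file being a NUL-separated list of variables (Linux only). A sketch, demonstrated here on a child shell reading its own `/proc/self/environ` rather than a container's PID 1:

```shell
# Sketch (Linux only): /proc/<pid>/environ is a NUL-separated list;
# tr turns it into one variable per line.
env_of() {
    tr '\0' '\n' < "/proc/$1/environ"
}

# Inside a container you would inspect PID 1, e.g.:
#   env_of 1 | grep '^DB_'
# Here we demonstrate on a child shell's own environment:
FOO=canary sh -c 'tr "\0" "\n" < /proc/self/environ' | grep '^FOO='
```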
+
+Running `docker ps` shows the 2 containers, and the
+`webapp/db` alias name for the Redis container.
+
+    $ docker ps
+    CONTAINER ID        IMAGE                        COMMAND                CREATED              STATUS              PORTS               NAMES
+    4c01db0b339c        ubuntu:12.04                 bash                   17 seconds ago       Up 16 seconds                           webapp
+    d7886598dbe2        crosbymichael/redis:latest   /redis-server --dir    33 minutes ago       Up 33 minutes       6379/tcp            redis,webapp/db

+ 178 - 0
docs/sources/use/working_with_volumes.md

@@ -0,0 +1,178 @@
+page_title: Share Directories via Volumes
+page_description: How to create and share volumes
+page_keywords: Examples, Usage, volume, docker, documentation, examples
+
+# Share Directories via Volumes
+
+## Introduction
+
+A *data volume* is a specially-designated directory within one or more
+containers that bypasses the [*Union File
+System*](../../terms/layer/#ufs-def) to provide several useful features
+for persistent or shared data:
+
+- **Data volumes can be shared and reused between containers:**  
+  This is the feature that makes data volumes so powerful. You can
+  use it for anything from hot database upgrades to custom backup or
+  replication tools. See the example below.
+- **Changes to a data volume are made directly:**  
+  Without the overhead of a copy-on-write mechanism. This is good for
+  very large files.
+- **Changes to a data volume will not be included at the next commit:**  
+  Because they are not recorded as regular filesystem changes in the
+  top layer of the [*Union File System*](../../terms/layer/#ufs-def)
+- **Volumes persist until no containers use them:**  
+  As they are a reference counted resource. The container does not need to be
+  running to share its volumes, but running it can help protect it
+  against accidental removal via `docker rm`.
+
+Each container can have zero or more data volumes.
+
+New in version v0.3.0.
+
+## Getting Started
+
+Using data volumes is as simple as adding a `-v`
+parameter to the `docker run` command. The
+`-v` parameter can be used more than once in order
+to create more volumes within the new container. To create a new
+container with two new volumes:
+
+    $ docker run -v /var/volume1 -v /var/volume2 busybox true
+
+This command will create the new container with two new volumes that
+exits instantly (`true` is pretty much the smallest,
+simplest program that you can run). Once created you can mount its
+volumes in any other container using the `--volumes-from`
+option; irrespective of whether the container is running or
+not.
+
+Or, you can use the VOLUME instruction in a Dockerfile to add one or
+more new volumes to any container created from that image:
+
+    # BUILD-USING:        docker build -t data .
+    # RUN-USING:          docker run -name DATA data
+    FROM          busybox
+    VOLUME        ["/var/volume1", "/var/volume2"]
+    CMD           ["/bin/true"]
+
+### Creating and mounting a Data Volume Container
+
+If you have some persistent data that you want to share between
+containers, or want to use from non-persistent containers, it's best to
+create a named Data Volume Container, and then to mount the data from
+it.
+
+Create a named container with volumes to share (`/var/volume1`
+and `/var/volume2`):
+
+    $ docker run -v /var/volume1 -v /var/volume2 -name DATA busybox true
+
+Then mount those data volumes into your application containers:
+
+    $ docker run -t -i -rm -volumes-from DATA -name client1 ubuntu bash
+
+You can use multiple `-volumes-from` parameters to
+bring together multiple data volumes from multiple containers.
+
+Interestingly, you can mount the volumes that came from the
+`DATA` container in yet another container via the
+`client1` middleman container:
+
+    $ docker run -t -i -rm -volumes-from client1 -name client2 ubuntu bash
+
+This allows you to abstract the actual data source from users of that
+data, similar to
+[*ambassador\_pattern\_linking*](../ambassador_pattern_linking/#ambassador-pattern-linking).
+
+If you remove containers that mount volumes, including the initial DATA
+container, or the middleman, the volumes will not be deleted until there
+are no containers still referencing those volumes. This allows you to
+upgrade, or effectively migrate data volumes between containers.
+
+### Mount a Host Directory as a Container Volume:
+
+    -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
+
+You must specify an absolute path for `host-dir`. If
+`host-dir` is missing from the command, then docker
+creates a new volume. If `host-dir` is present but
+points to a non-existent directory on the host, Docker will
+automatically create this directory and use it as the source of the
+bind-mount.
+
+Note that this is not available from a Dockerfile, for portability
+and sharing reasons: `host-dir` volumes
+are entirely host-dependent and might not work on any other machine.
+
+For example:
+
+    sudo docker run -t -i -v /var/logs:/var/host_logs:ro ubuntu bash
+
+The command above mounts the host directory `/var/logs`
+into the container with read only permissions as
+`/var/host_logs`.
+
+New in version v0.5.0.
+
+### Note for OS/X users and remote daemon users:
+
+OS/X users run `boot2docker` to create a minimalist
+virtual machine running the docker daemon. That virtual machine then
+launches docker commands on behalf of the OS/X command line. This means
+that `host directories` refer to directories in the
+`boot2docker` virtual machine, not the OS/X
+filesystem.
+
+Similarly, anytime when the docker daemon is on a remote machine, the
+`host directories` always refer to directories on
+the daemon’s machine.
+
+### Backup, restore, or migrate data volumes
+
+You cannot back up volumes using `docker export`,
+`docker save` and `docker cp`
+because they are external to images. Instead you can use
+`--volumes-from` to start a new container that can
+access the data-container’s volume. For example:
+
+    $ sudo docker run -rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
+
+-   `-rm` - remove the container when it exits
+-   `--volumes-from DATA` - attach to the volumes
+    shared by the `DATA` container
+-   `-v $(pwd):/backup` - bind mount the current
+    directory into the container; to write the tar file to
+-   `busybox` - a small, simple image - good for
+    quick maintenance
+-   `tar cvf /backup/backup.tar /data` - creates an
+    uncompressed tar file of all the files in the `/data`
+    directory
+
+Then to restore to the same container, or another that you’ve made
+elsewhere:
+
+    # create a new data container
+    $ sudo docker run -v /data -name DATA2 busybox true
+    # untar the backup files into the new container's data volume
+    $ sudo docker run -rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
+    data/
+    data/sven.txt
+    # compare to the original container
+    $ sudo docker run -rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
+    sven.txt
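The tar round-trip at the heart of this backup/restore recipe can be exercised on plain directories, without docker at all. A sketch using temporary directories in place of the container volumes:

```shell
# Sketch: the same tar round-trip the backup recipe uses, exercised
# on plain host directories instead of container volumes.
src=$(mktemp -d); backup=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/data"
echo "hello" > "$src/data/sven.txt"

# "docker run ... tar cvf /backup/backup.tar /data" boils down to:
(cd "$src" && tar cf "$backup/backup.tar" data)

# and the restore, "tar xvf /backup/backup.tar", to:
(cd "$dst" && tar xf "$backup/backup.tar")

cat "$dst/data/sven.txt"   # hello
```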
+
+You can use the basic techniques above to automate backup, migration and
+restore testing using your preferred tools.
+
+## Known Issues
+
+-   [Issue 2702](https://github.com/dotcloud/docker/issues/2702):
+    "lxc-start: Permission denied - failed to mount" could indicate a
+    permissions problem with AppArmor. Please see the issue for a
+    workaround.
+-   [Issue 2528](https://github.com/dotcloud/docker/issues/2528): the
+    busybox container is used to make the resulting container as small
+    and simple as possible - whenever you need to interact with the data
+    in the volume you mount it into another container.
+

+ 235 - 0
docs/sources/use/workingwithrepository.md

@@ -0,0 +1,235 @@
+page_title: Share Images via Repositories
+page_description: Repositories allow users to share images.
+page_keywords: repo, repositories, usage, pull image, push image, image, documentation
+
+# Share Images via Repositories
+
+## Introduction
+
+A *repository* is a shareable collection of tagged
+[*images*](../../terms/image/#image-def) that together create the file
+systems for containers. The repository’s name is a label that indicates
+the provenance of the repository, i.e. who created it and where the
+original copy is located.
+
+You can find one or more repositories hosted on a *registry*. There can
+be an implicit or explicit host name as part of the repository tag. The
+implicit registry is located at `index.docker.io`,
+the home of "top-level" repositories and the Central Index. This
+registry may also include public "user" repositories.
+
+Docker is not only a tool for creating and managing your own
+[*containers*](../../terms/container/#container-def) – **Docker is also
+a tool for sharing**. The Docker project provides a Central Registry to
+host public repositories, namespaced by user, and a Central Index which
+provides user authentication and search over all the public
+repositories. You can host your own Registry too! Docker acts as a
+client for these services via `docker search, pull, login`
+and `push`.
+
+## Repositories
+
+### Local Repositories
+
+Docker images which have been created and labeled on your local Docker
+server need to be pushed to a Public or Private registry to be shared.
+
+### Public Repositories
+
+There are two types of public repositories: *top-level* repositories
+which are controlled by the Docker team, and *user* repositories created
+by individual contributors. Anyone can read from these repositories –
+they really help people get started quickly! You could also use
+[*private repositories*](#using-private-repositories) if you need to keep
+control of who accesses your images, but we will only refer to public
+repositories in these examples.
+
+-   Top-level repositories can easily be recognized by **not** having a
+    `/` (slash) in their name. These repositories
+    can generally be trusted.
+-   User repositories always come in the form of
+    `<username>/<repo_name>`. This is what your
+    published images will look like if you push to the public Central
+    Registry.
+-   Only the authenticated user can push to their *username* namespace
+    on the Central Registry.
+-   User images are not checked, it is therefore up to you whether or
+    not you trust the creator of this image.
+
+## Find Public Images on the Central Index
+
+You can search the Central Index [online](https://index.docker.io) or
+using the command line interface. Searching can find images by name,
+user name or description:
+
+    $ sudo docker help search
+    Usage: docker search NAME
+
+    Search the docker index for images
+
+      -notrunc=false: Don't truncate output
+    $ sudo docker search centos
+    Found 25 results matching your query ("centos")
+    NAME                             DESCRIPTION
+    centos
+    slantview/centos-chef-solo       CentOS 6.4 with chef-solo.
+    ...
+
+There you can see two example results: `centos` and
+`slantview/centos-chef-solo`. The second result
+shows that it comes from the public repository of a user,
+`slantview/`, while the first result
+(`centos`) doesn’t explicitly list a repository so
+it comes from the trusted Central Repository. The `/`
+character separates a user’s repository and the image name.
+
+Once you have found the image name, you can download it:
+
+    # sudo docker pull <value>
+    $ sudo docker pull centos
+    Pulling repository centos
+    539c0211cd76: Download complete
+
+What can you do with that image? Check out the
+[*Examples*](../../examples/#example-list) and, when you’re ready with
+your own image, come back here to learn how to share it.
+
+## Contributing to the Central Registry
+
+Anyone can pull public images from the Central Registry, but if you
+would like to share one of your own images, then you must register a
+unique user name first. You can create your username and login on the
+[central Docker Index online](https://index.docker.io/account/signup/),
+or by running
+
+    sudo docker login
+
+This will prompt you for a username, which will become a public
+namespace for your public repositories.
+
+If your username is available then `docker` will
+also prompt you to enter a password and your e-mail address. It will
+then automatically log you in. Now you’re ready to commit and push your
+own images!
+
+## Committing a Container to a Named Image
+
+When you make changes to an existing image, those changes get saved to a
+container’s file system. You can then promote that container to become
+an image by making a `commit`. In addition to
+converting the container to an image, this is also your opportunity to
+name the image, specifically a name that includes your user name from
+the Central Docker Index (as you did a `login`
+above) and a meaningful name for the image.
+
+    # format is "sudo docker commit <container_id> <username>/<imagename>"
+    $ sudo docker commit $CONTAINER_ID myname/kickassapp
+
+## Pushing a repository to its registry
+
+In order to push a repository to its registry you need to have named an
+image, or committed your container to a named image (see above).
+
+Now you can push this repository to the registry designated by its name
+or tag.
+
+    # format is "docker push <username>/<repo_name>"
+    $ sudo docker push myname/kickassapp
+
+## Trusted Builds
+
+Trusted Builds automate the building and updating of images from GitHub,
+directly on `docker.io` servers. It works by adding
+a commit hook to your selected repository, triggering a build and update
+when you push a commit.
+
+### To set up a Trusted Build
+
+1.  Create a [Docker Index account](https://index.docker.io/) and login.
+2.  Link your GitHub account through the `Link Accounts`
+ menu.
+3.  [Configure a Trusted build](https://index.docker.io/builds/).
+4.  Pick a GitHub project that has a `Dockerfile`
+    that you want to build.
+5.  Pick the branch you want to build (the default is the
+    `master` branch).
+6.  Give the Trusted Build a name.
+7.  Assign an optional Docker tag to the Build.
+8.  Specify where the `Dockerfile` is located. The
+    default is `/`.
+
+Once the Trusted Build is configured it will automatically trigger a
+build, and in a few minutes, if there are no errors, you will see your
+new Trusted Build on the Docker Index. It will stay in sync with
+your GitHub repo until you deactivate the Trusted Build.
+
+If you want to see the status of your Trusted Builds you can go to your
+[Trusted Builds page](https://index.docker.io/builds/) on the Docker
+index, and it will show you the status of your builds, and the build
+history.
+
+Once you’ve created a Trusted Build you can deactivate or delete it. You
+cannot, however, push to a Trusted Build with the `docker push`
+command; you can only update it by committing code to your
+GitHub repository.
+
+You can create multiple Trusted Builds per repository and configure them
+to point to specific `Dockerfile`s or Git branches.
+
+## Private Registry
+
+Private registries and private shared repositories are only possible by
+hosting [your own
+registry](https://github.com/dotcloud/docker-registry). To push or pull
+to a repository on your own registry, you must prefix the tag with the
+address of the registry’s host (a `.` or
+`:` is used to identify a host), like this:
+
+    # Tag to create a repository with the full registry location.
+    # The location (e.g. localhost.localdomain:5000) becomes
+    # a permanent part of the repository name
+    sudo docker tag 0u812deadbeef localhost.localdomain:5000/repo_name
+
+    # Push the new repository to its home location on localhost
+    sudo docker push localhost.localdomain:5000/repo_name
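+
+The naming rule above can be sketched in shell. This helper
+(`is_registry_host` is a hypothetical name, not part of Docker) applies
+the convention that the part before the first `/` names a registry host
+only if it contains a `.` or a `:`:
+
+```shell
+# Decide whether a repository name targets a private registry host or a
+# Central Index namespace, using the "." / ":" convention described above.
+is_registry_host() {
+  case "${1%%/*}" in
+    *.*|*:*) echo "registry host" ;;
+    *)       echo "index namespace" ;;
+  esac
+}
+
+is_registry_host "localhost.localdomain:5000/repo_name"   # registry host
+is_registry_host "myname/kickassapp"                      # index namespace
+```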
+
+Once a repository has your registry’s host name as part of the tag, you
+can push and pull it like any other repository, but it will **not** be
+searchable (or indexed at all) in the Central Index, and there will be
+no user name checking performed. Your registry will function completely
+independently from the Central Index.
+
+<iframe width="640" height="360" src="//www.youtube.com/embed/CAewZCBT4PI?rel=0" frameborder="0" allowfullscreen></iframe>
+
+See also
+
+[Docker Blog: How to use your own
+registry](http://blog.docker.io/2013/07/how-to-use-your-own-registry/)
+
+## Authentication File
+
+Authentication credentials are stored in a JSON file, `.dockercfg`,
+located in your home directory. It supports multiple registry
+URLs.
+
+`docker login` will create the
+"[https://index.docker.io/v1/](https://index.docker.io/v1/)" key.
+
+`docker login https://my-registry.com` will create
+the "[https://my-registry.com](https://my-registry.com)" key.
+
+For example:
+
+    {
+         "https://index.docker.io/v1/": {
+                 "auth": "xXxXxXxXxXx=",
+                 "email": "email@example.com"
+         },
+         "https://my-registry.com": {
+                 "auth": "XxXxXxXxXxX=",
+                 "email": "email@my-registry.com"
+         }
+    }
+
+The `auth` field contains
+`base64(<username>:<password>)`.
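+
+You can reproduce (or decode) an `auth` value with
+standard tools. The credentials below are placeholders, not real ones:
+
+```shell
+# Encode "<username>:<password>" the same way docker login does.
+# -n suppresses the trailing newline so it is not encoded.
+echo -n "username:password" | base64
+# prints: dXNlcm5hbWU6cGFzc3dvcmQ=
+
+# Decode an existing "auth" entry to see which user it belongs to.
+echo "dXNlcm5hbWU6cGFzc3dvcmQ=" | base64 --decode
+# prints: username:password
+```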