diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 38cec6ac14..3538642717 100755 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -116,11 +116,12 @@ pages: - ['jsearch.md', '**HIDDEN**'] # - ['static_files/README.md', 'static_files', 'README'] -#- ['terms/index.md', '**HIDDEN**'] -# - ['terms/layer.md', 'terms', 'layer'] -# - ['terms/index.md', 'terms', 'Home'] -# - ['terms/registry.md', 'terms', 'registry'] -# - ['terms/container.md', 'terms', 'container'] -# - ['terms/repository.md', 'terms', 'repository'] -# - ['terms/filesystem.md', 'terms', 'filesystem'] -# - ['terms/image.md', 'terms', 'image'] +- ['terms/index.md', '**HIDDEN**'] +- ['terms/layer.md', '**HIDDEN**', 'layer'] +- ['terms/index.md', '**HIDDEN**', 'Home'] +- ['terms/registry.md', '**HIDDEN**', 'registry'] +- ['terms/container.md', '**HIDDEN**', 'container'] +- ['terms/repository.md', '**HIDDEN**', 'repository'] +- ['terms/filesystem.md', '**HIDDEN**', 'filesystem'] +- ['terms/image.md', '**HIDDEN**', 'image'] + diff --git a/docs/sources/articles.md b/docs/sources/articles.md index da5a2d255f..54c067d0cc 100644 --- a/docs/sources/articles.md +++ b/docs/sources/articles.md @@ -2,7 +2,7 @@ ## Contents: -- [Docker Security](security/) -- [Create a Base Image](baseimages/) -- [Runtime Metrics](runmetrics/) + - [Docker Security](security/) + - [Create a Base Image](baseimages/) + - [Runtime Metrics](runmetrics/) diff --git a/docs/sources/articles/baseimages.md b/docs/sources/articles/baseimages.md index d2d6336a6c..c795b7a0a7 100644 --- a/docs/sources/articles/baseimages.md +++ b/docs/sources/articles/baseimages.md @@ -4,8 +4,8 @@ page_keywords: Examples, Usage, base image, docker, documentation, examples # Create a Base Image -So you want to create your own [*Base -Image*](../../terms/image/#base-image-def)? Great! +So you want to create your own [*Base Image*]( +/terms/image/#base-image-def)? Great! The specific process will depend heavily on the Linux distribution you want to package. We have some examples below, and you are encouraged to @@ -13,9 +13,9 @@ submit pull requests to contribute new ones. ## Create a full image using tar -In general, you’ll want to start with a working machine that is running -the distribution you’d like to package as a base image, though that is -not required for some tools like Debian’s +In general, you'll want to start with a working machine that is running +the distribution you'd like to package as a base image, though that is +not required for some tools like Debian's [Debootstrap](https://wiki.debian.org/Debootstrap), which you can also use to build Ubuntu images. 
@@ -33,19 +33,18 @@ It can be as simple as this to create an Ubuntu base image: There are more example scripts for creating base images in the Docker GitHub Repo: -- [BusyBox](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-busybox.sh) -- CentOS / Scientific Linux CERN (SLC) [on - Debian/Ubuntu](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-rinse.sh) - or [on - CentOS/RHEL/SLC/etc.](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-yum.sh) -- [Debian / - Ubuntu](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-debootstrap.sh) + - [BusyBox](https://github.com/dotcloud/docker/blob/master/contrib/mkimage-busybox.sh) + - CentOS / Scientific Linux CERN (SLC) [on Debian/Ubuntu]( + https://github.com/dotcloud/docker/blob/master/contrib/mkimage-rinse.sh) or + [on CentOS/RHEL/SLC/etc.]( + https://github.com/dotcloud/docker/blob/master/contrib/mkimage-yum.sh) + - [Debian / Ubuntu]( + https://github.com/dotcloud/docker/blob/master/contrib/mkimage-debootstrap.sh) ## Creating a simple base image using `scratch` -There is a special repository in the Docker registry called -`scratch`, which was created using an empty tar -file: +There is a special repository in the Docker registry called `scratch`, which +was created using an empty tar file: $ tar cv --files-from /dev/null | docker import - scratch @@ -56,5 +55,5 @@ image to base your new minimal containers `FROM`: ADD true-asm /true CMD ["/true"] -The Dockerfile above is from extremely minimal image - -[tianon/true](https://github.com/tianon/dockerfiles/tree/master/true). +The Dockerfile above is from extremely minimal image - [tianon/true]( +https://github.com/tianon/dockerfiles/tree/master/true). diff --git a/docs/sources/articles/runmetrics.md b/docs/sources/articles/runmetrics.md index 30e204c892..4cc210bb52 100644 --- a/docs/sources/articles/runmetrics.md +++ b/docs/sources/articles/runmetrics.md @@ -4,8 +4,8 @@ page_keywords: docker, metrics, CPU, memory, disk, IO, run, runtime # Runtime Metrics -Linux Containers rely on [control -groups](https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt) +Linux Containers rely on [control groups]( +https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt) which not only track groups of processes, but also expose metrics about CPU, memory, and block I/O usage. You can access those metrics and obtain network usage metrics as well. This is relevant for "pure" LXC @@ -14,16 +14,15 @@ containers, as well as for Docker containers. ## Control Groups Control groups are exposed through a pseudo-filesystem. In recent -distros, you should find this filesystem under -`/sys/fs/cgroup`. Under that directory, you will see -multiple sub-directories, called devices, freezer, blkio, etc.; each -sub-directory actually corresponds to a different cgroup hierarchy. +distros, you should find this filesystem under `/sys/fs/cgroup`. Under +that directory, you will see multiple sub-directories, called devices, +freezer, blkio, etc.; each sub-directory actually corresponds to a different +cgroup hierarchy. -On older systems, the control groups might be mounted on -`/cgroup`, without distinct hierarchies. In that -case, instead of seeing the sub-directories, you will see a bunch of -files in that directory, and possibly some directories corresponding to -existing containers. +On older systems, the control groups might be mounted on `/cgroup`, without +distinct hierarchies. 
In that case, instead of seeing the sub-directories, +you will see a bunch of files in that directory, and possibly some directories +corresponding to existing containers. To figure out where your control groups are mounted, you can run: @@ -31,17 +30,14 @@ To figure out where your control groups are mounted, you can run: ## Enumerating Cgroups -You can look into `/proc/cgroups` to see the -different control group subsystems known to the system, the hierarchy -they belong to, and how many groups they contain. +You can look into `/proc/cgroups` to see the different control group subsystems +known to the system, the hierarchy they belong to, and how many groups they contain. -You can also look at `/proc//cgroup` to see -which control groups a process belongs to. The control group will be -shown as a path relative to the root of the hierarchy mountpoint; e.g. -`/` means “this process has not been assigned into a -particular group”, while `/lxc/pumpkin` means that -the process is likely to be a member of a container named -`pumpkin`. +You can also look at `/proc//cgroup` to see which control groups a process +belongs to. The control group will be shown as a path relative to the root of +the hierarchy mountpoint; e.g. `/` means “this process has not been assigned into +a particular group”, while `/lxc/pumpkin` means that the process is likely to be +a member of a container named `pumpkin`. ## Finding the Cgroup for a Given Container @@ -53,12 +49,11 @@ of the LXC tools, the cgroup will be `lxc/.` For Docker containers using cgroups, the container name will be the full ID or long ID of the container. If a container shows up as ae836c95b4c3 in `docker ps`, its long ID might be something like -`ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`. You can look it up with `docker inspect` -or `docker ps -notrunc`. +`ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`. You can +look it up with `docker inspect` or `docker ps -notrunc`. Putting everything together to look at the memory metrics for a Docker -container, take a look at -`/sys/fs/cgroup/memory/lxc//`. +container, take a look at `/sys/fs/cgroup/memory/lxc//`. ## Metrics from Cgroups: Memory, CPU, Block IO @@ -106,10 +101,9 @@ Here is what it will look like: total_active_file 4489052160 total_unevictable 32768 -The first half (without the `total_` prefix) -contains statistics relevant to the processes within the cgroup, -excluding sub-cgroups. The second half (with the `total_` -prefix) includes sub-cgroups as well. +The first half (without the `total_` prefix) contains statistics relevant +to the processes within the cgroup, excluding sub-cgroups. The second half +(with the `total_` prefix) includes sub-cgroups as well. Some metrics are "gauges", i.e. values that can increase or decrease (e.g. swap, the amount of swap space used by the members of the cgroup). @@ -118,95 +112,104 @@ they represent occurrences of a specific event (e.g. pgfault, which indicates the number of page faults which happened since the creation of the cgroup; this number can never decrease). -cache -: the amount of memory used by the processes of this control group - that can be associated precisely with a block on a block device. - When you read from and write to files on disk, this amount will - increase. This will be the case if you use "conventional" I/O - (`open`, `read`, - `write` syscalls) as well as mapped files (with - `mmap`). It also accounts for the memory used by - `tmpfs` mounts, though the reasons are unclear. 
-rss -: the amount of memory that *doesn’t* correspond to anything on disk: - stacks, heaps, and anonymous memory maps. -mapped\_file -: indicates the amount of memory mapped by the processes in the - control group. It doesn’t give you information about *how much* - memory is used; it rather tells you *how* it is used. -pgfault and pgmajfault -: indicate the number of times that a process of the cgroup triggered - a "page fault" and a "major fault", respectively. A page fault - happens when a process accesses a part of its virtual memory space - which is nonexistent or protected. The former can happen if the - process is buggy and tries to access an invalid address (it will - then be sent a `SIGSEGV` signal, typically - killing it with the famous `Segmentation fault` - message). The latter can happen when the process reads from a memory - zone which has been swapped out, or which corresponds to a mapped - file: in that case, the kernel will load the page from disk, and let - the CPU complete the memory access. It can also happen when the - process writes to a copy-on-write memory zone: likewise, the kernel - will preempt the process, duplicate the memory page, and resume the - write operation on the process’ own copy of the page. "Major" faults - happen when the kernel actually has to read the data from disk. When - it just has to duplicate an existing page, or allocate an empty - page, it’s a regular (or "minor") fault. -swap -: the amount of swap currently used by the processes in this cgroup. -active\_anon and inactive\_anon -: the amount of *anonymous* memory that has been identified has - respectively *active* and *inactive* by the kernel. "Anonymous" - memory is the memory that is *not* linked to disk pages. In other - words, that’s the equivalent of the rss counter described above. In - fact, the very definition of the rss counter is **active\_anon** + - **inactive\_anon** - **tmpfs** (where tmpfs is the amount of memory - used up by `tmpfs` filesystems mounted by this - control group). Now, what’s the difference between "active" and - "inactive"? Pages are initially "active"; and at regular intervals, - the kernel sweeps over the memory, and tags some pages as - "inactive". Whenever they are accessed again, they are immediately - retagged "active". When the kernel is almost out of memory, and time - comes to swap out to disk, the kernel will swap "inactive" pages. -active\_file and inactive\_file -: cache memory, with *active* and *inactive* similar to the *anon* - memory above. The exact formula is cache = **active\_file** + - **inactive\_file** + **tmpfs**. The exact rules used by the kernel - to move memory pages between active and inactive sets are different - from the ones used for anonymous memory, but the general principle - is the same. Note that when the kernel needs to reclaim memory, it - is cheaper to reclaim a clean (=non modified) page from this pool, - since it can be reclaimed immediately (while anonymous pages and - dirty/modified pages have to be written to disk first). -unevictable -: the amount of memory that cannot be reclaimed; generally, it will - account for memory that has been "locked" with `mlock` -. It is often used by crypto frameworks to make sure that - secret keys and other sensitive material never gets swapped out to - disk. -memory and memsw limits -: These are not really metrics, but a reminder of the limits applied - to this cgroup. 
The first one indicates the maximum amount of - physical memory that can be used by the processes of this control - group; the second one indicates the maximum amount of RAM+swap. + + - **cache:** + the amount of memory used by the processes of this control group + that can be associated precisely with a block on a block device. + When you read from and write to files on disk, this amount will + increase. This will be the case if you use "conventional" I/O + (`open`, `read`, + `write` syscalls) as well as mapped files (with + `mmap`). It also accounts for the memory used by + `tmpfs` mounts, though the reasons are unclear. + + - **rss:** + the amount of memory that *doesn't* correspond to anything on disk: + stacks, heaps, and anonymous memory maps. + + - **mapped_file:** + indicates the amount of memory mapped by the processes in the + control group. It doesn't give you information about *how much* + memory is used; it rather tells you *how* it is used. + + - **pgfault and pgmajfault:** + indicate the number of times that a process of the cgroup triggered + a "page fault" and a "major fault", respectively. A page fault + happens when a process accesses a part of its virtual memory space + which is nonexistent or protected. The former can happen if the + process is buggy and tries to access an invalid address (it will + then be sent a `SIGSEGV` signal, typically + killing it with the famous `Segmentation fault` + message). The latter can happen when the process reads from a memory + zone which has been swapped out, or which corresponds to a mapped + file: in that case, the kernel will load the page from disk, and let + the CPU complete the memory access. It can also happen when the + process writes to a copy-on-write memory zone: likewise, the kernel + will preempt the process, duplicate the memory page, and resume the + write operation on the process' own copy of the page. "Major" faults + happen when the kernel actually has to read the data from disk. When + it just has to duplicate an existing page, or allocate an empty + page, it's a regular (or "minor") fault. + + - **swap:** + the amount of swap currently used by the processes in this cgroup. + + - **active_anon and inactive_anon:** + the amount of *anonymous* memory that has been identified as + respectively *active* and *inactive* by the kernel. "Anonymous" + memory is the memory that is *not* linked to disk pages. In other + words, that's the equivalent of the rss counter described above. In + fact, the very definition of the rss counter is **active_anon** + + **inactive_anon** - **tmpfs** (where tmpfs is the amount of memory + used up by `tmpfs` filesystems mounted by this + control group). Now, what's the difference between "active" and + "inactive"? Pages are initially "active"; and at regular intervals, + the kernel sweeps over the memory, and tags some pages as + "inactive". Whenever they are accessed again, they are immediately + retagged "active". When the kernel is almost out of memory, and time + comes to swap out to disk, the kernel will swap "inactive" pages. + + - **active_file and inactive_file:** + cache memory, with *active* and *inactive* similar to the *anon* + memory above. The exact formula is cache = **active_file** + + **inactive_file** + **tmpfs**. The exact rules used by the kernel + to move memory pages between active and inactive sets are different + from the ones used for anonymous memory, but the general principle + is the same. 
Note that when the kernel needs to reclaim memory, it + is cheaper to reclaim a clean (=non modified) page from this pool, + since it can be reclaimed immediately (while anonymous pages and + dirty/modified pages have to be written to disk first). + + - **unevictable:** + the amount of memory that cannot be reclaimed; generally, it will + account for memory that has been "locked" with `mlock`. + It is often used by crypto frameworks to make sure that + secret keys and other sensitive material never gets swapped out to + disk. + + - **memory and memsw limits:** + These are not really metrics, but a reminder of the limits applied + to this cgroup. The first one indicates the maximum amount of + physical memory that can be used by the processes of this control + group; the second one indicates the maximum amount of RAM+swap. Accounting for memory in the page cache is very complex. If two processes in different control groups both read the same file (ultimately relying on the same blocks on disk), the corresponding -memory charge will be split between the control groups. It’s nice, but +memory charge will be split between the control groups. It's nice, but it also means that when a cgroup is terminated, it could increase the memory usage of another cgroup, because they are not splitting the cost anymore for those memory pages. ### CPU metrics: `cpuacct.stat` -Now that we’ve covered memory metrics, everything else will look very +Now that we've covered memory metrics, everything else will look very simple in comparison. CPU metrics will be found in the `cpuacct` controller. For each container, you will find a pseudo-file `cpuacct.stat`, containing the CPU usage accumulated by the processes of the container, -broken down between `user` and `system` time. If you’re not familiar +broken down between `user` and `system` time. If you're not familiar with the distinction, `user` is the time during which the processes were in direct control of the CPU (i.e. executing process code), and `system` is the time during which the CPU was executing system calls on behalf of @@ -217,43 +220,47 @@ they are expressed in "user jiffies". There are `USER_HZ` *"jiffies"* per second, and on x86 systems, `USER_HZ` is 100. This used to map exactly to the number of scheduler "ticks" per second; but with the advent of higher -frequency scheduling, as well as [tickless -kernels](http://lwn.net/Articles/549580/), the number of kernel ticks -wasn’t relevant anymore. It stuck around anyway, mainly for legacy and +frequency scheduling, as well as [tickless kernels]( +http://lwn.net/Articles/549580/), the number of kernel ticks +wasn't relevant anymore. It stuck around anyway, mainly for legacy and compatibility reasons. ### Block I/O metrics Block I/O is accounted in the `blkio` controller. Different metrics are scattered across different files. While you can -find in-depth details in the -[blkio-controller](https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt) +find in-depth details in the [blkio-controller]( +https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt) file in the kernel documentation, here is a short list of the most relevant ones: -blkio.sectors -: contain the number of 512-bytes sectors read and written by the - processes member of the cgroup, device by device. Reads and writes - are merged in a single counter. -blkio.io\_service\_bytes -: indicates the number of bytes read and written by the cgroup. 
It has - 4 counters per device, because for each device, it differentiates - between synchronous vs. asynchronous I/O, and reads vs. writes. -blkio.io\_serviced -: the number of I/O operations performed, regardless of their size. It - also has 4 counters per device. -blkio.io\_queued -: indicates the number of I/O operations currently queued for this - cgroup. In other words, if the cgroup isn’t doing any I/O, this will - be zero. Note that the opposite is not true. In other words, if - there is no I/O queued, it does not mean that the cgroup is idle - (I/O-wise). It could be doing purely synchronous reads on an - otherwise quiescent device, which is therefore able to handle them - immediately, without queuing. Also, while it is helpful to figure - out which cgroup is putting stress on the I/O subsystem, keep in - mind that is is a relative quantity. Even if a process group does - not perform more I/O, its queue size can increase just because the - device load increases because of other devices. + + - **blkio.sectors:** + contains the number of 512-byte sectors read and written by the + processes member of the cgroup, device by device. Reads and writes + are merged in a single counter. + + - **blkio.io_service_bytes:** + indicates the number of bytes read and written by the cgroup. It has + 4 counters per device, because for each device, it differentiates + between synchronous vs. asynchronous I/O, and reads vs. writes. + + - **blkio.io_serviced:** + the number of I/O operations performed, regardless of their size. It + also has 4 counters per device. + + - **blkio.io_queued:** + indicates the number of I/O operations currently queued for this + cgroup. In other words, if the cgroup isn't doing any I/O, this will + be zero. Note that the opposite is not true. In other words, if + there is no I/O queued, it does not mean that the cgroup is idle + (I/O-wise). It could be doing purely synchronous reads on an + otherwise quiescent device, which is therefore able to handle them + immediately, without queuing. Also, while it is helpful to figure + out which cgroup is putting stress on the I/O subsystem, keep in + mind that it is a relative quantity. Even if a process group does + not perform more I/O, its queue size can increase just because the + device load increases because of other devices. ## Network Metrics @@ -261,9 +268,9 @@ Network metrics are not exposed directly by control groups. There is a good explanation for that: network interfaces exist within the context of *network namespaces*. The kernel could probably accumulate metrics about packets and bytes sent and received by a group of processes, but -those metrics wouldn’t be very useful. You want per-interface metrics +those metrics wouldn't be very useful. You want per-interface metrics (because traffic happening on the local `lo` -interface doesn’t really count). But since processes in a single cgroup +interface doesn't really count). But since processes in a single cgroup can belong to multiple network namespaces, those metrics would be harder to interpret: multiple network namespaces means multiple `lo` interfaces, potentially multiple `eth0` @@ -324,7 +331,7 @@ The `ip-netns exec` command will let you execute any program (present in the host system) within any network namespace visible to the current process. This means that your host will be able to enter the network namespace of your containers, but your containers -won’t be able to access the host, nor their sibling containers. 
+won't be able to access the host, nor their sibling containers. Containers will be able to “see” and affect their sub-containers, though. @@ -351,11 +358,9 @@ those pseudo-files. (Symlinks are accepted.) In other words, to execute a command within the network namespace of a container, we need to: -- Find out the PID of any process within the container that we want to - investigate; -- Create a symlink from `/var/run/netns/` - to `/proc//ns/net` -- Execute `ip netns exec ....` +- Find out the PID of any process within the container that we want to investigate; +- Create a symlink from `/var/run/netns/` to `/proc//ns/net` +- Execute `ip netns exec ....` Please review [*Enumerating Cgroups*](#enumerating-cgroups) to learn how to find the cgroup of a pprocess running in the container of which you want to @@ -386,7 +391,7 @@ write your metric collector in C (or any language that lets you do low-level system calls). You need to use a special system call, `setns()`, which lets the current process enter any arbitrary namespace. It requires, however, an open file descriptor to -the namespace pseudo-file (remember: that’s the pseudo-file in +the namespace pseudo-file (remember: that's the pseudo-file in `/proc//ns/net`). However, there is a catch: you must not keep this file descriptor open. @@ -409,26 +414,26 @@ carefully cleans up after itself, but it is still possible. It is usually easier to collect metrics at regular intervals (e.g. every minute, with the collectd LXC plugin) and rely on that instead. -But, if you’d still like to gather the stats when a container stops, +But, if you'd still like to gather the stats when a container stops, here is how: For each container, start a collection process, and move it to the control groups that you want to monitor by writing its PID to the tasks file of the cgroup. The collection process should periodically re-read -the tasks file to check if it’s the last process of the control group. +the tasks file to check if it's the last process of the control group. (If you also want to collect network statistics as explained in the previous section, you should also move the process to the appropriate network namespace.) When the container exits, `lxc-start` will try to delete the control groups. It will fail, since the control group is -still in use; but that’s fine. You process should now detect that it is +still in use; but that's fine. You process should now detect that it is the only one remaining in the group. Now is the right time to collect all the metrics you need! Finally, your process should move itself back to the root control group, and remove the container control group. To remove a control group, just -`rmdir` its directory. It’s counter-intuitive to +`rmdir` its directory. It's counter-intuitive to `rmdir` a directory as it still contains files; but -remember that this is a pseudo-filesystem, so usual rules don’t apply. +remember that this is a pseudo-filesystem, so usual rules don't apply. After the cleanup is done, the collection process can exit safely. 
diff --git a/docs/sources/articles/security.md b/docs/sources/articles/security.md index 1a438295e7..69284db836 100644 --- a/docs/sources/articles/security.md +++ b/docs/sources/articles/security.md @@ -9,11 +9,11 @@ page_keywords: Docker, Docker documentation, security There are three major areas to consider when reviewing Docker security: -- the intrinsic security of containers, as implemented by kernel - namespaces and cgroups; -- the attack surface of the Docker daemon itself; -- the "hardening" security features of the kernel and how they - interact with containers. + - the intrinsic security of containers, as implemented by kernel + namespaces and cgroups; + - the attack surface of the Docker daemon itself; + - the "hardening" security features of the kernel and how they + interact with containers. ## Kernel Namespaces @@ -33,12 +33,12 @@ less affect, processes running in another container, or in the host system. **Each container also gets its own network stack**, meaning that a -container doesn’t get a privileged access to the sockets or interfaces +container doesn't get a privileged access to the sockets or interfaces of another container. Of course, if the host system is setup accordingly, containers can interact with each other through their respective network interfaces — just like they can interact with external hosts. When you specify public ports for your containers or use -[*links*](../../use/working_with_links_names/#working-with-links-names) +[*links*](/use/working_with_links_names/#working-with-links-names) then IP traffic is allowed between containers. They can ping each other, send/receive UDP packets, and establish TCP connections, but that can be restricted if necessary. From a network architecture point of view, all @@ -54,8 +54,8 @@ This means that since July 2008 (date of the 2.6.26 release, now 5 years ago), namespace code has been exercised and scrutinized on a large number of production systems. And there is more: the design and inspiration for the namespaces code are even older. Namespaces are -actually an effort to reimplement the features of -[OpenVZ](http://en.wikipedia.org/wiki/OpenVZ) in such a way that they +actually an effort to reimplement the features of [OpenVZ]( +http://en.wikipedia.org/wiki/OpenVZ) in such a way that they could be merged within the mainstream kernel. And OpenVZ was initially released in 2005, so both the design and the implementation are pretty mature. @@ -90,11 +90,10 @@ Docker daemon**. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you -can start a container where the `/host` directory -will be the `/` directory on your host; and the -container will be able to alter your host filesystem without any -restriction. This sounds crazy? Well, you have to know that **all -virtualization systems allowing filesystem resource sharing behave the +can start a container where the `/host` directory will be the `/` directory +on your host; and the container will be able to alter your host filesystem +without any restriction. This sounds crazy? Well, you have to know that +**all virtualization systems allowing filesystem resource sharing behave the same way**. Nothing prevents you from sharing your root filesystem (or even your root block device) with a virtual machine. @@ -120,8 +119,8 @@ and client SSL certificates. 
Recent improvements in Linux namespaces will soon allow to run full-featured containers without root privileges, thanks to the new user -namespace. This is covered in detail -[here](http://s3hh.wordpress.com/2013/07/19/creating-and-using-containers-without-privilege/). +namespace. This is covered in detail [here]( +http://s3hh.wordpress.com/2013/07/19/creating-and-using-containers-without-privilege/). Moreover, this will solve the problem caused by sharing filesystems between host and guest, since the user namespace allows users within containers (including the root user) to be mapped to other users in the @@ -130,13 +129,13 @@ host system. The end goal for Docker is therefore to implement two additional security improvements: -- map the root user of a container to a non-root user of the Docker - host, to mitigate the effects of a container-to-host privilege - escalation; -- allow the Docker daemon to run without root privileges, and delegate - operations requiring those privileges to well-audited sub-processes, - each with its own (very limited) scope: virtual network setup, - filesystem management, etc. + - map the root user of a container to a non-root user of the Docker + host, to mitigate the effects of a container-to-host privilege + escalation; + - allow the Docker daemon to run without root privileges, and delegate + operations requiring those privileges to well-audited sub-processes, + each with its own (very limited) scope: virtual network setup, + filesystem management, etc. Finally, if you run Docker on a server, it is recommended to run exclusively Docker in the server, and move all other services within @@ -152,11 +151,11 @@ capabilities. What does that mean? Capabilities turn the binary "root/non-root" dichotomy into a fine-grained access control system. Processes (like web servers) that just need to bind on a port below 1024 do not have to run as root: they -can just be granted the `net_bind_service` -capability instead. And there are many other capabilities, for almost -all the specific areas where root privileges are usually needed. +can just be granted the `net_bind_service` capability instead. And there +are many other capabilities, for almost all the specific areas where root +privileges are usually needed. -This means a lot for container security; let’s see why! +This means a lot for container security; let's see why! Your average server (bare metal or virtual machine) needs to run a bunch of processes as root. Those typically include SSH, cron, syslogd; @@ -165,41 +164,41 @@ tools (to handle e.g. DHCP, WPA, or VPNs), and much more. A container is very different, because almost all of those tasks are handled by the infrastructure around the container: -- SSH access will typically be managed by a single server running in - the Docker host; -- `cron`, when necessary, should run as a user - process, dedicated and tailored for the app that needs its - scheduling service, rather than as a platform-wide facility; -- log management will also typically be handed to Docker, or by - third-party services like Loggly or Splunk; -- hardware management is irrelevant, meaning that you never need to - run `udevd` or equivalent daemons within - containers; -- network management happens outside of the containers, enforcing - separation of concerns as much as possible, meaning that a container - should never need to perform `ifconfig`, - `route`, or ip commands (except when a container - is specifically engineered to behave like a router or firewall, of - course). 
+ - SSH access will typically be managed by a single server running in + the Docker host; + - `cron`, when necessary, should run as a user + process, dedicated and tailored for the app that needs its + scheduling service, rather than as a platform-wide facility; + - log management will also typically be handed to Docker, or by + third-party services like Loggly or Splunk; + - hardware management is irrelevant, meaning that you never need to + run `udevd` or equivalent daemons within + containers; + - network management happens outside of the containers, enforcing + separation of concerns as much as possible, meaning that a container + should never need to perform `ifconfig`, + `route`, or ip commands (except when a container + is specifically engineered to behave like a router or firewall, of + course). This means that in most cases, containers will not need "real" root privileges *at all*. And therefore, containers can run with a reduced capability set; meaning that "root" within a container has much less privileges than the real "root". For instance, it is possible to: -- deny all "mount" operations; -- deny access to raw sockets (to prevent packet spoofing); -- deny access to some filesystem operations, like creating new device - nodes, changing the owner of files, or altering attributes - (including the immutable flag); -- deny module loading; -- and many others. + - deny all "mount" operations; + - deny access to raw sockets (to prevent packet spoofing); + - deny access to some filesystem operations, like creating new device + nodes, changing the owner of files, or altering attributes (including + the immutable flag); + - deny module loading; + - and many others. This means that even if an intruder manages to escalate to root within a container, it will be much harder to do serious damage, or to escalate to the host. -This won’t affect regular web apps; but malicious users will find that +This won't affect regular web apps; but malicious users will find that the arsenal at their disposal has shrunk considerably! You can see [the list of dropped capabilities in the Docker code](https://github.com/dotcloud/docker/blob/v0.5.0/lxc_template.go#L97), @@ -217,28 +216,28 @@ modern Linux kernels. It is also possible to leverage existing, well-known systems like TOMOYO, AppArmor, SELinux, GRSEC, etc. with Docker. -While Docker currently only enables capabilities, it doesn’t interfere +While Docker currently only enables capabilities, it doesn't interfere with the other systems. This means that there are many different ways to harden a Docker host. Here are a few examples. -- You can run a kernel with GRSEC and PAX. This will add many safety - checks, both at compile-time and run-time; it will also defeat many - exploits, thanks to techniques like address randomization. It - doesn’t require Docker-specific configuration, since those security - features apply system-wide, independently of containers. -- If your distribution comes with security model templates for LXC - containers, you can use them out of the box. For instance, Ubuntu - comes with AppArmor templates for LXC, and those templates provide - an extra safety net (even though it overlaps greatly with - capabilities). -- You can define your own policies using your favorite access control - mechanism. Since Docker containers are standard LXC containers, - there is nothing “magic” or specific to Docker. + - You can run a kernel with GRSEC and PAX. 
This will add many safety + checks, both at compile-time and run-time; it will also defeat many + exploits, thanks to techniques like address randomization. It + doesn't require Docker-specific configuration, since those security + features apply system-wide, independently of containers. + - If your distribution comes with security model templates for LXC + containers, you can use them out of the box. For instance, Ubuntu + comes with AppArmor templates for LXC, and those templates provide + an extra safety net (even though it overlaps greatly with + capabilities). + - You can define your own policies using your favorite access control + mechanism. Since Docker containers are standard LXC containers, + there is nothing “magic” or specific to Docker. Just like there are many third-party tools to augment Docker containers with e.g. special network topologies or shared filesystems, you can expect to see tools to harden existing Docker containers without -affecting Docker’s core. +affecting Docker's core. ## Conclusions @@ -254,5 +253,5 @@ containerization systems, you will be able to implement them as well with Docker, since everything is provided by the kernel anyway. For more context and especially for comparisons with VMs and other -container systems, please also see the [original blog -post](http://blog.docker.io/2013/08/containers-docker-how-secure-are-they/). +container systems, please also see the [original blog post]( +http://blog.docker.io/2013/08/containers-docker-how-secure-are-they/). diff --git a/docs/sources/contributing.md b/docs/sources/contributing.md index b311d13f8c..0a1e4fd282 100644 --- a/docs/sources/contributing.md +++ b/docs/sources/contributing.md @@ -2,6 +2,6 @@ ## Contents: -- [Contributing to Docker](contributing/) -- [Setting Up a Dev Environment](devenvironment/) + - [Contributing to Docker](contributing/) + - [Setting Up a Dev Environment](devenvironment/) diff --git a/docs/sources/contributing/contributing.md b/docs/sources/contributing/contributing.md index 9e2ad19073..dd764eb855 100644 --- a/docs/sources/contributing/contributing.md +++ b/docs/sources/contributing/contributing.md @@ -6,19 +6,19 @@ page_keywords: contributing, docker, documentation, help, guideline Want to hack on Docker? Awesome! -The repository includes [all the instructions you need to get -started](https://github.com/dotcloud/docker/blob/master/CONTRIBUTING.md). +The repository includes [all the instructions you need to get started]( +https://github.com/dotcloud/docker/blob/master/CONTRIBUTING.md). -The [developer environment -Dockerfile](https://github.com/dotcloud/docker/blob/master/Dockerfile) +The [developer environment Dockerfile]( +https://github.com/dotcloud/docker/blob/master/Dockerfile) specifies the tools and versions used to test and build Docker. -If you’re making changes to the documentation, see the -[README.md](https://github.com/dotcloud/docker/blob/master/docs/README.md). +If you're making changes to the documentation, see the [README.md]( +https://github.com/dotcloud/docker/blob/master/docs/README.md). -The [documentation environment -Dockerfile](https://github.com/dotcloud/docker/blob/master/docs/Dockerfile) +The [documentation environment Dockerfile]( +https://github.com/dotcloud/docker/blob/master/docs/Dockerfile) specifies the tools and versions used to build the Documentation. -Further interesting details can be found in the [Packaging -hints](https://github.com/dotcloud/docker/blob/master/hack/PACKAGERS.md). 
+Further interesting details can be found in the [Packaging hints]( +https://github.com/dotcloud/docker/blob/master/hack/PACKAGERS.md). diff --git a/docs/sources/contributing/devenvironment.md b/docs/sources/contributing/devenvironment.md index 6551d9fbac..f7c66274e8 100644 --- a/docs/sources/contributing/devenvironment.md +++ b/docs/sources/contributing/devenvironment.md @@ -12,18 +12,18 @@ binaries, go environment, go dependencies, etc. ## Install Docker -Docker’s build environment itself is a Docker container, so the first +Docker's build environment itself is a Docker container, so the first step is to install Docker on your system. You can follow the [install instructions most relevant to your -system](https://docs.docker.io/en/latest/installation/). Make sure you +system](https://docs.docker.io/installation/). Make sure you have a working, up-to-date docker installation, then continue to the next step. ## Install tools used for this tutorial -Install `git`; honest, it’s very good. You can use -other ways to get the Docker source, but they’re not anywhere near as +Install `git`; honest, it's very good. You can use +other ways to get the Docker source, but they're not anywhere near as easy. Install `make`. This tutorial uses our base Makefile @@ -56,8 +56,7 @@ To create the Docker binary, run this command: sudo make binary -This will create the Docker binary in -`./bundles/-dev/binary/` +This will create the Docker binary in `./bundles/-dev/binary/` ### Using your built Docker binary @@ -107,10 +106,10 @@ something like this ok github.com/dotcloud/docker/utils 0.017s If $TESTFLAGS is set in the environment, it is passed as extra -arguments to ‘go test’. You can use this to select certain tests to run, +arguments to `go test`. You can use this to select certain tests to run, eg. -> TESTFLAGS=’-run \^TestBuild\$’ make test + TESTFLAGS='-run ^TestBuild$' make test If the output indicates "FAIL" and you see errors like this: @@ -118,7 +117,7 @@ If the output indicates "FAIL" and you see errors like this: utils_test.go:179: Error copy: exit status 1 (cp: writing '/tmp/docker-testd5c9-[...]': No space left on device -Then you likely don’t have enough memory available the test suite. 2GB +Then you likely don't have enough memory available to run the test suite. 2GB is recommended. ## Use Docker @@ -135,13 +134,14 @@ If you want to read the documentation from a local website, or are making changes to it, you can build the documentation and then serve it by: - sudo make docs + sudo make docs + # when it's done, you can point your browser to http://yourdockerhost:8000 - # type Ctrl-C to exit + # type Ctrl-C to exit **Need More Help?** -If you need more help then hop on to the [\#docker-dev IRC +If you need more help then hop on to the [#docker-dev IRC channel](irc://chat.freenode.net#docker-dev) or post a message on the [Docker developer mailing list](https://groups.google.com/d/forum/docker-dev). diff --git a/docs/sources/examples.md b/docs/sources/examples.md index 98b3d25893..f1d1567f52 100644 --- a/docs/sources/examples.md +++ b/docs/sources/examples.md @@ -9,17 +9,17 @@ substantial services like those which you might find in production. 
## Contents: -- [Check your Docker install](hello_world/) -- [Hello World](hello_world/#hello-world) -- [Hello World Daemon](hello_world/#hello-world-daemon) -- [Node.js Web App](nodejs_web_app/) -- [Redis Service](running_redis_service/) -- [SSH Daemon Service](running_ssh_service/) -- [CouchDB Service](couchdb_data_volumes/) -- [PostgreSQL Service](postgresql_service/) -- [Building an Image with MongoDB](mongodb/) -- [Riak Service](running_riak_service/) -- [Using Supervisor with Docker](using_supervisord/) -- [Process Management with CFEngine](cfengine_process_management/) -- [Python Web App](python_web_app/) + - [Check your Docker install](hello_world/) + - [Hello World](hello_world/#hello-world) + - [Hello World Daemon](hello_world/#hello-world-daemon) + - [Node.js Web App](nodejs_web_app/) + - [Redis Service](running_redis_service/) + - [SSH Daemon Service](running_ssh_service/) + - [CouchDB Service](couchdb_data_volumes/) + - [PostgreSQL Service](postgresql_service/) + - [Building an Image with MongoDB](mongodb/) + - [Riak Service](running_riak_service/) + - [Using Supervisor with Docker](using_supervisord/) + - [Process Management with CFEngine](cfengine_process_management/) + - [Python Web App](python_web_app/) diff --git a/docs/sources/examples/apt-cacher-ng.md b/docs/sources/examples/apt-cacher-ng.md index c7fee5542a..0293ac5d0b 100644 --- a/docs/sources/examples/apt-cacher-ng.md +++ b/docs/sources/examples/apt-cacher-ng.md @@ -9,13 +9,13 @@ page_keywords: docker, example, package installation, networking, debian, ubuntu > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup). -> - **If you’re using OS X or docker via TCP** then you shouldn’t use +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup). +> - **If you're using OS X or docker via TCP** then you shouldn't use > sudo. When you have multiple Docker servers, or build unrelated Docker -containers which can’t make use of the Docker build cache, it can be +containers which can't make use of the Docker build cache, it can be useful to have a caching proxy for your packages. This container makes the second download of any package almost instant. @@ -45,7 +45,7 @@ Then run it, mapping the exposed port to one on the host $ sudo docker run -d -p 3142:3142 --name test_apt_cacher_ng eg_apt_cacher_ng -To see the logfiles that are ‘tailed’ in the default command, you can +To see the logfiles that are `tailed` in the default command, you can use: $ sudo docker logs -f test_apt_cacher_ng @@ -53,13 +53,12 @@ use: To get your Debian-based containers to use the proxy, you can do one of three things -1. Add an apt Proxy setting - `echo 'Acquire::http { Proxy "http://dockerhost:3142"; };' >> /etc/apt/conf.d/01proxy` - -2. Set an environment variable: - `http_proxy=http://dockerhost:3142/` -3. Change your `sources.list` entries to start with - `http://dockerhost:3142/` +1. Add an apt Proxy setting + `echo 'Acquire::http { Proxy "http://dockerhost:3142"; };' >> /etc/apt/conf.d/01proxy` +2. Set an environment variable: + `http_proxy=http://dockerhost:3142/` +3. 
Change your `sources.list` entries to start with + `http://dockerhost:3142/` **Option 1** injects the settings safely into your apt configuration in a local version of a common base: diff --git a/docs/sources/examples/cfengine_process_management.md b/docs/sources/examples/cfengine_process_management.md index 45d6edcec4..965ad252d2 100644 --- a/docs/sources/examples/cfengine_process_management.md +++ b/docs/sources/examples/cfengine_process_management.md @@ -10,14 +10,14 @@ Docker monitors one process in each running container and the container lives or dies with that process. By introducing CFEngine inside Docker containers, we can alleviate a few of the issues that may arise: -- It is possible to easily start multiple processes within a - container, all of which will be managed automatically, with the - normal `docker run` command. -- If a managed process dies or crashes, CFEngine will start it again - within 1 minute. -- The container itself will live as long as the CFEngine scheduling - daemon (cf-execd) lives. With CFEngine, we are able to decouple the - life of the container from the uptime of the service it provides. + - It is possible to easily start multiple processes within a + container, all of which will be managed automatically, with the + normal `docker run` command. + - If a managed process dies or crashes, CFEngine will start it again + within 1 minute. + - The container itself will live as long as the CFEngine scheduling + daemon (cf-execd) lives. With CFEngine, we are able to decouple the + life of the container from the uptime of the service it provides. ## How it works @@ -25,23 +25,20 @@ CFEngine, together with the cfe-docker integration policies, are installed as part of the Dockerfile. This builds CFEngine into our Docker image. -The Dockerfile’s `ENTRYPOINT` takes an arbitrary +The Dockerfile's `ENTRYPOINT` takes an arbitrary amount of commands (with any desired arguments) as parameters. When we run the Docker container these parameters get written to CFEngine policies and CFEngine takes over to ensure that the desired processes are running in the container. -CFEngine scans the process table for the `basename` -of the commands given to the `ENTRYPOINT` and runs -the command to start the process if the `basename` +CFEngine scans the process table for the `basename` of the commands given +to the `ENTRYPOINT` and runs the command to start the process if the `basename` is not found. For example, if we start the container with -`docker run "/path/to/my/application parameters"`, -CFEngine will look for a process named `application` -and run the command. If an entry for `application` -is not found in the process table at any point in time, CFEngine will -execute `/path/to/my/application parameters` to -start the application once again. The check on the process table happens -every minute. +`docker run "/path/to/my/application parameters"`, CFEngine will look for a +process named `application` and run the command. If an entry for `application` +is not found in the process table at any point in time, CFEngine will execute +`/path/to/my/application parameters` to start the application once again. The +check on the process table happens every minute. Note that it is therefore important that the command to start your application leaves a process with the basename of the command. This can @@ -56,11 +53,10 @@ in a single container. There are three steps: -1. Install CFEngine into the container. -2. 
Copy the CFEngine Docker process management policy into the - containerized CFEngine installation. -3. Start your application processes as part of the - `docker run` command. +1. Install CFEngine into the container. +2. Copy the CFEngine Docker process management policy into the + containerized CFEngine installation. +3. Start your application processes as part of the `docker run` command. ### Building the container image @@ -90,25 +86,22 @@ The first two steps can be done as part of a Dockerfile, as follows. ENTRYPOINT ["/var/cfengine/bin/docker_processes_run.sh"] -By saving this file as `Dockerfile` to a working -directory, you can then build your container with the docker build -command, e.g. `docker build -t managed_image`. +By saving this file as Dockerfile to a working directory, you can then build +your container with the docker build command, e.g. +`docker build -t managed_image`. ### Testing the container -Start the container with `apache2` and -`sshd` running and managed, forwarding a port to our -SSH instance: +Start the container with `apache2` and `sshd` running and managed, forwarding +a port to our SSH instance: docker run -p 127.0.0.1:222:22 -d managed_image "/usr/sbin/sshd" "/etc/init.d/apache2 start" We now clearly see one of the benefits of the cfe-docker integration: it -allows to start several processes as part of a normal -`docker run` command. +allows to start several processes as part of a normal `docker run` command. -We can now log in to our new container and see that both -`apache2` and `sshd` are -running. We have set the root password to "password" in the Dockerfile +We can now log in to our new container and see that both `apache2` and `sshd` +are running. We have set the root password to "password" in the Dockerfile above and can use that to log in with ssh: ssh -p222 root@127.0.0.1 @@ -144,9 +137,8 @@ CFEngine. To make sure your applications get managed in the same manner, there are just two things you need to adjust from the above example: -- In the Dockerfile used above, install your applications instead of - `apache2` and `sshd`. -- When you start the container with `docker run`, - specify the command line arguments to your applications rather than - `apache2` and `sshd`. - + - In the Dockerfile used above, install your applications instead of + `apache2` and `sshd`. + - When you start the container with `docker run`, + specify the command line arguments to your applications rather than + `apache2` and `sshd`. diff --git a/docs/sources/examples/couchdb_data_volumes.md b/docs/sources/examples/couchdb_data_volumes.md index 1b18cf0aa7..10abe7af02 100644 --- a/docs/sources/examples/couchdb_data_volumes.md +++ b/docs/sources/examples/couchdb_data_volumes.md @@ -9,23 +9,22 @@ page_keywords: docker, example, package installation, networking, couchdb, data > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) -Here’s an example of using data volumes to share the same data between +Here's an example of using data volumes to share the same data between two CouchDB containers. This could be used for hot upgrades, testing different versions of CouchDB on the same data, etc. 
## Create first database -Note that we’re marking `/var/lib/couchdb` as a data -volume. +Note that we're marking `/var/lib/couchdb` as a data volume. COUCH1=$(sudo docker run -d -p 5984 -v /var/lib/couchdb shykes/couchdb:2013-05-03) ## Add data to the first database -We’re assuming your Docker host is reachable at `localhost`. If not, +We're assuming your Docker host is reachable at `localhost`. If not, replace `localhost` with the public IP of your Docker host. HOST=localhost @@ -34,7 +33,7 @@ replace `localhost` with the public IP of your Docker host. ## Create second database -This time, we’re requesting shared access to `$COUCH1`'s volumes. +This time, we're requesting shared access to `$COUCH1`'s volumes. COUCH2=$(sudo docker run -d -p 5984 --volumes-from $COUCH1 shykes/couchdb:2013-05-03) diff --git a/docs/sources/examples/hello_world.md b/docs/sources/examples/hello_world.md index 062d5d37b3..c7e821136c 100644 --- a/docs/sources/examples/hello_world.md +++ b/docs/sources/examples/hello_world.md @@ -15,7 +15,7 @@ like `/var/lib/docker/repositories: permission denied` you may have an incomplete Docker installation or insufficient privileges to access docker on your machine. -Please refer to [*Installation*](../../installation/) +Please refer to [*Installation*](/installation/) for installation instructions. ## Hello World @@ -25,8 +25,8 @@ for installation instructions. > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](#check-your-docker-installation). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) This is the most basic example available for using Docker. @@ -61,7 +61,6 @@ See the example in action - ## Hello World Daemon @@ -71,8 +70,8 @@ See the example in action > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](#check-your-docker-installation). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) And now for the most boring daemon ever written! @@ -87,16 +86,16 @@ continue to do this until we stop it. We are going to run a simple hello world daemon in a new container made from the `ubuntu` image. -- **"sudo docker run -d "** run a command in a new container. We pass - "-d" so it runs as a daemon. -- **"ubuntu"** is the image we want to run the command inside of. -- **"/bin/sh -c"** is the command we want to run in the container -- **"while true; do echo hello world; sleep 1; done"** is the mini - script we want to run, that will just print hello world once a - second until we stop it. -- **$container_id** the output of the run command will return a - container id, we can use in future commands to see what is going on - with this process. + - **"sudo docker run -d "** run a command in a new container. We pass + "-d" so it runs as a daemon. + - **"ubuntu"** is the image we want to run the command inside of. + - **"/bin/sh -c"** is the command we want to run in the container + - **"while true; do echo hello world; sleep 1; done"** is the mini + script we want to run, that will just print hello world once a + second until we stop it. 
+ - **$container_id** the output of the run command is a container id,
+   which we can use in future commands to see what is going on with this
+   process.

@@ -104,8 +103,8 @@ from the `ubuntu` image.

 Check the logs to make sure it is working correctly.

-- **"docker logs**" This will return the logs for a container
-- **$container_id** The Id of the container we want the logs for.
+ - **"docker logs"** This will return the logs for a container.
+ - **$container_id** The Id of the container we want the logs for.


@@ -113,12 +112,12 @@ Check the logs make sure it is working correctly.

 Attach to the container to see the results in real-time.

-- **"docker attach**" This will allow us to attach to a background
-  process to see what is going on.
-- **"–sig-proxy=false"** Do not forward signals to the container;
-  allows us to exit the attachment using Control-C without stopping
-  the container.
-- **$container_id** The Id of the container we want to attach to.
+ - **"docker attach"** This will allow us to attach to a background
+   process to see what is going on.
+ - **"--sig-proxy=false"** Do not forward signals to the container;
+   allows us to exit the attachment using Control-C without stopping
+   the container.
+ - **$container_id** The Id of the container we want to attach to.

 Exit from the container attachment by pressing Control-C.

@@ -126,16 +125,16 @@ Exit from the container attachment by pressing Control-C.

 Check the process list to make sure it is running.

-- **"docker ps"** this shows all running process managed by docker
+ - **"docker ps"** This shows all running processes managed by Docker.

     sudo docker stop $container_id

-Stop the container, since we don’t need it anymore.
+Stop the container, since we don't need it anymore.

-- **"docker stop"** This stops a container
-- **$container_id** The Id of the container we want to stop.
+ - **"docker stop"** This stops a container.
+ - **$container_id** The Id of the container we want to stop.


@@ -151,16 +150,14 @@ See the example in action


-The next example in the series is a [*Node.js Web
-App*](../nodejs_web_app/#nodejs-web-app) example, or you could skip to
-any of the other examples:
-
-- [*Node.js Web App*](../nodejs_web_app/#nodejs-web-app)
-- [*Redis Service*](../running_redis_service/#running-redis-service)
-- [*SSH Daemon Service*](../running_ssh_service/#running-ssh-service)
-- [*CouchDB
-  Service*](../couchdb_data_volumes/#running-couchdb-service)
-- [*PostgreSQL Service*](../postgresql_service/#postgresql-service)
-- [*Building an Image with MongoDB*](../mongodb/#mongodb-image)
-- [*Python Web App*](../python_web_app/#python-web-app)
+The next example in the series is a [*Node.js Web App*](
+../nodejs_web_app/#nodejs-web-app) example, or you could skip to any of the
+other examples:
+
+ - [*Node.js Web App*](../nodejs_web_app/#nodejs-web-app)
+ - [*Redis Service*](../running_redis_service/#running-redis-service)
+ - [*SSH Daemon Service*](../running_ssh_service/#running-ssh-service)
+ - [*CouchDB Service*](../couchdb_data_volumes/#running-couchdb-service)
+ - [*PostgreSQL Service*](../postgresql_service/#postgresql-service)
+ - [*Building an Image with MongoDB*](../mongodb/#mongodb-image)
+ - [*Python Web App*](../python_web_app/#python-web-app)
diff --git a/docs/sources/examples/https.md b/docs/sources/examples/https.md
index 153a6c0cf9..c46cf6b88c 100644
--- a/docs/sources/examples/https.md
+++ b/docs/sources/examples/https.md
@@ -8,7 +8,7 @@ By default, Docker runs via a non-networked Unix socket.
It can also optionally communicate using a HTTP socket. If you need Docker reachable via the network in a safe manner, you can -enable TLS by specifying the tlsverify flag and pointing Docker’s +enable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate. In daemon mode, it will only allow connections from clients @@ -31,12 +31,12 @@ keys: Now that we have a CA, you can create a server key and certificate signing request. Make sure that "Common Name (e.g. server FQDN or YOUR name)" matches the hostname you will use to connect to Docker or just -use ‘\*’ for a certificate valid for any hostname: +use `\*` for a certificate valid for any hostname: $ openssl genrsa -des3 -out server-key.pem $ openssl req -new -key server-key.pem -out server.csr -Next we’re going to sign the key with our CA: +Next we're going to sign the key with our CA: $ openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \ -out server-cert.pem @@ -76,7 +76,7 @@ need to provide your client keys, certificates and trusted CA: -H=dns-name-of-docker-host:4243 > **Warning**: -> As shown in the example above, you don’t have to run the +> As shown in the example above, you don't have to run the > `docker` client with `sudo` or > the `docker` group when you use certificate > authentication. That means anyone with the keys can give any @@ -86,22 +86,22 @@ need to provide your client keys, certificates and trusted CA: ## Other modes -If you don’t want to have complete two-way authentication, you can run +If you don't want to have complete two-way authentication, you can run Docker in various other modes by mixing the flags. ### Daemon modes -- tlsverify, tlscacert, tlscert, tlskey set: Authenticate clients -- tls, tlscert, tlskey: Do not authenticate clients + - tlsverify, tlscacert, tlscert, tlskey set: Authenticate clients + - tls, tlscert, tlskey: Do not authenticate clients ### Client modes -- tls: Authenticate server based on public/default CA pool -- tlsverify, tlscacert: Authenticate server based on given CA -- tls, tlscert, tlskey: Authenticate with client certificate, do not - authenticate server based on given CA -- tlsverify, tlscacert, tlscert, tlskey: Authenticate with client - certificate, authenticate server based on given CA + - tls: Authenticate server based on public/default CA pool + - tlsverify, tlscacert: Authenticate server based on given CA + - tls, tlscert, tlskey: Authenticate with client certificate, do not + authenticate server based on given CA + - tlsverify, tlscacert, tlscert, tlskey: Authenticate with client + certificate, authenticate server based on given CA The client will send its client certificate if found, so you just need -to drop your keys into \~/.docker/\.pem +to drop your keys into ~/.docker/.pem diff --git a/docs/sources/examples/mongodb.md b/docs/sources/examples/mongodb.md index c9078419d6..36a5a58ad8 100644 --- a/docs/sources/examples/mongodb.md +++ b/docs/sources/examples/mongodb.md @@ -9,57 +9,57 @@ page_keywords: docker, example, package installation, networking, mongodb > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). 
-> - **If you don’t like sudo** then see [*Giving non-root
-> access*](../../installation/binaries/#dockergroup)
+> - **If you don't like sudo** then see [*Giving non-root
+> access*](/installation/binaries/#dockergroup)

 The goal of this example is to show how you can build your own Docker
 images with MongoDB pre-installed. We will do that by constructing a
-`Dockerfile` that downloads a base image, adds an
+Dockerfile that downloads a base image, adds an
 apt source and installs the database software on Ubuntu.

-## Creating a `Dockerfile`
+## Creating a Dockerfile

-Create an empty file called `Dockerfile`:
+Create an empty file called Dockerfile:

     touch Dockerfile

 Next, define the parent image you want to use to build your own image on
-top of. Here, we’ll use [Ubuntu](https://index.docker.io/_/ubuntu/)
+top of. Here, we'll use [Ubuntu](https://index.docker.io/_/ubuntu/)
 (tag: `latest`) available on the [docker
 index](http://index.docker.io):

     FROM ubuntu:latest

-Since we want to be running the latest version of MongoDB we’ll need to
+Since we want to be running the latest version of MongoDB we'll need to
 add the 10gen repo to our apt sources list.

     # Add 10gen official apt source to the sources list
     RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
     RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list

-Then, we don’t want Ubuntu to complain about init not being available so
-we’ll divert `/sbin/initctl` to
+Then, we don't want Ubuntu to complain about init not being available so
+we'll divert `/sbin/initctl` to
 `/bin/true` so it thinks everything is working.

     # Hack for initctl not being available in Ubuntu
     RUN dpkg-divert --local --rename --add /sbin/initctl
     RUN ln -s /bin/true /sbin/initctl

-Afterwards we’ll be able to update our apt repositories and install
+Afterwards we'll be able to update our apt repositories and install
 MongoDB

     # Install MongoDB
     RUN apt-get update
     RUN apt-get install mongodb-10gen

-To run MongoDB we’ll have to create the default data directory (because
+To run MongoDB we'll have to create the default data directory (because
 we want it to run without needing to provide a special configuration
 file)

     # Create the MongoDB data directory
     RUN mkdir -p /data/db

-Finally, we’ll expose the standard port that MongoDB runs on, 27107, as
+Finally, we'll expose the standard port that MongoDB runs on, 27017, as
 well as define an `ENTRYPOINT` instruction for the
 container.

@@ -67,7 +67,7 @@ container.

     ENTRYPOINT ["usr/bin/mongod"]

 Now, let's build the image which will go through the
-`Dockerfile` we made and run all of the commands.
+Dockerfile we made and run all of the commands.

     sudo docker build -t /mongodb .

diff --git a/docs/sources/examples/nodejs_web_app.md b/docs/sources/examples/nodejs_web_app.md
index 77d75047b6..f7d63dadcf 100644
--- a/docs/sources/examples/nodejs_web_app.md
+++ b/docs/sources/examples/nodejs_web_app.md
@@ -9,8 +9,8 @@ page_keywords: docker, example, package installation, node, centos

 > - This example assumes you have Docker running in daemon mode. For
 > more information please see [*Check your Docker
 > install*](../hello_world/#running-examples).
-> - **If you don’t like sudo** then see [*Giving non-root
-> access*](../../installation/binaries/#dockergroup)
+> - **If you don't like sudo** then see [*Giving non-root
+> access*](/installation/binaries/#dockergroup)

 The goal of this example is to show you how you can build your own
 Docker images from a parent image using a `Dockerfile`

@@ -52,11 +52,11 @@ app using the [Express.js](http://expressjs.com/) framework:

     app.listen(PORT);
     console.log('Running on http://localhost:' + PORT);

-In the next steps, we’ll look at how you can run this app inside a
-CentOS container using Docker. First, you’ll need to build a Docker
+In the next steps, we'll look at how you can run this app inside a
+CentOS container using Docker. First, you'll need to build a Docker
 image of your app.

-## Creating a `Dockerfile`
+## Creating a Dockerfile

 Create an empty file called `Dockerfile`:

@@ -69,47 +69,44 @@ requires to build (this example uses Docker 0.3.4):

     # DOCKER-VERSION 0.3.4

 Next, define the parent image you want to use to build your own image on
-top of. Here, we’ll use [CentOS](https://index.docker.io/_/centos/)
+top of. Here, we'll use [CentOS](https://index.docker.io/_/centos/)
 (tag: `6.4`) available on the [Docker
 index](https://index.docker.io/):

     FROM centos:6.4

-Since we’re building a Node.js app, you’ll have to install Node.js as
+Since we're building a Node.js app, you'll have to install Node.js as
 well as npm on your CentOS image. Node.js is required to run your app
-and npm to install your app’s dependencies defined in
+and npm to install your app's dependencies defined in
 `package.json`. To install the right package for
-CentOS, we’ll use the instructions from the [Node.js
-wiki](https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager#rhelcentosscientific-linux-6):
+CentOS, we'll use the instructions from the [Node.js wiki](
+https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager#rhelcentosscientific-linux-6):

     # Enable EPEL for Node.js
     RUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
     # Install Node.js and npm
     RUN yum install -y npm

-To bundle your app’s source code inside the Docker image, use the
-`ADD` instruction:
+To bundle your app's source code inside the Docker image, use the `ADD`
+instruction:

     # Bundle app source
     ADD . /src

-Install your app dependencies using the `npm`
-binary:
+Install your app dependencies using the `npm` binary:

     # Install app dependencies
     RUN cd /src; npm install

-Your app binds to port `8080` so you’ll use the
-`EXPOSE` instruction to have it mapped by the
-`docker` daemon:
+Your app binds to port `8080` so you'll use the `EXPOSE` instruction to have
+it mapped by the `docker` daemon:

     EXPOSE 8080

-Last but not least, define the command to run your app using
-`CMD` which defines your runtime, i.e.
-`node`, and the path to our app, i.e.
-`src/index.js` (see the step where we added the
-source to the container):
+Last but not least, define the command to run your app using `CMD` which
+defines your runtime, i.e. `node`, and the path to our app, i.e. `src/index.js`
+(see the step where we added the source to the container):

     CMD ["node", "/src/index.js"]

@@ -133,10 +130,9 @@ Your `Dockerfile` should now look like this:

 ## Building your image

-Go to the directory that has your `Dockerfile` and
-run the following command to build a Docker image.
The `-t` -flag let’s you tag your image so it’s easier to find later -using the `docker images` command: +Go to the directory that has your `Dockerfile` and run the following command +to build a Docker image. The `-t` flag let's you tag your image so it's easier +to find later using the `docker images` command: sudo docker build -t /centos-node-hello . @@ -151,10 +147,9 @@ Your image will now be listed by Docker: ## Run the image -Running your image with `-d` runs the container in -detached mode, leaving the container running in the background. The -`-p` flag redirects a public port to a private port -in the container. Run the image you previously built: +Running your image with `-d` runs the container in detached mode, leaving the +container running in the background. The `-p` flag redirects a public port to +a private port in the container. Run the image you previously built: sudo docker run -p 49160:8080 -d /centos-node-hello @@ -179,11 +174,10 @@ To test your app, get the the port of your app that Docker mapped: > ID IMAGE COMMAND ... PORTS > ecce33b30ebf gasi/centos-node-hello:latest node /src/index.js 49160->8080 -In the example above, Docker mapped the `8080` port -of the container to `49160`. +In the example above, Docker mapped the `8080` port of the container to `49160`. -Now you can call your app using `curl` (install if -needed via: `sudo apt-get install curl`): +Now you can call your app using `curl` (install if needed via: +`sudo apt-get install curl`): curl -i localhost:49160 @@ -200,5 +194,4 @@ We hope this tutorial helped you get up and running with Node.js and CentOS on Docker. You can get the full source code at [https://github.com/gasi/docker-node-hello](https://github.com/gasi/docker-node-hello). -Continue to [*Redis -Service*](../running_redis_service/#running-redis-service). +Continue to [*Redis Service*](../running_redis_service/#running-redis-service). diff --git a/docs/sources/examples/postgresql_service.md b/docs/sources/examples/postgresql_service.md index 053bf410c0..1a10cd4415 100644 --- a/docs/sources/examples/postgresql_service.md +++ b/docs/sources/examples/postgresql_service.md @@ -9,13 +9,13 @@ page_keywords: docker, example, package installation, postgresql > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) ## Installing PostgreSQL on Docker -Assuming there is no Docker image that suits your needs in [the -index](http://index.docker.io), you can create one yourself. +Assuming there is no Docker image that suits your needs in [the index]( +http://index.docker.io), you can create one yourself. Start by creating a new Dockerfile: @@ -25,7 +25,7 @@ Start by creating a new Dockerfile: > suitably secure. # - # example Dockerfile for http://docs.docker.io/en/latest/examples/postgresql_service/ + # example Dockerfile for http://docs.docker.io/examples/postgresql_service/ # FROM ubuntu @@ -87,7 +87,7 @@ And run the PostgreSQL server container (in the foreground): $ sudo docker run -rm -P -name pg_test eg_postgresql There are 2 ways to connect to the PostgreSQL server. 
We can use [*Link -Containers*](../../use/working_with_links_names/#working-with-links-names), +Containers*](/use/working_with_links_names/#working-with-links-names), or we can access it from our host (or the network). > **Note**: @@ -96,8 +96,8 @@ or we can access it from our host (or the network). ### Using container linking -Containers can be linked to another container’s ports directly using -`-link remote_name:local_alias` in the client’s +Containers can be linked to another container's ports directly using +`-link remote_name:local_alias` in the client's `docker run`. This will set a number of environment variables that can then be used to connect: diff --git a/docs/sources/examples/python_web_app.md b/docs/sources/examples/python_web_app.md index 2212f97139..e761003a9e 100644 --- a/docs/sources/examples/python_web_app.md +++ b/docs/sources/examples/python_web_app.md @@ -9,8 +9,8 @@ page_keywords: docker, example, python, web app > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) While using Dockerfiles is the preferred way to create maintainable and repeatable images, its useful to know how you can try things out and @@ -18,13 +18,13 @@ then commit your live changes to an image. The goal of this example is to show you how you can modify your own Docker images by making changes to a running container, and then saving -the results as a new image. We will do that by making a simple ‘hello -world’ Flask web application image. +the results as a new image. We will do that by making a simple `hello +world` Flask web application image. ## Download the initial image -Download the `shykes/pybuilder` Docker image from -the `http://index.docker.io` registry. +Download the `shykes/pybuilder` Docker image from the `http://index.docker.io` +registry. This image contains a `buildapp` script to download the web app and then `pip install` any required @@ -36,7 +36,7 @@ modules, and a `runapp` script that finds the > **Note**: > This container was built with a very old version of docker (May 2013 - > see [shykes/pybuilder](https://github.com/shykes/pybuilder) ), when the -> `Dockerfile` format was different, but the image can +> Dockerfile format was different, but the image can > still be used now. ## Interactively make some modifications @@ -49,7 +49,7 @@ the `$URL` variable. The container is given a name `pybuilder_run` which we will use in the next steps. While this example is simple, you could run any number of interactive -commands, try things out, and then exit when you’re done. +commands, try things out, and then exit when you're done. $ sudo docker run -i -t -name pybuilder_run shykes/pybuilder bash @@ -76,11 +76,11 @@ mapped to a local port $ sudo docker run -d -p 5000 --name web_worker /builds/github.com/shykes/helloflask/master /usr/local/bin/runapp -- **"docker run -d "** run a command in a new container. We pass "-d" - so it runs as a daemon. -- **"-p 5000"** the web app is going to listen on this port, so it - must be mapped from the container to the host system. -- **/usr/local/bin/runapp** is the command which starts the web app. + - **"docker run -d "** run a command in a new container. We pass "-d" + so it runs as a daemon. 
+ - **"-p 5000"** the web app is going to listen on this port, so it + must be mapped from the container to the host system. + - **/usr/local/bin/runapp** is the command which starts the web app. ## View the container logs @@ -93,7 +93,7 @@ another terminal and continue with the example while watching the result in the logs. $ sudo docker logs -f web_worker - * Running on http://0.0.0.0:5000/ + * Running on http://0.0.0.0:5000/ ## See the webapp output @@ -117,7 +117,7 @@ everything worked as planned you should see the line List `--all` the Docker containers. If this container had already finished running, it will still be listed here -with a status of ‘Exit 0’. +with a status of `Exit 0`. $ sudo docker stop web_worker $ sudo docker rm web_worker pybuilder_run diff --git a/docs/sources/examples/running_redis_service.md b/docs/sources/examples/running_redis_service.md index b67937fab5..2bfa8a05bc 100644 --- a/docs/sources/examples/running_redis_service.md +++ b/docs/sources/examples/running_redis_service.md @@ -9,8 +9,8 @@ page_keywords: docker, example, package installation, networking, redis > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) Very simple, no frills, Redis service attached to a web application using a link. @@ -33,26 +33,24 @@ Replace `` with your own user name. ## Run the service -Use the image we’ve just created and name your container -`redis`. +Use the image we've just created and name your container `redis`. -Running the service with `-d` runs the container in -detached mode, leaving the container running in the background. +Running the service with `-d` runs the container in detached mode, leaving +the container running in the background. -Importantly, we’re not exposing any ports on our container. Instead -we’re going to use a container link to provide access to our Redis +Importantly, we're not exposing any ports on our container. Instead +we're going to use a container link to provide access to our Redis database. sudo docker run --name redis -d /redis ## Create your web application container -Next we can create a container for our application. We’re going to use -the `-link` flag to create a link to the -`redis` container we’ve just created with an alias -of `db`. This will create a secure tunnel to the -`redis` container and expose the Redis instance -running inside that container to only this container. +Next we can create a container for our application. We're going to use +the `-link` flag to create a link to the `redis` container we've just +created with an alias of `db`. This will create a secure tunnel to the +`redis` container and expose the Redis instance running inside that +container to only this container. sudo docker run --link redis:db -i -t ubuntu:12.10 /bin/bash @@ -63,7 +61,7 @@ get the `redis-cli` binary to test our connection. apt-get -y install redis-server service redis-server stop -As we’ve used the `--link redis:db` option, Docker +As we've used the `--link redis:db` option, Docker has created some environment variables in our web application container. env | grep DB_ @@ -76,11 +74,10 @@ has created some environment variables in our web application container. 
DB_PORT_6379_TCP_ADDR=172.17.0.33 DB_PORT_6379_TCP_PROTO=tcp -We can see that we’ve got a small list of environment variables prefixed -with `DB`. The `DB` comes from -the link alias specified when we launched the container. Let’s use the -`DB_PORT_6379_TCP_ADDR` variable to connect to our -Redis container. +We can see that we've got a small list of environment variables prefixed +with `DB`. The `DB` comes from the link alias specified when we launched +the container. Let's use the `DB_PORT_6379_TCP_ADDR` variable to connect to +our Redis container. redis-cli -h $DB_PORT_6379_TCP_ADDR redis 172.17.0.33:6379> diff --git a/docs/sources/examples/running_riak_service.md b/docs/sources/examples/running_riak_service.md index ad0b20a628..61594f9cd8 100644 --- a/docs/sources/examples/running_riak_service.md +++ b/docs/sources/examples/running_riak_service.md @@ -9,20 +9,20 @@ page_keywords: docker, example, package installation, networking, riak > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) The goal of this example is to show you how to build a Docker image with Riak pre-installed. -## Creating a `Dockerfile` +## Creating a Dockerfile -Create an empty file called `Dockerfile`: +Create an empty file called Dockerfile: touch Dockerfile Next, define the parent image you want to use to build your image on top -of. We’ll use [Ubuntu](https://index.docker.io/_/ubuntu/) (tag: +of. We'll use [Ubuntu](https://index.docker.io/_/ubuntu/) (tag: `latest`), which is available on the [docker index](http://index.docker.io): @@ -43,13 +43,13 @@ Next, we update the APT cache and apply any updates: After that, we install and setup a few dependencies: -- `curl` is used to download Basho’s APT + - `curl` is used to download Basho's APT repository key -- `lsb-release` helps us derive the Ubuntu release + - `lsb-release` helps us derive the Ubuntu release codename -- `openssh-server` allows us to login to + - `openssh-server` allows us to login to containers remotely and join Riak nodes to form a cluster -- `supervisor` is used manage the OpenSSH and Riak + - `supervisor` is used manage the OpenSSH and Riak processes @@ -66,7 +66,7 @@ After that, we install and setup a few dependencies: RUN echo 'root:basho' | chpasswd -Next, we add Basho’s APT repository: +Next, we add Basho's APT repository: RUN curl -s http://apt.basho.com/gpg/basho.apt.key | apt-key add -- RUN echo "deb http://apt.basho.com $(lsb_release -cs) main" > /etc/apt/sources.list.d/basho.list @@ -98,10 +98,10 @@ are started: CMD ["/usr/bin/supervisord"] -## Create a `supervisord` configuration file +## Create a supervisord configuration file Create an empty file called `supervisord.conf`. Make -sure it’s at the same directory level as your `Dockerfile`: +sure it's at the same directory level as your Dockerfile: touch supervisord.conf @@ -131,7 +131,7 @@ Now you should be able to build a Docker image for Riak: ## Next steps Riak is a distributed database. Many production deployments consist of -[at least five -nodes](http://basho.com/why-your-riak-cluster-should-have-at-least-five-nodes/). +[at least five nodes]( +http://basho.com/why-your-riak-cluster-should-have-at-least-five-nodes/). 
See the [docker-riak](https://github.com/hectcastro/docker-riak) project details on how to deploy a Riak cluster using Docker and Pipework. diff --git a/docs/sources/examples/running_ssh_service.md b/docs/sources/examples/running_ssh_service.md index 112b9fa441..864d10c726 100644 --- a/docs/sources/examples/running_ssh_service.md +++ b/docs/sources/examples/running_ssh_service.md @@ -8,11 +8,11 @@ page_keywords: docker, example, package installation, networking > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) The following Dockerfile sets up an sshd service in a container that you -can use to connect to and inspect other container’s volumes, or to get +can use to connect to and inspect other container's volumes, or to get quick access to a test container. # sshd @@ -38,14 +38,14 @@ Build the image using: $ sudo docker build -rm -t eg_sshd . Then run it. You can then use `docker port` to find -out what host port the container’s port 22 is mapped to: +out what host port the container's port 22 is mapped to: $ sudo docker run -d -P -name test_sshd eg_sshd $ sudo docker port test_sshd 22 0.0.0.0:49154 And now you can ssh to port `49154` on the Docker -daemon’s host IP address (`ip address` or +daemon's host IP address (`ip address` or `ifconfig` can tell you that): $ ssh root@192.168.1.2 -p 49154 diff --git a/docs/sources/examples/using_supervisord.md b/docs/sources/examples/using_supervisord.md index 3a0793710f..8e85ae05d2 100644 --- a/docs/sources/examples/using_supervisord.md +++ b/docs/sources/examples/using_supervisord.md @@ -9,25 +9,25 @@ page_keywords: docker, supervisor, process management > - This example assumes you have Docker running in daemon mode. For > more information please see [*Check your Docker > install*](../hello_world/#running-examples). -> - **If you don’t like sudo** then see [*Giving non-root -> access*](../../installation/binaries/#dockergroup) +> - **If you don't like sudo** then see [*Giving non-root +> access*](/installation/binaries/#dockergroup) Traditionally a Docker container runs a single process when it is launched, for example an Apache daemon or a SSH server daemon. Often though you want to run more than one process in a container. There are a number of ways you can achieve this ranging from using a simple Bash -script as the value of your container’s `CMD` +script as the value of your container's `CMD` instruction to installing a process management tool. -In this example we’re going to make use of the process management tool, +In this example we're going to make use of the process management tool, [Supervisor](http://supervisord.org/), to manage multiple processes in our container. Using Supervisor allows us to better control, manage, and -restart the processes we want to run. To demonstrate this we’re going to +restart the processes we want to run. To demonstrate this we're going to install and manage both an SSH daemon and an Apache daemon. ## Creating a Dockerfile -Let’s start by creating a basic `Dockerfile` for our +Let's start by creating a basic `Dockerfile` for our new image. FROM ubuntu:latest @@ -45,20 +45,20 @@ our container. 
     RUN mkdir -p /var/run/sshd
     RUN mkdir -p /var/log/supervisor

-Here we’re installing the `openssh-server`,
+Here we're installing the `openssh-server`,
 `apache2` and `supervisor`
-(which provides the Supervisor daemon) packages. We’re also creating two
+(which provides the Supervisor daemon) packages. We're also creating two
 new directories that are needed to run our SSH daemon and Supervisor.

-## Adding Supervisor’s configuration file
+## Adding Supervisor's configuration file

-Now let’s add a configuration file for Supervisor. The default file is
+Now let's add a configuration file for Supervisor. The default file is
 called `supervisord.conf` and is located in
 `/etc/supervisor/conf.d/`.

     ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf

-Let’s see what is inside our `supervisord.conf`
+Let's see what is inside our `supervisord.conf`
 file.

     [supervisord]

@@ -73,7 +73,7 @@ file.

 The `supervisord.conf` configuration file contains directives that
 configure Supervisor and the processes it manages. The first block
 `[supervisord]` provides configuration
-for Supervisor itself. We’re using one directive, `nodaemon`
+for Supervisor itself. We're using one directive, `nodaemon`
 which tells Supervisor to run interactively rather than daemonize.

@@ -84,14 +84,14 @@ start each process.

 ## Exposing ports and running Supervisor

-Now let’s finish our `Dockerfile` by exposing some
+Now let's finish our `Dockerfile` by exposing some
 required ports and specifying the `CMD`
 instruction to start Supervisor when our container launches.

     EXPOSE 22 80
     CMD ["/usr/bin/supervisord"]

-Here we’ve exposed ports 22 and 80 on the container and we’re running
+Here we've exposed ports 22 and 80 on the container and we're running
 the `/usr/bin/supervisord` binary when the
 container launches.

@@ -103,7 +103,7 @@ We can now build our new container.

 ## Running our Supervisor container

-Once we’ve got a built image we can launch a container from it.
+Once we've got a built image, we can launch a container from it.

     sudo docker run -p 22 -p 80 -t -i /supervisord
     2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
...
     2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7
     . . .

-We’ve launched a new container interactively using the
-`docker run` command. That container has run
-Supervisor and launched the SSH and Apache daemons with it. We’ve
-specified the `-p` flag to expose ports 22 and 80.
-From here we can now identify the exposed ports and connect to one or
-both of the SSH and Apache daemons.
+We've launched a new container interactively using the `docker run` command.
+That container has run Supervisor and launched the SSH and Apache daemons with
+it. We've specified the `-p` flag to expose ports 22 and 80. From here we can
+now identify the exposed ports and connect to one or both of the SSH and Apache
+daemons.
diff --git a/docs/sources/faq.md b/docs/sources/faq.md
index 563e07a1c7..2494f33e9c 100644
--- a/docs/sources/faq.md
+++ b/docs/sources/faq.md
@@ -4,129 +4,126 @@ page_keywords: faq, questions, documentation, docker

 # FAQ

-## Most frequently asked questions.
+## Most frequently asked questions

 ### How much does Docker cost?

-> Docker is 100% free, it is open source, so you can use it without
-> paying.
+Docker is 100% free and open source, so you can use it without
+paying.

 ### What open source license are you using?

-> We are using the Apache License Version 2.0, see it here: -> [https://github.com/dotcloud/docker/blob/master/LICENSE](https://github.com/dotcloud/docker/blob/master/LICENSE) +We are using the Apache License Version 2.0, see it here: +[https://github.com/dotcloud/docker/blob/master/LICENSE]( +https://github.com/dotcloud/docker/blob/master/LICENSE) ### Does Docker run on Mac OS X or Windows? -> Not at this time, Docker currently only runs on Linux, but you can use -> VirtualBox to run Docker in a virtual machine on your box, and get the -> best of both worlds. Check out the [*Mac OS -> X*](../installation/mac/#macosx) and [*Microsoft -> Windows*](../installation/windows/#windows) installation guides. The -> small Linux distribution boot2docker can be run inside virtual -> machines on these two operating systems. +Not at this time, Docker currently only runs on Linux, but you can use +VirtualBox to run Docker in a virtual machine on your box, and get the +best of both worlds. Check out the [*Mac OS X*](../installation/mac/#macosx) +and [*Microsoft Windows*](../installation/windows/#windows) installation +guides. The small Linux distribution boot2docker can be run inside virtual +machines on these two operating systems. ### How do containers compare to virtual machines? -> They are complementary. VMs are best used to allocate chunks of -> hardware resources. Containers operate at the process level, which -> makes them very lightweight and perfect as a unit of software -> delivery. +They are complementary. VMs are best used to allocate chunks of +hardware resources. Containers operate at the process level, which +makes them very lightweight and perfect as a unit of software +delivery. ### What does Docker add to just plain LXC? -> Docker is not a replacement for LXC. "LXC" refers to capabilities of -> the Linux kernel (specifically namespaces and control groups) which -> allow sandboxing processes from one another, and controlling their -> resource allocations. On top of this low-level foundation of kernel -> features, Docker offers a high-level tool with several powerful -> functionalities: -> -> - *Portable deployment across machines.* -> : Docker defines a format for bundling an application and all -> its dependencies into a single object which can be transferred -> to any Docker-enabled machine, and executed there with the -> guarantee that the execution environment exposed to the -> application will be the same. LXC implements process -> sandboxing, which is an important pre-requisite for portable -> deployment, but that alone is not enough for portable -> deployment. If you sent me a copy of your application -> installed in a custom LXC configuration, it would almost -> certainly not run on my machine the way it does on yours, -> because it is tied to your machine’s specific configuration: -> networking, storage, logging, distro, etc. Docker defines an -> abstraction for these machine-specific settings, so that the -> exact same Docker container can run - unchanged - on many -> different machines, with many different configurations. -> -> - *Application-centric.* -> : Docker is optimized for the deployment of applications, as -> opposed to machines. This is reflected in its API, user -> interface, design philosophy and documentation. By contrast, -> the `lxc` helper scripts focus on -> containers as lightweight machines - basically servers that -> boot faster and need less RAM. We think there’s more to -> containers than just that. 
-> -> - *Automatic build.* -> : Docker includes [*a tool for developers to automatically -> assemble a container from their source -> code*](../reference/builder/#dockerbuilder), with full control -> over application dependencies, build tools, packaging etc. -> They are free to use -> `make, maven, chef, puppet, salt,` Debian -> packages, RPMs, source tarballs, or any combination of the -> above, regardless of the configuration of the machines. -> -> - *Versioning.* -> : Docker includes git-like capabilities for tracking successive -> versions of a container, inspecting the diff between versions, -> committing new versions, rolling back etc. The history also -> includes how a container was assembled and by whom, so you get -> full traceability from the production server all the way back -> to the upstream developer. Docker also implements incremental -> uploads and downloads, similar to `git pull` -> , so new versions of a container can be transferred -> by only sending diffs. -> -> - *Component re-use.* -> : Any container can be used as a [*"base -> image"*](../terms/image/#base-image-def) to create more -> specialized components. This can be done manually or as part -> of an automated build. For example you can prepare the ideal -> Python environment, and use it as a base for 10 different -> applications. Your ideal Postgresql setup can be re-used for -> all your future projects. And so on. -> -> - *Sharing.* -> : Docker has access to a [public -> registry](http://index.docker.io) where thousands of people -> have uploaded useful containers: anything from Redis, CouchDB, -> Postgres to IRC bouncers to Rails app servers to Hadoop to -> base images for various Linux distros. The -> [*registry*](../reference/api/registry_index_spec/#registryindexspec) -> also includes an official "standard library" of useful -> containers maintained by the Docker team. The registry itself -> is open-source, so anyone can deploy their own registry to -> store and transfer private containers, for internal server -> deployments for example. -> -> - *Tool ecosystem.* -> : Docker defines an API for automating and customizing the -> creation and deployment of containers. There are a huge number -> of tools integrating with Docker to extend its capabilities. -> PaaS-like deployment (Dokku, Deis, Flynn), multi-node -> orchestration (Maestro, Salt, Mesos, Openstack Nova), -> management dashboards (docker-ui, Openstack Horizon, -> Shipyard), configuration management (Chef, Puppet), continuous -> integration (Jenkins, Strider, Travis), etc. Docker is rapidly -> establishing itself as the standard for container-based -> tooling. -> +Docker is not a replacement for LXC. "LXC" refers to capabilities of +the Linux kernel (specifically namespaces and control groups) which +allow sandboxing processes from one another, and controlling their +resource allocations. On top of this low-level foundation of kernel +features, Docker offers a high-level tool with several powerful +functionalities: + + - *Portable deployment across machines.* + Docker defines a format for bundling an application and all + its dependencies into a single object which can be transferred + to any Docker-enabled machine, and executed there with the + guarantee that the execution environment exposed to the + application will be the same. LXC implements process + sandboxing, which is an important pre-requisite for portable + deployment, but that alone is not enough for portable + deployment. 
If you sent me a copy of your application + installed in a custom LXC configuration, it would almost + certainly not run on my machine the way it does on yours, + because it is tied to your machine's specific configuration: + networking, storage, logging, distro, etc. Docker defines an + abstraction for these machine-specific settings, so that the + exact same Docker container can run - unchanged - on many + different machines, with many different configurations. + + - *Application-centric.* + Docker is optimized for the deployment of applications, as + opposed to machines. This is reflected in its API, user + interface, design philosophy and documentation. By contrast, + the `lxc` helper scripts focus on + containers as lightweight machines - basically servers that + boot faster and need less RAM. We think there's more to + containers than just that. + + - *Automatic build.* + Docker includes [*a tool for developers to automatically + assemble a container from their source + code*](../reference/builder/#dockerbuilder), with full control + over application dependencies, build tools, packaging etc. + They are free to use `make`, `maven`, `chef`, `puppet`, `salt,` + Debian packages, RPMs, source tarballs, or any combination of the + above, regardless of the configuration of the machines. + + - *Versioning.* + Docker includes git-like capabilities for tracking successive + versions of a container, inspecting the diff between versions, + committing new versions, rolling back etc. The history also + includes how a container was assembled and by whom, so you get + full traceability from the production server all the way back + to the upstream developer. Docker also implements incremental + uploads and downloads, similar to `git pull`, so new versions + of a container can be transferred by only sending diffs. + + - *Component re-use.* + Any container can be used as a [*"base image"*]( + ../terms/image/#base-image-def) to create more specialized components. + This can be done manually or as part of an automated build. For example + you can prepare the ideal Python environment, and use it as a base for + 10 different applications. Your ideal Postgresql setup can be re-used for + all your future projects. And so on. + + - *Sharing.* + Docker has access to a [public registry](http://index.docker.io) where + thousands of people have uploaded useful containers: anything from Redis, + CouchDB, Postgres to IRC bouncers to Rails app servers to Hadoop to + base images for various Linux distros. The + [*registry*](../reference/api/registry_index_spec/#registryindexspec) + also includes an official "standard library" of useful + containers maintained by the Docker team. The registry itself + is open-source, so anyone can deploy their own registry to + store and transfer private containers, for internal server + deployments for example. + + - *Tool ecosystem.* + Docker defines an API for automating and customizing the + creation and deployment of containers. There are a huge number + of tools integrating with Docker to extend its capabilities. + PaaS-like deployment (Dokku, Deis, Flynn), multi-node + orchestration (Maestro, Salt, Mesos, Openstack Nova), + management dashboards (docker-ui, Openstack Horizon, + Shipyard), configuration management (Chef, Puppet), continuous + integration (Jenkins, Strider, Travis), etc. Docker is rapidly + establishing itself as the standard for container-based + tooling. + ### What is different between a Docker container and a VM? 
-There’s a great StackOverflow answer [showing the
-differences](http://stackoverflow.com/questions/16047306/how-is-docker-io-different-from-a-normal-virtual-machine).
+There's a great StackOverflow answer [showing the differences](
+http://stackoverflow.com/questions/16047306/how-is-docker-io-different-from-a-normal-virtual-machine).

 ### Do I lose my data when the container exits?

@@ -145,74 +142,70 @@ running in parallel.

 ### How do I connect Docker containers?

 Currently the recommended way to link containers is via the link
-primitive. You can see details of how to [work with links
-here](http://docs.docker.io/en/latest/use/working_with_links_names/).
+primitive. You can see details of how to [work with links here](
+http://docs.docker.io/use/working_with_links_names/).

 Also useful when enabling more flexible service portability is the
-[Ambassador linking
-pattern](http://docs.docker.io/en/latest/use/ambassador_pattern_linking/).
+[Ambassador linking pattern](
+http://docs.docker.io/use/ambassador_pattern_linking/).

 ### How do I run more than one process in a Docker container?

-Any capable process supervisor such as
-[http://supervisord.org/](http://supervisord.org/), runit, s6, or
-daemontools can do the trick. Docker will start up the process
-management daemon which will then fork to run additional processes. As
-long as the processor manager daemon continues to run, the container
-will continue to as well. You can see a more substantial example [that
-uses supervisord
-here](http://docs.docker.io/en/latest/examples/using_supervisord/).
+Any capable process supervisor such as [http://supervisord.org/](
+http://supervisord.org/), runit, s6, or daemontools can do the trick.
+Docker will start up the process management daemon which will then fork
+to run additional processes. As long as the process manager daemon continues
+to run, the container will continue to run as well. You can see a more
+substantial example [that uses supervisord here](
+http://docs.docker.io/examples/using_supervisord/).

 ### What platforms does Docker run on?

 Linux:

-- Ubuntu 12.04, 13.04 et al
-- Fedora 19/20+
-- RHEL 6.5+
-- Centos 6+
-- Gentoo
-- ArchLinux
-- openSUSE 12.3+
-- CRUX 3.0+
+ - Ubuntu 12.04, 13.04 et al
+ - Fedora 19/20+
+ - RHEL 6.5+
+ - CentOS 6+
+ - Gentoo
+ - Arch Linux
+ - openSUSE 12.3+
+ - CRUX 3.0+

 Cloud:

-- Amazon EC2
-- Google Compute Engine
-- Rackspace
+ - Amazon EC2
+ - Google Compute Engine
+ - Rackspace

 ### How do I report a security issue with Docker?

-You can learn about the project’s security policy
-[here](http://www.docker.io/security/) and report security issues to
-this [mailbox](mailto:security%40docker.com).
+You can learn about the project's security policy
+[here](https://www.docker.io/security/) and report security issues to
+this [mailbox](mailto:security@docker.com).

 ### Why do I need to sign my commits to Docker with the DCO?

-Please read [our blog
-post](http://blog.docker.io/2014/01/docker-code-contributions-require-developer-certificate-of-origin/)
+Please read [our blog post](
+http://blog.docker.io/2014/01/docker-code-contributions-require-developer-certificate-of-origin/)
 on the introduction of the DCO.

 ### Can I help by adding some questions and answers?

-Definitely! You can fork [the
-repo](http://www.github.com/dotcloud/docker) and edit the documentation
-sources.
+Definitely! You can fork [the repo](https://github.com/dotcloud/docker) and
+edit the documentation sources.

 ### Where can I find more answers?

You can find more answers on: -- [Docker user - mailinglist](https://groups.google.com/d/forum/docker-user) -- [Docker developer - mailinglist](https://groups.google.com/d/forum/docker-dev) +- [Docker user mailinglist](https://groups.google.com/d/forum/docker-user) +- [Docker developer mailinglist](https://groups.google.com/d/forum/docker-dev) - [IRC, docker on freenode](irc://chat.freenode.net#docker) -- [GitHub](http://www.github.com/dotcloud/docker) -- [Ask questions on - Stackoverflow](http://stackoverflow.com/search?q=docker) +- [GitHub](https://github.com/dotcloud/docker) +- [Ask questions on Stackoverflow](http://stackoverflow.com/search?q=docker) - [Join the conversation on Twitter](http://twitter.com/docker) -Looking for something else to read? Checkout the [*Hello -World*](../examples/hello_world/#hello-world) example. +Looking for something else to read? Checkout the [*Hello World*]( +../examples/hello_world/#hello-world) example. diff --git a/docs/sources/index.md b/docs/sources/index.md index 42f3286352..d582321563 100644 --- a/docs/sources/index.md +++ b/docs/sources/index.md @@ -8,7 +8,7 @@ page_keywords: docker, introduction, documentation, about, technology, understan ## Introduction -[**Docker**](http://www.docker.io) is a container based virtualization +[**Docker**](https://www.docker.io) is a container based virtualization framework. Unlike traditional virtualization Docker is fast, lightweight and easy to use. Docker allows you to create containers holding all the dependencies for an application. Each container is kept isolated diff --git a/docs/sources/index/accounts.md b/docs/sources/index/accounts.md index 216b0c17ee..c3138b61da 100644 --- a/docs/sources/index/accounts.md +++ b/docs/sources/index/accounts.md @@ -6,23 +6,23 @@ page_keywords: Docker, docker, index, accounts, plans, Dockerfile, Docker.io, do ## Docker IO and Docker Index Accounts -You can `search` for Docker images and `pull` them from the [Docker Index] -(https://index.docker.io) without signing in or even having an account. However, +You can `search` for Docker images and `pull` them from the [Docker Index]( +https://index.docker.io) without signing in or even having an account. However, in order to `push` images, leave comments or to *star* a repository, you are going to need a [Docker IO](https://www.docker.io) account. ### Registration for a Docker IO Account -You can get a Docker IO account by [signing up for one here] -(https://index.docker.io/account/signup/). A valid email address is required to +You can get a Docker IO account by [signing up for one here]( +https://index.docker.io/account/signup/). A valid email address is required to register, which you will need to verify for account activation. ### Email activation process You need to have at least one verified email address to be able to use your Docker IO account. If you can't find the validation email, you can request -another by visiting the [Resend Email Confirmation] -(https://index.docker.io/account/resend-email-confirmation/) page. +another by visiting the [Resend Email Confirmation]( +https://index.docker.io/account/resend-email-confirmation/) page. ### Password reset process diff --git a/docs/sources/installation.md b/docs/sources/installation.md index 0ee7b2f903..66b28b2b3c 100644 --- a/docs/sources/installation.md +++ b/docs/sources/installation.md @@ -9,17 +9,17 @@ techniques for installing Docker all the time. 
## Contents: -- [Ubuntu](ubuntulinux/) -- [Red Hat Enterprise Linux](rhel/) -- [Fedora](fedora/) -- [Arch Linux](archlinux/) -- [CRUX Linux](cruxlinux/) -- [Gentoo](gentoolinux/) -- [openSUSE](openSUSE/) -- [FrugalWare](frugalware/) -- [Mac OS X](mac/) -- [Windows](windows/) -- [Amazon EC2](amazon/) -- [Rackspace Cloud](rackspace/) -- [Google Cloud Platform](google/) -- [Binaries](binaries/) \ No newline at end of file + - [Ubuntu](ubuntulinux/) + - [Red Hat Enterprise Linux](rhel/) + - [Fedora](fedora/) + - [Arch Linux](archlinux/) + - [CRUX Linux](cruxlinux/) + - [Gentoo](gentoolinux/) + - [openSUSE](openSUSE/) + - [FrugalWare](frugalware/) + - [Mac OS X](mac/) + - [Windows](windows/) + - [Amazon EC2](amazon/) + - [Rackspace Cloud](rackspace/) + - [Google Cloud Platform](google/) + - [Binaries](binaries/) \ No newline at end of file diff --git a/docs/sources/installation/amazon.md b/docs/sources/installation/amazon.md index f97c8fde9e..61a12d6b43 100644 --- a/docs/sources/installation/amazon.md +++ b/docs/sources/installation/amazon.md @@ -5,47 +5,47 @@ page_keywords: amazon ec2, virtualization, cloud, docker, documentation, install # Amazon EC2 > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) There are several ways to install Docker on AWS EC2: -- [*Amazon QuickStart (Release Candidate - March - 2014)*](#amazon-quickstart-release-candidate-march-2014) or -- [*Amazon QuickStart*](#amazon-quickstart) or -- [*Standard Ubuntu Installation*](#standard-ubuntu-installation) + - [*Amazon QuickStart (Release Candidate - March 2014)*]( + #amazon-quickstart-release-candidate-march-2014) or + - [*Amazon QuickStart*](#amazon-quickstart) or + - [*Standard Ubuntu Installation*](#standard-ubuntu-installation) -**You’ll need an** [AWS account](http://aws.amazon.com/) **first, of +**You'll need an** [AWS account](http://aws.amazon.com/) **first, of course.** ## Amazon QuickStart -1. **Choose an image:** - - Launch the [Create Instance - Wizard](https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:) - menu on your AWS Console. - - Click the `Select` button for a 64Bit Ubuntu - image. For example: Ubuntu Server 12.04.3 LTS - - For testing you can use the default (possibly free) - `t1.micro` instance (more info on - [pricing](http://aws.amazon.com/en/ec2/pricing/)). - - Click the `Next: Configure Instance Details` - button at the bottom right. +1. **Choose an image:** + - Launch the [Create Instance + Wizard](https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:) + menu on your AWS Console. + - Click the `Select` button for a 64Bit Ubuntu + image. For example: Ubuntu Server 12.04.3 LTS + - For testing you can use the default (possibly free) + `t1.micro` instance (more info on + [pricing](http://aws.amazon.com/ec2/pricing/)). + - Click the `Next: Configure Instance Details` + button at the bottom right. -2. **Tell CloudInit to install Docker:** - - When you’re on the "Configure Instance Details" step, expand the - "Advanced Details" section. - - Under "User data", select "As text". - - Enter `#include https://get.docker.io` into - the instance *User Data*. 
- [CloudInit](https://help.ubuntu.com/community/CloudInit) is part - of the Ubuntu image you chose; it will bootstrap Docker by - running the shell script located at this URL. +2. **Tell CloudInit to install Docker:** + - When you're on the "Configure Instance Details" step, expand the + "Advanced Details" section. + - Under "User data", select "As text". + - Enter `#include https://get.docker.io` into + the instance *User Data*. + [CloudInit](https://help.ubuntu.com/community/CloudInit) is part + of the Ubuntu image you chose; it will bootstrap Docker by + running the shell script located at this URL. -3. After a few more standard choices where defaults are probably ok, - your AWS Ubuntu instance with Docker should be running! +3. After a few more standard choices where defaults are probably ok, + your AWS Ubuntu instance with Docker should be running! **If this is your first AWS instance, you may need to set up your Security Group to allow SSH.** By default all incoming ports to your new @@ -55,39 +55,39 @@ get timeouts when you try to connect. Installing with `get.docker.io` (as above) will create a service named `lxc-docker`. It will also set up a [*docker group*](../binaries/#dockergroup) and you may want to -add the *ubuntu* user to it so that you don’t have to use +add the *ubuntu* user to it so that you don't have to use `sudo` for every Docker command. -Once you’ve got Docker installed, you’re ready to try it out – head on -over to the [*First steps with Docker*](../../use/basics/) or -[*Examples*](../../examples/) section. +Once you`ve got Docker installed, you're ready to try it out – head on +over to the [*First steps with Docker*](/use/basics/) or +[*Examples*](/examples/) section. ## Amazon QuickStart (Release Candidate - March 2014) Amazon just published new Docker-ready AMIs (2014.03 Release Candidate). -Docker packages can now be installed from Amazon’s provided Software +Docker packages can now be installed from Amazon's provided Software Repository. -1. **Choose an image:** - - Launch the [Create Instance - Wizard](https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:) - menu on your AWS Console. - - Click the `Community AMI` menu option on the - left side - - Search for ‘2014.03’ and select one of the Amazon provided AMI, - for example `amzn-ami-pv-2014.03.rc-0.x86_64-ebs` - - For testing you can use the default (possibly free) - `t1.micro` instance (more info on - [pricing](http://aws.amazon.com/en/ec2/pricing/)). - - Click the `Next: Configure Instance Details` - button at the bottom right. +1. **Choose an image:** + - Launch the [Create Instance + Wizard](https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:) + menu on your AWS Console. + - Click the `Community AMI` menu option on the + left side + - Search for `2014.03` and select one of the Amazon provided AMI, + for example `amzn-ami-pv-2014.03.rc-0.x86_64-ebs` + - For testing you can use the default (possibly free) + `t1.micro` instance (more info on + [pricing](http://aws.amazon.com/ec2/pricing/)). + - Click the `Next: Configure Instance Details` + button at the bottom right. -2. After a few more standard choices where defaults are probably ok, - your Amazon Linux instance should be running! -3. SSH to your instance to install Docker : - `ssh -i ec2-user@` +2. After a few more standard choices where defaults are probably ok, + your Amazon Linux instance should be running! +3. SSH to your instance to install Docker : + `ssh -i ec2-user@` -4. Once connected to the instance, type +4. 
Once connected to the instance, type `sudo yum install -y docker ; sudo service docker start` to install and start Docker @@ -100,5 +100,4 @@ QuickStart*](#amazon-quickstart) to pick an image (or use one of your own) and skip the step with the *User Data*. Then continue with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) instructions. -Continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +Continue with the [*Hello World*](/examples/hello_world/#hello-world) example. diff --git a/docs/sources/installation/archlinux.md b/docs/sources/installation/archlinux.md index 3eebdecdc8..6e970b96f6 100644 --- a/docs/sources/installation/archlinux.md +++ b/docs/sources/installation/archlinux.md @@ -5,24 +5,24 @@ page_keywords: arch linux, virtualization, docker, documentation, installation # Arch Linux > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) > **Note**: -> This is a community contributed installation path. The only ‘official’ +> This is a community contributed installation path. The only `official` > installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) > installation path. This version may be out of date because it depends on > some binaries to be updated and published Installing on Arch Linux can be handled via the package in community: -- [docker](https://www.archlinux.org/packages/community/x86_64/docker/) + - [docker](https://www.archlinux.org/packages/community/x86_64/docker/) or the following AUR package: -- [docker-git](https://aur.archlinux.org/packages/docker-git/) + - [docker-git](https://aur.archlinux.org/packages/docker-git/) The docker package will install the latest tagged version of docker. The docker-git package will build from the current master branch. @@ -32,11 +32,11 @@ docker-git package will build from the current master branch. Docker depends on several packages which are specified as dependencies in the packages. The core dependencies are: -- bridge-utils -- device-mapper -- iproute2 -- lxc -- sqlite + - bridge-utils + - device-mapper + - iproute2 + - lxc + - sqlite ## Installation diff --git a/docs/sources/installation/binaries.md b/docs/sources/installation/binaries.md index b62e2d071b..b02c28828b 100644 --- a/docs/sources/installation/binaries.md +++ b/docs/sources/installation/binaries.md @@ -5,8 +5,8 @@ page_keywords: binaries, installation, docker, documentation, linux # Binaries > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) @@ -22,16 +22,16 @@ packages for many distributions, and more keep showing up all the time! 
To run properly, docker needs the following software to be installed at runtime: -- iptables version 1.4 or later -- Git version 1.7 or later -- procps (or similar provider of a "ps" executable) -- XZ Utils 4.9 or later -- a [properly - mounted](https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount) - cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount - point [is](https://github.com/dotcloud/docker/issues/2683) - [not](https://github.com/dotcloud/docker/issues/3485) - [sufficient](https://github.com/dotcloud/docker/issues/4568)) + - iptables version 1.4 or later + - Git version 1.7 or later + - procps (or similar provider of a "ps" executable) + - XZ Utils 4.9 or later + - a [properly mounted]( + https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount) + cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount + point [is](https://github.com/dotcloud/docker/issues/2683) + [not](https://github.com/dotcloud/docker/issues/3485) + [sufficient](https://github.com/dotcloud/docker/issues/4568)) ## Check kernel dependencies @@ -52,7 +52,7 @@ Linux kernel (it even builds on OSX!). > **Note**: > If you have trouble downloading the binary, you can also get the smaller > compressed release file: -> [https://get.docker.io/builds/Linux/x86\_64/docker-latest.tgz]( +> [https://get.docker.io/builds/Linux/x86_64/docker-latest.tgz]( > https://get.docker.io/builds/Linux/x86_64/docker-latest.tgz) ## Run the docker daemon @@ -74,13 +74,13 @@ Unix group called *docker* and add users to it, then the socket read/writable by the *docker* group when the daemon starts. The `docker` daemon must always run as the root user, but if you run the `docker` client as a user in the -*docker* group then you don’t need to add `sudo` to +*docker* group then you don't need to add `sudo` to all the client commands. > **Warning**: > The *docker* group (or the group specified with `-G`) is root-equivalent; > see [*Docker Daemon Attack Surface*]( -> ../../articles/security/#dockersecurity-daemon) details. +> /articles/security/#dockersecurity-daemon) details. ## Upgrades @@ -99,5 +99,4 @@ Then follow the regular installation steps. # run a container and open an interactive shell in the container sudo ./docker run -i -t ubuntu /bin/bash -Continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +Continue with the [*Hello World*](/examples/hello_world/#hello-world) example. diff --git a/docs/sources/installation/cruxlinux.md b/docs/sources/installation/cruxlinux.md index 9bb336a6f5..f37d720389 100644 --- a/docs/sources/installation/cruxlinux.md +++ b/docs/sources/installation/cruxlinux.md @@ -5,13 +5,13 @@ page_keywords: crux linux, virtualization, Docker, documentation, installation # CRUX Linux > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) > **Note**: -> This is a community contributed installation path. The only ‘official’ +> This is a community contributed installation path. The only `official` > installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) > installation path. This version may be out of date because it depends on > some binaries to be updated and published. 
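Looking back at the runtime requirements listed for the binary installation above, the cgroupfs item is the one that most often trips people up. A rough, illustrative check (not part of the official steps, and assuming a reasonably recent distro) is simply to inspect the mount table:

    # list the cgroup hierarchies currently mounted; the daemon needs them split
    # out per subsystem, not one all-encompassing "cgroup" mount point
    grep cgroup /proc/mounts
    # if nothing shows up under /sys/fs/cgroup, the cgroupfs-mount script linked
    # in the requirements list above can be used to set the mounts up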
@@ -19,9 +19,9 @@ page_keywords: crux linux, virtualization, Docker, documentation, installation Installing on CRUX Linux can be handled via the ports from [James Mills](http://prologic.shortcircuit.net.au/): -- [docker](https://bitbucket.org/prologic/ports/src/tip/docker/) -- [docker-bin](https://bitbucket.org/prologic/ports/src/tip/docker-bin/) -- [docker-git](https://bitbucket.org/prologic/ports/src/tip/docker-git/) +- [docker](https://bitbucket.org/prologic/ports/src/tip/docker/) +- [docker-bin](https://bitbucket.org/prologic/ports/src/tip/docker-bin/) +- [docker-git](https://bitbucket.org/prologic/ports/src/tip/docker-git/) The `docker` port will install the latest tagged version of Docker. The `docker-bin` port will @@ -33,7 +33,7 @@ master branch. For the time being (*until the CRUX Docker port(s) get into the official contrib repository*) you will need to install [James -Mills’](https://bitbucket.org/prologic/ports) ports repository. You can +Mills`](https://bitbucket.org/prologic/ports) ports repository. You can do so via: Download the `httpup` file to @@ -87,7 +87,5 @@ There is a rc script created for Docker. To start the Docker service: To start on system boot: -- Edit `/etc/rc.conf` -- Put `docker` into the `SERVICES=(...)` - array after `net`. - + - Edit `/etc/rc.conf` + - Put `docker` into the `SERVICES=(...)` array after `net`. diff --git a/docs/sources/installation/fedora.md b/docs/sources/installation/fedora.md index 0718df032c..70d8c1462e 100644 --- a/docs/sources/installation/fedora.md +++ b/docs/sources/installation/fedora.md @@ -5,13 +5,13 @@ page_keywords: Docker, Docker documentation, Fedora, requirements, virtualbox, v # Fedora > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) > **Note**: -> This is a community contributed installation path. The only ‘official’ +> This is a community contributed installation path. The only `official` > installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) > installation path. This version may be out of date because it depends on > some binaries to be updated and published. @@ -25,7 +25,7 @@ bit** architecture. The `docker-io` package provides Docker on Fedora. If you have the (unrelated) `docker` package installed already, it will -conflict with `docker-io`. There’s a [bug +conflict with `docker-io`. There's a [bug report](https://bugzilla.redhat.com/show_bug.cgi?id=1043676) filed for it. To proceed with `docker-io` installation on Fedora 19, please remove `docker` first. @@ -48,7 +48,7 @@ To update the `docker-io` package: sudo yum -y update docker-io -Now that it’s installed, let’s start the Docker daemon. +Now that it's installed, let's start the Docker daemon. sudo systemctl start docker @@ -56,9 +56,9 @@ If we want Docker to start at boot, we should also: sudo systemctl enable docker -Now let’s verify that Docker is working. +Now let's verify that Docker is working. sudo docker run -i -t fedora /bin/bash **Done!**, now continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +World*](/examples/hello_world/#hello-world) example. 
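The Fedora steps above start the daemon with systemd but do not show how to confirm it is actually up before pulling the `fedora` image. A minimal sanity check, assuming the unit is named `docker` as in the commands above, might look like this:

    # confirm the unit started cleanly
    sudo systemctl status docker --no-pager
    # the client should print version details instead of a connection error
    sudo docker version
    # a quick smoke test that exits immediately
    sudo docker run fedora echo "hello from fedora"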
diff --git a/docs/sources/installation/frugalware.md b/docs/sources/installation/frugalware.md index 0e9f9c9f1b..1d640cf8fd 100644 --- a/docs/sources/installation/frugalware.md +++ b/docs/sources/installation/frugalware.md @@ -5,21 +5,21 @@ page_keywords: frugalware linux, virtualization, docker, documentation, installa # FrugalWare > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) > **Note**: -> This is a community contributed installation path. The only ‘official’ +> This is a community contributed installation path. The only `official` > installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) > installation path. This version may be out of date because it depends on > some binaries to be updated and published Installing on FrugalWare is handled via the official packages: -- [lxc-docker i686](http://www.frugalware.org/packages/200141) -- [lxc-docker x86\_64](http://www.frugalware.org/packages/200130) + - [lxc-docker i686](http://www.frugalware.org/packages/200141) + - [lxc-docker x86_64](http://www.frugalware.org/packages/200130) The lxc-docker package will install the latest tagged version of Docker. @@ -28,13 +28,13 @@ The lxc-docker package will install the latest tagged version of Docker. Docker depends on several packages which are specified as dependencies in the packages. The core dependencies are: -- systemd -- lvm2 -- sqlite3 -- libguestfs -- lxc -- iproute2 -- bridge-utils + - systemd + - lvm2 + - sqlite3 + - libguestfs + - lxc + - iproute2 + - bridge-utils ## Installation diff --git a/docs/sources/installation/gentoolinux.md b/docs/sources/installation/gentoolinux.md index 87e1c78e84..49700ea563 100644 --- a/docs/sources/installation/gentoolinux.md +++ b/docs/sources/installation/gentoolinux.md @@ -5,23 +5,23 @@ page_keywords: gentoo linux, virtualization, docker, documentation, installation # Gentoo > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) > **Note**: -> This is a community contributed installation path. The only ‘official’ +> This is a community contributed installation path. The only `official` > installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) > installation path. This version may be out of date because it depends on > some binaries to be updated and published Installing Docker on Gentoo Linux can be accomplished using one of two -methods. The first and best way if you’re looking for a stable +methods. The first and best way if you're looking for a stable experience is to use the official app-emulation/docker package directly in the portage tree. 
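To make the stable path just described concrete, installing the in-tree package usually comes down to a single `emerge`. Treat the following as a sketch rather than the official instructions; keywording and USE flags can differ from system to system:

    # sync the portage tree, then install the package named above
    sudo emerge --sync
    sudo emerge -av app-emulation/docker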
-If you’re looking for a `-bin` ebuild, a live +If you're looking for a `-bin` ebuild, a live ebuild, or bleeding edge ebuild changes/fixes, the second installation method is to use the overlay provided at [https://github.com/tianon/docker-overlay](https://github.com/tianon/docker-overlay) @@ -31,8 +31,8 @@ using the overlay can be found in [the overlay README](https://github.com/tianon/docker-overlay/blob/master/README.md#using-this-overlay). Note that sometimes there is a disparity between the latest version and -what’s in the overlay, and between the latest version in the overlay and -what’s in the portage tree. Please be patient, and the latest version +what's in the overlay, and between the latest version in the overlay and +what's in the portage tree. Please be patient, and the latest version should propagate shortly. ## Installation @@ -47,15 +47,15 @@ since that is the simplest installation path. If any issues arise from this ebuild or the resulting binary, including and especially missing kernel configuration flags and/or dependencies, -[open an issue on the docker-overlay -repository](https://github.com/tianon/docker-overlay/issues) or ping -tianon directly in the \#docker IRC channel on the freenode network. +[open an issue on the docker-overlay repository]( +https://github.com/tianon/docker-overlay/issues) or ping +tianon directly in the #docker IRC channel on the freenode network. ## Starting Docker Ensure that you are running a kernel that includes all the necessary modules and/or configuration for LXC (and optionally for device-mapper -and/or AUFS, depending on the storage driver you’ve decided to use). +and/or AUFS, depending on the storage driver you`ve decided to use). ### OpenRC diff --git a/docs/sources/installation/google.md b/docs/sources/installation/google.md index 611e9bb7bc..bec7d0ba13 100644 --- a/docs/sources/installation/google.md +++ b/docs/sources/installation/google.md @@ -2,22 +2,22 @@ page_title: Installation on Google Cloud Platform page_description: Please note this project is currently under heavy development. It should not be used in production. page_keywords: Docker, Docker documentation, installation, google, Google Compute Engine, Google Cloud Platform -# [Google Cloud Platform](https://cloud.google.com/) +# Google Cloud Platform > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) -## [Compute Engine](https://developers.google.com/compute) QuickStart for [Debian](https://www.debian.org) +## Compute Engine QuickStart for Debian -1. Go to [Google Cloud Console](https://cloud.google.com/console) and - create a new Cloud Project with [Compute Engine - enabled](https://developers.google.com/compute/docs/signup). -2. Download and configure the [Google Cloud - SDK](https://developers.google.com/cloud/sdk/) to use your project - with the following commands: +1. Go to [Google Cloud Console](https://cloud.google.com/console) and + create a new Cloud Project with [Compute Engine + enabled](https://developers.google.com/compute/docs/signup). +2. 
Download and configure the [Google Cloud SDK]( + https://developers.google.com/cloud/sdk/) to use your project + with the following commands: diff --git a/docs/sources/installation/mac.md b/docs/sources/installation/mac.md index 4b70ef8371..1cef06b55b 100644 --- a/docs/sources/installation/mac.md +++ b/docs/sources/installation/mac.md @@ -9,8 +9,8 @@ page_keywords: Docker, Docker documentation, requirements, virtualbox, ssh, linu > 0.8). However, they are subject to change. > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) @@ -87,7 +87,7 @@ Run the following commands to get it downloaded and set up: sudo mkdir -p /usr/local/bin sudo cp docker /usr/local/bin/ -And that’s it! Let’s check out how to use it. +And that's it! Let's check out how to use it. ## How To Use Docker On Mac OS X @@ -124,7 +124,7 @@ application. ### Forwarding VM Port Range to Host If we take the port range that docker uses by default with the -P option -(49000-49900), and forward same range from host to vm, we’ll be able to +(49000-49900), and forward same range from host to vm, we'll be able to interact with our containers as if they were running locally: # vm must be powered off @@ -143,7 +143,7 @@ If you feel the need to connect to the VM, you can simply run: # Pwd: tcuser You can now continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +World*](/examples/hello_world/#hello-world) example. ## Learn More @@ -159,7 +159,7 @@ See the GitHub page for ### Upgrading to a newer release of boot2docker To upgrade an initialised VM, you can use the following 3 commands. Your -persistence disk will not be changed, so you won’t lose your images and +persistence disk will not be changed, so you won't lose your images and containers: ./boot2docker stop @@ -168,12 +168,11 @@ containers: ### About the way Docker works on Mac OS X: -Docker has two key components: the `docker` daemon -and the `docker` client. The tool works by client -commanding the daemon. In order to work and do its magic, the daemon -makes use of some Linux Kernel features (e.g. LXC, name spaces etc.), -which are not supported by OS X. Therefore, the solution of getting -Docker to run on OS X consists of running it inside a lightweight +Docker has two key components: the `docker` daemon and the `docker` client. +The tool works by client commanding the daemon. In order to work and do its +magic, the daemon makes use of some Linux Kernel features (e.g. LXC, name +spaces etc.), which are not supported by OS X. Therefore, the solution of +getting Docker to run on OS X consists of running it inside a lightweight virtual machine. In order to simplify things, Docker comes with a bash script to make this whole process as easy as possible (i.e. boot2docker). diff --git a/docs/sources/installation/openSUSE.md b/docs/sources/installation/openSUSE.md index ebd8ea6f6e..2d7804d291 100644 --- a/docs/sources/installation/openSUSE.md +++ b/docs/sources/installation/openSUSE.md @@ -5,13 +5,13 @@ page_keywords: openSUSE, virtualbox, docker, documentation, installation # openSUSE > **Note**: -> Docker is still under heavy development! 
We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) > **Note**: -> This is a community contributed installation path. The only ‘official’ +> This is a community contributed installation path. The only `official` > installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) > installation path. This version may be out of date because it depends on > some binaries to be updated and published @@ -39,13 +39,13 @@ Install the Docker package. sudo zypper in docker -It’s also possible to install Docker using openSUSE’s 1-click install. +It's also possible to install Docker using openSUSE's1-click install. Just visit [this](http://software.opensuse.org/package/docker) page, select your openSUSE version and click on the installation link. This will add the right repository to your system and it will also install the docker package. -Now that it’s installed, let’s start the Docker daemon. +Now that it's installed, let's start the Docker daemon. sudo systemctl start docker @@ -59,5 +59,6 @@ Docker daemon. sudo usermod -G docker -**Done!**, now continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +**Done!** +Now continue with the [*Hello World*]( +/examples/hello_world/#hello-world) example. diff --git a/docs/sources/installation/rackspace.md b/docs/sources/installation/rackspace.md index 2d213a7fc9..8cce292b79 100644 --- a/docs/sources/installation/rackspace.md +++ b/docs/sources/installation/rackspace.md @@ -5,7 +5,7 @@ page_keywords: Rackspace Cloud, installation, docker, linux, ubuntu # Rackspace Cloud > **Note**: -> This is a community contributed installation path. The only ‘official’ +> This is a community contributed installation path. The only `official` > installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) > installation path. This version may be out of date because it depends on > some binaries to be updated and published @@ -20,8 +20,8 @@ If you are using any Linux not already shipping with the 3.8 kernel you will need to install it. And this is a little more difficult on Rackspace. -Rackspace boots their servers using grub’s `menu.lst` -and does not like non ‘virtual’ packages (e.g. Xen compatible) +Rackspace boots their servers using grub's `menu.lst` +and does not like non `virtual` packages (e.g. Xen compatible) kernels there, although they do work. This results in `update-grub` not having the expected result, and you will need to set the kernel manually. diff --git a/docs/sources/installation/rhel.md b/docs/sources/installation/rhel.md index d7df63920d..874e92adc8 100644 --- a/docs/sources/installation/rhel.md +++ b/docs/sources/installation/rhel.md @@ -5,20 +5,20 @@ page_keywords: Docker, Docker documentation, requirements, linux, rhel, centos # Red Hat Enterprise Linux > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) > **Note**: -> This is a community contributed installation path. 
The only ‘official’ +> This is a community contributed installation path. The only `official` > installation is using the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) > installation path. This version may be out of date because it depends on > some binaries to be updated and published Docker is available for **RHEL** on EPEL. These instructions should work for both RHEL and CentOS. They will likely work for other binary -compatible EL6 distributions as well, but they haven’t been tested. +compatible EL6 distributions as well, but they haven't been tested. Please note that this package is part of [Extra Packages for Enterprise Linux (EPEL)](https://fedoraproject.org/wiki/EPEL), a community effort @@ -42,12 +42,11 @@ The `docker-io` package provides Docker on EPEL. If you already have the (unrelated) `docker` package installed, it will conflict with `docker-io`. -There’s a [bug -report](https://bugzilla.redhat.com/show_bug.cgi?id=1043676) filed for -it. To proceed with `docker-io` installation, please -remove `docker` first. +There's a [bug report]( +https://bugzilla.redhat.com/show_bug.cgi?id=1043676) filed for it. +To proceed with `docker-io` installation, please remove `docker` first. -Next, let’s install the `docker-io` package which +Next, let's install the `docker-io` package which will install Docker on our host. sudo yum -y install docker-io @@ -56,7 +55,7 @@ To update the `docker-io` package sudo yum -y update docker-io -Now that it’s installed, let’s start the Docker daemon. +Now that it's installed, let's start the Docker daemon. sudo service docker start @@ -64,15 +63,15 @@ If we want Docker to start at boot, we should also: sudo chkconfig docker on -Now let’s verify that Docker is working. +Now let's verify that Docker is working. sudo docker run -i -t fedora /bin/bash -**Done!**, now continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +**Done!** +Now continue with the [*Hello World*](/examples/hello_world/#hello-world) example. ## Issues? -If you have any issues - please report them directly in the [Red Hat -Bugzilla for docker-io -component](https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora%20EPEL&component=docker-io). +If you have any issues - please report them directly in the +[Red Hat Bugzilla for docker-io component]( +https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora%20EPEL&component=docker-io). diff --git a/docs/sources/installation/softlayer.md b/docs/sources/installation/softlayer.md index 0b14ac567d..11a192c61a 100644 --- a/docs/sources/installation/softlayer.md +++ b/docs/sources/installation/softlayer.md @@ -5,32 +5,32 @@ page_keywords: IBM SoftLayer, virtualization, cloud, docker, documentation, inst # IBM SoftLayer > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) ## IBM SoftLayer QuickStart -1. Create an [IBM SoftLayer - account](https://www.softlayer.com/cloudlayer/). -2. Log in to the [SoftLayer - Console](https://control.softlayer.com/devices/). -3. Go to [Order Hourly Computing Instance - Wizard](https://manage.softlayer.com/Sales/orderHourlyComputingInstance) - on your SoftLayer Console. -4. 
Create a new *CloudLayer Computing Instance* (CCI) using the default - values for all the fields and choose: +1. Create an [IBM SoftLayer account]( + https://www.softlayer.com/cloud-servers/). +2. Log in to the [SoftLayer Console]( + https://control.softlayer.com/devices/). +3. Go to [Order Hourly Computing Instance Wizard]( + https://manage.softlayer.com/Sales/orderHourlyComputingInstance) + on your SoftLayer Console. +4. Create a new *CloudLayer Computing Instance* (CCI) using the default + values for all the fields and choose: -- *First Available* as `Datacenter` and -- *Ubuntu Linux 12.04 LTS Precise Pangolin - Minimal Install (64 bit)* - as `Operating System`. + - *First Available* as `Datacenter` and + - *Ubuntu Linux 12.04 LTS Precise Pangolin - Minimal Install (64 bit)* + as `Operating System`. -5. Click the *Continue Your Order* button at the bottom right and - select *Go to checkout*. -6. Insert the required *User Metadata* and place the order. -7. Then continue with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) - instructions. +5. Click the *Continue Your Order* button at the bottom right and + select *Go to checkout*. +6. Insert the required *User Metadata* and place the order. +7. Then continue with the [*Ubuntu*](../ubuntulinux/#ubuntu-linux) + instructions. -Continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +Continue with the [*Hello World*]( +/examples/hello_world/#hello-world) example. diff --git a/docs/sources/installation/ubuntulinux.md b/docs/sources/installation/ubuntulinux.md index 07d6072b5d..40dc541b6a 100644 --- a/docs/sources/installation/ubuntulinux.md +++ b/docs/sources/installation/ubuntulinux.md @@ -9,16 +9,16 @@ page_keywords: Docker, Docker documentation, requirements, virtualbox, vagrant, > earlier version, you will need to follow them again. > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) Docker is supported on the following versions of Ubuntu: -- [*Ubuntu Precise 12.04 (LTS) (64-bit)*](#ubuntu-precise-1204-lts-64-bit) -- [*Ubuntu Raring 13.04 and Saucy 13.10 (64 - bit)*](#ubuntu-raring-1304-and-saucy-1310-64-bit) + - [*Ubuntu Precise 12.04 (LTS) (64-bit)*](#ubuntu-precise-1204-lts-64-bit) + - [*Ubuntu Raring 13.04 and Saucy 13.10 (64 + bit)*](#ubuntu-raring-1304-and-saucy-1310-64-bit) Please read [*Docker and UFW*](#docker-and-ufw), if you plan to use [UFW (Uncomplicated Firewall)](https://help.ubuntu.com/community/UFW) @@ -32,12 +32,12 @@ This installation path should work at all times. **Linux kernel 3.8** Due to a bug in LXC, Docker works best on the 3.8 kernel. Precise comes -with a 3.2 kernel, so we need to upgrade it. The kernel you’ll install +with a 3.2 kernel, so we need to upgrade it. The kernel you'll install when following these steps comes with AUFS built in. We also include the generic headers to enable packages that depend on them, like ZFS and the -VirtualBox guest additions. If you didn’t install the headers for your +VirtualBox guest additions. If you didn't install the headers for your "precise" kernel, then you can skip these headers for the "raring" -kernel. But it is safer to include them if you’re not sure. +kernel. 
But it is safer to include them if you're not sure. # install the backported kernel sudo apt-get update @@ -59,7 +59,7 @@ faster for you to install. First, check that your APT system can deal with `https` URLs: the file `/usr/lib/apt/methods/https` -should exist. If it doesn’t, you need to install the package +should exist. If it doesn't, you need to install the package `apt-transport-https`. [ -e /usr/lib/apt/methods/https ] || { @@ -74,7 +74,7 @@ Then, add the Docker repository key to your local keychain. Add the Docker repository to your apt sources list, update and install the `lxc-docker` package. -*You may receive a warning that the package isn’t trusted. Answer yes to +*You may receive a warning that the package isn't trusted. Answer yes to continue installation.* sudo sh -c "echo deb https://get.docker.io/ubuntu docker main\ @@ -96,7 +96,7 @@ Now verify that the installation has worked by downloading the Type `exit` to exit **Done!**, now continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +World*](/examples/hello_world/#hello-world) example. ## Ubuntu Raring 13.04 and Saucy 13.10 (64 bit) @@ -106,9 +106,9 @@ These instructions cover both Ubuntu Raring 13.04 and Saucy 13.10. **Optional AUFS filesystem support** -Ubuntu Raring already comes with the 3.8 kernel, so we don’t need to +Ubuntu Raring already comes with the 3.8 kernel, so we don't need to install it. However, not all systems have AUFS filesystem support -enabled. AUFS support is optional as of version 0.7, but it’s still +enabled. AUFS support is optional as of version 0.7, but it's still available as a driver and we recommend using it if you can. To make sure AUFS is installed, run the following commands: @@ -144,7 +144,7 @@ Now verify that the installation has worked by downloading the Type `exit` to exit **Done!**, now continue with the [*Hello -World*](../../examples/hello_world/#hello-world) example. +World*](/examples/hello_world/#hello-world) example. ### Giving non-root access @@ -160,7 +160,7 @@ Unix group called *docker* and add users to it, then the socket read/writable by the *docker* group when the daemon starts. The `docker` daemon must always run as the root user, but if you run the `docker` client as a user in the -*docker* group then you don’t need to add `sudo` to +*docker* group then you don't need to add `sudo` to all the client commands. As of 0.9.0, you can specify that a group other than `docker` should own the Unix socket with the `-G` option. @@ -168,7 +168,7 @@ than `docker` should own the Unix socket with the > **Warning**: > The *docker* group (or the group specified with `-G`) is > root-equivalent; see [*Docker Daemon Attack Surface*]( -> ../../articles/security/#dockersecurity-daemon) details. +> /articles/security/#dockersecurity-daemon) details. **Example:** @@ -245,7 +245,7 @@ Then reload UFW: sudo ufw reload -UFW’s default set of rules denies all incoming traffic. If you want to +UFW's default set of rules denies all incoming traffic. If you want to be able to reach your containers from another host then you should allow incoming connections on the Docker port (default 4243): @@ -263,7 +263,7 @@ warning: WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers can't use it. Using default external servers : [8.8.8.8 8.8.4.4] -This warning is shown because the containers can’t use the local DNS +This warning is shown because the containers can't use the local DNS nameserver and Docker will default to using an external nameserver. 
This can be worked around by specifying a DNS server to be used by the @@ -281,7 +281,7 @@ The Docker daemon has to be restarted: sudo restart docker > **Warning**: -> If you’re doing this on a laptop which connects to various networks, +> If you're doing this on a laptop which connects to various networks, > make sure to choose a public DNS server. An alternative solution involves disabling dnsmasq in NetworkManager by @@ -310,10 +310,10 @@ you. ### Yandex [Yandex](http://yandex.ru/) in Russia is mirroring the Docker Debian -packages, updating every 6 hours. Substitute -`http://mirror.yandex.ru/mirrors/docker/` for -`http://get.docker.io/ubuntu` in the instructions -above. For example: +packages, updating every 6 hours. +Substitute `http://mirror.yandex.ru/mirrors/docker/` for +`http://get.docker.io/ubuntu` in the instructions above. +For example: sudo sh -c "echo deb http://mirror.yandex.ru/mirrors/docker/ docker main\ > /etc/apt/sources.list.d/docker.list" diff --git a/docs/sources/installation/windows.md b/docs/sources/installation/windows.md index cadecdaddb..a5730862ad 100644 --- a/docs/sources/installation/windows.md +++ b/docs/sources/installation/windows.md @@ -6,57 +6,54 @@ page_keywords: Docker, Docker documentation, Windows, requirements, virtualbox, Docker can run on Windows using a virtualization platform like VirtualBox. A Linux distribution is run inside a virtual machine and -that’s where Docker will run. +that's where Docker will run. ## Installation > **Note**: -> Docker is still under heavy development! We don’t recommend using it in -> production yet, but we’re getting closer with each release. Please see +> Docker is still under heavy development! We don't recommend using it in +> production yet, but we're getting closer with each release. Please see > our blog post, [Getting to Docker 1.0]( > http://blog.docker.io/2013/08/getting-to-docker-1-0/) -1. Install virtualbox from - [https://www.virtualbox.org](https://www.virtualbox.org) - or follow - this - [tutorial](http://www.slideshare.net/julienbarbier42/install-virtualbox-on-windows-7). -2. Download the latest boot2docker.iso from - [https://github.com/boot2docker/boot2docker/releases](https://github.com/boot2docker/boot2docker/releases). -3. Start VirtualBox. -4. Create a new Virtual machine with the following settings: +1. Install virtualbox from [https://www.virtualbox.org]( + https://www.virtualbox.org) - or follow this [tutorial]( + http://www.slideshare.net/julienbarbier42/install-virtualbox-on-windows-7). +2. Download the latest boot2docker.iso from + [https://github.com/boot2docker/boot2docker/releases]( + https://github.com/boot2docker/boot2docker/releases). +3. Start VirtualBox. +4. Create a new Virtual machine with the following settings: -> - Name: boot2docker -> - Type: Linux -> - Version: Linux 2.6 (64 bit) -> - Memory size: 1024 MB -> - Hard drive: Do not add a virtual hard drive + - Name: boot2docker + - Type: Linux + - Version: Linux 2.6 (64 bit) + - Memory size: 1024 MB + - Hard drive: Do not add a virtual hard drive -5. Open the settings of the virtual machine: +5. Open the settings of the virtual machine: 5.1. go to Storage - 5.2. click the empty slot below Controller: IDE - 5.3. click the disc icon on the right of IDE Secondary Master - 5.4. click Choose a virtual CD/DVD disk file -6. Browse to the path where you’ve saved the boot2docker.iso, select - the boot2docker.iso and click open. +6. Browse to the path where you`ve saved the boot2docker.iso, select + the boot2docker.iso and click open. -7. 
Click OK on the Settings dialog to save the changes and close the - window. +7. Click OK on the Settings dialog to save the changes and close the + window. -8. Start the virtual machine by clicking the green start button. +8. Start the virtual machine by clicking the green start button. -9. The boot2docker virtual machine should boot now. +9. The boot2docker virtual machine should boot now. ## Running Docker boot2docker will log you in automatically so you can start using Docker right away. -Let’s try the “hello world” example. Run +Let's try the “hello world” example. Run docker run busybox echo hello world diff --git a/docs/sources/introduction/technology.md b/docs/sources/introduction/technology.md index 6ae7445595..346a118c39 100644 --- a/docs/sources/introduction/technology.md +++ b/docs/sources/introduction/technology.md @@ -80,7 +80,7 @@ servers. > **Note:** To learn more about the [*Docker Image Index*]( > http://index.docker.io) (public *and* private), check out the [Registry & -> Index Spec](http://docs.docker.io/en/latest/api/registry_index_spec/). +> Index Spec](http://docs.docker.io/api/registry_index_spec/). ### Summary @@ -246,7 +246,7 @@ results and only you and your users can pull them down and use them to build containers. You can [sign up for a plan here](https://index.docker.io/plans). To learn more, check out the [Working With Repositories]( -http://docs.docker.io/en/latest/use/workingwithrepository) section of our +http://docs.docker.io/use/workingwithrepository) section of our [User's Manual](http://docs.docker.io). ## Where to go from here diff --git a/docs/sources/introduction/working-with-docker.md b/docs/sources/introduction/working-with-docker.md index 637030acbc..17ed7ff761 100644 --- a/docs/sources/introduction/working-with-docker.md +++ b/docs/sources/introduction/working-with-docker.md @@ -10,7 +10,7 @@ page_keywords: docker, introduction, documentation, about, technology, understan > If you prefer a summary and would like to see how a specific command > works, check out the glossary of all available client > commands on our [User's Manual: Commands Reference]( -> http://docs.docker.io/en/latest/reference/commandline/cli). +> http://docs.docker.io/reference/commandline/cli). ## Introduction @@ -164,8 +164,8 @@ image is constructed. dockerfiles/django-uwsgi-nginx Dockerfile and configuration files to buil... 2 [OK] . . . -> **Note:** To learn more about trusted builds, check out [this] -(http://blog.docker.io/2013/11/introducing-trusted-builds) blog post. +> **Note:** To learn more about trusted builds, check out [this]( +http://blog.docker.io/2013/11/introducing-trusted-builds) blog post. ### Downloading an image @@ -279,7 +279,7 @@ The `Dockerfile` holds the set of instructions Docker uses to build a Docker ima > **Tip:** Below is a short summary of our full Dockerfile tutorial. In > order to get a better-grasp of how to work with these automation > scripts, check out the [Dockerfile step-by-step -> tutorial](http://www.docker.io/learn/dockerfile). +> tutorial](https://www.docker.io/learn/dockerfile). A `Dockerfile` contains instructions written in the following format: @@ -294,7 +294,7 @@ A `#` sign is used to provide a comment: > **Tip:** The `Dockerfile` is very flexible and provides a powerful set > of instructions for building applications. To learn more about the > `Dockerfile` and it's instructions see the [Dockerfile -> Reference](http://docs.docker.io/en/latest/reference/builder). +> Reference](http://docs.docker.io/reference/builder/). 
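To make the instruction-plus-argument format concrete, here is a small, hypothetical Dockerfile written and built from the shell. It is only a sketch (the image name `hello-dockerfile` is made up) and is not the example used in the step-by-step tutorial linked above:

    # each line is an instruction followed by its argument; '#' starts a comment
    cat > Dockerfile <<'EOF'
    # start from the ubuntu image and print a greeting by default
    FROM ubuntu
    MAINTAINER A. Nonymous <nobody@example.com>
    CMD ["echo", "hello from a Dockerfile"]
    EOF

    # build the image and run it
    sudo docker build -t hello-dockerfile .
    sudo docker run hello-dockerfile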
### First steps with the Dockerfile diff --git a/docs/sources/reference.md b/docs/sources/reference.md index 3cd720c551..6c1ab462d4 100644 --- a/docs/sources/reference.md +++ b/docs/sources/reference.md @@ -2,8 +2,8 @@ ## Contents: -- [Commands](commandline/) -- [Dockerfile Reference](builder/) -- [Docker Run Reference](run/) -- [APIs](api/) + - [Commands](commandline/) + - [Dockerfile Reference](builder/) + - [Docker Run Reference](run/) + - [APIs](api/) diff --git a/docs/sources/reference/api.md b/docs/sources/reference/api.md index 7afa5250b3..254db25e92 100644 --- a/docs/sources/reference/api.md +++ b/docs/sources/reference/api.md @@ -1,100 +1,86 @@ # APIs -Your programs and scripts can access Docker’s functionality via these +Your programs and scripts can access Docker's functionality via these interfaces: -- [Registry & Index Spec](registry_index_spec/) - - [1. The 3 roles](registry_index_spec/#the-3-roles) - - [1.1 Index](registry_index_spec/#index) - - [1.2 Registry](registry_index_spec/#registry) - - [1.3 Docker](registry_index_spec/#docker) + - [Registry & Index Spec](registry_index_spec/) + - [1. The 3 roles](registry_index_spec/#the-3-roles) + - [1.1 Index](registry_index_spec/#index) + - [1.2 Registry](registry_index_spec/#registry) + - [1.3 Docker](registry_index_spec/#docker) - - [2. Workflow](registry_index_spec/#workflow) - - [2.1 Pull](registry_index_spec/#pull) - - [2.2 Push](registry_index_spec/#push) - - [2.3 Delete](registry_index_spec/#delete) + - [2. Workflow](registry_index_spec/#workflow) + - [2.1 Pull](registry_index_spec/#pull) + - [2.2 Push](registry_index_spec/#push) + - [2.3 Delete](registry_index_spec/#delete) - - [3. How to use the Registry in standalone - mode](registry_index_spec/#how-to-use-the-registry-in-standalone-mode) - - [3.1 Without an - Index](registry_index_spec/#without-an-index) - - [3.2 With an Index](registry_index_spec/#with-an-index) + - [3. How to use the Registry in standalone mode](registry_index_spec/#how-to-use-the-registry-in-standalone-mode) + - [3.1 Without an Index](registry_index_spec/#without-an-index) + - [3.2 With an Index](registry_index_spec/#with-an-index) - - [4. The API](registry_index_spec/#the-api) - - [4.1 Images](registry_index_spec/#images) - - [4.2 Users](registry_index_spec/#users) - - [4.3 Tags (Registry)](registry_index_spec/#tags-registry) - - [4.4 Images (Index)](registry_index_spec/#images-index) - - [4.5 Repositories](registry_index_spec/#repositories) + - [4. The API](registry_index_spec/#the-api) + - [4.1 Images](registry_index_spec/#images) + - [4.2 Users](registry_index_spec/#users) + - [4.3 Tags (Registry)](registry_index_spec/#tags-registry) + - [4.4 Images (Index)](registry_index_spec/#images-index) + - [4.5 Repositories](registry_index_spec/#repositories) - - [5. Chaining - Registries](registry_index_spec/#chaining-registries) - - [6. Authentication & - Authorization](registry_index_spec/#authentication-authorization) - - [6.1 On the Index](registry_index_spec/#on-the-index) - - [6.2 On the Registry](registry_index_spec/#on-the-registry) + - [5. Chaining Registries](registry_index_spec/#chaining-registries) + - [6. Authentication & Authorization](registry_index_spec/#authentication-authorization) + - [6.1 On the Index](registry_index_spec/#on-the-index) + - [6.2 On the Registry](registry_index_spec/#on-the-registry) - - [7 Document Version](registry_index_spec/#document-version) + - [7 Document Version](registry_index_spec/#document-version) -- [Docker Registry API](registry_api/) - - [1. 
Brief introduction](registry_api/#brief-introduction) - - [2. Endpoints](registry_api/#endpoints) - - [2.1 Images](registry_api/#images) - - [2.2 Tags](registry_api/#tags) - - [2.3 Repositories](registry_api/#repositories) - - [2.4 Status](registry_api/#status) + - [Docker Registry API](registry_api/) + - [1. Brief introduction](registry_api/#brief-introduction) + - [2. Endpoints](registry_api/#endpoints) + - [2.1 Images](registry_api/#images) + - [2.2 Tags](registry_api/#tags) + - [2.3 Repositories](registry_api/#repositories) + - [2.4 Status](registry_api/#status) - - [3 Authorization](registry_api/#authorization) + - [3 Authorization](registry_api/#authorization) -- [Docker Index API](index_api/) - - [1. Brief introduction](index_api/#brief-introduction) - - [2. Endpoints](index_api/#endpoints) - - [2.1 Repository](index_api/#repository) - - [2.2 Users](index_api/#users) - - [2.3 Search](index_api/#search) + - [Docker Index API](index_api/) + - [1. Brief introduction](index_api/#brief-introduction) + - [2. Endpoints](index_api/#endpoints) + - [2.1 Repository](index_api/#repository) + - [2.2 Users](index_api/#users) + - [2.3 Search](index_api/#search) -- [Docker Remote API](docker_remote_api/) - - [1. Brief introduction](docker_remote_api/#brief-introduction) - - [2. Versions](docker_remote_api/#versions) - - [v1.11](docker_remote_api/#v1-11) - - [v1.10](docker_remote_api/#v1-10) - - [v1.9](docker_remote_api/#v1-9) - - [v1.8](docker_remote_api/#v1-8) - - [v1.7](docker_remote_api/#v1-7) - - [v1.6](docker_remote_api/#v1-6) - - [v1.5](docker_remote_api/#v1-5) - - [v1.4](docker_remote_api/#v1-4) - - [v1.3](docker_remote_api/#v1-3) - - [v1.2](docker_remote_api/#v1-2) - - [v1.1](docker_remote_api/#v1-1) - - [v1.0](docker_remote_api/#v1-0) + - [Docker Remote API](docker_remote_api/) + - [1. Brief introduction](docker_remote_api/#brief-introduction) + - [2. Versions](docker_remote_api/#versions) + - [v1.11](docker_remote_api/#v1-11) + - [v1.10](docker_remote_api/#v1-10) + - [v1.9](docker_remote_api/#v1-9) + - [v1.8](docker_remote_api/#v1-8) + - [v1.7](docker_remote_api/#v1-7) + - [v1.6](docker_remote_api/#v1-6) + - [v1.5](docker_remote_api/#v1-5) + - [v1.4](docker_remote_api/#v1-4) + - [v1.3](docker_remote_api/#v1-3) + - [v1.2](docker_remote_api/#v1-2) + - [v1.1](docker_remote_api/#v1-1) + - [v1.0](docker_remote_api/#v1-0) -- [Docker Remote API Client Libraries](remote_api_client_libraries/) -- [docker.io OAuth API](docker_io_oauth_api/) - - [1. Brief introduction](docker_io_oauth_api/#brief-introduction) - - [2. Register Your - Application](docker_io_oauth_api/#register-your-application) - - [3. Endpoints](docker_io_oauth_api/#endpoints) - - [3.1 Get an Authorization - Code](docker_io_oauth_api/#get-an-authorization-code) - - [3.2 Get an Access - Token](docker_io_oauth_api/#get-an-access-token) - - [3.3 Refresh a Token](docker_io_oauth_api/#refresh-a-token) + - [Docker Remote API Client Libraries](remote_api_client_libraries/) + - [docker.io OAuth API](docker_io_oauth_api/) + - [1. Brief introduction](docker_io_oauth_api/#brief-introduction) + - [2. Register Your Application](docker_io_oauth_api/#register-your-application) + - [3. Endpoints](docker_io_oauth_api/#endpoints) + - [3.1 Get an Authorization Code](docker_io_oauth_api/#get-an-authorization-code) + - [3.2 Get an Access Token](docker_io_oauth_api/#get-an-access-token) + - [3.3 Refresh a Token](docker_io_oauth_api/#refresh-a-token) - - [4. Use an Access Token with the - API](docker_io_oauth_api/#use-an-access-token-with-the-api) + - [4. 
Use an Access Token with the API](docker_io_oauth_api/#use-an-access-token-with-the-api) -- [docker.io Accounts API](docker_io_accounts_api/) - - [1. Endpoints](docker_io_accounts_api/#endpoints) - - [1.1 Get a single - user](docker_io_accounts_api/#get-a-single-user) - - [1.2 Update a single - user](docker_io_accounts_api/#update-a-single-user) - - [1.3 List email addresses for a - user](docker_io_accounts_api/#list-email-addresses-for-a-user) - - [1.4 Add email address for a - user](docker_io_accounts_api/#add-email-address-for-a-user) - - [1.5 Update an email address for a - user](docker_io_accounts_api/#update-an-email-address-for-a-user) - - [1.6 Delete email address for a - user](docker_io_accounts_api/#delete-email-address-for-a-user) \ No newline at end of file + - [docker.io Accounts API](docker_io_accounts_api/) + - [1. Endpoints](docker_io_accounts_api/#endpoints) + - [1.1 Get a single user](docker_io_accounts_api/#get-a-single-user) + - [1.2 Update a single user](docker_io_accounts_api/#update-a-single-user) + - [1.3 List email addresses for a user](docker_io_accounts_api/#list-email-addresses-for-a-user) + - [1.4 Add email address for a user](docker_io_accounts_api/#add-email-address-for-a-user) + - [1.5 Update an email address for a user](docker_io_accounts_api/#update-an-email-address-for-a-user) + - [1.6 Delete email address for a user](docker_io_accounts_api/#delete-email-address-for-a-user) \ No newline at end of file diff --git a/docs/sources/reference/api/README.md b/docs/sources/reference/api/README.md index ec42b89733..a7b8ae1b44 100644 --- a/docs/sources/reference/api/README.md +++ b/docs/sources/reference/api/README.md @@ -1,6 +1,9 @@ This directory holds the authoritative specifications of APIs defined and implemented by Docker. Currently this includes: -* The remote API by which a docker node can be queried over HTTP -* The registry API by which a docker node can download and upload container images for storage and sharing -* The index search API by which a docker node can search the public index for images to download -* The docker.io OAuth and accounts API which 3rd party services can use to access account information + * The remote API by which a docker node can be queried over HTTP + * The registry API by which a docker node can download and upload + container images for storage and sharing + * The index search API by which a docker node can search the public + index for images to download + * The docker.io OAuth and accounts API which 3rd party services can + use to access account information diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.0.md b/docs/sources/reference/api/archive/docker_remote_api_v1.0.md index 8f94733584..dffee87dca 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.0.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.0.md @@ -2,9 +2,9 @@ page_title: Remote API v1.0 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.0](#id1) +# Docker Remote API v1.0 -## [1. Brief introduction](#id2) +# 1. Brief introduction - The Remote API is replacing rcli - Default port in the docker daemon is 4243 @@ -12,14 +12,15 @@ page_keywords: API, Docker, rcli, REST, documentation or pull, the HTTP connection is hijacked to transport stdout stdin and stderr -## [2. Endpoints](#id3) +# 2. 
Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -80,10 +81,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -126,7 +128,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Status Codes: @@ -135,10 +137,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -202,10 +205,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id8) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -237,10 +241,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id9) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -259,10 +264,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id10) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -278,10 +284,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id11) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -303,10 +310,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a container](#id12) +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -328,10 +336,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id13) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -347,10 +356,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id14) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` 
**Example request**: @@ -385,11 +395,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Wait a container](#id15) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -408,10 +418,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id16) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -435,13 +446,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id17) +## 2.2 Images -#### [List Images](#id18) +### List Images - `GET /images/`(*format*) -: List images `format` could be json or viz (json - default) +`GET /images/(format)` + +List images `format` could be json or viz (json default) **Example request**: @@ -507,11 +518,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create an image](#id19) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -539,11 +550,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id20) +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -560,10 +571,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id21) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -607,10 +619,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Get the history of an image](#id22) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -640,10 +653,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id23) +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry > **Example request**: > @@ -668,10 +682,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Tag an image into a repository](#id24) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -695,10 +710,11 @@ page_keywords: API, Docker, rcli, REST, documentation - 
**404** – no such image - **500** – server error -#### [Remove an image](#id25) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -714,10 +730,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Search images](#id26) +### Search images - `GET /images/search` -: Search for an image in the docker index +`GET /images/search` + +Search for an image in the docker index **Example request**: @@ -747,12 +764,13 @@ page_keywords: API, Docker, rcli, REST, documentation :statuscode 200: no error :statuscode 500: server error -### [2.3 Misc](#id27) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id28) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -778,10 +796,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Get default username and email](#id29) +### Get default username and email - `GET /auth` -: Get the default username and email +`GET /auth` + +Get the default username and email **Example request**: @@ -802,10 +821,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Check auth configuration and store it](#id30) +### Check auth configuration and store it - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -828,10 +848,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### [Display system-wide information](#id31) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -857,10 +878,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Show the docker version information](#id32) +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -882,10 +904,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id33) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes > > **Example request**: @@ -913,7 +936,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") Status Codes: @@ -921,28 +944,28 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -## [3. Going further](#id34) +# 3. 
Going further -### [3.1 Inside ‘docker run’](#id35) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : -- Create the container + - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container + - If the status code is 404, it means the image doesn't exists: + - Try to pull it + - Then retry to create the container -- Start the container + - Start the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - If you are not in detached mode: + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 -- If in detached mode or only stdin is attached: - : - Display the container’s id + - If in detached mode or only stdin is attached: + - Display the container's -### [3.2 Hijacking](#id36) +## 3.2 Hijacking In this first version of the API, some of the endpoints, like /attach, /pull or /push uses hijacking to transport stdin, stdout and stderr on diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.1.md b/docs/sources/reference/api/archive/docker_remote_api_v1.1.md index 71d2f91d37..32220e79cf 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.1.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.1.md @@ -2,9 +2,9 @@ page_title: Remote API v1.1 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.1](#id1) +# Docker Remote API v1.1 -## [1. Brief introduction](#id2) +# 1. Brief introduction - The Remote API is replacing rcli - Default port in the docker daemon is 4243 @@ -12,14 +12,15 @@ page_keywords: API, Docker, rcli, REST, documentation or pull, the HTTP connection is hijacked to transport stdout stdin and stderr -## [2. Endpoints](#id3) +# 2. 
Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -80,10 +81,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -126,7 +128,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Status Codes: @@ -135,10 +137,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -202,10 +205,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id8) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -237,10 +241,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id9) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -259,10 +264,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id10) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -278,10 +284,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id11) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -303,10 +310,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a container](#id12) +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -328,10 +336,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id13) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -347,10 +356,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id14) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` 
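Because `/attach` hijacks the HTTP connection to carry stdin, stdout and stderr (see "3.2 Hijacking" further down), a plain request/response client cannot follow a container's output; a raw socket is the simplest way to read the stream. The sketch below is a rough illustration only: it assumes a daemon on 127.0.0.1:4243 and an existing container id; the `logs` and `stream` query parameters come from the "Inside `docker run`" notes, while `stdout` and `stderr` are assumed from the full endpoint description.

    import socket

    # Rough illustration of the hijacked attach stream; assumes a daemon on
    # 127.0.0.1:4243 and a real container id in place of the placeholder.
    container_id = "4fa6e0f0c678"  # placeholder

    sock = socket.create_connection(("127.0.0.1", 4243))
    request = (
        "POST /containers/%s/attach?logs=1&stream=1&stdout=1&stderr=1 HTTP/1.1\r\n"
        "Host: 127.0.0.1\r\n"
        "\r\n"
    ) % container_id
    sock.sendall(request.encode("ascii"))

    # After the response headers, the same socket carries the container's
    # output until the container stops or the socket is closed.
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        print(chunk.decode("utf-8", errors="replace"), end="")

    sock.close()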
**Example request**: @@ -385,11 +395,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Wait a container](#id15) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -408,10 +418,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id16) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -435,13 +446,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id17) +## 2.2 Images -#### [List Images](#id18) +### List Images - `GET /images/`(*format*) -: List images `format` could be json or viz (json - default) +`GET /images/(format)` + +List images `format` could be json or viz (json default) **Example request**: @@ -507,11 +518,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create an image](#id19) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -542,11 +553,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id20) +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -567,10 +578,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id21) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -614,10 +626,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Get the history of an image](#id22) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -647,10 +660,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id23) +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry > **Example request**: > @@ -678,10 +692,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Tag an image into a repository](#id24) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -706,10 +721,11 @@ page_keywords: API, Docker, rcli, REST, documentation - 
**409** – conflict - **500** – server error -#### [Remove an image](#id25) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -725,10 +741,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Search images](#id26) +### Search images - `GET /images/search` -: Search for an image in the docker index +`GET /images/search` + +Search for an image in the docker index **Example request**: @@ -758,12 +775,13 @@ page_keywords: API, Docker, rcli, REST, documentation :statuscode 200: no error :statuscode 500: server error -### [2.3 Misc](#id27) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id28) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -789,10 +807,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Get default username and email](#id29) +### Get default username and email - `GET /auth` -: Get the default username and email +`GET /auth` + +Get the default username and email **Example request**: @@ -813,10 +832,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Check auth configuration and store it](#id30) +### Check auth configuration and store it - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -839,10 +859,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### [Display system-wide information](#id31) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -868,10 +889,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Show the docker version information](#id32) +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -893,10 +915,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id33) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -924,7 +947,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") Status Codes: @@ -932,28 +955,28 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -## [3. Going further](#id34) +# 3. 
Going further -### [3.1 Inside ‘docker run’](#id35) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : -- Create the container + - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container + - If the status code is 404, it means the image doesn't exists: + - Try to pull it + - Then retry to create the container -- Start the container + - Start the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - If you are not in detached mode: + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 -- If in detached mode or only stdin is attached: - : - Display the container’s id + - If in detached mode or only stdin is attached: + - Display the container's -### [3.2 Hijacking](#id36) +## 3.2 Hijacking In this version of the API, /attach uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.2.md b/docs/sources/reference/api/archive/docker_remote_api_v1.2.md index 0239de6681..19703a0028 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.2.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.2.md @@ -2,24 +2,25 @@ page_title: Remote API v1.2 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.2](#id1) +# Docker Remote API v1.2 -## [1. Brief introduction](#id2) +# 1. Brief introduction -- The Remote API is replacing rcli -- Default port in the docker daemon is 4243 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr +- The Remote API is replacing rcli +- Default port in the docker daemon is 4243 +- The API tends to be REST, but for some complex commands, like attach + or pull, the HTTP connection is hijacked to transport stdout stdin + and stderr -## [2. Endpoints](#id3) +# 2. 
Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -92,10 +93,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -138,7 +140,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Status Codes: @@ -147,10 +149,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -214,10 +217,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id8) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -249,10 +253,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id9) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -271,10 +276,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id10) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -290,10 +296,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id11) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -315,10 +322,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a container](#id12) +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -340,10 +348,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id13) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -359,10 +368,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id14) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` 
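The "3.1 Inside `docker run`" notes further down describe the client-side sequence these endpoints support: create the container, pull the image and retry if the create returns 404, then start it. A hedged sketch of that flow follows; the daemon address, the `Id` field in the create response, and the `fromImage` parameter on `POST /images/create` are assumptions, as the abridged hunks here do not show them.

    import http.client
    import json

    # Hedged sketch of the "Inside `docker run`" sequence: create, pull the
    # image on 404, retry the create, then start. Assumes a daemon on
    # 127.0.0.1:4243; the `fromImage` query parameter is an assumption.
    HOST, PORT = "127.0.0.1", 4243
    config = {"Image": "base", "Cmd": ["date"]}

    def post(path, body=None):
        conn = http.client.HTTPConnection(HOST, PORT)
        headers = {"Content-Type": "application/json"} if body else {}
        conn.request("POST", path, body=body, headers=headers)
        resp = conn.getresponse()
        data = resp.read()
        conn.close()
        return resp.status, data

    status, data = post("/containers/create", json.dumps(config))
    if status == 404:
        # Image is missing: pull it, then retry the create.
        post("/images/create?fromImage=" + config["Image"])
        status, data = post("/containers/create", json.dumps(config))

    container_id = json.loads(data)["Id"]
    post("/containers/%s/start" % container_id)
    print("started", container_id)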
**Example request**: @@ -397,11 +407,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Wait a container](#id15) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -420,10 +430,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id16) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -447,13 +458,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id17) +## 2.2 Images -#### [List Images](#id18) +### List Images - `GET /images/`(*format*) -: List images `format` could be json or viz (json - default) +`GET /images/(format)` + +List images `format` could be json or viz (json default) **Example request**: @@ -523,11 +534,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create an image](#id19) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -558,11 +569,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id20) +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -583,10 +594,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id21) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -631,10 +643,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Get the history of an image](#id22) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -665,10 +678,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id23) +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry > **Example request**: > @@ -697,10 +711,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Tag an image into a repository](#id24) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -725,10 +740,11 @@ page_keywords: API, Docker, rcli, REST, documentation - 
**409** – conflict - **500** – server error -#### [Remove an image](#id25) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -752,10 +768,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Search images](#id26) +### Search images - `GET /images/search` -: Search for an image in the docker index +`GET /images/search` + +Search for an image in the docker index **Example request**: @@ -785,12 +802,13 @@ page_keywords: API, Docker, rcli, REST, documentation :statuscode 200: no error :statuscode 500: server error -### [2.3 Misc](#id27) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id28) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile +`POST /build` + +Build an image from Dockerfile **Example request**: @@ -820,10 +838,11 @@ page_keywords: API, Docker, rcli, REST, documentation {{ STREAM }} is the raw text output of the build command. It uses the HTTP Hijack method in order to stream. -#### [Check auth configuration](#id29) +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -853,10 +872,11 @@ HTTP Hijack method in order to stream. - **403** – forbidden - **500** – server error -#### [Display system-wide information](#id30) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -882,10 +902,11 @@ HTTP Hijack method in order to stream. - **200** – no error - **500** – server error -#### [Show the docker version information](#id31) +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -907,10 +928,11 @@ HTTP Hijack method in order to stream. - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id32) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -938,7 +960,7 @@ HTTP Hijack method in order to stream. - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") Status Codes: @@ -946,33 +968,33 @@ HTTP Hijack method in order to stream. - **404** – no such container - **500** – server error -## [3. Going further](#id33) +# 3. 
Going further -### [3.1 Inside ‘docker run’](#id34) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : -- Create the container + - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container + - If the status code is 404, it means the image doesn't exists: + - Try to pull it + - Then retry to create the container -- Start the container + - Start the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - If you are not in detached mode: + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 -- If in detached mode or only stdin is attached: - : - Display the container’s id + - If in detached mode or only stdin is attached: + - Display the container's -### [3.2 Hijacking](#id35) +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### [3.3 CORS Requests](#id36) +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.3.md b/docs/sources/reference/api/archive/docker_remote_api_v1.3.md index d83b9d85b1..510719ee00 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.3.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.3.md @@ -2,24 +2,25 @@ page_title: Remote API v1.3 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.3](#id1) +# Docker Remote API v1.3 -## [1. Brief introduction](#id2) +# 1. Brief introduction -- The Remote API is replacing rcli -- Default port in the docker daemon is 4243 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr +- The Remote API is replacing rcli +- Default port in the docker daemon is 4243 +- The API tends to be REST, but for some complex commands, like attach + or pull, the HTTP connection is hijacked to transport stdout stdin + and stderr -## [2. Endpoints](#id3) +# 2. 
Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -94,10 +95,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -140,7 +142,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Status Codes: @@ -149,10 +151,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -216,10 +219,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [List processes running inside a container](#id8) +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -251,10 +255,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id9) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -286,10 +291,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id10) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -308,10 +314,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id11) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -331,7 +338,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **hostConfig** – the container’s host configuration (optional) + - **hostConfig** – the container's host configuration (optional) Status Codes: @@ -339,10 +346,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id12) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -364,10 +372,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a container](#id13) +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -389,10 
+398,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id14) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -408,10 +418,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id15) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -446,11 +457,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Wait a container](#id16) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -469,10 +480,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id17) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -496,13 +508,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id18) +## 2.2 Images -#### [List Images](#id19) +### List Images - `GET /images/`(*format*) -: List images `format` could be json or viz (json - default) +`GET /images/(format)` + +List images `format` could be json or viz (json default) **Example request**: @@ -572,11 +584,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create an image](#id20) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -607,11 +619,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id21) +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -632,10 +644,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id22) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -680,10 +693,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Get the history of an image](#id23) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -713,10 +727,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id24) +### Push an 
image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry > **Example request**: > @@ -745,10 +760,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Tag an image into a repository](#id25) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -773,10 +789,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Remove an image](#id26) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -800,10 +817,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Search images](#id27) +### Search images - `GET /images/search` -: Search for an image in the docker index +`GET /images/search` + +Search for an image in the docker index **Example request**: @@ -833,12 +851,13 @@ page_keywords: API, Docker, rcli, REST, documentation :statuscode 200: no error :statuscode 500: server error -### [2.3 Misc](#id28) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id29) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -873,10 +892,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Check auth configuration](#id30) +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -899,10 +919,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### [Display system-wide information](#id31) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -931,10 +952,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Show the docker version information](#id32) +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -956,10 +978,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id33) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -987,7 +1010,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. 
"John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") Status Codes: @@ -995,11 +1018,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Monitor Docker’s events](#id34) +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via - polling (using since) +`GET /events` + +Get events from docker, either in real time via streaming, or via +polling (using since) **Example request**: @@ -1026,33 +1050,33 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -## [3. Going further](#id35) +# 3. Going further -### [3.1 Inside ‘docker run’](#id36) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : -- Create the container + - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container + - If the status code is 404, it means the image doesn't exists: + - Try to pull it + - Then retry to create the container -- Start the container + - Start the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - If you are not in detached mode: + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 -- If in detached mode or only stdin is attached: - : - Display the container’s id + - If in detached mode or only stdin is attached: + - Display the container's id -### [3.2 Hijacking](#id37) +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### [3.3 CORS Requests](#id38) +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.4.md b/docs/sources/reference/api/archive/docker_remote_api_v1.4.md index 32733708b3..a7d52de871 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.4.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.4.md @@ -2,24 +2,25 @@ page_title: Remote API v1.4 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.4](#id1) +# Docker Remote API v1.4 -## [1. Brief introduction](#id2) +# 1. Brief introduction -- The Remote API is replacing rcli -- Default port in the docker daemon is 4243 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr +- The Remote API is replacing rcli +- Default port in the docker daemon is 4243 +- The API tends to be REST, but for some complex commands, like attach + or pull, the HTTP connection is hijacked to transport stdout stdin + and stderr -## [2. Endpoints](#id3) +# 2. 
Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -94,10 +95,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -143,7 +145,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Status Codes: @@ -152,10 +154,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -222,10 +225,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict between containers and images - **500** – server error -#### [List processes running inside a container](#id8) +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -260,7 +264,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **ps\_args** – ps arguments to use (eg. aux) + - **ps_args** – ps arguments to use (eg. aux) Status Codes: @@ -268,10 +272,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id9) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -303,10 +308,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id10) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -325,10 +331,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id11) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -349,7 +356,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **hostConfig** – the container’s host configuration (optional) + - **hostConfig** – the container's host configuration (optional) Status Codes: @@ -357,10 +364,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id12) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -382,10 +390,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a 
container](#id13) +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -407,10 +416,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id14) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -426,10 +436,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id15) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -464,11 +475,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Wait a container](#id16) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -487,10 +498,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id17) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -514,10 +526,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Copy files or folders from a container](#id18) +### Copy files or folders from a container - `POST /containers/`(*id*)`/copy` -: Copy files or folders of container `id` +`POST /containers/(id)/copy` + +Copy files or folders of container `id` **Example request**: @@ -541,13 +554,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id19) +## 2.2 Images -#### [List Images](#id20) +### List Images - `GET /images/`(*format*) -: List images `format` could be json or viz (json - default) +`GET /images/(format)` + +List images `format` could be json or viz (json default) **Example request**: @@ -617,11 +630,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create an image](#id21) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -652,11 +665,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id22) +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -677,10 +690,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id23) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level 
information on the image `name` **Example request**: @@ -727,10 +741,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict between containers and images - **500** – server error -#### [Get the history of an image](#id24) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -760,10 +775,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id25) +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry **Example request**: @@ -789,10 +805,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error :statuscode 404: no such image :statuscode 500: server error -#### [Tag an image into a repository](#id26) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -817,10 +834,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Remove an image](#id27) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -844,10 +862,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Search images](#id28) +### Search images - `GET /images/search` -: Search for an image in the docker index +`GET /images/search` + +Search for an image in the docker index **Example request**: @@ -877,12 +896,13 @@ page_keywords: API, Docker, rcli, REST, documentation :statuscode 200: no error :statuscode 500: server error -### [2.3 Misc](#id29) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id30) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -918,10 +938,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Check auth configuration](#id31) +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -945,10 +966,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### [Display system-wide information](#id32) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -975,35 +997,38 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Show the docker version information](#id33) +### Show the docker version information - `GET /version` -: Show the docker version information - > - > **Example request**: - > - > GET /version HTTP/1.1 - > - > **Example response**: - > - > HTTP/1.1 200 OK - > Content-Type: application/json - > - > { - > "Version":"0.2.2", - > "GitCommit":"5a2a5cc+CHANGES", - > "GoVersion":"go1.0.3" - > } +`GET /version` + +Show the docker version information + + + **Example request**: + + GET 
/version HTTP/1.1 + + **Example response**: + + HTTP/1.1 200 OK + Content-Type: application/json + + { + "Version":"0.2.2", + "GitCommit":"5a2a5cc+CHANGES", + "GoVersion":"go1.0.3" + } Status Codes: - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id34) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -1031,7 +1056,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") Status Codes: @@ -1039,11 +1064,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Monitor Docker’s events](#id35) +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via - polling (using since) +`GET /events` + +Get events from docker, either in real time via streaming, or via +polling (using since) **Example request**: @@ -1070,33 +1096,33 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -## [3. Going further](#id36) +# 3. Going further -### [3.1 Inside ‘docker run’](#id37) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : -- Create the container + - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container + - If the status code is 404, it means the image doesn't exists: + - Try to pull it + - Then retry to create the container -- Start the container + - Start the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - If you are not in detached mode: + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 -- If in detached mode or only stdin is attached: - : - Display the container’s id + - If in detached mode or only stdin is attached: + - Display the container's id -### [3.2 Hijacking](#id38) +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### [3.3 CORS Requests](#id39) +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.5.md b/docs/sources/reference/api/archive/docker_remote_api_v1.5.md index adef571b3f..c9fd854f44 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.5.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.5.md @@ -2,24 +2,25 @@ page_title: Remote API v1.5 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.5](#id1) +# Docker Remote API v1.5 -## [1. Brief introduction](#id2) +# 1. 
Brief introduction -- The Remote API is replacing rcli -- Default port in the docker daemon is 4243 -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr +- The Remote API is replacing rcli +- Default port in the docker daemon is 4243 +- The API tends to be REST, but for some complex commands, like attach + or pull, the HTTP connection is hijacked to transport stdout stdin + and stderr -## [2. Endpoints](#id3) +# 2. Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -94,10 +95,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -142,7 +144,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Status Codes: @@ -151,10 +153,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -219,10 +222,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [List processes running inside a container](#id8) +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -257,7 +261,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **ps\_args** – ps arguments to use (eg. aux) + - **ps_args** – ps arguments to use (eg. 
aux) Status Codes: @@ -265,10 +269,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id9) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -300,10 +305,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id10) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -322,10 +328,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id11) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -346,7 +353,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **hostConfig** – the container’s host configuration (optional) + - **hostConfig** – the container's host configuration (optional) Status Codes: @@ -354,10 +361,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id12) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -379,10 +387,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a container](#id13) +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -404,10 +413,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id14) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -423,10 +433,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id15) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -461,11 +472,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Wait a container](#id16) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -484,10 +495,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id17) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -511,10 +523,11 @@ page_keywords: API, Docker, rcli, REST, documentation - 
**404** – no such container - **500** – server error -#### [Copy files or folders from a container](#id18) +### Copy files or folders from a container - `POST /containers/`(*id*)`/copy` -: Copy files or folders of container `id` +`POST /containers/(id)/copy` + +Copy files or folders of container `id` **Example request**: @@ -538,13 +551,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id19) +## 2.2 Images -#### [List Images](#id20) +### List Images - `GET /images/`(*format*) -: List images `format` could be json or viz (json - default) +`GET /images/(format)` + +List images `format` could be json or viz (json default) **Example request**: @@ -614,11 +627,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create an image](#id21) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -653,11 +666,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id22) +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -678,10 +691,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id23) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -727,10 +741,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Get the history of an image](#id24) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -760,10 +775,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id25) +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry **Example request**: @@ -794,10 +810,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Tag an image into a repository](#id26) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -822,10 +839,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Remove an image](#id27) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -849,10 +867,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Search images](#id28) +### Search images - `GET /images/search` -: Search for 
an image in the docker index +`GET /images/search` + +Search for an image in the docker index **Example request**: @@ -889,12 +908,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -### [2.3 Misc](#id29) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id30) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -931,10 +951,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Check auth configuration](#id31) +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -958,10 +979,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### [Display system-wide information](#id32) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -988,10 +1010,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Show the docker version information](#id33) +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -1013,10 +1036,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id34) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -1044,7 +1068,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") Status Codes: @@ -1052,11 +1076,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Monitor Docker’s events](#id35) +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via - polling (using since) +`GET /events` + +Get events from docker, either in real time via streaming, or via +polling (using since) **Example request**: @@ -1083,28 +1108,28 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -## [3. Going further](#id36) +# 3. 
Going further -### [3.1 Inside ‘docker run’](#id37) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run`: -- Create the container -- If the status code is 404, it means the image doesn’t exists: \* Try - to pull it \* Then retry to create the container -- Start the container -- If you are not in detached mode: \* Attach to the container, using - logs=1 (to have stdout and stderr from the container’s start) and - stream=1 -- If in detached mode or only stdin is attached: \* Display the - container’s id + - Create the container + - If the status code is 404, it means the image doesn't exists: + Try to pull it - Then retry to create the container + - Start the container + - If you are not in detached mode: + Attach to the container, using logs=1 (to have stdout and stderr + from the container's start) and stream=1 + - If in detached mode or only stdin is attached: + Display the container's id -### [3.2 Hijacking](#id38) +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### [3.3 CORS Requests](#id39) +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.6.md b/docs/sources/reference/api/archive/docker_remote_api_v1.6.md index 5bd0e46d50..2ec7336a75 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.6.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.6.md @@ -2,27 +2,27 @@ page_title: Remote API v1.6 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.6](#id1) +# Docker Remote API v1.6 -## [1. Brief introduction](#id2) +# 1. Brief introduction -- The Remote API has replaced rcli -- The daemon listens on `unix:///var/run/docker.sock` -, but you can [*Bind Docker to another host/port or a Unix - socket*](../../../../use/basics/#bind-docker). -- The API tends to be REST, but for some complex commands, like - `attach` or `pull`, the HTTP - connection is hijacked to transport `stdout, stdin` - and `stderr` + - The Remote API has replaced rcli + - The daemon listens on `unix:///var/run/docker.sock` but you can + [*Bind Docker to another host/port or a Unix socket*]( + /use/basics/#bind-docker). + - The API tends to be REST, but for some complex commands, like `attach` + or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` + and `stderr` -## [2. Endpoints](#id3) +# 2. 
Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -97,10 +97,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -144,7 +145,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Query Parameters: @@ -202,10 +203,11 @@ page_keywords: API, Docker, rcli, REST, documentation **Now you can ssh into your new container on port 11022.** -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -271,10 +273,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [List processes running inside a container](#id8) +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -309,7 +312,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **ps\_args** – ps arguments to use (eg. aux) + - **ps_args** – ps arguments to use (eg. aux) Status Codes: @@ -317,10 +320,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id9) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -352,10 +356,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id10) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -374,10 +379,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id11) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -403,7 +409,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **hostConfig** – the container’s host configuration (optional) + - **hostConfig** – the container's host configuration (optional) Status Codes: @@ -411,10 +417,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id12) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -436,10 +443,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a container](#id13) +### Restart a 
container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -461,10 +469,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id14) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -488,10 +497,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id15) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -530,8 +540,8 @@ page_keywords: API, Docker, rcli, REST, documentation When using the TTY setting is enabled in [`POST /containers/create` -](../../docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), - the stream is the raw data from the process PTY and client’s stdin. + ](/api/docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), + the stream is the raw data from the process PTY and client's stdin. When the TTY is disabled, then the stream is multiplexed to separate stdout and stderr. @@ -570,11 +580,11 @@ page_keywords: API, Docker, rcli, REST, documentation 4. Read the extracted size and output it on the correct output 5. Goto 1) -#### [Wait a container](#id16) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -593,10 +603,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id17) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -620,10 +631,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Copy files or folders from a container](#id18) +### Copy files or folders from a container - `POST /containers/`(*id*)`/copy` -: Copy files or folders of container `id` +`POST /containers/(id)/copy` + +Copy files or folders of container `id` **Example request**: @@ -647,13 +659,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id19) +## 2.2 Images -#### [List Images](#id20) +### List Images - `GET /images/`(*format*) -: List images `format` could be json or viz (json - default) +`GET /images/(format)` + +List images `format` could be json or viz (json default) **Example request**: @@ -723,11 +735,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create an image](#id21) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -762,11 +774,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id22) +### Insert a file in 
an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -787,10 +799,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id23) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -836,10 +849,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Get the history of an image](#id24) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -869,10 +883,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id25) +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry **Example request**: @@ -900,10 +915,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error :statuscode 404: no such image :statuscode 500: server error -#### [Tag an image into a repository](#id26) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -928,10 +944,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Remove an image](#id27) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -955,10 +972,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Search images](#id28) +### Search images - `GET /images/search` -: Search for an image in the docker index +`GET /images/search` + +Search for an image in the docker index **Example request**: @@ -988,12 +1006,13 @@ page_keywords: API, Docker, rcli, REST, documentation :statuscode 200: no error :statuscode 500: server error -### [2.3 Misc](#id29) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id30) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -1029,10 +1048,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Check auth configuration](#id31) +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -1056,10 +1076,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### [Display system-wide information](#id32) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -1086,10 +1107,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – 
server error -#### [Show the docker version information](#id33) +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -1111,10 +1133,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id34) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -1142,7 +1165,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") Status Codes: @@ -1150,11 +1173,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Monitor Docker’s events](#id35) +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via - polling (using since) +`GET /events` + +Get events from docker, either in real time via streaming, or via +polling (using since) **Example request**: @@ -1181,33 +1205,33 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -## [3. Going further](#id36) +# 3. Going further -### [3.1 Inside ‘docker run’](#id37) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it +- If the status code is 404, it means the image doesn't exists: + - Try to pull it - Then retry to create the container - Start the container - If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 - If in detached mode or only stdin is attached: - : - Display the container’s id + - Display the container's id -### [3.2 Hijacking](#id38) +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### [3.3 CORS Requests](#id39) +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.7.md b/docs/sources/reference/api/archive/docker_remote_api_v1.7.md index ac02aa5d0e..cf748a7f9b 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.7.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.7.md @@ -2,27 +2,27 @@ page_title: Remote API v1.7 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.7](#id1) +# Docker Remote API v1.7 -## [1. Brief introduction](#id2) +# 1. Brief introduction -- The Remote API has replaced rcli -- The daemon listens on `unix:///var/run/docker.sock` -, but you can [*Bind Docker to another host/port or a Unix - socket*](../../../../use/basics/#bind-docker). 
-- The API tends to be REST, but for some complex commands, like - `attach` or `pull`, the HTTP - connection is hijacked to transport `stdout, stdin` - and `stderr` + - The Remote API has replaced rcli + - The daemon listens on `unix:///var/run/docker.sock` but you can + [*Bind Docker to another host/port or a Unix socket*]( + /use/basics/#bind-docker). + - The API tends to be REST, but for some complex commands, like `attach` + or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` + and `stderr` -## [2. Endpoints](#id3) +# 2. Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -97,10 +97,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -149,7 +150,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Status Codes: @@ -158,10 +159,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -227,10 +229,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [List processes running inside a container](#id8) +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -265,7 +268,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **ps\_args** – ps arguments to use (eg. aux) + - **ps_args** – ps arguments to use (eg. 
aux) Status Codes: @@ -273,10 +276,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id9) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -308,10 +312,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id10) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -330,10 +335,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id11) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -360,7 +366,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **hostConfig** – the container’s host configuration (optional) + - **hostConfig** – the container's host configuration (optional) Status Codes: @@ -368,10 +374,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id12) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -393,10 +400,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a container](#id13) +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -418,10 +426,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id14) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -437,10 +446,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id15) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -479,8 +489,8 @@ page_keywords: API, Docker, rcli, REST, documentation When using the TTY setting is enabled in [`POST /containers/create` -](../../docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), - the stream is the raw data from the process PTY and client’s stdin. + ](/api/docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), + the stream is the raw data from the process PTY and client's stdin. When the TTY is disabled, then the stream is multiplexed to separate stdout and stderr. @@ -519,11 +529,11 @@ page_keywords: API, Docker, rcli, REST, documentation 4. Read the extracted size and output it on the correct output 5. 
Goto 1) -#### [Wait a container](#id16) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -542,10 +552,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id17) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -569,10 +580,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Copy files or folders from a container](#id18) +### Copy files or folders from a container - `POST /containers/`(*id*)`/copy` -: Copy files or folders of container `id` +`POST /containers/(id)/copy` + +Copy files or folders of container `id` **Example request**: @@ -596,12 +608,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id19) +## 2.2 Images -#### [List Images](#id20) +### List Images - `GET /images/json` -: **Example request**: +`GET /images/json` + +**Example request**: GET /images/json?all=0 HTTP/1.1 @@ -635,11 +648,11 @@ page_keywords: API, Docker, rcli, REST, documentation } ] -#### [Create an image](#id21) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -680,11 +693,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id22) +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -705,10 +718,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id23) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -754,10 +768,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Get the history of an image](#id24) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -787,10 +802,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id25) +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry **Example request**: @@ -825,10 +841,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Tag an image into a repository](#id26) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the 
image `name` into a repository **Example request**: @@ -853,10 +870,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Remove an image](#id27) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -880,14 +898,15 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Search images](#id28) +### Search images - `GET /images/search` -: Search for an image in the docker index. +`GET /images/search` + +Search for an image in the docker index. > **Note**: > The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon’s request. +> sent by the registry server to the docker daemon's request. **Example request**: @@ -934,12 +953,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -### [2.3 Misc](#id29) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id30) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -958,9 +978,9 @@ page_keywords: API, Docker, rcli, REST, documentation following algorithms: identity (no compression), gzip, bzip2, xz. The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, + at its root. It may include any number of other files, which will be accessible in the build context (See the [*ADD build - command*](../../../builder/#dockerbuilder)). + command*](/builder/#dockerbuilder)). Query Parameters: @@ -983,10 +1003,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Check auth configuration](#id31) +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -1010,10 +1031,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### [Display system-wide information](#id32) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -1040,10 +1062,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Show the docker version information](#id33) +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -1065,10 +1088,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id34) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -1090,7 +1114,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - **run** – config automatically applied when the image is run. 
(ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]}) @@ -1100,11 +1124,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Monitor Docker’s events](#id35) +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via - polling (using since) +`GET /events` + +Get events from docker, either in real time via streaming, or via +polling (using since) **Example request**: @@ -1131,11 +1156,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Get a tarball containing all images and tags in a repository](#id36) +### Get a tarball containing all images and tags in a repository - `GET /images/`(*name*)`/get` -: Get a tarball containing all images and metadata for the repository - specified by `name`. +`GET /images/(name)/get` + +Get a tarball containing all images and metadata for the repository +specified by `name`. **Example request** @@ -1152,10 +1178,11 @@ page_keywords: API, Docker, rcli, REST, documentation :statuscode 200: no error :statuscode 500: server error -#### [Load a tarball with a set of images and tags into docker](#id37) +### Load a tarball with a set of images and tags into docker - `POST /images/load` -: Load a set of images and tags into the docker repository. +`POST /images/load` + +Load a set of images and tags into the docker repository. **Example request** @@ -1172,33 +1199,33 @@ page_keywords: API, Docker, rcli, REST, documentation :statuscode 200: no error :statuscode 500: server error -## [3. Going further](#id38) +# 3. Going further -### [3.1 Inside ‘docker run’](#id39) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it +- If the status code is 404, it means the image doesn't exists: + - Try to pull it - Then retry to create the container - Start the container - If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 - If in detached mode or only stdin is attached: - : - Display the container’s id + - Display the container's id -### [3.2 Hijacking](#id40) +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### [3.3 CORS Requests](#id41) +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/archive/docker_remote_api_v1.8.md b/docs/sources/reference/api/archive/docker_remote_api_v1.8.md index eb29699e62..8520e9f1e5 100644 --- a/docs/sources/reference/api/archive/docker_remote_api_v1.8.md +++ b/docs/sources/reference/api/archive/docker_remote_api_v1.8.md @@ -2,27 +2,27 @@ page_title: Remote API v1.8 page_description: API Documentation for Docker page_keywords: API, Docker, rcli, REST, documentation -# [Docker Remote API v1.8](#id1) +# Docker Remote API v1.8 -## [1. Brief introduction](#id2) +# 1. 
Brief introduction -- The Remote API has replaced rcli -- The daemon listens on `unix:///var/run/docker.sock` -, but you can [*Bind Docker to another host/port or a Unix - socket*](../../../../use/basics/#bind-docker). -- The API tends to be REST, but for some complex commands, like - `attach` or `pull`, the HTTP - connection is hijacked to transport `stdout, stdin` - and `stderr` + - The Remote API has replaced rcli + - The daemon listens on `unix:///var/run/docker.sock` but you can + [*Bind Docker to another host/port or a Unix socket*]( + /use/basics/#bind-docker). + - The API tends to be REST, but for some complex commands, like `attach` + or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` + and `stderr` -## [2. Endpoints](#id3) +# 2. Endpoints -### [2.1 Containers](#id4) +## 2.1 Containers -#### [List containers](#id5) +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -97,10 +97,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### [Create a container](#id6) +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -179,11 +180,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### [Inspect a container](#id7) +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` +Return low-level information on the container `id` **Example request**: @@ -264,10 +265,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [List processes running inside a container](#id8) +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -302,7 +304,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **ps\_args** – ps arguments to use (eg. aux) + - **ps_args** – ps arguments to use (eg. 
aux) Status Codes: @@ -310,10 +312,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Inspect changes on a container’s filesystem](#id9) +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -345,10 +348,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Export a container](#id10) +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -367,10 +371,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Start a container](#id11) +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -411,10 +416,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Stop a container](#id12) +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -436,10 +442,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Restart a container](#id13) +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -461,10 +468,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Kill a container](#id14) +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -480,10 +488,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Attach to a container](#id15) +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -522,8 +531,8 @@ page_keywords: API, Docker, rcli, REST, documentation When using the TTY setting is enabled in [`POST /containers/create` -](../../docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), - the stream is the raw data from the process PTY and client’s stdin. + ](/api/docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), + the stream is the raw data from the process PTY and client's stdin. When the TTY is disabled, then the stream is multiplexed to separate stdout and stderr. @@ -562,11 +571,11 @@ page_keywords: API, Docker, rcli, REST, documentation 4. Read the extracted size and output it on the correct output 5. 
Goto 1) -#### [Wait a container](#id16) +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -585,10 +594,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Remove a container](#id17) +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -612,10 +622,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Copy files or folders from a container](#id18) +### Copy files or folders from a container - `POST /containers/`(*id*)`/copy` -: Copy files or folders of container `id` +`POST /containers/(id)/copy` + +Copy files or folders of container `id` **Example request**: @@ -639,12 +650,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### [2.2 Images](#id19) +## 2.2 Images -#### [List Images](#id20) +### List Images - `GET /images/json` -: **Example request**: +`GET /images/json` + +**Example request**: GET /images/json?all=0 HTTP/1.1 @@ -678,11 +690,11 @@ page_keywords: API, Docker, rcli, REST, documentation } ] -#### [Create an image](#id21) +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -723,11 +735,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Insert a file in an image](#id22) +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -748,10 +760,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Inspect an image](#id23) +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -797,10 +810,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Get the history of an image](#id24) +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -830,10 +844,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Push an image on the registry](#id25) +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry **Example request**: @@ -868,10 +883,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### [Tag an image into a repository](#id26) +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the 
image `name` into a repository **Example request**: @@ -896,10 +912,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Remove an image](#id27) +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -923,14 +940,15 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### [Search images](#id28) +### Search images - `GET /images/search` -: Search for an image in the docker index. +`GET /images/search` + +Search for an image in the docker index. > **Note**: > The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon’s request. +> sent by the registry server to the docker daemon's request. **Example request**: @@ -977,12 +995,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -### [2.3 Misc](#id29) +## 2.3 Misc -#### [Build an image from Dockerfile via stdin](#id30) +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -1003,9 +1022,9 @@ page_keywords: API, Docker, rcli, REST, documentation following algorithms: identity (no compression), gzip, bzip2, xz. The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, + at its root. It may include any number of other files, which will be accessible in the build context (See the [*ADD build - command*](../../../builder/#dockerbuilder)). + command*](/reference/builder/#dockerbuilder)). Query Parameters: @@ -1029,10 +1048,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Check auth configuration](#id31) +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -1056,10 +1076,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### [Display system-wide information](#id32) +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -1086,10 +1107,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Show the docker version information](#id33) +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -1111,10 +1133,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Create a new image from a container’s changes](#id34) +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -1136,7 +1159,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - **run** – config automatically applied when the image is run. 
(ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]}) @@ -1146,11 +1169,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### [Monitor Docker’s events](#id35) +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via - polling (using since) +`GET /events` + +Get events from docker, either in real time via streaming, +or via polling (using since) **Example request**: @@ -1177,11 +1201,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Get a tarball containing all images and tags in a repository](#id36) +### Get a tarball containing all images and tags in a repository - `GET /images/`(*name*)`/get` -: Get a tarball containing all images and metadata for the repository - specified by `name`. +`GET /images/(name)/get` + +Get a tarball containing all images and metadata for the repository +specified by `name`. **Example request** @@ -1199,10 +1224,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### [Load a tarball with a set of images and tags into docker](#id37) +### Load a tarball with a set of images and tags into docker - `POST /images/load` -: Load a set of images and tags into the docker repository. +`POST /images/load` + +Load a set of images and tags into the docker repository. **Example request** @@ -1219,33 +1245,33 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -## [3. Going further](#id38) +# 3. Going further -### [3.1 Inside ‘docker run’](#id39) +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run`: -- Create the container + - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container + - If the status code is 404, it means the image doesn't exists: + - Try to pull it + - Then retry to create the container -- Start the container + - Start the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - If you are not in detached mode: + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 -- If in detached mode or only stdin is attached: - : - Display the container’s id + - If in detached mode or only stdin is attached: + - Display the container's id -### [3.2 Hijacking](#id40) +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### [3.3 CORS Requests](#id41) +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/docker_io_accounts_api.md b/docs/sources/reference/api/docker_io_accounts_api.md index e5e77dc421..8186e306f8 100644 --- a/docs/sources/reference/api/docker_io_accounts_api.md +++ b/docs/sources/reference/api/docker_io_accounts_api.md @@ -8,8 +8,9 @@ page_keywords: API, Docker, accounts, REST, documentation ### 1.1 Get a single user - `GET /api/v1.1/users/:username/` -: Get profile info for the specified user. +`GET /api/v1.1/users/:username/` + +Get profile info for the specified user. 
Parameters: @@ -61,8 +62,9 @@ page_keywords: API, Docker, accounts, REST, documentation ### 1.2 Update a single user - `PATCH /api/v1.1/users/:username/` -: Update profile info for the specified user. +`PATCH /api/v1.1/users/:username/` + +Update profile info for the specified user. Parameters: @@ -73,11 +75,11 @@ page_keywords: API, Docker, accounts, REST, documentation   - - **full\_name** (*string*) – (optional) the new name of the user. + - **full_name** (*string*) – (optional) the new name of the user. - **location** (*string*) – (optional) the new location. - **company** (*string*) – (optional) the new company of the user. - - **profile\_url** (*string*) – (optional) the new profile url. - - **gravatar\_email** (*string*) – (optional) the new Gravatar + - **profile_url** (*string*) – (optional) the new profile url. + - **gravatar_email** (*string*) – (optional) the new Gravatar email address. Request Headers: @@ -134,8 +136,9 @@ page_keywords: API, Docker, accounts, REST, documentation ### 1.3 List email addresses for a user - `GET /api/v1.1/users/:username/emails/` -: List email info for the specified user. +`GET /api/v1.1/users/:username/emails/` + +List email info for the specified user. Parameters: @@ -180,10 +183,11 @@ page_keywords: API, Docker, accounts, REST, documentation ### 1.4 Add email address for a user - `POST /api/v1.1/users/:username/emails/` -: Add a new email address to the specified user’s account. The email - address must be verified separately, a confirmation email is not - automatically sent. +`POST /api/v1.1/users/:username/emails/` + +Add a new email address to the specified user's account. The email +address must be verified separately, a confirmation email is not +automatically sent. Json Parameters: @@ -235,12 +239,13 @@ page_keywords: API, Docker, accounts, REST, documentation ### 1.5 Update an email address for a user - `PATCH /api/v1.1/users/:username/emails/` -: Update an email address for the specified user to either verify an - email address or set it as the primary email for the user. You - cannot use this endpoint to un-verify an email address. You cannot - use this endpoint to unset the primary email, only set another as - the primary. +`PATCH /api/v1.1/users/:username/emails/` + +Update an email address for the specified user to either verify an +email address or set it as the primary email for the user. You +cannot use this endpoint to un-verify an email address. You cannot +use this endpoint to unset the primary email, only set another as +the primary. Parameters: @@ -269,7 +274,7 @@ page_keywords: API, Docker, accounts, REST, documentation Status Codes: - - **200** – success, user’s email updated. + - **200** – success, user's email updated. - **400** – data validation error. - **401** – authentication error. - **403** – permission error, authenticated user must be the user @@ -305,9 +310,10 @@ page_keywords: API, Docker, accounts, REST, documentation ### 1.6 Delete email address for a user - `DELETE /api/v1.1/users/:username/emails/` -: Delete an email address from the specified user’s account. You - cannot delete a user’s primary email address. +`DELETE /api/v1.1/users/:username/emails/` + +Delete an email address from the specified user's account. You +cannot delete a user's primary email address. 
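A hedged sketch of the same call from Python follows. The base URL and addresses are placeholders, and the `email` body field is an assumption made for the example; the JSON parameter list below is authoritative:

    import requests

    # Hypothetical account and address; the primary address cannot be removed.
    resp = requests.delete(
        "https://docker.io/api/v1.1/users/janedoe/emails/",
        headers={"Authorization": "Bearer <access_token>"},
        json={"email": "jane.doe.old@example.com"},
    )
    print(resp.status_code)  # 204 NO CONTENT on success
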
Json Parameters: @@ -351,5 +357,3 @@ page_keywords: API, Docker, accounts, REST, documentation HTTP/1.1 204 NO CONTENT Content-Length: 0 - - diff --git a/docs/sources/reference/api/docker_io_oauth_api.md b/docs/sources/reference/api/docker_io_oauth_api.md index 3009f08e20..dd2f6d75ec 100644 --- a/docs/sources/reference/api/docker_io_oauth_api.md +++ b/docs/sources/reference/api/docker_io_oauth_api.md @@ -27,46 +27,47 @@ request registration of your application send an email to [support-accounts@docker.com](mailto:support-accounts%40docker.com) with the following information: -- The name of your application -- A description of your application and the service it will provide to - docker.io users. -- A callback URI that we will use for redirecting authorization - requests to your application. These are used in the step of getting - an Authorization Code. The domain name of the callback URI will be - visible to the user when they are requested to authorize your - application. + - The name of your application + - A description of your application and the service it will provide to + docker.io users. + - A callback URI that we will use for redirecting authorization + requests to your application. These are used in the step of getting + an Authorization Code. The domain name of the callback URI will be + visible to the user when they are requested to authorize your + application. When your application is approved you will receive a response from the docker.io team with your `client_id` and `client_secret` which your application will use in the steps of getting an Authorization Code and getting an Access Token. -## 3. Endpoints +# 3. Endpoints -### 3.1 Get an Authorization Code +## 3.1 Get an Authorization Code Once You have registered you are ready to start integrating docker.io accounts into your application! The process is usually started by a user following a link in your application to an OAuth Authorization endpoint. - `GET /api/v1.1/o/authorize/` -: Request that a docker.io user authorize your application. If the - user is not already logged in, they will be prompted to login. The - user is then presented with a form to authorize your application for - the requested access scope. On submission, the user will be - redirected to the specified `redirect_uri` with - an Authorization Code. +`GET /api/v1.1/o/authorize/` + +Request that a docker.io user authorize your application. If the +user is not already logged in, they will be prompted to login. The +user is then presented with a form to authorize your application for +the requested access scope. On submission, the user will be +redirected to the specified `redirect_uri` with +an Authorization Code. Query Parameters:   - - **client\_id** – The `client_id` given to + - **client_id** – The `client_id` given to your application at registration. - - **response\_type** – MUST be set to `code`. + - **response_type** – MUST be set to `code`. This specifies that you would like an Authorization Code returned. - - **redirect\_uri** – The URI to redirect back to after the user + - **redirect_uri** – The URI to redirect back to after the user has authorized your application. If omitted, the first of your registered `response_uris` is used. If included, it must be one of the URIs which were submitted when @@ -95,7 +96,7 @@ following a link in your application to an OAuth Authorization endpoint. prompt which asks the user to authorize your application with a description of the requested scopes. 
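As a non-authoritative illustration, the redirect your application performs can be assembled as in the Python sketch below. The `client_id` and `redirect_uri` values are placeholders standing in for the ones issued and registered for your application:

    from urllib.parse import urlencode

    # Placeholder credentials from application registration.
    params = {
        "client_id": "TestClientID",
        "response_type": "code",
        "redirect_uri": "https://my.app/oauth/callback",
    }
    authorize_url = "https://docker.io/api/v1.1/o/authorize/?" + urlencode(params)
    # Redirect the user's browser to authorize_url; on approval they are
    # sent back to redirect_uri with a code query parameter.
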
- ![](../../../_images/io_oauth_authorization_page.png) + ![](/reference/api/_static/io_oauth_authorization_page.png) Once the user allows or denies your Authorization Request the user will be redirected back to your application. Included in that @@ -113,34 +114,35 @@ following a link in your application to an OAuth Authorization endpoint. : An error message in the event of the user denying the authorization or some other kind of error with the request. -### 3.2 Get an Access Token +## 3.2 Get an Access Token Once the user has authorized your application, a request will be made to -your application’s specified `redirect_uri` which +your application'sspecified `redirect_uri` which includes a `code` parameter that you must then use to get an Access Token. - `POST /api/v1.1/o/token/` -: Submit your newly granted Authorization Code and your application’s - credentials to receive an Access Token and Refresh Token. The code - is valid for 60 seconds and cannot be used more than once. +`POST /api/v1.1/o/token/` + +Submit your newly granted Authorization Code and your application's +credentials to receive an Access Token and Refresh Token. The code +is valid for 60 seconds and cannot be used more than once. Request Headers:   - **Authorization** – HTTP basic authentication using your - application’s `client_id` and + application's `client_id` and `client_secret` Form Parameters:   - - **grant\_type** – MUST be set to `authorization_code` - - **code** – The authorization code received from the user’s + - **grant_type** – MUST be set to `authorization_code` + - **code** – The authorization code received from the user's redirect request. - - **redirect\_uri** – The same `redirect_uri` + - **redirect_uri** – The same `redirect_uri` used in the authentication request. **Example Request** @@ -177,31 +179,32 @@ to get an Access Token. In the case of an error, there will be a non-200 HTTP Status and and data detailing the error. -### 3.3 Refresh a Token +## 3.3 Refresh a Token Once the Access Token expires you can use your `refresh_token` to have docker.io issue your application a new Access Token, if the user has not revoked access from your application. - `POST /api/v1.1/o/token/` -: Submit your `refresh_token` and application’s - credentials to receive a new Access Token and Refresh Token. The - `refresh_token` can be used only once. +`POST /api/v1.1/o/token/` + +Submit your `refresh_token` and application's +credentials to receive a new Access Token and Refresh Token. The +`refresh_token` can be used only once. Request Headers:   - **Authorization** – HTTP basic authentication using your - application’s `client_id` and + application's `client_id` and `client_secret` Form Parameters:   - - **grant\_type** – MUST be set to `refresh_token` - - **refresh\_token** – The `refresh_token` + - **grant_type** – MUST be set to `refresh_token` + - **refresh_token** – The `refresh_token` which was issued to your application. - **scope** – (optional) The scope of the access token to be returned. Must not include any scope not originally granted by @@ -241,11 +244,10 @@ if the user has not revoked access from your application. In the case of an error, there will be a non-200 HTTP Status and and data detailing the error. -## 4. Use an Access Token with the API +# 4. Use an Access Token with the API Many of the docker.io API requests will require a Authorization request -header field. Simply ensure you add this header with "Bearer -\<`access_token`\>": +header field. 
Simply ensure you add this header with "Bearer <`access_token`>": GET /api/v1.1/resource HTTP/1.1 Host: docker.io diff --git a/docs/sources/reference/api/docker_remote_api.md b/docs/sources/reference/api/docker_remote_api.md index 5df7d8938c..3c58b1b990 100644 --- a/docs/sources/reference/api/docker_remote_api.md +++ b/docs/sources/reference/api/docker_remote_api.md @@ -6,31 +6,30 @@ page_keywords: API, Docker, rcli, REST, documentation ## 1. Brief introduction -- The Remote API is replacing rcli -- By default the Docker daemon listens on unix:///var/run/docker.sock - and the client must have root access to interact with the daemon -- If a group named *docker* exists on your system, docker will apply - ownership of the socket to the group -- The API tends to be REST, but for some complex commands, like attach - or pull, the HTTP connection is hijacked to transport stdout stdin - and stderr -- Since API version 1.2, the auth configuration is now handled client - side, so the client has to send the authConfig as POST in - /images/(name)/push -- authConfig, set as the `X-Registry-Auth` header, - is currently a Base64 encoded (json) string with credentials: - `{'username': string, 'password': string, 'email': string, 'serveraddress' : string}` + - The Remote API is replacing rcli + - By default the Docker daemon listens on unix:///var/run/docker.sock + and the client must have root access to interact with the daemon + - If a group named *docker* exists on your system, docker will apply + ownership of the socket to the group + - The API tends to be REST, but for some complex commands, like attach + or pull, the HTTP connection is hijacked to transport stdout stdin + and stderr + - Since API version 1.2, the auth configuration is now handled client + side, so the client has to send the authConfig as POST in /images/(name)/push + - authConfig, set as the `X-Registry-Auth` header, is currently a Base64 + encoded (json) string with credentials: + `{'username': string, 'password': string, 'email': string, 'serveraddress' : string}` ## 2. Versions The current version of the API is 1.11 -Calling /images/\/insert is the same as calling -/v1.11/images/\/insert +Calling /images//insert is the same as calling +/v1.11/images//insert You can still call an old version of the api using -/v1.11/images/\/insert +/v1.11/images//insert ### v1.11 @@ -38,11 +37,13 @@ You can still call an old version of the api using [*Docker Remote API v1.11*](../docker_remote_api_v1.11/) -#### What’s new +#### What's new - `GET /events` -: **New!** You can now use the `-until` parameter - to close connection after timestamp. +`GET /events` + +**New!** +You can now use the `-until` parameter to close connection +after timestamp. ### v1.10 @@ -50,16 +51,21 @@ You can still call an old version of the api using [*Docker Remote API v1.10*](../docker_remote_api_v1.10/) -#### What’s new +#### What's new - `DELETE /images/`(*name*) -: **New!** You can now use the force parameter to force delete of an - image, even if it’s tagged in multiple repositories. **New!** You +`DELETE /images/(name)` + +**New!** +You can now use the force parameter to force delete of an + image, even if it's tagged in multiple repositories. 
**New!** + You can now use the noprune parameter to prevent the deletion of parent images - `DELETE /containers/`(*id*) -: **New!** You can now use the force paramter to force delete a +`DELETE /containers/(id)` + +**New!** +You can now use the force paramter to force delete a container, even if it is currently running ### v1.9 @@ -68,51 +74,58 @@ You can still call an old version of the api using [*Docker Remote API v1.9*](../docker_remote_api_v1.9/) -#### What’s new +#### What's new - `POST /build` -: **New!** This endpoint now takes a serialized ConfigFile which it - uses to resolve the proper registry auth credentials for pulling the - base image. Clients which previously implemented the version - accepting an AuthConfig object must be updated. +`POST /build` + +**New!** +This endpoint now takes a serialized ConfigFile which it +uses to resolve the proper registry auth credentials for pulling the +base image. Clients which previously implemented the version +accepting an AuthConfig object must be updated. ### v1.8 #### Full Documentation -#### What’s new +#### What's new - `POST /build` -: **New!** This endpoint now returns build status as json stream. In - case of a build error, it returns the exit status of the failed - command. +`POST /build` - `GET /containers/`(*id*)`/json` -: **New!** This endpoint now returns the host config for the - container. +**New!** +This endpoint now returns build status as json stream. In +case of a build error, it returns the exit status of the failed +command. - `POST /images/create` -: +`GET /containers/(id)/json` - `POST /images/`(*name*)`/insert` -: +**New!** +This endpoint now returns the host config for the +container. - `POST /images/`(*name*)`/push` -: **New!** progressDetail object was added in the JSON. It’s now - possible to get the current value and the total of the progress - without having to parse the string. +`POST /images/create` + +`POST /images/(name)/insert` + +`POST /images/(name)/push` + +**New!** +progressDetail object was added in the JSON. It's now +possible to get the current value and the total of the progress +without having to parse the string. ### v1.7 #### Full Documentation -#### What’s new +#### What's new - `GET /images/json` -: The format of the json returned from this uri changed. Instead of an - entry for each repo/tag on an image, each image is only represented - once, with a nested attribute indicating the repo/tags that apply to - that image. +`GET /images/json` + +The format of the json returned from this uri changed. Instead of an +entry for each repo/tag on an image, each image is only represented +once, with a nested attribute indicating the repo/tags that apply to +that image. Instead of: @@ -192,60 +205,74 @@ You can still call an old version of the api using } ] - `GET /images/viz` -: This URI no longer exists. The `images --viz` - output is now generated in the client, using the - `/images/json` data. +`GET /images/viz` + +This URI no longer exists. The `images --viz` +output is now generated in the client, using the +`/images/json` data. ### v1.6 #### Full Documentation -#### What’s new +#### What's new - `POST /containers/`(*id*)`/attach` -: **New!** You can now split stderr from stdout. This is done by - prefixing a header to each transmition. See - [`POST /containers/(id)/attach` -](../docker_remote_api_v1.9/#post--containers-(id)-attach "POST /containers/(id)/attach"). - The WebSocket attach is unchanged. Note that attach calls on the - previous API version didn’t change. Stdout and stderr are merged. 
+`POST /containers/(id)/attach` + +**New!** +You can now split stderr from stdout. This is done by +prefixing a header to each transmition. See +[`POST /containers/(id)/attach`]( +../docker_remote_api_v1.9/#post--containers-(id)-attach "POST /containers/(id)/attach"). +The WebSocket attach is unchanged. Note that attach calls on the +previous API version didn't change. Stdout and stderr are merged. ### v1.5 #### Full Documentation -#### What’s new +#### What's new - `POST /images/create` -: **New!** You can now pass registry credentials (via an AuthConfig +`POST /images/create` + +**New!** +You can now pass registry credentials (via an AuthConfig object) through the X-Registry-Auth header - `POST /images/`(*name*)`/push` -: **New!** The AuthConfig object now needs to be passed through the +`POST /images/(name)/push` + +**New!** +The AuthConfig object now needs to be passed through the X-Registry-Auth header - `GET /containers/json` -: **New!** The format of the Ports entry has been changed to a list of - dicts each containing PublicPort, PrivatePort and Type describing a - port mapping. +`GET /containers/json` + +**New!** +The format of the Ports entry has been changed to a list of +dicts each containing PublicPort, PrivatePort and Type describing a +port mapping. ### v1.4 #### Full Documentation -#### What’s new +#### What's new - `POST /images/create` -: **New!** When pulling a repo, all images are now downloaded in - parallel. +`POST /images/create` - `GET /containers/`(*id*)`/top` -: **New!** You can now use ps args with docker top, like docker top - \ aux +**New!** +When pulling a repo, all images are now downloaded in parallel. - `GET /events:` -: **New!** Image’s name added in the events +`GET /containers/(id)/top` + +**New!** +You can now use ps args with docker top, like docker top + aux + +`GET /events` + +**New!** +Image's name added in the events ### v1.3 @@ -254,20 +281,23 @@ docker v0.5.0 #### Full Documentation -#### What’s new +#### What's new - `GET /containers/`(*id*)`/top` -: List the processes running inside a container. +`GET /containers/(id)/top` - `GET /events:` -: **New!** Monitor docker’s events via streaming or via polling +List the processes running inside a container. + +`GET /events` + +**New!** +Monitor docker's events via streaming or via polling Builder (/build): -- Simplify the upload of the build context -- Simply stream a tarball instead of multipart upload with 4 - intermediary buffers -- Simpler, less memory usage, less disk usage and faster + - Simplify the upload of the build context + - Simply stream a tarball instead of multipart upload with 4 + intermediary buffers + - Simpler, less memory usage, less disk usage and faster > **Warning**: > The /build improvements are not reverse-compatible. Pre 1.3 clients will @@ -275,12 +305,12 @@ Builder (/build): List containers (/containers/json): -- You can use size=1 to get the size of the containers + - You can use size=1 to get the size of the containers -Start containers (/containers/\/start): +Start containers (/containers//start): -- You can now pass host-specific configuration (e.g. bind mounts) in - the POST body for start calls + - You can now pass host-specific configuration (e.g. bind mounts) in + the POST body for start calls ### v1.2 @@ -289,25 +319,28 @@ docker v0.4.2 #### Full Documentation -#### What’s new +#### What's new The auth configuration is now handled by the client. 
-The client should send it’s authConfig as POST on each call of -/images/(name)/push +The client should send it's authConfig as POST on each call of +`/images/(name)/push` - `GET /auth` -: **Deprecated.** +`GET /auth` - `POST /auth` -: Only checks the configuration but doesn’t store it on the server +**Deprecated.** + +`POST /auth` + +Only checks the configuration but doesn't store it on the server Deleting an image is now improved, will only untag the image if it has children and remove all the untagged parents if has any. - `POST /images//delete` -: Now returns a JSON structure with the list of images - deleted/untagged. +`POST /images//delete` + +Now returns a JSON structure with the list of images +deleted/untagged. ### v1.1 @@ -316,24 +349,23 @@ docker v0.4.0 #### Full Documentation -#### What’s new +#### What's new - `POST /images/create` -: +`POST /images/create` - `POST /images/`(*name*)`/insert` -: +`POST /images/(name)/insert` - `POST /images/`(*name*)`/push` -: Uses json stream instead of HTML hijack, it looks like this: +`POST /images/(name)/push` - > HTTP/1.1 200 OK - > Content-Type: application/json - > - > {"status":"Pushing..."} - > {"status":"Pushing", "progress":"1/? (n/a)"} - > {"error":"Invalid..."} - > ... +Uses json stream instead of HTML hijack, it looks like this: + + HTTP/1.1 200 OK + Content-Type: application/json + + {"status":"Pushing..."} + {"status":"Pushing", "progress":"1/? (n/a)"} + {"error":"Invalid..."} + ... ### v1.0 @@ -342,6 +374,6 @@ docker v0.3.4 #### Full Documentation -#### What’s new +#### What's new Initial version diff --git a/docs/sources/reference/api/docker_remote_api_v1.10.md b/docs/sources/reference/api/docker_remote_api_v1.10.md index 02d13403ef..c07f96f384 100644 --- a/docs/sources/reference/api/docker_remote_api_v1.10.md +++ b/docs/sources/reference/api/docker_remote_api_v1.10.md @@ -6,23 +6,23 @@ page_keywords: API, Docker, rcli, REST, documentation ## 1. Brief introduction -- The Remote API has replaced rcli -- The daemon listens on `unix:///var/run/docker.sock` -, but you can [*Bind Docker to another host/port or a Unix - socket*](../../../use/basics/#bind-docker). -- The API tends to be REST, but for some complex commands, like - `attach` or `pull`, the HTTP - connection is hijacked to transport `stdout, stdin` - and `stderr` + - The Remote API has replaced rcli + - The daemon listens on `unix:///var/run/docker.sock` but you can + [*Bind Docker to another host/port or a Unix socket*]( + /use/basics/#bind-docker). + - The API tends to be REST, but for some complex commands, like `attach` + or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` + and `stderr` -## 2. Endpoints +# 2. 
Endpoints -### 2.1 Containers +## 2.1 Containers -#### List containers +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -97,10 +97,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### Create a container +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -149,7 +150,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Query Parameters: @@ -165,11 +166,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### Inspect a container +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` +Return low-level information on the container `id` **Example request**: @@ -248,10 +249,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### List processes running inside a container +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -294,10 +296,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Inspect changes on a container’s filesystem +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id` 's filesystem **Example request**: @@ -329,10 +332,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Export a container +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -351,10 +355,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Start a container +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -380,7 +385,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **hostConfig** – the container’s host configuration (optional) + - **hostConfig** – the container's host configuration (optional) Status Codes: @@ -388,10 +393,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Stop a container +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -413,10 +419,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Restart a container +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -438,10 +445,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no 
such container - **500** – server error -#### Kill a container +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -457,10 +465,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Attach to a container +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -500,7 +509,7 @@ page_keywords: API, Docker, rcli, REST, documentation When using the TTY setting is enabled in [`POST /containers/create` ](../docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), - the stream is the raw data from the process PTY and client’s stdin. + the stream is the raw data from the process PTY and client's stdin. When the TTY is disabled, then the stream is multiplexed to separate stdout and stderr. @@ -539,10 +548,11 @@ page_keywords: API, Docker, rcli, REST, documentation 4. Read the extracted size and output it on the correct output 5. Goto 1) -#### Wait a container +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -562,9 +572,9 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Remove a container +### Remove a container - `DELETE /containers/`(*id*) + `DELETE /containers/(id*) : Remove the container `id` from the filesystem **Example request**: @@ -591,10 +601,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Copy files or folders from a container +### Copy files or folders from a container - `POST /containers/`(*id*)`/copy` -: Copy files or folders of container `id` +`POST /containers/(id)/copy` + +Copy files or folders of container `id` **Example request**: @@ -620,10 +631,11 @@ page_keywords: API, Docker, rcli, REST, documentation ### 2.2 Images -#### List Images +### List Images - `GET /images/json` -: **Example request**: +`GET /images/json` + +**Example request**: GET /images/json?all=0 HTTP/1.1 @@ -657,10 +669,11 @@ page_keywords: API, Docker, rcli, REST, documentation } ] -#### Create an image +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -702,10 +715,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Insert a file in an image +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -727,10 +741,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Inspect an image +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -774,10 +789,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Get the 
history of an image +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -807,10 +823,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Push an image on the registry +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry **Example request**: @@ -845,10 +862,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Tag an image into a repository +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -873,9 +891,9 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### Remove an image +### Remove an image - `DELETE /images/`(*name*) + `DELETE /images/(name*) : Remove the image `name` from the filesystem **Example request**: @@ -907,14 +925,15 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### Search images +### Search images - `GET /images/search` -: Search for an image in the docker index. +`GET /images/search` + +Search for an image in the docker index. > **Note**: > The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon’s request. +> sent by the registry server to the docker daemon's request. **Example request**: @@ -963,10 +982,11 @@ page_keywords: API, Docker, rcli, REST, documentation ### 2.3 Misc -#### Build an image from Dockerfile via stdin +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -989,7 +1009,7 @@ page_keywords: API, Docker, rcli, REST, documentation The archive must include a file called `Dockerfile` at its root. It may include any number of other files, which will be accessible in the build context (See the [*ADD build - command*](../../builder/#dockerbuilder)). + command*](/reference/builder/#dockerbuilder)). 
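To make the tar-stream requirement concrete, here is a hedged Python sketch that packs a single in-memory `Dockerfile` and posts it to the daemon. The daemon address and port, the image contents, the `application/tar` content type, and the use of a `t` (tag) query parameter are assumptions made for the sake of the example; the query-parameter list below is authoritative:

    import io
    import tarfile
    import requests

    # Build a minimal in-memory context containing only a Dockerfile.
    dockerfile = b'FROM busybox\nCMD ["echo", "hello world"]\n'
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="Dockerfile")
        info.size = len(dockerfile)
        tar.addfile(info, io.BytesIO(dockerfile))
    buf.seek(0)

    # Assumes the daemon is also reachable over TCP on this address.
    resp = requests.post(
        "http://localhost:4243/v1.10/build",
        params={"t": "myrepo/hello"},
        data=buf.getvalue(),
        headers={"Content-Type": "application/tar"},
        stream=True,
    )
    for line in resp.iter_lines():
        print(line)
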
Query Parameters: @@ -1013,10 +1033,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Check auth configuration +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -1040,10 +1061,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### Display system-wide information +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -1070,10 +1092,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Show the docker version information +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -1095,10 +1118,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Create a new image from a container’s changes +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -1120,7 +1144,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - **run** – config automatically applied when the image is run. (ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]}) @@ -1130,10 +1154,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Monitor Docker’s events +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via +`GET /events` + +Get events from docker, either in real time via streaming, or via polling (using since) **Example request**: @@ -1161,10 +1186,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Get a tarball containing all images and tags in a repository +### Get a tarball containing all images and tags in a repository - `GET /images/`(*name*)`/get` -: Get a tarball containing all images and metadata for the repository +`GET /images/(name)/get` + +Get a tarball containing all images and metadata for the repository specified by `name`. **Example request** @@ -1183,10 +1209,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Load a tarball with a set of images and tags into docker +### Load a tarball with a set of images and tags into docker - `POST /images/load` -: Load a set of images and tags into the docker repository. +`POST /images/load` + +Load a set of images and tags into the docker repository. **Example request** @@ -1203,33 +1230,33 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -## 3. Going further +# 3. 
Going further -### 3.1 Inside ‘docker run’ +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : -- Create the container + - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container + - If the status code is 404, it means the image doesn't exists: + - Try to pull it + - Then retry to create the container -- Start the container + - Start the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - If you are not in detached mode: + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 -- If in detached mode or only stdin is attached: - : - Display the container’s id + - If in detached mode or only stdin is attached: + - Display the container's id -### 3.2 Hijacking +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### 3.3 CORS Requests +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/docker_remote_api_v1.11.md b/docs/sources/reference/api/docker_remote_api_v1.11.md index 6e038acd82..5e3fdcb0a8 100644 --- a/docs/sources/reference/api/docker_remote_api_v1.11.md +++ b/docs/sources/reference/api/docker_remote_api_v1.11.md @@ -6,23 +6,23 @@ page_keywords: API, Docker, rcli, REST, documentation ## 1. Brief introduction -- The Remote API has replaced rcli -- The daemon listens on `unix:///var/run/docker.sock` -, but you can [*Bind Docker to another host/port or a Unix - socket*](../../../use/basics/#bind-docker). -- The API tends to be REST, but for some complex commands, like - `attach` or `pull`, the HTTP - connection is hijacked to transport `stdout, stdin` - and `stderr` + - The Remote API has replaced rcli + - The daemon listens on `unix:///var/run/docker.sock` but you can + [*Bind Docker to another host/port or a Unix socket*]( + /use/basics/#bind-docker). + - The API tends to be REST, but for some complex commands, like `attach` + or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` + and `stderr` -## 2. Endpoints +# 2. 
Endpoints -### 2.1 Containers +## 2.1 Containers -#### List containers +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers **Example request**: @@ -97,10 +97,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### Create a container +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -150,7 +151,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **config** – the container’s configuration + - **config** – the container's configuration Query Parameters: @@ -166,10 +167,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### Inspect a container +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` + +Return low-level information on the container `id` **Example request**: @@ -251,10 +253,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### List processes running inside a container +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -289,7 +292,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **ps\_args** – ps arguments to use (eg. aux) + - **ps_args** – ps arguments to use (eg. aux) Status Codes: @@ -297,10 +300,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Inspect changes on a container’s filesystem +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -332,10 +336,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Export a container +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -354,10 +359,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Start a container +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -381,7 +387,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **hostConfig** – the container’s host configuration (optional) + - **hostConfig** – the container's host configuration (optional) Status Codes: @@ -389,10 +395,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Stop a container +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -414,10 +421,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Restart a container +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the 
container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -439,10 +447,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Kill a container +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -458,10 +467,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Attach to a container +### Attach to a container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -500,8 +510,8 @@ page_keywords: API, Docker, rcli, REST, documentation When using the TTY setting is enabled in [`POST /containers/create` -](../docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), - the stream is the raw data from the process PTY and client’s stdin. + ](../docker_remote_api_v1.9/#post--containers-create "POST /containers/create"), + the stream is the raw data from the process PTY and client's stdin. When the TTY is disabled, then the stream is multiplexed to separate stdout and stderr. @@ -540,11 +550,11 @@ page_keywords: API, Docker, rcli, REST, documentation 4. Read the extracted size and output it on the correct output 5. Goto 1) -#### Wait a container +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -563,10 +573,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Remove a container +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -592,10 +603,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Copy files or folders from a container +### Copy files or folders from a container - `POST /containers/`(*id*)`/copy` -: Copy files or folders of container `id` +`POST /containers/(id)/copy` + +Copy files or folders of container `id` **Example request**: @@ -619,12 +631,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### 2.2 Images +## 2.2 Images -#### List Images +### List Images - `GET /images/json` -: **Example request**: +`GET /images/json` + +**Example request**: GET /images/json?all=0 HTTP/1.1 @@ -658,11 +671,11 @@ page_keywords: API, Docker, rcli, REST, documentation } ] -#### Create an image +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -703,11 +716,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Insert a file in an image +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -728,10 +741,11 @@ page_keywords: API, Docker, rcli, 
REST, documentation - **200** – no error - **500** – server error -#### Inspect an image +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -777,10 +791,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Get the history of an image +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the history of the image `name` **Example request**: @@ -810,10 +825,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Push an image on the registry +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry **Example request**: @@ -848,10 +864,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Tag an image into a repository +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -876,10 +893,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### Remove an image +### Remove an image - `DELETE /images/`(*name*) -: Remove the image `name` from the filesystem +`DELETE /images/(name)` + +Remove the image `name` from the filesystem **Example request**: @@ -910,14 +928,15 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### Search images +### Search images - `GET /images/search` -: Search for an image in the docker index. +`GET /images/search` + +Search for an image in the docker index. > **Note**: > The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon’s request. +> sent by the registry server to the docker daemon's request. **Example request**: @@ -964,12 +983,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -### 2.3 Misc +## 2.3 Misc -#### Build an image from Dockerfile via stdin +### Build an image from Dockerfile via stdin - `POST /build` -: Build an image from Dockerfile via stdin +`POST /build` + +Build an image from Dockerfile via stdin **Example request**: @@ -990,9 +1010,9 @@ page_keywords: API, Docker, rcli, REST, documentation following algorithms: identity (no compression), gzip, bzip2, xz. The archive must include a file called `Dockerfile` - at its root. It may include any number of other files, + at its root. It may include any number of other files, which will be accessible in the build context (See the [*ADD build - command*](../../builder/#dockerbuilder)). + command*](/reference/builder/#dockerbuilder)). 
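Similarly, an existing directory can be streamed as the build context. The sketch below is illustrative only; the directory name, daemon address and tag are assumptions, and the directory is expected to contain a `Dockerfile` at its root as required above:

    import io
    import tarfile
    import requests

    # Pack ./myapp (which must contain a Dockerfile at its root) in memory.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add("./myapp", arcname=".")
    buf.seek(0)

    # Daemon address, API version prefix and tag are assumed values.
    resp = requests.post(
        "http://localhost:4243/v1.11/build",
        params={"t": "myrepo/myapp"},
        data=buf,
        headers={"Content-Type": "application/tar"},
        stream=True,
    )
    for line in resp.iter_lines():
        print(line)

Streaming the context this way avoids writing an intermediate tarball to disk; compressing it (gzip, bzip2 or xz, as noted above) is optional.
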
Query Parameters: @@ -1016,10 +1036,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Check auth configuration +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -1043,10 +1064,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### Display system-wide information +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -1073,10 +1095,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Show the docker version information +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -1098,10 +1121,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Create a new image from a container’s changes +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -1123,7 +1147,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - **run** – config automatically applied when the image is run. (ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]}) @@ -1133,11 +1157,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Monitor Docker’s events +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via - polling (using since) +`GET /events` + +Get events from docker, either in real time via streaming, or +via polling (using since) **Example request**: @@ -1165,11 +1190,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Get a tarball containing all images and tags in a repository +### Get a tarball containing all images and tags in a repository - `GET /images/`(*name*)`/get` -: Get a tarball containing all images and metadata for the repository - specified by `name`. +`GET /images/(name)/get` + +Get a tarball containing all images and metadata for the repository +specified by `name`. **Example request** @@ -1187,10 +1213,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Load a tarball with a set of images and tags into docker +### Load a tarball with a set of images and tags into docker - `POST /images/load` -: Load a set of images and tags into the docker repository. +`POST /images/load` + +Load a set of images and tags into the docker repository. **Example request** @@ -1207,33 +1234,33 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -## 3. Going further +# 3. 
Going further -### 3.1 Inside ‘docker run’ +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run`: -- Create the container +- Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container +- If the status code is 404, it means the image doesn't exists: + - Try to pull it + - Then retry to create the container -- Start the container +- Start the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 +- If you are not in detached mode: + - Attach to the container, using logs=1 (to have stdout and + stderr from the container's start) and stream=1 -- If in detached mode or only stdin is attached: - : - Display the container’s id +- If in detached mode or only stdin is attached: + - Display the container's id -### 3.2 Hijacking +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### 3.3 CORS Requests +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/docker_remote_api_v1.9.md b/docs/sources/reference/api/docker_remote_api_v1.9.md index aaa8dc194b..74e85a7ee6 100644 --- a/docs/sources/reference/api/docker_remote_api_v1.9.md +++ b/docs/sources/reference/api/docker_remote_api_v1.9.md @@ -4,25 +4,25 @@ page_keywords: API, Docker, rcli, REST, documentation # Docker Remote API v1.9 -## 1. Brief introduction +# 1. Brief introduction -- The Remote API has replaced rcli -- The daemon listens on `unix:///var/run/docker.sock` -, but you can [*Bind Docker to another host/port or a Unix - socket*](../../../use/basics/#bind-docker). -- The API tends to be REST, but for some complex commands, like - `attach` or `pull`, the HTTP - connection is hijacked to transport `stdout, stdin` - and `stderr` + - The Remote API has replaced rcli + - The daemon listens on `unix:///var/run/docker.sock` but you can + [*Bind Docker to another host/port or a Unix socket*]( + /use/basics/#bind-docker). + - The API tends to be REST, but for some complex commands, like `attach` + or `pull`, the HTTP connection is hijacked to transport `stdout, stdin` + and `stderr` -## 2. Endpoints +# 2. Endpoints -### 2.1 Containers +## 2.1 Containers -#### List containers +### List containers - `GET /containers/json` -: List containers +`GET /containers/json` + +List containers. 
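A hedged Python sketch of the same call is shown here before the raw HTTP exchange; the daemon address and port are assumptions, and the printed field names are assumed to match the example response below:

    import requests

    # Assumes the daemon is also bound to a TCP port on localhost.
    resp = requests.get("http://localhost:4243/v1.9/containers/json",
                        params={"all": 0})
    resp.raise_for_status()
    for container in resp.json():
        # Field names as used in the container listing shown below.
        print(container["Id"], container["Image"], container["Status"])
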
**Example request**: @@ -97,10 +97,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **400** – bad parameter - **500** – server error -#### Create a container +### Create a container - `POST /containers/create` -: Create a container +`POST /containers/create` + +Create a container **Example request**: @@ -179,11 +180,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **406** – impossible to attach (container not running) - **500** – server error -#### Inspect a container +### Inspect a container - `GET /containers/`(*id*)`/json` -: Return low-level information on the container `id` +`GET /containers/(id)/json` +Return low-level information on the container `id` **Example request**: @@ -264,10 +265,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### List processes running inside a container +### List processes running inside a container - `GET /containers/`(*id*)`/top` -: List processes running inside the container `id` +`GET /containers/(id)/top` + +List processes running inside the container `id` **Example request**: @@ -302,7 +304,7 @@ page_keywords: API, Docker, rcli, REST, documentation   - - **ps\_args** – ps arguments to use (eg. aux) + - **ps_args** – ps arguments to use (eg. aux) Status Codes: @@ -310,10 +312,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Inspect changes on a container’s filesystem +### Inspect changes on a container's filesystem - `GET /containers/`(*id*)`/changes` -: Inspect changes on container `id` ‘s filesystem +`GET /containers/(id)/changes` + +Inspect changes on container `id`'s filesystem **Example request**: @@ -345,10 +348,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Export a container +### Export a container - `GET /containers/`(*id*)`/export` -: Export the contents of container `id` +`GET /containers/(id)/export` + +Export the contents of container `id` **Example request**: @@ -367,10 +371,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Start a container +### Start a container - `POST /containers/`(*id*)`/start` -: Start the container `id` +`POST /containers/(id)/start` + +Start the container `id` **Example request**: @@ -411,10 +416,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Stop a container +### Stop a container - `POST /containers/`(*id*)`/stop` -: Stop the container `id` +`POST /containers/(id)/stop` + +Stop the container `id` **Example request**: @@ -436,10 +442,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Restart a container +### Restart a container - `POST /containers/`(*id*)`/restart` -: Restart the container `id` +`POST /containers/(id)/restart` + +Restart the container `id` **Example request**: @@ -461,10 +468,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Kill a container +### Kill a container - `POST /containers/`(*id*)`/kill` -: Kill the container `id` +`POST /containers/(id)/kill` + +Kill the container `id` **Example request**: @@ -480,10 +488,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Attach to a container +### Attach to a 
container - `POST /containers/`(*id*)`/attach` -: Attach to the container `id` +`POST /containers/(id)/attach` + +Attach to the container `id` **Example request**: @@ -521,9 +530,8 @@ page_keywords: API, Docker, rcli, REST, documentation **Stream details**: When using the TTY setting is enabled in - [`POST /containers/create` -](#post--containers-create "POST /containers/create"), the - stream is the raw data from the process PTY and client’s stdin. When + [`POST /containers/create`](#post--containers-create), the + stream is the raw data from the process PTY and client's stdin. When the TTY is disabled, then the stream is multiplexed to separate stdout and stderr. @@ -562,11 +570,11 @@ page_keywords: API, Docker, rcli, REST, documentation 4. Read the extracted size and output it on the correct output 5. Goto 1) -#### Wait a container +### Wait a container - `POST /containers/`(*id*)`/wait` -: Block until container `id` stops, then returns - the exit code +`POST /containers/(id)/wait` + +Block until container `id` stops, then returns the exit code **Example request**: @@ -585,10 +593,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Remove a container +### Remove a container - `DELETE /containers/`(*id*) -: Remove the container `id` from the filesystem +`DELETE /containers/(id)` + +Remove the container `id` from the filesystem **Example request**: @@ -612,10 +621,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Copy files or folders from a container +### Copy files or folders from a container - `POST /containers/`(*id*)`/copy` -: Copy files or folders of container `id` +`POST /containers/(id)/copy` + +Copy files or folders of container `id` **Example request**: @@ -639,12 +649,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -### 2.2 Images +## 2.2 Images -#### List Images +### List Images - `GET /images/json` -: **Example request**: +`GET /images/json` + +**Example request**: GET /images/json?all=0 HTTP/1.1 @@ -678,11 +689,11 @@ page_keywords: API, Docker, rcli, REST, documentation } ] -#### Create an image +### Create an image - `POST /images/create` -: Create an image, either by pull it from the registry or by importing - it +`POST /images/create` + +Create an image, either by pull it from the registry or by importing it **Example request**: @@ -723,11 +734,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Insert a file in an image +### Insert a file in an image - `POST /images/`(*name*)`/insert` -: Insert a file from `url` in the image - `name` at `path` +`POST /images/(name)/insert` + +Insert a file from `url` in the image `name` at `path` **Example request**: @@ -748,10 +759,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Inspect an image +### Inspect an image - `GET /images/`(*name*)`/json` -: Return low-level information on the image `name` +`GET /images/(name)/json` + +Return low-level information on the image `name` **Example request**: @@ -797,10 +809,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Get the history of an image +### Get the history of an image - `GET /images/`(*name*)`/history` -: Return the history of the image `name` +`GET /images/(name)/history` + +Return the 
history of the image `name` **Example request**: @@ -830,10 +843,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Push an image on the registry +### Push an image on the registry - `POST /images/`(*name*)`/push` -: Push the image `name` on the registry +`POST /images/(name)/push` + +Push the image `name` on the registry **Example request**: @@ -868,10 +882,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such image - **500** – server error -#### Tag an image into a repository +### Tag an image into a repository - `POST /images/`(*name*)`/tag` -: Tag the image `name` into a repository +`POST /images/(name)/tag` + +Tag the image `name` into a repository **Example request**: @@ -896,9 +911,9 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### Remove an image +### Remove an image - `DELETE /images/`(*name*) +`DELETE /images/(name*) : Remove the image `name` from the filesystem **Example request**: @@ -923,14 +938,15 @@ page_keywords: API, Docker, rcli, REST, documentation - **409** – conflict - **500** – server error -#### Search images +### Search images - `GET /images/search` -: Search for an image in the docker index. +`GET /images/search` + +Search for an image in the docker index. > **Note**: > The response keys have changed from API v1.6 to reflect the JSON -> sent by the registry server to the docker daemon’s request. +> sent by the registry server to the docker daemon's request. **Example request**: @@ -977,12 +993,13 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -### 2.3 Misc +## 2.3 Misc -#### Build an image from Dockerfile +### Build an image from Dockerfile - `POST /build` -: Build an image from Dockerfile using a POST body. +`POST /build` + +Build an image from Dockerfile using a POST body. **Example request**: @@ -1005,7 +1022,7 @@ page_keywords: API, Docker, rcli, REST, documentation The archive must include a file called `Dockerfile` at its root. It may include any number of other files, which will be accessible in the build context (See the [*ADD build - command*](../../builder/#dockerbuilder)). + command*](/reference/builder/#dockerbuilder)). 
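As a hedged illustration of that requirement, the sketch below packs a single Dockerfile into an in-memory tar archive and posts it to the endpoint; the daemon address, the Dockerfile contents, and the image tag are assumptions for the example:

    # Hedged sketch: POST a build context (a tar with a Dockerfile at its root) to /build.
    import io
    import tarfile

    import requests

    dockerfile = b'FROM busybox\nCMD ["/bin/sh"]\n'

    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="Dockerfile")   # must sit at the archive root
        info.size = len(dockerfile)
        tar.addfile(info, io.BytesIO(dockerfile))

    resp = requests.post(
        "http://127.0.0.1:4243/build",              # assumed daemon address
        params={"t": "myrepo/myimage"},             # example tag
        data=buf.getvalue(),
        headers={"Content-Type": "application/tar"},
        stream=True,
    )
    for line in resp.iter_lines():
        print(line)                                 # streamed build output

The query parameters accepted by this endpoint are listed next.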
Query Parameters: @@ -1030,10 +1047,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Check auth configuration +### Check auth configuration - `POST /auth` -: Get the default username and email +`POST /auth` + +Get the default username and email **Example request**: @@ -1057,10 +1075,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **204** – no error - **500** – server error -#### Display system-wide information +### Display system-wide information - `GET /info` -: Display system-wide information +`GET /info` + +Display system-wide information **Example request**: @@ -1087,10 +1106,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Show the docker version information +### Show the docker version information - `GET /version` -: Show the docker version information +`GET /version` + +Show the docker version information **Example request**: @@ -1112,10 +1132,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Create a new image from a container’s changes +### Create a new image from a container's changes - `POST /commit` -: Create a new image from a container’s changes +`POST /commit` + +Create a new image from a container's changes **Example request**: @@ -1137,7 +1158,7 @@ page_keywords: API, Docker, rcli, REST, documentation - **tag** – tag - **m** – commit message - **author** – author (eg. "John Hannibal Smith - \<[hannibal@a-team.com](mailto:hannibal%40a-team.com)\>") + <[hannibal@a-team.com](mailto:hannibal%40a-team.com)>") - **run** – config automatically applied when the image is run. (ex: {"Cmd": ["cat", "/world"], "PortSpecs":["22"]}) @@ -1147,11 +1168,12 @@ page_keywords: API, Docker, rcli, REST, documentation - **404** – no such container - **500** – server error -#### Monitor Docker’s events +### Monitor Docker's events - `GET /events` -: Get events from docker, either in real time via streaming, or via - polling (using since) +`GET /events` + +Get events from docker, either in real time via streaming, or via +polling (using since) **Example request**: @@ -1178,11 +1200,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Get a tarball containing all images and tags in a repository +### Get a tarball containing all images and tags in a repository - `GET /images/`(*name*)`/get` -: Get a tarball containing all images and metadata for the repository - specified by `name`. +`GET /images/(name)/get` + +Get a tarball containing all images and metadata for the repository specified by `name`. **Example request** @@ -1200,10 +1222,11 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -#### Load a tarball with a set of images and tags into docker +### Load a tarball with a set of images and tags into docker - `POST /images/load` -: Load a set of images and tags into the docker repository. +`POST /images/load` + +Load a set of images and tags into the docker repository. **Example request** @@ -1220,33 +1243,36 @@ page_keywords: API, Docker, rcli, REST, documentation - **200** – no error - **500** – server error -## 3. Going further +# 3. 
Going further -### 3.1 Inside ‘docker run’ +## 3.1 Inside `docker run` -Here are the steps of ‘docker run’ : +Here are the steps of `docker run` : -- Create the container + - Create the container -- If the status code is 404, it means the image doesn’t exists: - : - Try to pull it - - Then retry to create the container + - If the status code is 404, it means the image doesn't exists: -- Start the container + - Try to pull it + - Then retry to create the container -- If you are not in detached mode: - : - Attach to the container, using logs=1 (to have stdout and - stderr from the container’s start) and stream=1 + - Start the container -- If in detached mode or only stdin is attached: - : - Display the container’s id + - If you are not in detached mode: -### 3.2 Hijacking + - Attach to the container, using logs=1 (to have stdout and + - stderr from the container's start) and stream=1 + + - If in detached mode or only stdin is attached: + + - Display the container's id + +## 3.2 Hijacking In this version of the API, /attach, uses hijacking to transport stdin, stdout and stderr on the same socket. This might change in the future. -### 3.3 CORS Requests +## 3.3 CORS Requests To enable cross origin requests to the remote api add the flag "–api-enable-cors" when running docker in daemon mode. diff --git a/docs/sources/reference/api/index_api.md b/docs/sources/reference/api/index_api.md index 8f98513cf5..161b3e0c71 100644 --- a/docs/sources/reference/api/index_api.md +++ b/docs/sources/reference/api/index_api.md @@ -14,11 +14,11 @@ page_keywords: API, Docker, index, REST, documentation ### Repositories -### User Repo +#### User Repo - `PUT /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/` -: Create a user repository with the given `namespace` - and `repo_name`. +`PUT /v1/repositories/(namespace)/(repo_name)/` + +Create a user repository with the given `namespace` and `repo_name`. **Example Request**: @@ -34,7 +34,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - **namespace** – the namespace for the repo - - **repo\_name** – the name for the repo + - **repo_name** – the name for the repo **Example Response**: @@ -54,9 +54,9 @@ page_keywords: API, Docker, index, REST, documentation - **401** – Unauthorized - **403** – Account is not Active - `DELETE /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/` -: Delete a user repository with the given `namespace` - and `repo_name`. +`DELETE /v1/repositories/(namespace)/(repo_name)/` + +Delete a user repository with the given `namespace` and `repo_name`. **Example Request**: @@ -72,7 +72,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - **namespace** – the namespace for the repo - - **repo\_name** – the name for the repo + - **repo_name** – the name for the repo **Example Response**: @@ -93,12 +93,12 @@ page_keywords: API, Docker, index, REST, documentation - **401** – Unauthorized - **403** – Account is not Active -### Library Repo +#### Library Repo - `PUT /v1/repositories/`(*repo\_name*)`/` -: Create a library repository with the given `repo_name` -. This is a restricted feature only available to docker - admins. +`PUT /v1/repositories/(repo_name)/` + +Create a library repository with the given `repo_name`. +This is a restricted feature only available to docker admins. 
When namespace is missing, it is assumed to be `library` @@ -116,7 +116,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - - **repo\_name** – the library name for the repo + - **repo_name** – the library name for the repo **Example Response**: @@ -136,10 +136,10 @@ page_keywords: API, Docker, index, REST, documentation - **401** – Unauthorized - **403** – Account is not Active - `DELETE /v1/repositories/`(*repo\_name*)`/` -: Delete a library repository with the given `repo_name` -. This is a restricted feature only available to docker - admins. +`DELETE /v1/repositories/(repo_name)/` + +Delete a library repository with the given `repo_name`. +This is a restricted feature only available to docker admins. When namespace is missing, it is assumed to be `library` @@ -157,7 +157,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - - **repo\_name** – the library name for the repo + - **repo_name** – the library name for the repo **Example Response**: @@ -180,10 +180,11 @@ page_keywords: API, Docker, index, REST, documentation ### Repository Images -### User Repo Images +#### User Repo Images - `PUT /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/images` -: Update the images for a user repo. +`PUT /v1/repositories/(namespace)/(repo_name)/images` + +Update the images for a user repo. **Example Request**: @@ -199,7 +200,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - **namespace** – the namespace for the repo - - **repo\_name** – the name for the repo + - **repo_name** – the name for the repo **Example Response**: @@ -216,8 +217,9 @@ page_keywords: API, Docker, index, REST, documentation - **401** – Unauthorized - **403** – Account is not Active or permission denied - `GET /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/images` -: get the images for a user repo. +`GET /v1/repositories/(namespace)/(repo_name)/images` + +Get the images for a user repo. **Example Request**: @@ -228,7 +230,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - **namespace** – the namespace for the repo - - **repo\_name** – the name for the repo + - **repo_name** – the name for the repo **Example Response**: @@ -246,10 +248,11 @@ page_keywords: API, Docker, index, REST, documentation - **200** – OK - **404** – Not found -### Library Repo Images +#### Library Repo Images - `PUT /v1/repositories/`(*repo\_name*)`/images` -: Update the images for a library repo. +`PUT /v1/repositories/(repo_name)/images` + +Update the images for a library repo. **Example Request**: @@ -264,7 +267,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - - **repo\_name** – the library name for the repo + - **repo_name** – the library name for the repo **Example Response**: @@ -281,8 +284,9 @@ page_keywords: API, Docker, index, REST, documentation - **401** – Unauthorized - **403** – Account is not Active or permission denied - `GET /v1/repositories/`(*repo\_name*)`/images` -: get the images for a library repo. +`GET /v1/repositories/(repo_name)/images` + +Get the images for a library repo. 
**Example Request**: @@ -292,7 +296,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - - **repo\_name** – the library name for the repo + - **repo_name** – the library name for the repo **Example Response**: @@ -312,10 +316,11 @@ page_keywords: API, Docker, index, REST, documentation ### Repository Authorization -### Library Repo +#### Library Repo - `PUT /v1/repositories/`(*repo\_name*)`/auth` -: authorize a token for a library repo +`PUT /v1/repositories/(repo_name)/auth` + +Authorize a token for a library repo **Example Request**: @@ -326,7 +331,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - - **repo\_name** – the library name for the repo + - **repo_name** – the library name for the repo **Example Response**: @@ -342,10 +347,11 @@ page_keywords: API, Docker, index, REST, documentation - **403** – Permission denied - **404** – Not found -### User Repo +#### User Repo - `PUT /v1/repositories/`(*namespace*)`/`(*repo\_name*)`/auth` -: authorize a token for a user repo +`PUT /v1/repositories/(namespace)/(repo_name)/auth` + +Authorize a token for a user repo **Example Request**: @@ -357,7 +363,7 @@ page_keywords: API, Docker, index, REST, documentation Parameters: - **namespace** – the namespace for the repo - - **repo\_name** – the name for the repo + - **repo_name** – the name for the repo **Example Response**: @@ -375,10 +381,11 @@ page_keywords: API, Docker, index, REST, documentation ### Users -### User Login +#### User Login - `GET /v1/users` -: If you want to check your login, you can try this endpoint +`GET /v1/users` + +If you want to check your login, you can try this endpoint **Example Request**: @@ -401,10 +408,11 @@ page_keywords: API, Docker, index, REST, documentation - **401** – Unauthorized - **403** – Account is not Active -### User Register +#### User Register - `POST /v1/users` -: Registering a new account. +`POST /v1/users` + +Registering a new account. **Example request**: @@ -423,7 +431,7 @@ page_keywords: API, Docker, index, REST, documentation - **email** – valid email address, that needs to be confirmed - **username** – min 4 character, max 30 characters, must match - the regular expression [a-z0-9\_]. + the regular expression [a-z0-9_]. - **password** – min 5 characters **Example Response**: @@ -439,10 +447,12 @@ page_keywords: API, Docker, index, REST, documentation - **201** – User Created - **400** – Errors (invalid json, missing or invalid fields, etc) -### Update User +#### Update User + +`PUT /v1/users/(username)/` + +Change a password or email address for given user. If you pass in an - `PUT /v1/users/`(*username*)`/` -: Change a password or email address for given user. If you pass in an email, it will add it to your account, it will not remove the old one. Passwords will be updated. @@ -487,8 +497,10 @@ If you need to search the index, this is the endpoint you would use. ### Search - `GET /v1/search` -: Search the Index given a search term. It accepts +`GET /v1/search` + +Search the Index given a search term. It accepts + [GET](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.3) only. @@ -521,5 +533,3 @@ If you need to search the index, this is the endpoint you would use. 
- **200** – no error - **500** – server error - - diff --git a/docs/sources/reference/api/registry_api.md b/docs/sources/reference/api/registry_api.md index 09035515f5..f8bdd6657d 100644 --- a/docs/sources/reference/api/registry_api.md +++ b/docs/sources/reference/api/registry_api.md @@ -6,51 +6,51 @@ page_keywords: API, Docker, index, registry, REST, documentation ## Introduction -- This is the REST API for the Docker Registry -- It stores the images and the graph for a set of repositories -- It does not have user accounts data -- It has no notion of user accounts or authorization -- It delegates authentication and authorization to the Index Auth - service using tokens -- It supports different storage backends (S3, cloud files, local FS) -- It doesn’t have a local database -- It will be open-sourced at some point + - This is the REST API for the Docker Registry + - It stores the images and the graph for a set of repositories + - It does not have user accounts data + - It has no notion of user accounts or authorization + - It delegates authentication and authorization to the Index Auth + service using tokens + - It supports different storage backends (S3, cloud files, local FS) + - It doesn't have a local database + - It will be open-sourced at some point We expect that there will be multiple registries out there. To help to grasp the context, here are some examples of registries: -- **sponsor registry**: such a registry is provided by a third-party - hosting infrastructure as a convenience for their customers and the - docker community as a whole. Its costs are supported by the third - party, but the management and operation of the registry are - supported by dotCloud. It features read/write access, and delegates - authentication and authorization to the Index. -- **mirror registry**: such a registry is provided by a third-party - hosting infrastructure but is targeted at their customers only. Some - mechanism (unspecified to date) ensures that public images are - pulled from a sponsor registry to the mirror registry, to make sure - that the customers of the third-party provider can “docker pull” - those images locally. -- **vendor registry**: such a registry is provided by a software - vendor, who wants to distribute docker images. It would be operated - and managed by the vendor. Only users authorized by the vendor would - be able to get write access. Some images would be public (accessible - for anyone), others private (accessible only for authorized users). - Authentication and authorization would be delegated to the Index. - The goal of vendor registries is to let someone do “docker pull - basho/riak1.3” and automatically push from the vendor registry - (instead of a sponsor registry); i.e. get all the convenience of a - sponsor registry, while retaining control on the asset distribution. -- **private registry**: such a registry is located behind a firewall, - or protected by an additional security layer (HTTP authorization, - SSL client-side certificates, IP address authorization...). The - registry is operated by a private entity, outside of dotCloud’s - control. It can optionally delegate additional authorization to the - Index, but it is not mandatory. + - **sponsor registry**: such a registry is provided by a third-party + hosting infrastructure as a convenience for their customers and the + docker community as a whole. Its costs are supported by the third + party, but the management and operation of the registry are + supported by dotCloud. 
It features read/write access, and delegates + authentication and authorization to the Index. + - **mirror registry**: such a registry is provided by a third-party + hosting infrastructure but is targeted at their customers only. Some + mechanism (unspecified to date) ensures that public images are + pulled from a sponsor registry to the mirror registry, to make sure + that the customers of the third-party provider can “docker pull” + those images locally. + - **vendor registry**: such a registry is provided by a software + vendor, who wants to distribute docker images. It would be operated + and managed by the vendor. Only users authorized by the vendor would + be able to get write access. Some images would be public (accessible + for anyone), others private (accessible only for authorized users). + Authentication and authorization would be delegated to the Index. + The goal of vendor registries is to let someone do “docker pull + basho/riak1.3” and automatically push from the vendor registry + (instead of a sponsor registry); i.e. get all the convenience of a + sponsor registry, while retaining control on the asset distribution. + - **private registry**: such a registry is located behind a firewall, + or protected by an additional security layer (HTTP authorization, + SSL client-side certificates, IP address authorization...). The + registry is operated by a private entity, outside of dotCloud's + control. It can optionally delegate additional authorization to the + Index, but it is not mandatory. > **Note**: > Mirror registries and private registries which do not use the Index -> don’t even need to run the registry code. They can be implemented by any +> don't even need to run the registry code. They can be implemented by any > kind of transport implementing HTTP GET and PUT. Read-only registries > can be powered by a simple static HTTP server. @@ -63,19 +63,19 @@ grasp the context, here are some examples of registries: > - remote docker addressed through SSH. The latter would only require two new commands in docker, e.g. -`registryget` and `registryput`, -wrapping access to the local filesystem (and optionally doing -consistency checks). Authentication and authorization are then delegated -to SSH (e.g. with public keys). +`registryget` and `registryput`, wrapping access to the local filesystem +(and optionally doing consistency checks). Authentication and authorization +are then delegated to SSH (e.g. with public keys). -## Endpoints +# Endpoints -### Images +## Images ### Layer - `GET /v1/images/`(*image\_id*)`/layer` -: get image layer for a given `image_id` +`GET /v1/images/(image_id)/layer` + +Get image layer for a given `image_id` **Example Request**: @@ -87,7 +87,7 @@ to SSH (e.g. with public keys). Parameters: - - **image\_id** – the id for the layer you want to get + - **image_id** – the id for the layer you want to get **Example Response**: @@ -104,8 +104,9 @@ to SSH (e.g. with public keys). - **401** – Requires authorization - **404** – Image not found - `PUT /v1/images/`(*image\_id*)`/layer` -: put image layer for a given `image_id` +`PUT /v1/images/(image_id)/layer` + +Put image layer for a given `image_id` **Example Request**: @@ -118,7 +119,7 @@ to SSH (e.g. with public keys). Parameters: - - **image\_id** – the id for the layer you want to get + - **image_id** – the id for the layer you want to get **Example Response**: @@ -135,10 +136,11 @@ to SSH (e.g. with public keys). 
- **401** – Requires authorization - **404** – Image not found -### Image +## Image - `PUT /v1/images/`(*image\_id*)`/json` -: put image for a given `image_id` +`PUT /v1/images/(image_id)/json` + +Put image for a given `image_id` **Example Request**: @@ -181,7 +183,7 @@ to SSH (e.g. with public keys). Parameters: - - **image\_id** – the id for the layer you want to get + - **image_id** – the id for the layer you want to get **Example Response**: @@ -197,8 +199,9 @@ to SSH (e.g. with public keys). - **200** – OK - **401** – Requires authorization - `GET /v1/images/`(*image\_id*)`/json` -: get image for a given `image_id` +`GET /v1/images/(image_id)/json` + +Get image for a given `image_id` **Example Request**: @@ -210,7 +213,7 @@ to SSH (e.g. with public keys). Parameters: - - **image\_id** – the id for the layer you want to get + - **image_id** – the id for the layer you want to get **Example Response**: @@ -258,10 +261,11 @@ to SSH (e.g. with public keys). - **401** – Requires authorization - **404** – Image not found -### Ancestry +## Ancestry - `GET /v1/images/`(*image\_id*)`/ancestry` -: get ancestry for an image given an `image_id` +`GET /v1/images/(image_id)/ancestry` + +Get ancestry for an image given an `image_id` **Example Request**: @@ -273,7 +277,7 @@ to SSH (e.g. with public keys). Parameters: - - **image\_id** – the id for the layer you want to get + - **image_id** – the id for the layer you want to get **Example Response**: @@ -293,10 +297,11 @@ to SSH (e.g. with public keys). - **401** – Requires authorization - **404** – Image not found -### Tags +## Tags - `GET /v1/repositories/`(*namespace*)`/`(*repository*)`/tags` -: get all of the tags for the given repo. +`GET /v1/repositories/(namespace)/(repository)/tags` + +Get all of the tags for the given repo. **Example Request**: @@ -330,8 +335,9 @@ to SSH (e.g. with public keys). - **401** – Requires authorization - **404** – Repository not found - `GET /v1/repositories/`(*namespace*)`/`(*repository*)`/tags/`(*tag*) -: get a tag for the given repo. +`GET /v1/repositories/(namespace)/(repository)/tags/(tag*): + +Get a tag for the given repo. **Example Request**: @@ -363,8 +369,9 @@ to SSH (e.g. with public keys). - **401** – Requires authorization - **404** – Tag not found - `DELETE /v1/repositories/`(*namespace*)`/`(*repository*)`/tags/`(*tag*) -: delete the tag for the repo +`DELETE /v1/repositories/(namespace)/(repository)/tags/(tag*): + +Delete the tag for the repo **Example Request**: @@ -395,8 +402,9 @@ to SSH (e.g. with public keys). - **401** – Requires authorization - **404** – Tag not found - `PUT /v1/repositories/`(*namespace*)`/`(*repository*)`/tags/`(*tag*) -: put a tag for the given repo. +`PUT /v1/repositories/(namespace)/(repository)/tags/(tag*): + +Put a tag for the given repo. **Example Request**: @@ -430,10 +438,11 @@ to SSH (e.g. with public keys). - **401** – Requires authorization - **404** – Image not found -### Repositories +## Repositories - `DELETE /v1/repositories/`(*namespace*)`/`(*repository*)`/` -: delete a repository +`DELETE /v1/repositories/(namespace)/(repository)/` + +Delete a repository **Example Request**: @@ -465,11 +474,12 @@ to SSH (e.g. with public keys). - **401** – Requires authorization - **404** – Repository not found -### Status +## Status - `GET /v1/_ping` -: Check status of the registry. This endpoint is also used to - determine if the registry supports SSL. +`GET /v1/_ping` + +Check status of the registry. This endpoint is also used to +determine if the registry supports SSL. 
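For illustration only, a client-side probe built on this endpoint might look like the Python sketch below; the registry hostname is a placeholder, and the HTTPS-then-HTTP fallback is an assumption about how a client could use the status check, not behaviour mandated by the API:

    # Hedged sketch: use /v1/_ping to check reachability and SSL support.
    import requests

    registry = "registry.example.com"   # placeholder hostname

    def ping(scheme):
        try:
            return requests.get("%s://%s/v1/_ping" % (scheme, registry), timeout=5).ok
        except requests.RequestException:
            return False

    supports_ssl = ping("https")
    reachable = supports_ssl or ping("http")
    print("reachable:", reachable, "supports ssl:", supports_ssl)

The raw request and response for the endpoint follow.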
**Example Request**: diff --git a/docs/sources/reference/api/registry_index_spec.md b/docs/sources/reference/api/registry_index_spec.md index aa18a2e3c5..fb5617d101 100644 --- a/docs/sources/reference/api/registry_index_spec.md +++ b/docs/sources/reference/api/registry_index_spec.md @@ -10,16 +10,16 @@ page_keywords: docker, registry, api, index The Index is responsible for centralizing information about: -- User accounts -- Checksums of the images -- Public namespaces + - User accounts + - Checksums of the images + - Public namespaces The Index has different components: -- Web UI -- Meta-data store (comments, stars, list public repositories) -- Authentication service -- Tokenization + - Web UI + - Meta-data store (comments, stars, list public repositories) + - Authentication service + - Tokenization The index is authoritative for those information. @@ -28,46 +28,46 @@ managed by Docker Inc. ### Registry -- It stores the images and the graph for a set of repositories -- It does not have user accounts data -- It has no notion of user accounts or authorization -- It delegates authentication and authorization to the Index Auth - service using tokens -- It supports different storage backends (S3, cloud files, local FS) -- It doesn’t have a local database -- [Source Code](https://github.com/dotcloud/docker-registry) + - It stores the images and the graph for a set of repositories + - It does not have user accounts data + - It has no notion of user accounts or authorization + - It delegates authentication and authorization to the Index Auth + service using tokens + - It supports different storage backends (S3, cloud files, local FS) + - It doesn't have a local database + - [Source Code](https://github.com/dotcloud/docker-registry) We expect that there will be multiple registries out there. To help to grasp the context, here are some examples of registries: -- **sponsor registry**: such a registry is provided by a third-party - hosting infrastructure as a convenience for their customers and the - docker community as a whole. Its costs are supported by the third - party, but the management and operation of the registry are - supported by dotCloud. It features read/write access, and delegates - authentication and authorization to the Index. -- **mirror registry**: such a registry is provided by a third-party - hosting infrastructure but is targeted at their customers only. Some - mechanism (unspecified to date) ensures that public images are - pulled from a sponsor registry to the mirror registry, to make sure - that the customers of the third-party provider can “docker pull” - those images locally. -- **vendor registry**: such a registry is provided by a software - vendor, who wants to distribute docker images. It would be operated - and managed by the vendor. Only users authorized by the vendor would - be able to get write access. Some images would be public (accessible - for anyone), others private (accessible only for authorized users). - Authentication and authorization would be delegated to the Index. - The goal of vendor registries is to let someone do “docker pull - basho/riak1.3” and automatically push from the vendor registry - (instead of a sponsor registry); i.e. get all the convenience of a - sponsor registry, while retaining control on the asset distribution. -- **private registry**: such a registry is located behind a firewall, - or protected by an additional security layer (HTTP authorization, - SSL client-side certificates, IP address authorization...). 
The - registry is operated by a private entity, outside of dotCloud’s - control. It can optionally delegate additional authorization to the - Index, but it is not mandatory. + - **sponsor registry**: such a registry is provided by a third-party + hosting infrastructure as a convenience for their customers and the + docker community as a whole. Its costs are supported by the third + party, but the management and operation of the registry are + supported by dotCloud. It features read/write access, and delegates + authentication and authorization to the Index. + - **mirror registry**: such a registry is provided by a third-party + hosting infrastructure but is targeted at their customers only. Some + mechanism (unspecified to date) ensures that public images are + pulled from a sponsor registry to the mirror registry, to make sure + that the customers of the third-party provider can “docker pull” + those images locally. + - **vendor registry**: such a registry is provided by a software + vendor, who wants to distribute docker images. It would be operated + and managed by the vendor. Only users authorized by the vendor would + be able to get write access. Some images would be public (accessible + for anyone), others private (accessible only for authorized users). + Authentication and authorization would be delegated to the Index. + The goal of vendor registries is to let someone do “docker pull + basho/riak1.3” and automatically push from the vendor registry + (instead of a sponsor registry); i.e. get all the convenience of a + sponsor registry, while retaining control on the asset distribution. + - **private registry**: such a registry is located behind a firewall, + or protected by an additional security layer (HTTP authorization, + SSL client-side certificates, IP address authorization...). The + registry is operated by a private entity, outside of dotCloud's + control. It can optionally delegate additional authorization to the + Index, but it is not mandatory. > **Note:** The latter implies that while HTTP is the protocol > of choice for a registry, multiple schemes are possible (and @@ -88,36 +88,33 @@ to SSH (e.g. with public keys). On top of being a runtime for LXC, Docker is the Registry client. It supports: -- Push / Pull on the registry -- Client authentication on the Index + - Push / Pull on the registry + - Client authentication on the Index ## Workflow ### Pull -![](../../../_images/docker_pull_chart.png) +![](/static_files/docker_pull_chart.png) 1. Contact the Index to know where I should download “samalba/busybox” -2. Index replies: a. `samalba/busybox` is on - Registry A b. here are the checksums for `samalba/busybox` - (for all layers) c. token -3. Contact Registry A to receive the layers for - `samalba/busybox` (all of them to the base - image). Registry A is authoritative for “samalba/busybox” but keeps - a copy of all inherited layers and serve them all from the same +2. Index replies: a. `samalba/busybox` is on Registry A b. here are the + checksums for `samalba/busybox` (for all layers) c. token +3. Contact Registry A to receive the layers for `samalba/busybox` (all of + them to the base image). Registry A is authoritative for “samalba/busybox” + but keeps a copy of all inherited layers and serve them all from the same location. -4. registry contacts index to verify if token/user is allowed to - download images -5. Index returns true/false lettings registry know if it should proceed - or error out +4. 
registry contacts index to verify if token/user is allowed to download images +5. Index returns true/false lettings registry know if it should proceed or error + out 6. Get the payload for all layers -It’s possible to run: +It's possible to run: docker pull https:///repositories/samalba/busybox In this case, Docker bypasses the Index. However the security is not -guaranteed (in case Registry A is corrupted) because there won’t be any +guaranteed (in case Registry A is corrupted) because there won't be any checksum checks. Currently registry redirects to s3 urls for downloads, going forward all @@ -128,60 +125,61 @@ sub-classes for S3 and local storage. Token is only returned when the `X-Docker-Token` header is sent with request. -Basic Auth is required to pull private repos. Basic auth isn’t required +Basic Auth is required to pull private repos. Basic auth isn't required for pulling public repos, but if one is provided, it needs to be valid and for an active account. -#### API (pulling repository foo/bar): +**API (pulling repository foo/bar):** -1. (Docker -\> Index) GET /v1/repositories/foo/bar/images - : **Headers**: - : Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== - X-Docker-Token: true +1. (Docker -> Index) GET /v1/repositories/foo/bar/images: + + **Headers**: + Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== + X-Docker-Token: true + + **Action**: + (looking up the foo/bar in db and gets images and checksums + for that repo (all if no tag is specified, if tag, only + checksums for those tags) see part 4.4.1) + +2. (Index -> Docker) HTTP 200 OK + + **Headers**: + Authorization: Token + signature=123abc,repository=”foo/bar”,access=write + X-Docker-Endpoints: registry.docker.io [,registry2.docker.io] + + **Body**: + Jsonified checksums (see part 4.4.1) + +3. (Docker -> Registry) GET /v1/repositories/foo/bar/tags/latest + + **Headers**: + Authorization: Token + signature=123abc,repository=”foo/bar”,access=write + +4. (Registry -> Index) GET /v1/repositories/foo/bar/images + + **Headers**: + Authorization: Token + signature=123abc,repository=”foo/bar”,access=read + + **Body**: + + + **Action**: + (Lookup token see if they have access to pull.) + + If good: + HTTP 200 OK Index will invalidate the token + + If bad: + HTTP 401 Unauthorized + +5. (Docker -> Registry) GET /v1/images/928374982374/ancestry **Action**: - : (looking up the foo/bar in db and gets images and checksums - for that repo (all if no tag is specified, if tag, only - checksums for those tags) see part 4.4.1) - -2. (Index -\> Docker) HTTP 200 OK - - > **Headers**: - > : - Authorization: Token - > signature=123abc,repository=”foo/bar”,access=write - > - X-Docker-Endpoints: registry.docker.io [, - > registry2.docker.io] - > - > **Body**: - > : Jsonified checksums (see part 4.4.1) - > -3. (Docker -\> Registry) GET /v1/repositories/foo/bar/tags/latest - : **Headers**: - : Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - -4. (Registry -\> Index) GET /v1/repositories/foo/bar/images - - > **Headers**: - > : Authorization: Token - > signature=123abc,repository=”foo/bar”,access=read - > - > **Body**: - > : \ - > - > **Action**: - > : ( Lookup token see if they have access to pull.) - > - > If good: - > : HTTP 200 OK Index will invalidate the token - > - > If bad: - > : HTTP 401 Unauthorized - > -5. 
(Docker -\> Registry) GET /v1/images/928374982374/ancestry - : **Action**: - : (for each image id returned in the registry, fetch /json + - /layer) + (for each image id returned in the registry, fetch /json + /layer) > **Note**: > If someone makes a second request, then we will always give a new token, @@ -189,7 +187,7 @@ and for an active account. ### Push -![](../../../_images/docker_push_chart.png) +![](/static_files/docker_push_chart.png) 1. Contact the index to allocate the repository name “samalba/busybox” (authentication required with user credentials) @@ -204,7 +202,7 @@ and for an active account. 6. docker contacts the index to give checksums for upload images > **Note:** -> **It’s possible not to use the Index at all!** In this case, a deployed +> **It's possible not to use the Index at all!** In this case, a deployed > version of the Registry is deployed to store and serve images. Those > images are not authenticated and the security is not guaranteed. @@ -218,89 +216,96 @@ the push. When a repository name does not have checksums on the Index, it means that the push is in progress (since checksums are submitted at the end). -#### API (pushing repos foo/bar): +**API (pushing repos foo/bar):** -1. (Docker -\> Index) PUT /v1/repositories/foo/bar/ - : **Headers**: - : Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token: - true +1. (Docker -> Index) PUT /v1/repositories/foo/bar/ - **Action**:: - : - in index, we allocated a new repository, and set to - initialized + **Headers**: + Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token: + true - **Body**:: - : (The body contains the list of images that are going to be - pushed, with empty checksums. The checksums will be set at - the end of the push): - - [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”}] - -2. (Index -\> Docker) 200 Created - : **Headers**: - : - WWW-Authenticate: Token - signature=123abc,repository=”foo/bar”,access=write - - X-Docker-Endpoints: registry.docker.io [, - registry2.docker.io] - -3. (Docker -\> Registry) PUT /v1/images/98765432\_parent/json - : **Headers**: - : Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - -4. (Registry-\>Index) GET /v1/repositories/foo/bar/images - : **Headers**: - : Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - - **Action**:: - : - Index: - : will invalidate the token. - - - Registry: - : grants a session (if token is approved) and fetches - the images id - -5. (Docker -\> Registry) PUT /v1/images/98765432\_parent/json - : **Headers**:: - : - Authorization: Token - signature=123abc,repository=”foo/bar”,access=write - - Cookie: (Cookie provided by the Registry) - -6. (Docker -\> Registry) PUT /v1/images/98765432/json - : **Headers**: - : Cookie: (Cookie provided by the Registry) - -7. (Docker -\> Registry) PUT /v1/images/98765432\_parent/layer - : **Headers**: - : Cookie: (Cookie provided by the Registry) - -8. (Docker -\> Registry) PUT /v1/images/98765432/layer - : **Headers**: - : X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh - -9. (Docker -\> Registry) PUT /v1/repositories/foo/bar/tags/latest - : **Headers**: - : Cookie: (Cookie provided by the Registry) + **Action**: + - in index, we allocated a new repository, and set to + initialized **Body**: - : “98765432” + (The body contains the list of images that are going to be + pushed, with empty checksums. The checksums will be set at + the end of the push): -10. 
(Docker -\> Index) PUT /v1/repositories/foo/bar/images + [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”}] - **Headers**: - : Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints: +2. (Index -> Docker) 200 Created + + **Headers**: + - WWW-Authenticate: Token + signature=123abc,repository=”foo/bar”,access=write + - X-Docker-Endpoints: registry.docker.io [, + registry2.docker.io] + +3. (Docker -> Registry) PUT /v1/images/98765432_parent/json + + **Headers**: + Authorization: Token + signature=123abc,repository=”foo/bar”,access=write + +4. (Registry->Index) GET /v1/repositories/foo/bar/images + + **Headers**: + Authorization: Token + signature=123abc,repository=”foo/bar”,access=write + + **Action**: + - Index: + will invalidate the token. + - Registry: + grants a session (if token is approved) and fetches + the images id + +5. (Docker -> Registry) PUT /v1/images/98765432_parent/json + + **Headers**:: + - Authorization: Token + signature=123abc,repository=”foo/bar”,access=write + - Cookie: (Cookie provided by the Registry) + +6. (Docker -> Registry) PUT /v1/images/98765432/json + + **Headers**: + - Cookie: (Cookie provided by the Registry) + +7. (Docker -> Registry) PUT /v1/images/98765432_parent/layer + + **Headers**: + - Cookie: (Cookie provided by the Registry) + +8. (Docker -> Registry) PUT /v1/images/98765432/layer + + **Headers**: + X-Docker-Checksum: sha256:436745873465fdjkhdfjkgh + +9. (Docker -> Registry) PUT /v1/repositories/foo/bar/tags/latest + + **Headers**: + - Cookie: (Cookie provided by the Registry) + + **Body**: + “98765432” + +10. (Docker -> Index) PUT /v1/repositories/foo/bar/images + + **Headers**: + Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints: registry1.docker.io (no validation on this right now) - **Body**: - : (The image, id’s, tags and checksums) - + **Body**: + (The image, id`s, tags and checksums) [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, “checksum”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}] - **Return** HTTP 204 + **Return**: HTTP 204 > **Note:** If push fails and they need to start again, what happens in the index, > there will already be a record for the namespace/name, but it will be @@ -308,8 +313,8 @@ the end). > case could be if someone pushes the same thing at the same time with two > different shells. -If it’s a retry on the Registry, Docker has a cookie (provided by the -registry after token validation). So the Index won’t have to provide a +If it's a retry on the Registry, Docker has a cookie (provided by the +registry after token validation). So the Index won't have to provide a new token. ### Delete @@ -318,11 +323,9 @@ If you need to delete something from the index or registry, we need a nice clean way to do that. Here is the workflow. 1. Docker contacts the index to request a delete of a repository - `samalba/busybox` (authentication required with - user credentials) -2. If authentication works and repository is valid, - `samalba/busybox` is marked as deleted and a - temporary token is returned + `samalba/busybox` (authentication required with user credentials) +2. If authentication works and repository is valid, `samalba/busybox` + is marked as deleted and a temporary token is returned 3. Send a delete request to the registry for the repository (along with the token) 4. Registry A contacts the Index to verify the token (token must @@ -334,74 +337,79 @@ nice clean way to do that. Here is the workflow. 
> **Note**: > The Docker client should present an "Are you sure?" prompt to confirm -> the deletion before starting the process. Once it starts it can’t be +> the deletion before starting the process. Once it starts it can't be > undone. -#### API (deleting repository foo/bar): +**API (deleting repository foo/bar):** -1. (Docker -\> Index) DELETE /v1/repositories/foo/bar/ - : **Headers**: - : Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token: - true +1. (Docker -> Index) DELETE /v1/repositories/foo/bar/ - **Action**:: - : - in index, we make sure it is a valid repository, and set - to deleted (logically) + **Headers**: + Authorization: Basic sdkjfskdjfhsdkjfh== X-Docker-Token: + true - **Body**:: - : Empty + **Action**: + - in index, we make sure it is a valid repository, and set + to deleted (logically) -2. (Index -\> Docker) 202 Accepted - : **Headers**: - : - WWW-Authenticate: Token - signature=123abc,repository=”foo/bar”,access=delete - - X-Docker-Endpoints: registry.docker.io [, - registry2.docker.io] \# list of endpoints where this - repo lives. + **Body**: + Empty -3. (Docker -\> Registry) DELETE /v1/repositories/foo/bar/ - : **Headers**: - : Authorization: Token - signature=123abc,repository=”foo/bar”,access=delete +2. (Index -> Docker) 202 Accepted -4. (Registry-\>Index) PUT /v1/repositories/foo/bar/auth - : **Headers**: - : Authorization: Token - signature=123abc,repository=”foo/bar”,access=delete + **Headers**: + - WWW-Authenticate: Token + signature=123abc,repository=”foo/bar”,access=delete + - X-Docker-Endpoints: registry.docker.io [, + registry2.docker.io] + # list of endpoints where this repo lives. - **Action**:: - : - Index: - : will invalidate the token. +3. (Docker -> Registry) DELETE /v1/repositories/foo/bar/ - - Registry: - : deletes the repository (if token is approved) + **Headers**: + Authorization: Token + signature=123abc,repository=”foo/bar”,access=delete -5. (Registry -\> Docker) 200 OK - : 200 If success 403 if forbidden 400 if bad request 404 if - repository isn’t found +4. (Registry->Index) PUT /v1/repositories/foo/bar/auth -6. (Docker -\> Index) DELETE /v1/repositories/foo/bar/ + **Headers**: + Authorization: Token + signature=123abc,repository=”foo/bar”,access=delete - > **Headers**: - > : Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints: - > registry-1.docker.io (no validation on this right now) - > - > **Body**: - > : Empty - > - > **Return** HTTP 200 + **Action**: + - Index: + will invalidate the token. + - Registry: + deletes the repository (if token is approved) + +5. (Registry -> Docker) 200 OK + + 200 If success 403 if forbidden 400 if bad request 404 + if repository isn't found + +6. 
(Docker -> Index) DELETE /v1/repositories/foo/bar/ + + **Headers**: + Authorization: Basic 123oislifjsldfj== X-Docker-Endpoints: + registry-1.docker.io (no validation on this right now) + + **Body**: + Empty + + **Return**: HTTP 200 ## How to use the Registry in standalone mode The Index has two main purposes (along with its fancy social features): -- Resolve short names (to avoid passing absolute URLs all the time) - : - username/projectname -\> - https://registry.docker.io/users/\/repositories/\/ - - team/projectname -\> - https://registry.docker.io/team/\/repositories/\/ + - Resolve short names (to avoid passing absolute URLs all the time): -- Authenticate a user as a repos owner (for a central referenced + username/projectname -> + https://registry.docker.io/users//repositories// + team/projectname -> + https://registry.docker.io/team//repositories// + + - Authenticate a user as a repos owner (for a central referenced repository) ### Without an Index @@ -429,17 +437,17 @@ no write access is necessary). The Index data needed by the Registry are simple: -- Serve the checksums -- Provide and authorize a Token + - Serve the checksums + - Provide and authorize a Token In the scenario of a Registry running on a private network with the need -of centralizing and authorizing, it’s easy to use a custom Index. +of centralizing and authorizing, it's easy to use a custom Index. The only challenge will be to tell Docker to contact (and trust) this custom Index. Docker will be configurable at some point to use a -specific Index, it’ll be the private entity responsibility (basically +specific Index, it'll be the private entity responsibility (basically the organization who uses Docker in a private environment) to maintain -the Index and the Docker’s configuration among its consumers. +the Index and the Docker's configuration among its consumers. ## The API @@ -453,7 +461,7 @@ JSON), basically because Registry stores exactly the same kind of information as Docker uses to manage them. The format of ancestry is a line-separated list of image ids, in age -order, i.e. the image’s parent is on the last line, the parent of the +order, i.e. the image's parent is on the last line, the parent of the parent on the next-to-last line, etc.; if the image has no parent, the file is empty. @@ -468,17 +476,18 @@ file is empty. ### Create a user (Index) -POST /v1/users + POST /v1/users: -**Body**: -: {"email": "[sam@dotcloud.com](mailto:sam%40dotcloud.com)", - "password": "toto42", "username": "foobar"’} -**Validation**: -: - **username**: min 4 character, max 30 characters, must match the - regular expression [a-z0-9\_]. + **Body**: + {"email": "[sam@dotcloud.com](mailto:sam%40dotcloud.com)", + "password": "toto42", "username": "foobar"`} + + **Validation**: + - **username**: min 4 character, max 30 characters, must match the + regular expression [a-z0-9_]. - **password**: min 5 characters -**Valid**: return HTTP 200 + **Valid**: return HTTP 200 Errors: HTTP 400 (we should create error codes for possible errors) - invalid json - missing field - wrong format (username, password, email, @@ -490,10 +499,10 @@ etc) - forbidden name - name already exists ### Update a user (Index) -PUT /v1/users/\ + PUT /v1/users/ -**Body**: -: {"password": "toto"} + **Body**: + {"password": "toto"} > **Note**: > We can also update email address, if they do, they will need to reverify @@ -506,44 +515,44 @@ validate credentials. HTTP Basic Auth for now, maybe change in future. 
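A hedged Python sketch of that credential check against the endpoint listed just below; the account details are placeholders and the public Index URL is assumed:

    # Hedged sketch: verify Index credentials with HTTP Basic Auth.
    import requests

    resp = requests.get("https://index.docker.io/v1/users",
                        auth=("myuser", "mypassword"))   # placeholder credentials
    if resp.status_code == 200:
        print("login OK")
    elif resp.status_code == 401:
        print("invalid credentials")
    elif resp.status_code == 403:
        print("account is not active")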
GET /v1/users -**Return**: -: - Valid: HTTP 200 - - Invalid login: HTTP 401 - - Account inactive: HTTP 403 Account is not Active + **Return**: + - Valid: HTTP 200 + - Invalid login: HTTP 401 + - Account inactive: HTTP 403 Account is not Active ### Tags (Registry) The Registry does not know anything about users. Even though -repositories are under usernames, it’s just a namespace for the +repositories are under usernames, it's just a namespace for the registry. Allowing us to implement organizations or different namespaces -per user later, without modifying the Registry’s API. +per user later, without modifying the Registry'sAPI. The following naming restrictions apply: -- Namespaces must match the same regular expression as usernames (See + - Namespaces must match the same regular expression as usernames (See 4.2.1.) -- Repository names must match the regular expression [a-zA-Z0-9-\_.] + - Repository names must match the regular expression [a-zA-Z0-9-_.] ### Get all tags: -GET /v1/repositories/\/\/tags +GET /v1/repositories///tags -**Return**: HTTP 200 -: { "latest": + **Return**: HTTP 200 + { "latest": "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f", “0.1.1”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087” } -#### 4.3.2 Read the content of a tag (resolve the image id) + **4.3.2 Read the content of a tag (resolve the image id):** -GET /v1/repositories/\/\/tags/\ + GET /v1/repositories///tags/ -**Return**: -: "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f" + **Return**: + "9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f" -#### 4.3.3 Delete a tag (registry) + **4.3.3 Delete a tag (registry):** -DELETE /v1/repositories/\/\/tags/\ + DELETE /v1/repositories///tags/ ### 4.4 Images (Index) @@ -552,12 +561,12 @@ it uses the X-Docker-Endpoints header. In other terms, this requests always add a `X-Docker-Endpoints` to indicate the location of the registry which hosts this repository. -#### 4.4.1 Get the images +**4.4.1 Get the images:** -GET /v1/repositories/\/\/images + GET /v1/repositories///images -**Return**: HTTP 200 -: [{“id”: + **Return**: HTTP 200 + [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, “checksum”: “[md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087](md5:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087)”}] @@ -566,22 +575,22 @@ GET /v1/repositories/\/\/images You always add images, you never remove them. -PUT /v1/repositories/\/\/images + PUT /v1/repositories///images -**Body**: -: [ {“id”: + **Body**: + [ {“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, “checksum”: “sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”} ] -**Return** 204 + **Return**: 204 ### Repositories ### Remove a Repository (Registry) -DELETE /v1/repositories/\/\ +DELETE /v1/repositories// Return 200 OK @@ -589,16 +598,16 @@ Return 200 OK This starts the delete process. see 2.3 for more details. -DELETE /v1/repositories/\/\ +DELETE /v1/repositories// Return 202 OK ## Chaining Registries -It’s possible to chain Registries server for several reasons: +It's possible to chain Registries server for several reasons: -- Load balancing -- Delegate the next request to another server + - Load balancing + - Delegate the next request to another server When a Registry is a reference for a repository, it should host the entire images chain in order to avoid breaking the chain during the @@ -631,32 +640,30 @@ You have 3 options: 1. 
Provide user credentials and ask for a token - > **Header**: - > : - Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== - > - X-Docker-Token: true - > - > In this case, along with the 200 response, you’ll get a new token - > (if user auth is ok): If authorization isn’t correct you get a 401 - > response. If account isn’t active you will get a 403 response. - > - > **Response**: - > : - 200 OK - > - X-Docker-Token: Token - > signature=123abc,repository=”foo/bar”,access=read - > + **Header**: + - Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== + - X-Docker-Token: true + + In this case, along with the 200 response, you'll get a new token + (if user auth is ok): If authorization isn't correct you get a 401 + response. If account isn't active you will get a 403 response. + + **Response**: + - 200 OK + - X-Docker-Token: Token + signature=123abc,repository=”foo/bar”,access=read + 2. Provide user credentials only - > **Header**: - > : Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== - > + **Header**: + Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== 3. Provide Token - > **Header**: - > : Authorization: Token - > signature=123abc,repository=”foo/bar”,access=read - > + **Header**: + Authorization: Token + signature=123abc,repository=”foo/bar”,access=read ### 6.2 On the Registry @@ -684,7 +691,7 @@ Next request: ## Document Version -- 1.0 : May 6th 2013 : initial release -- 1.1 : June 1st 2013 : Added Delete Repository and way to handle new + - 1.0 : May 6th 2013 : initial release + - 1.1 : June 1st 2013 : Added Delete Repository and way to handle new source namespace. diff --git a/docs/sources/reference/api/remote_api_client_libraries.md b/docs/sources/reference/api/remote_api_client_libraries.md index eb1e3a4ee1..4b90afc5b0 100644 --- a/docs/sources/reference/api/remote_api_client_libraries.md +++ b/docs/sources/reference/api/remote_api_client_libraries.md @@ -9,81 +9,124 @@ compatibility. Please file issues with the library owners. If you find more library implementations, please list them in Docker doc bugs and we will add the libraries here. 
| Language/Framework | Name | Repository | Status |
|--------------------|------|------------|--------|
| Python | docker-py | https://github.com/dotcloud/docker-py | Active |
| Ruby | docker-client | https://github.com/geku/docker-client | Outdated |
| Ruby | docker-api | https://github.com/swipely/docker-api | Active |
| JavaScript (NodeJS) | dockerode | https://github.com/apocas/dockerode Install via NPM: `npm install dockerode` | Active |
| JavaScript (NodeJS) | docker.io | https://github.com/appersonlabs/docker.io Install via NPM: `npm install docker.io` | Active |
| JavaScript | docker-js | https://github.com/dgoujard/docker-js | Outdated |
| JavaScript (Angular) **WebUI** | docker-cp | https://github.com/13W/docker-cp | Active |
| JavaScript (Angular) **WebUI** | dockerui | https://github.com/crosbymichael/dockerui | Active |
| Java | docker-java | https://github.com/kpelykh/docker-java | Active |
| Erlang | erldocker | https://github.com/proger/erldocker | Active |
| Go | go-dockerclient | https://github.com/fsouza/go-dockerclient | Active |
| Go | dockerclient | https://github.com/samalba/dockerclient | Active |
| PHP | Alvine | http://pear.alvine.io/ (alpha) | Active |
| PHP | Docker-PHP | http://stage1.github.io/docker-php/ | Active |
| Perl | Net::Docker | https://metacpan.org/pod/Net::Docker | Active |
| Perl | Eixo::Docker | https://github.com/alambike/eixo-docker | Active |
| Scala | reactive-docker | https://github.com/almoehi/reactive-docker | Active |
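All of these libraries talk to the same Docker Remote API. A quick way to see the raw endpoints they wrap is to query the daemon directly; this sketch assumes the daemon was bound to a TCP port, which is not the default configuration:

    # assumes the daemon was started with: sudo docker -d -H tcp://127.0.0.1:4243
    $ curl http://127.0.0.1:4243/containers/json
    $ curl http://127.0.0.1:4243/images/json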
diff --git a/docs/sources/reference/builder.md b/docs/sources/reference/builder.md index 5c332e5c2f..3e278425c2 100644 --- a/docs/sources/reference/builder.md +++ b/docs/sources/reference/builder.md @@ -4,23 +4,21 @@ page_keywords: builder, docker, Dockerfile, automation, image creation # Dockerfile Reference -**Docker can act as a builder** and read instructions from a text -`Dockerfile` to automate the steps you would -otherwise take manually to create an image. Executing -`docker build` will run your steps and commit them -along the way, giving you a final image. +**Docker can act as a builder** and read instructions from a text *Dockerfile* +to automate the steps you would otherwise take manually to create an image. +Executing `docker build` will run your steps and commit them along the way, +giving you a final image. ## Usage -To [*build*](../commandline/cli/#cli-build) an image from a source -repository, create a description file called `Dockerfile` -at the root of your repository. This file will describe the -steps to assemble the image. +To [*build*](../commandline/cli/#cli-build) an image from a source repository, +create a description file called Dockerfile at the root of your repository. +This file will describe the steps to assemble the image. -Then call `docker build` with the path of your -source repository as argument (for example, `.`): +Then call `docker build` with the path of you source repository as argument +(for example, `.`): -> `sudo docker build .` + sudo docker build . The path to the source repository defines where to find the *context* of the build. The build is run by the Docker daemon, not by the CLI, so the @@ -30,7 +28,7 @@ whole context must be transferred to the daemon. The Docker CLI reports You can specify a repository and tag at which to save the new image if the build succeeds: -> `sudo docker build -t shykes/myapp .` + sudo docker build -t shykes/myapp . The Docker daemon will run your steps one-by-one, committing the result to a new image if necessary, before finally outputting the ID of your @@ -38,12 +36,11 @@ new image. The Docker daemon will automatically clean up the context you sent. Note that each instruction is run independently, and causes a new image -to be created - so `RUN cd /tmp` will not have any -effect on the next instructions. +to be created - so `RUN cd /tmp` will not have any effect on the next +instructions. Whenever possible, Docker will re-use the intermediate images, -accelerating `docker build` significantly (indicated -by `Using cache`): +accelerating `docker build` significantly (indicated by `Using cache`): $ docker build -t SvenDowideit/ambassador . Uploading context 10.24 kB @@ -58,9 +55,9 @@ by `Using cache`): ---> 1a5ffc17324d Successfully built 1a5ffc17324d -When you’re done with your build, you’re ready to look into [*Pushing a -repository to its -registry*](../../use/workingwithrepository/#image-push). +When you're done with your build, you're ready to look into +[*Pushing a repository to its registry*]( +/use/workingwithrepository/#image-push). ## Format @@ -74,7 +71,7 @@ be UPPERCASE in order to distinguish them from arguments more easily. Docker evaluates the instructions in a Dockerfile in order. **The first instruction must be \`FROM\`** in order to specify the [*Base -Image*](../../terms/image/#base-image-def) from which you are building. +Image*](/terms/image/#base-image-def) from which you are building. Docker will treat lines that *begin* with `#` as a comment. 
A `#` marker anywhere else in the line will @@ -83,84 +80,73 @@ be treated as an argument. This allows statements like: # Comment RUN echo 'we are running some # of cool things' -Here is the set of instructions you can use in a `Dockerfile` +Here is the set of instructions you can use in a Dockerfile for building images. -## `FROM` +## FROM -> `FROM ` + FROM Or -> `FROM :` + FROM : -The `FROM` instruction sets the [*Base -Image*](../../terms/image/#base-image-def) for subsequent instructions. -As such, a valid Dockerfile must have `FROM` as its -first instruction. The image can be any valid image – it is especially -easy to start by **pulling an image** from the [*Public -Repositories*](../../use/workingwithrepository/#using-public-repositories). +The `FROM` instruction sets the [*Base Image*](/terms/image/#base-image-def) +for subsequent instructions. As such, a valid Dockerfile must have `FROM` as +its first instruction. The image can be any valid image – it is especially easy +to start by **pulling an image** from the [*Public Repositories*]( +/use/workingwithrepository/#using-public-repositories). -`FROM` must be the first non-comment instruction in -the `Dockerfile`. +`FROM` must be the first non-comment instruction in the Dockerfile. -`FROM` can appear multiple times within a single -Dockerfile in order to create multiple images. Simply make a note of the -last image id output by the commit before each new `FROM` -command. +`FROM` can appear multiple times within a single Dockerfile in order to create +multiple images. Simply make a note of the last image id output by the commit +before each new `FROM` command. -If no `tag` is given to the `FROM` -instruction, `latest` is assumed. If the +If no `tag` is given to the `FROM` instruction, `latest` is assumed. If the used tag does not exist, an error will be returned. -## `MAINTAINER` +## MAINTAINER -> `MAINTAINER ` + MAINTAINER -The `MAINTAINER` instruction allows you to set the -*Author* field of the generated images. +The `MAINTAINER` instruction allows you to set the *Author* field of the +generated images. -## `RUN` +## RUN RUN has 2 forms: -- `RUN ` (the command is run in a shell - - `/bin/sh -c`) -- `RUN ["executable", "param1", "param2"]` (*exec* - form) +- `RUN ` (the command is run in a shell - `/bin/sh -c`) +- `RUN ["executable", "param1", "param2"]` (*exec* form) -The `RUN` instruction will execute any commands in a -new layer on top of the current image and commit the results. The -resulting committed image will be used for the next step in the -Dockerfile. +The `RUN` instruction will execute any commands in a new layer on top of the +current image and commit the results. The resulting committed image will be +used for the next step in the Dockerfile. -Layering `RUN` instructions and generating commits -conforms to the core concepts of Docker where commits are cheap and -containers can be created from any point in an image’s history, much -like source control. +Layering `RUN` instructions and generating commits conforms to the core +concepts of Docker where commits are cheap and containers can be created from +any point in an image's history, much like source control. -The *exec* form makes it possible to avoid shell string munging, and to -`RUN` commands using a base image that does not -contain `/bin/sh`. +The *exec* form makes it possible to avoid shell string munging, and to `RUN` +commands using a base image that does not contain `/bin/sh`. 
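For instance, a minimal Dockerfile using both forms might look like the sketch below (the package and paths are placeholders, not taken from the original text):

    FROM ubuntu
    # shell form: run through /bin/sh -c
    RUN apt-get update && apt-get install -y curl
    # exec form: no shell involved, so it also works on images without /bin/sh
    RUN ["/bin/ls", "-l", "/"]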
### Known Issues (RUN) -- [Issue 783](https://github.com/dotcloud/docker/issues/783) is about - file permissions problems that can occur when using the AUFS file - system. You might notice it during an attempt to `rm` - a file, for example. The issue describes a workaround. -- [Issue 2424](https://github.com/dotcloud/docker/issues/2424) Locale - will not be set automatically. +- [Issue 783](https://github.com/dotcloud/docker/issues/783) is about file + permissions problems that can occur when using the AUFS file system. You + might notice it during an attempt to `rm` a file, for example. The issue + describes a workaround. +- [Issue 2424](https://github.com/dotcloud/docker/issues/2424) Locale will + not be set automatically. -## `CMD` +## CMD CMD has three forms: -- `CMD ["executable","param1","param2"]` (like an - *exec*, preferred form) -- `CMD ["param1","param2"]` (as *default - parameters to ENTRYPOINT*) -- `CMD command param1 param2` (as a *shell*) +- `CMD ["executable","param1","param2"]` (like an *exec*, preferred form) +- `CMD ["param1","param2"]` (as *default parameters to ENTRYPOINT*) +- `CMD command param1 param2` (as a *shell*) There can only be one CMD in a Dockerfile. If you list more than one CMD then only the last CMD will take effect. @@ -169,83 +155,75 @@ then only the last CMD will take effect. container.** These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT as well. -When used in the shell or exec formats, the `CMD` -instruction sets the command to be executed when running the image. +When used in the shell or exec formats, the `CMD` instruction sets the command +to be executed when running the image. -If you use the *shell* form of the CMD, then the `` -will execute in `/bin/sh -c`: +If you use the *shell* form of the CMD, then the `` will execute in +`/bin/sh -c`: FROM ubuntu CMD echo "This is a test." | wc - -If you want to **run your** `` **without a -shell** then you must express the command as a JSON array and give the -full path to the executable. **This array form is the preferred format -of CMD.** Any additional parameters must be individually expressed as -strings in the array: +If you want to **run your** `` **without a shell** then you must +express the command as a JSON array and give the full path to the executable. +**This array form is the preferred format of CMD.** Any additional parameters +must be individually expressed as strings in the array: FROM ubuntu CMD ["/usr/bin/wc","--help"] -If you would like your container to run the same executable every time, -then you should consider using `ENTRYPOINT` in -combination with `CMD`. See +If you would like your container to run the same executable every time, then +you should consider using `ENTRYPOINT` in combination with `CMD`. See [*ENTRYPOINT*](#entrypoint). -If the user specifies arguments to `docker run` then -they will override the default specified in CMD. +If the user specifies arguments to `docker run` then they will override the +default specified in CMD. > **Note**: -> Don’t confuse `RUN` with `CMD`. `RUN` actually runs a command and commits +> don't confuse `RUN` with `CMD`. `RUN` actually runs a command and commits > the result; `CMD` does not execute anything at build time, but specifies > the intended command for the image. -## `EXPOSE` +## EXPOSE -> `EXPOSE [...]` + EXPOSE [...] -The `EXPOSE` instructions informs Docker that the -container will listen on the specified network ports at runtime. 
Docker -uses this information to interconnect containers using links (see -[*links*](../../use/working_with_links_names/#working-with-links-names)), -and to setup port redirection on the host system (see [*Redirect -Ports*](../../use/port_redirection/#port-redirection)). +The `EXPOSE` instructions informs Docker that the container will listen on the +specified network ports at runtime. Docker uses this information to interconnect +containers using links (see +[*links*](/use/working_with_links_names/#working-with-links-names)), +and to setup port redirection on the host system (see [*Redirect Ports*]( +/use/port_redirection/#port-redirection)). -## `ENV` +## ENV -> `ENV ` + ENV -The `ENV` instruction sets the environment variable -`` to the value ``. -This value will be passed to all future `RUN` -instructions. This is functionally equivalent to prefixing the command -with `=` +The `ENV` instruction sets the environment variable `` to the value +``. This value will be passed to all future `RUN` instructions. This is +functionally equivalent to prefixing the command with `=` -The environment variables set using `ENV` will -persist when a container is run from the resulting image. You can view -the values using `docker inspect`, and change them -using `docker run --env =`. +The environment variables set using `ENV` will persist when a container is run +from the resulting image. You can view the values using `docker inspect`, and +change them using `docker run --env =`. > **Note**: > One example where this can cause unexpected consequenses, is setting -> `ENV DEBIAN_FRONTEND noninteractive`. Which will -> persist when the container is run interactively; for example: -> `docker run -t -i image bash` +> `ENV DEBIAN_FRONTEND noninteractive`. Which will persist when the container +> is run interactively; for example: `docker run -t -i image bash` -## `ADD` +## ADD -> `ADD ` + ADD -The `ADD` instruction will copy new files from -\ and add them to the container’s filesystem at path -``. +The `ADD` instruction will copy new files from `` and add them to the +container's filesystem at path ``. -`` must be the path to a file or directory -relative to the source directory being built (also called the *context* -of the build) or a remote file URL. +`` must be the path to a file or directory relative to the source directory +being built (also called the *context* of the build) or a remote file URL. -`` is the absolute path to which the source -will be copied inside the destination container. +`` is the absolute path to which the source will be copied inside the +destination container. All new files and directories are created with mode 0755, uid and gid 0. @@ -262,79 +240,64 @@ All new files and directories are created with mode 0755, uid and gid 0. The copy obeys the following rules: -- The `` path must be inside the *context* of - the build; you cannot `ADD ../something /something` -, because the first step of a `docker build` - is to send the context directory (and subdirectories) to - the docker daemon. +- The `` path must be inside the *context* of the build; + you cannot `ADD ../something /something`, because the first step of a + `docker build` is to send the context directory (and subdirectories) to the + docker daemon. -- If `` is a URL and `` - does not end with a trailing slash, then a file is - downloaded from the URL and copied to ``. +- If `` is a URL and `` does not end with a trailing slash, then a + file is downloaded from the URL and copied to ``. 
-- If `` is a URL and `` - does end with a trailing slash, then the filename is - inferred from the URL and the file is downloaded to - `/`. For instance, - `ADD http://example.com/foobar /` would create - the file `/foobar`. The URL must have a - nontrivial path so that an appropriate filename can be discovered in - this case (`http://example.com` will not work). +- If `` is a URL and `` does end with a trailing slash, then the + filename is inferred from the URL and the file is downloaded to + `/`. For instance, `ADD http://example.com/foobar /` would + create the file `/foobar`. The URL must have a nontrivial path so that an + appropriate filename can be discovered in this case (`http://example.com` + will not work). -- If `` is a directory, the entire directory - is copied, including filesystem metadata. +- If `` is a directory, the entire directory is copied, including + filesystem metadata. -- If `` is a *local* tar archive in a - recognized compression format (identity, gzip, bzip2 or xz) then it - is unpacked as a directory. Resources from *remote* URLs are **not** - decompressed. +- If `` is a *local* tar archive in a recognized compression format + (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources + from *remote* URLs are **not** decompressed. When a directory is copied or + unpacked, it has the same behavior as `tar -x`: the result is the union of: - When a directory is copied or unpacked, it has the same behavior as - `tar -x`: the result is the union of + 1. whatever existed at the destination path and + 2. the contents of the source tree, with conflicts resolved in favor of + "2." on a file-by-file basis. - 1. whatever existed at the destination path and - 2. the contents of the source tree, +- If `` is any other kind of file, it is copied individually along with + its metadata. In this case, if `` ends with a trailing slash `/`, it + will be considered a directory and the contents of `` will be written + at `/base()`. - with conflicts resolved in favor of "2." on a file-by-file basis. +- If `` does not end with a trailing slash, it will be considered a + regular file and the contents of `` will be written at ``. -- If `` is any other kind of file, it is - copied individually along with its metadata. In this case, if - `` ends with a trailing slash - `/`, it will be considered a directory and the - contents of `` will be written at - `/base()`. +- If `` doesn't exist, it is created along with all missing directories + in its path. -- If `` does not end with a trailing slash, - it will be considered a regular file and the contents of - `` will be written at `` -. - -- If `` doesn’t exist, it is created along - with all missing directories in its path. - -## `ENTRYPOINT` +## ENTRYPOINT ENTRYPOINT has two forms: -- `ENTRYPOINT ["executable", "param1", "param2"]` - (like an *exec*, preferred form) -- `ENTRYPOINT command param1 param2` (as a - *shell*) +- `ENTRYPOINT ["executable", "param1", "param2"]` + (like an *exec*, preferred form) +- `ENTRYPOINT command param1 param2` + (as a *shell*) -There can only be one `ENTRYPOINT` in a Dockerfile. -If you have more than one `ENTRYPOINT`, then only -the last one in the Dockerfile will have an effect. +There can only be one `ENTRYPOINT` in a Dockerfile. If you have more than one +`ENTRYPOINT`, then only the last one in the Dockerfile will have an effect. -An `ENTRYPOINT` helps you to configure a container -that you can run as an executable. 
That is, when you specify an -`ENTRYPOINT`, then the whole container runs as if it -was just that executable. +An `ENTRYPOINT` helps you to configure a container that you can run as an +executable. That is, when you specify an `ENTRYPOINT`, then the whole container +runs as if it was just that executable. The `ENTRYPOINT` instruction adds an entry command that will **not** be -overwritten when arguments are passed to `docker run`, unlike the -behavior of `CMD`. This allows arguments to be passed to the entrypoint. -i.e. `docker run -d` will pass the "-d" argument to the -ENTRYPOINT. +overwritten when arguments are passed to `docker run`, unlike the behavior +of `CMD`. This allows arguments to be passed to the entrypoint. i.e. +`docker run -d` will pass the "-d" argument to the ENTRYPOINT. You can specify parameters either in the ENTRYPOINT JSON array (as in "like an exec" above), or by using a CMD statement. Parameters in the @@ -342,13 +305,13 @@ ENTRYPOINT will not be overridden by the `docker run` arguments, but parameters specified via CMD will be overridden by `docker run` arguments. -Like a `CMD`, you can specify a plain string for the -ENTRYPOINT and it will execute in `/bin/sh -c`: +Like a `CMD`, you can specify a plain string for the `ENTRYPOINT` and it will +execute in `/bin/sh -c`: FROM ubuntu ENTRYPOINT wc -l - -For example, that Dockerfile’s image will *always* take stdin as input +For example, that Dockerfile's image will *always* take stdin as input ("-") and print the number of lines ("-l"). If you wanted to make this optional but default, you could use a CMD: @@ -356,44 +319,41 @@ optional but default, you could use a CMD: CMD ["-l", "-"] ENTRYPOINT ["/usr/bin/wc"] -## `VOLUME` +## VOLUME -> `VOLUME ["/data"]` + VOLUME ["/data"] -The `VOLUME` instruction will create a mount point -with the specified name and mark it as holding externally mounted -volumes from native host or other containers. For more -information/examples and mounting instructions via docker client, refer -to [*Share Directories via -Volumes*](../../use/working_with_volumes/#volume-def) documentation. +The `VOLUME` instruction will create a mount point with the specified name +and mark it as holding externally mounted volumes from native host or other +containers. For more information/examples and mounting instructions via docker +client, refer to [*Share Directories via Volumes*]( +/use/working_with_volumes/#volume-def) documentation. -## `USER` +## USER -> `USER daemon` + USER daemon -The `USER` instruction sets the username or UID to -use when running the image. +The `USER` instruction sets the username or UID to use when running the image. -## `WORKDIR` +## WORKDIR -> `WORKDIR /path/to/workdir` + WORKDIR /path/to/workdir -The `WORKDIR` instruction sets the working directory -for the `RUN`, `CMD` and +The `WORKDIR` instruction sets the working directory for the `RUN`, `CMD` and `ENTRYPOINT` Dockerfile commands that follow it. It can be used multiple times in the one Dockerfile. If a relative path -is provided, it will be relative to the path of the previous -`WORKDIR` instruction. For example: +is provided, it will be relative to the path of the previous `WORKDIR` +instruction. For example: -> WORKDIR /a WORKDIR b WORKDIR c RUN pwd + WORKDIR /a WORKDIR b WORKDIR c RUN pwd The output of the final `pwd` command in this Dockerfile would be `/a/b/c`. 
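Written with one instruction per line, that `WORKDIR` example reads:

    WORKDIR /a
    WORKDIR b
    WORKDIR c
    RUN pwd    # prints /a/b/c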
-## `ONBUILD` +## ONBUILD -> `ONBUILD [INSTRUCTION]` + ONBUILD [INSTRUCTION] The `ONBUILD` instruction adds to the image a "trigger" instruction to be executed at a later time, when the image is @@ -410,7 +370,7 @@ daemon which may be customized with user-specific configuration. For example, if your image is a reusable python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called *after* -that. You can’t just call *ADD* and *RUN* now, because you don’t yet +that. You can't just call *ADD* and *RUN* now, because you don't yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but @@ -420,23 +380,23 @@ mixes with application-specific code. The solution is to use *ONBUILD* to register in advance instructions to run later, during the next build stage. -Here’s how it works: +Here's how it works: -1. When it encounters an *ONBUILD* instruction, the builder adds a - trigger to the metadata of the image being built. The instruction - does not otherwise affect the current build. -2. At the end of the build, a list of all triggers is stored in the - image manifest, under the key *OnBuild*. They can be inspected with - *docker inspect*. -3. Later the image may be used as a base for a new build, using the - *FROM* instruction. As part of processing the *FROM* instruction, - the downstream builder looks for *ONBUILD* triggers, and executes - them in the same order they were registered. If any of the triggers - fail, the *FROM* instruction is aborted which in turn causes the - build to fail. If all triggers succeed, the FROM instruction - completes and the build continues as usual. -4. Triggers are cleared from the final image after being executed. In - other words they are not inherited by "grand-children" builds. +1. When it encounters an *ONBUILD* instruction, the builder adds a + trigger to the metadata of the image being built. The instruction + does not otherwise affect the current build. +2. At the end of the build, a list of all triggers is stored in the + image manifest, under the key *OnBuild*. They can be inspected with + *docker inspect*. +3. Later the image may be used as a base for a new build, using the + *FROM* instruction. As part of processing the *FROM* instruction, + the downstream builder looks for *ONBUILD* triggers, and executes + them in the same order they were registered. If any of the triggers + fail, the *FROM* instruction is aborted which in turn causes the + build to fail. If all triggers succeed, the FROM instruction + completes and the build continues as usual. +4. Triggers are cleared from the final image after being executed. In + other words they are not inherited by "grand-children" builds. For example you might add something like this: @@ -445,7 +405,7 @@ For example you might add something like this: ONBUILD RUN /usr/local/bin/python-build --dir /app/src [...] -> **Warning**: Chaining ONBUILD instructions using ONBUILD ONBUILD isn’t allowed. +> **Warning**: Chaining ONBUILD instructions using ONBUILD ONBUILD isn't allowed. > **Warning**: ONBUILD may not trigger FROM or MAINTAINER instructions. 
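To make the trigger behaviour concrete, a downstream build that consumes an image built from the fragment above could be as small as this sketch; `my-python-builder` and the `CMD` line are hypothetical, not part of the original example:

    # The parent image's ONBUILD ADD and ONBUILD RUN triggers fire here,
    # immediately after FROM and before any instruction below runs.
    FROM my-python-builder
    # hypothetical default command for the application the triggers added
    CMD ["python", "/app/src/app.py"]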
diff --git a/docs/sources/reference/commandline.md b/docs/sources/reference/commandline.md index 8620a095b9..b15f529394 100644 --- a/docs/sources/reference/commandline.md +++ b/docs/sources/reference/commandline.md @@ -3,5 +3,5 @@ ## Contents: -- [Command Line](cli/) +- [Command Line](cli/) diff --git a/docs/sources/reference/commandline/cli.md b/docs/sources/reference/commandline/cli.md index e0d896755b..6388cb6192 100644 --- a/docs/sources/reference/commandline/cli.md +++ b/docs/sources/reference/commandline/cli.md @@ -4,8 +4,8 @@ page_keywords: Docker, Docker documentation, CLI, command line # Command Line -To list available commands, either run `docker` with -no parameters or execute `docker help`: +To list available commands, either run `docker` with no parameters +or execute `docker help`: $ sudo docker Usage: docker [OPTIONS] COMMAND [arg...] @@ -33,13 +33,11 @@ will set the value to the opposite of the default value. ### Multi -Options like `-a=[]` indicate they can be specified -multiple times: +Options like `-a=[]` indicate they can be specified multiple times: docker run -a stdin -a stdout -a stderr -i -t ubuntu /bin/bash -Sometimes this can use a more complex value string, as for -`-v`: +Sometimes this can use a more complex value string, as for `-v`: docker run -v /host:/container example/mysql @@ -49,9 +47,10 @@ Options like `--name=""` expect a string, and they can only be specified once. Options like `-c=0` expect an integer, and they can only be specified once. -## `daemon` +## daemon Usage of docker: + -D, --debug=false: Enable debug mode -H, --host=[]: Multiple tcp://host:port or unix://path/to/socket to bind in daemon mode, single connection otherwise. systemd socket activation can be used with fd://[socketfd]. -G, --group="docker": Group to assign the unix socket specified by -H when running in daemon mode; use '' (the empty string) to disable setting of a group @@ -95,9 +94,8 @@ To run the daemon with debug output, use `docker -d -D`. To use lxc as the execution driver, use `docker -d -e lxc`. -The docker client will also honor the `DOCKER_HOST` -environment variable to set the `-H` flag for the -client. +The docker client will also honor the `DOCKER_HOST` environment variable to set +the `-H` flag for the client. docker -H tcp://0.0.0.0:4243 ps # or @@ -105,32 +103,32 @@ client. docker ps # both are equal -To run the daemon with [systemd socket -activation](http://0pointer.de/blog/projects/socket-activation.html), -use `docker -d -H fd://`. Using `fd://` -will work perfectly for most setups but you can also specify -individual sockets too `docker -d -H fd://3`. If the -specified socket activated files aren’t found then docker will exit. You +To run the daemon with [systemd socket activation]( +http://0pointer.de/blog/projects/socket-activation.html), use +`docker -d -H fd://`. Using `fd://` will work perfectly for most setups but +you can also specify individual sockets too `docker -d -H fd://3`. If the +specified socket activated files aren't found then docker will exit. You can find examples of using systemd socket activation with docker and -systemd in the [docker source -tree](https://github.com/dotcloud/docker/blob/master/contrib/init/systemd/socket-activation/). +systemd in the [docker source tree]( +https://github.com/dotcloud/docker/blob/master/contrib/init/systemd/socket-activation/). Docker supports softlinks for the Docker data directory -(`/var/lib/docker`) and for `/tmp`. 
TMPDIR and the data directory can be set like this: +(`/var/lib/docker`) and for `/tmp`. TMPDIR and the data directory can be set +like this: TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1 # or export TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1 -## `attach` +## attach + +Attach to a running container. Usage: docker attach CONTAINER - Attach to a running container. - - --no-stdin=false: Do not attach stdin - --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode) + --no-stdin=false: Do not attach stdin + --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode) The `attach` command will allow you to view or interact with any running container, detached (`-d`) @@ -141,7 +139,7 @@ progress of your daemonized process. You can detach from the container again (and leave it running) with `CTRL-C` (for a quiet exit) or `CTRL-\` to get a stacktrace of the Docker client when it quits. When -you detach from the container’s process the exit code will be returned +you detach from the container's process the exit code will be returned to the client. To stop a container, use `docker stop`. @@ -182,36 +180,36 @@ To kill the container, use `docker kill`. ^C$ $ sudo docker stop $ID -## `build` +## build + +Build a new container image from the source code at PATH Usage: docker build [OPTIONS] PATH | URL | - - Build a new container image from the source code at PATH - -t, --tag="": Repository name (and optionally a tag) to be applied - to the resulting image in case of success. - -q, --quiet=false: Suppress the verbose output generated by the containers. - --no-cache: Do not use the cache when building the image. - --rm=true: Remove intermediate containers after a successful build -Use this command to build Docker images from a `Dockerfile` + -t, --tag="": Repository name (and optionally a tag) to be applied + to the resulting image in case of success. + -q, --quiet=false: Suppress the verbose output generated by the containers. + --no-cache: Do not use the cache when building the image. + --rm=true: Remove intermediate containers after a successful build + +Use this command to build Docker images from a Dockerfile and a "context". -The files at `PATH` or `URL` are -called the "context" of the build. The build process may refer to any of -the files in the context, for example when using an -[*ADD*](../../builder/#dockerfile-add) instruction. When a single -`Dockerfile` is given as `URL`, -then no context is set. +The files at `PATH` or `URL` are called the "context" of the build. The build +process may refer to any of the files in the context, for example when using an +[*ADD*](/reference/builder/#dockerfile-add) instruction. When a single Dockerfile is +given as `URL`, then no context is set. When a Git repository is set as `URL`, then the repository is used as the context. The Git repository is cloned with its submodules (git clone –recursive). A fresh git clone occurs in a temporary directory on your local host, and then this is sent to the Docker daemon as the context. This way, your local user credentials and -vpn’s etc can be used to access private repositories +vpn's etc can be used to access private repositories -See also +See also: -[*Dockerfile Reference*](../../builder/#dockerbuilder). +[*Dockerfile Reference*](/reference/builder/#dockerbuilder). 
### Examples: @@ -243,14 +241,14 @@ See also This example specifies that the `PATH` is `.`, and so all the files in the local directory get -tar’d and sent to the Docker daemon. The `PATH` +tar`d and sent to the Docker daemon. The `PATH` specifies where to find the files for the "context" of the build on the Docker daemon. Remember that the daemon could be running on a remote -machine and that no parsing of the `Dockerfile` -happens at the client side (where you’re running +machine and that no parsing of the Dockerfile +happens at the client side (where you're running `docker build`). That means that *all* the files at `PATH` get sent, not just the ones listed to -[*ADD*](../../builder/#dockerfile-add) in the `Dockerfile`. +[*ADD*](/reference/builder/#dockerfile-add) in the Dockerfile. The transfer of context from the local machine to the Docker daemon is what the `docker` client means when you see the @@ -268,30 +266,30 @@ and the tag will be `2.0` $ sudo docker build - < Dockerfile -This will read a `Dockerfile` from *stdin* without +This will read a Dockerfile from *stdin* without context. Due to the lack of a context, no contents of any local directory will be sent to the `docker` daemon. Since -there is no context, a `Dockerfile` `ADD` +there is no context, a Dockerfile `ADD` only works if it refers to a remote URL. $ sudo docker build github.com/creack/docker-firefox This will clone the GitHub repository and use the cloned repository as -context. The `Dockerfile` at the root of the -repository is used as `Dockerfile`. Note that you +context. The Dockerfile at the root of the +repository is used as Dockerfile. Note that you can specify an arbitrary Git repository by using the `git://` schema. -## `commit` +## commit + +Create a new image from a container᾿s changes Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]] - Create a new image from a container᾿s changes + -m, --message="": Commit message + -a, --author="": Author (eg. "John Hannibal Smith " - -m, --message="": Commit message - -a, --author="": Author (eg. "John Hannibal Smith " - -It can be useful to commit a container’s file changes or settings into a +It can be useful to commit a container's file changes or settings into a new image. This allows you debug a container by running an interactive shell, or to export a working dataset to another server. Generally, it is better to use Dockerfiles to manage your images in a documented and @@ -309,27 +307,27 @@ maintainable way. REPOSITORY TAG ID CREATED VIRTUAL SIZE SvenDowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB -## `cp` +## cp + +Copy files/folders from the containers filesystem to the host +path. Paths are relative to the root of the filesystem. Usage: docker cp CONTAINER:PATH HOSTPATH - Copy files/folders from the containers filesystem to the host - path. Paths are relative to the root of the filesystem. - $ sudo docker cp 7bb0e258aefe:/etc/debian_version . $ sudo docker cp blue_frog:/etc/hosts . -## `diff` +## diff + +List the changed files and directories in a container᾿s filesystem Usage: docker diff CONTAINER - List the changed files and directories in a container᾿s filesystem +There are 3 events that are listed in the `diff`: -There are 3 events that are listed in the ‘diff’: - -1. `` `A` `` - Add -2. `` `D` `` - Delete -3. `` `C` `` - Change +1. `A` - Add +2. `D` - Delete +3. `C` - Change For example: @@ -347,12 +345,12 @@ For example: A /go/src/github.com/dotcloud/docker/.git .... 
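Taken together with `commit`, the commands above form a simple debugging loop. The following is only a sketch, with container and image names chosen for illustration:

    $ sudo docker run -t -i --name test_container ubuntu bash
    # ... change some files inside the container, then exit ...
    $ sudo docker diff test_container
    $ sudo docker commit -m "debug changes" test_container example/debugimage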
-## `events` +## events + +Get real time events from the server Usage: docker events - Get real time events from the server - --since="": Show all events created since timestamp (either seconds since epoch, or date string as below) --until="": Show events created before timestamp @@ -360,24 +358,24 @@ For example: ### Examples -You’ll need two shells for this example. +You'll need two shells for this example. -#### Shell 1: Listening for events +**Shell 1: Listening for events:** $ sudo docker events -#### Shell 2: Start and Stop a Container +**Shell 2: Start and Stop a Container:** $ sudo docker start 4386fb97867d $ sudo docker stop 4386fb97867d -#### Shell 1: (Again .. now showing events) +**Shell 1: (Again .. now showing events):** [2013-09-03 15:49:26 +0200 CEST] 4386fb97867d: (from 12de384bfb10) start [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop -#### Show events in the past from a specified time +**Show events in the past from a specified time:** $ sudo docker events --since 1378216169 [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die @@ -392,24 +390,24 @@ You’ll need two shells for this example. [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop -## `export` +## export + +Export the contents of a filesystem as a tar archive to STDOUT Usage: docker export CONTAINER - Export the contents of a filesystem as a tar archive to STDOUT - For example: $ sudo docker export red_panda > latest.tar -## `history` +## history + +Show the history of an image Usage: docker history [OPTIONS] IMAGE - Show the history of an image - - --no-trunc=false: Don᾿t truncate output - -q, --quiet=false: Only show numeric IDs + --no-trunc=false: Don᾿t truncate output + -q, --quiet=false: Only show numeric IDs To see how the `docker:latest` image was built: @@ -422,15 +420,15 @@ To see how the `docker:latest` image was built: 750d58736b4b6cc0f9a9abe8f258cef269e3e9dceced1146503522be9f985ada 6 weeks ago /bin/sh -c #(nop) MAINTAINER Tianon Gravi - mkimage-debootstrap.sh -t jessie.tar.xz jessie http://http.debian.net/debian 0 B 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 9 months ago 0 B -## `images` +## images + +List images Usage: docker images [OPTIONS] [NAME] - List images - - -a, --all=false: Show all images (by default filter out the intermediate image layers) - --no-trunc=false: Don᾿t truncate output - -q, --quiet=false: Only show numeric IDs + -a, --all=false: Show all images (by default filter out the intermediate image layers) + --no-trunc=false: Don᾿t truncate output + -q, --quiet=false: Only show numeric IDs The default `docker images` will show all top level images, their repository and tags, and their virtual size. @@ -468,7 +466,7 @@ by default. tryout latest 2629d1fa0b81b222fca63371ca16cbf6a0772d07759ff80e8d1369b926940074 23 hours ago 131.5 MB 5ed6274db6ceb2397844896966ea239290555e74ef307030ebb01ff91b1914df 24 hours ago 1.089 GB -## `import` +## import Usage: docker import URL|- [REPOSITORY[:TAG]] @@ -483,19 +481,19 @@ data from *stdin*. ### Examples -#### Import from a remote location +**Import from a remote location:** This will create a new untagged image. $ sudo docker import http://example.com/exampleimage.tgz -#### Import from a local file +**Import from a local file:** Import to docker via pipe and *stdin*. 
    $ cat exampleimage.tgz | sudo docker import - exampleimagelocal:new

-#### Import from a local directory
+**Import from a local directory:**

    $ sudo tar -c . | docker import - exampleimagedir

@@ -504,12 +502,12 @@ the ownership of the files (especially root ownership) during the
archiving with tar. If you are not root (or the sudo command) when you
tar, then the ownerships might not get preserved.

-## `info`
+## info
+
+Display system-wide information.

    Usage: docker info

-    Display system-wide information.
-
    $ sudo docker info
    Containers: 292
    Images: 194
    Debug mode (server) false
    Debug mode (client) false
    Fds: 22
    Goroutines: 67
    LXC Version: 0.9.0
    EventsListeners: 115
    Kernel Version: 3.8.0-33-generic
    WARNING: No swap limit support

-When sending issue reports, please use `docker version`
-and `docker info` to ensure we know how
-your setup is configured.
+When sending issue reports, please use `docker version` and `docker info` to
+ensure we know how your setup is configured.

-## `inspect`
+## inspect
+
+Return low-level information on a container/image

    Usage: docker inspect CONTAINER|IMAGE [CONTAINER|IMAGE...]

-    Return low-level information on a container/image
-
-    -f, --format="": Format the output using the given go template.
+    -f, --format="": Format the output using the given go template.

By default, this will render all results in a JSON array. If a format
is specified, the given template will be executed for each result.

-Go’s [text/template](http://golang.org/pkg/text/template/) package
+Go's [text/template](http://golang.org/pkg/text/template/) package
describes all the details of the format.

### Examples

-#### Get an instance’s IP Address
+**Get an instance's IP Address:**

For the most part, you can pick out any field from the JSON in a fairly
straightforward manner.

    $ sudo docker inspect --format='{{.NetworkSettings.IPAddress}}' $INSTANCE_ID

-#### List All Port Bindings
+**List All Port Bindings:**

One can loop over arrays and maps in the results to produce simple text
output:

    $ sudo docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID

-#### Find a Specific Port Mapping
+**Find a Specific Port Mapping:**

-The `.Field` syntax doesn’t work when the field name
-begins with a number, but the template language’s `index`
+The `.Field` syntax doesn't work when the field name
+begins with a number, but the template language's `index`
function does. The `.NetworkSettings.Ports` section
contains a map of the internal port mappings to a list of external
address/port objects, so to grab just the numeric public
port, you use `index` to find the specific port map, and
then `index` 0 contains the first object inside of it. Then
we ask for the `HostPort` field to get the public
address.

    $ sudo docker inspect --format='{{(index (index .NetworkSettings.Ports "8787/tcp") 0).HostPort}}' $INSTANCE_ID

-#### Get config
+**Get config:**

-The `.Field` syntax doesn’t work when the field
-contains JSON data, but the template language’s custom `json`
+The `.Field` syntax doesn't work when the field
+contains JSON data, but the template language's custom `json`
function does. The `.config` section contains a complex JSON
object, so to grab it as JSON, you use `json` to convert the
config object into JSON.

    $ sudo docker inspect --format='{{json .config}}' $INSTANCE_ID

-## `kill`
+## kill
+
+Kill a running container (send SIGKILL, or specified signal)

    Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
-    Kill a running container (send SIGKILL, or specified signal)
-
-    -s, --signal="KILL": Signal to send to the container
+    -s, --signal="KILL": Signal to send to the container

The main process inside the container will be sent SIGKILL, or any
signal specified with option `--signal`.

### Known Issues (kill)

-- [Issue 197](https://github.com/dotcloud/docker/issues/197) indicates
-  that `docker kill` may leave directories behind
-  and make it difficult to remove the container.
-- [Issue 3844](https://github.com/dotcloud/docker/issues/3844) lxc
-  1.0.0 beta3 removed `lcx-kill` which is used by
-  Docker versions before 0.8.0; see the issue for a workaround.
+- [Issue 197](https://github.com/dotcloud/docker/issues/197) indicates
+  that `docker kill` may leave directories behind
+  and make it difficult to remove the container.
+- [Issue 3844](https://github.com/dotcloud/docker/issues/3844) lxc
+  1.0.0 beta3 removed `lxc-kill` which is used by
+  Docker versions before 0.8.0; see the issue for a workaround.

-## `load`
+## load
+
+Load an image from a tar archive on STDIN

    Usage: docker load

-    Load an image from a tar archive on STDIN
-
-    -i, --input="": Read from a tar archive file, instead of STDIN
+    -i, --input="": Read from a tar archive file, instead of STDIN

Loads a tarred repository from a file or the standard input stream.
Restores both images and tags.

@@ -626,28 +623,28 @@ Restores both images and tags.
    fedora              heisenbug           58394af37342        7 weeks ago         385.5 MB
    fedora              latest              58394af37342        7 weeks ago         385.5 MB

-## `login`
+## login
+
+Register or Login to the docker registry server

    Usage: docker login [OPTIONS] [SERVER]

-    Register or Login to the docker registry server

    -e, --email="": Email
    -p, --password="": Password
    -u, --username="": Username

-    If you want to login to a private registry you can
-    specify this by adding the server name.
+If you want to log in to a private registry you can
+specify this by adding the server name.

    example: docker login localhost:8080

-## `logs`
+## logs
+
+Fetch the logs of a container

    Usage: docker logs [OPTIONS] CONTAINER

-    Fetch the logs of a container

    -f, --follow=false: Follow log output

The `docker logs` command batch-retrieves all logs
present at the time of execution.

The `docker logs --follow` command combines `docker logs` and
`docker attach`: it will first return all logs from the beginning and then
-continue streaming new output from the container’s stdout and stderr.
+continue streaming new output from the container's stdout and stderr.

-## `port`
+## port

    Usage: docker port [OPTIONS] CONTAINER PRIVATE_PORT

-    Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
+Lookup the public-facing port which is NAT-ed to PRIVATE_PORT

-## `ps`
+## ps
+
+List containers

    Usage: docker ps [OPTIONS]

-    List containers
-
-    -a, --all=false: Show all containers. Only running containers are shown by default.
+ --before="": Show only container created before Id or Name, include non-running ones. + -l, --latest=false: Show only the latest created container, include non-running ones. + -n=-1: Show n last created containers, include non-running ones. + --no-trunc=false: Don᾿t truncate output + -q, --quiet=false: Only display numeric IDs + -s, --size=false: Display sizes, not to be used with -q + --since="": Show only containers created since Id or Name, include non-running ones. Running `docker ps` showing 2 linked containers. @@ -685,21 +682,20 @@ Running `docker ps` showing 2 linked containers. 4c01db0b339c ubuntu:12.04 bash 17 seconds ago Up 16 seconds webapp d7886598dbe2 crosbymichael/redis:latest /redis-server --dir 33 minutes ago Up 33 minutes 6379/tcp redis,webapp/db -`docker ps` will show only running containers by -default. To see all containers: `docker ps -a` +`docker ps` will show only running containers by default. To see all containers: +`docker ps -a` -## `pull` +## pull + +Pull an image or a repository from the registry Usage: docker pull NAME[:TAG] - Pull an image or a repository from the registry - Most of your images will be created on top of a base image from the -\([https://index.docker.io](https://index.docker.io)). +Docker Index ([https://index.docker.io](https://index.docker.io)). The Docker Index contains many pre-built images that you can -`pull` and try without needing to define and -configure your own. +`pull` and try without needing to define and configure your own. To download a particular image, or set of images (i.e., a repository), use `docker pull`: @@ -711,31 +707,32 @@ use `docker pull`: # it is based on. (typically the empty `scratch` image, a MAINTAINERs layer, # and the un-tared base. -## `push` +## push + +Push an image or a repository to the registry Usage: docker push NAME[:TAG] - Push an image or a repository to the registry - Use `docker push` to share your images on public or private registries. -## `restart` +## restart + +Restart a running container Usage: docker restart [OPTIONS] NAME - Restart a running container + -t, --time=10: Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default=10 - -t, --time=10: Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default=10 +## rm -## `rm` +Remove one or more containers Usage: docker rm [OPTIONS] CONTAINER - Remove one or more containers - -l, --link="": Remove the link instead of the actual container - -f, --force=false: Force removal of running container - -v, --volumes=false: Remove the volumes associated to the container + -l, --link="": Remove the link instead of the actual container + -f, --force=false: Force removal of running container + -v, --volumes=false: Remove the volumes associated to the container ### Known Issues (rm) @@ -765,18 +762,18 @@ This command will delete all stopped containers. The command IDs and pass them to the `rm` command which will delete them. Any running containers will not be deleted. -## `rmi` +## rmi + +Remove one or more images Usage: docker rmi IMAGE [IMAGE...] - Remove one or more images - - -f, --force=false: Force - --no-prune=false: Do not delete untagged parents + -f, --force=false: Force + --no-prune=false: Do not delete untagged parents ### Removing tagged images -Images can be removed either by their short or long ID’s, or their image +Images can be removed either by their short or long ID`s, or their image names. 
If an image has more than one name, each of them needs to be removed before the image is removed. @@ -802,86 +799,79 @@ removed before the image is removed. Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 -## `run` +## run + +Run a command in a new container Usage: docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] - Run a command in a new container + -a, --attach=map[]: Attach to stdin, stdout or stderr + -c, --cpu-shares=0: CPU shares (relative weight) + --cidfile="": Write the container ID to the file + -d, --detach=false: Detached mode: Run container in the background, print new container id + -e, --env=[]: Set environment variables + --env-file="": Read in a line delimited file of ENV variables + -h, --hostname="": Container host name + -i, --interactive=false: Keep stdin open even if not attached + --privileged=false: Give extended privileges to this container + -m, --memory="": Memory limit (format: , where unit = b, k, m or g) + -n, --networking=true: Enable networking for this container + -p, --publish=[]: Map a network port to the container + --rm=false: Automatically remove the container when it exits (incompatible with -d) + -t, --tty=false: Allocate a pseudo-tty + -u, --user="": Username or UID + --dns=[]: Set custom dns servers for the container + --dns-search=[]: Set custom DNS search domains for the container + -v, --volume=[]: Create a bind mount to a directory or file with: [host-path]:[container-path]:[rw|ro]. If a directory "container-path" is missing, then docker creates a new volume. + --volumes-from="": Mount all volumes from the given container(s) + --entrypoint="": Overwrite the default entrypoint set by the image + -w, --workdir="": Working directory inside the container + --lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" + --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode) + --expose=[]: Expose a port from the container without publishing it to your host + --link="": Add link to another container (name:alias) + --name="": Assign the specified name to the container. If no name is specific docker will generate a random name + -P, --publish-all=false: Publish all exposed ports to the host interfaces - -a, --attach=map[]: Attach to stdin, stdout or stderr - -c, --cpu-shares=0: CPU shares (relative weight) - --cidfile="": Write the container ID to the file - -d, --detach=false: Detached mode: Run container in the background, print new container id - -e, --env=[]: Set environment variables - --env-file="": Read in a line delimited file of ENV variables - -h, --hostname="": Container host name - -i, --interactive=false: Keep stdin open even if not attached - --privileged=false: Give extended privileges to this container - -m, --memory="": Memory limit (format: , where unit = b, k, m or g) - -n, --networking=true: Enable networking for this container - -p, --publish=[]: Map a network port to the container - --rm=false: Automatically remove the container when it exits (incompatible with -d) - -t, --tty=false: Allocate a pseudo-tty - -u, --user="": Username or UID - --dns=[]: Set custom dns servers for the container - --dns-search=[]: Set custom DNS search domains for the container - -v, --volume=[]: Create a bind mount to a directory or file with: [host-path]:[container-path]:[rw|ro]. If a directory "container-path" is missing, then docker creates a new volume. 
- --volumes-from="": Mount all volumes from the given container(s) - --entrypoint="": Overwrite the default entrypoint set by the image - -w, --workdir="": Working directory inside the container - --lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" - --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode) - --expose=[]: Expose a port from the container without publishing it to your host - --link="": Add link to another container (name:alias) - --name="": Assign the specified name to the container. If no name is specific docker will generate a random name - -P, --publish-all=false: Publish all exposed ports to the host interfaces +The `docker run` command first `creates` a writeable container layer over the +specified image, and then `starts` it using the specified command. That is, +`docker run` is equivalent to the API `/containers/create` then +`/containers/(id)/start`. A stopped container can be restarted with all its +previous changes intact using `docker start`. See `docker ps -a` to view a list +of all containers. -The `docker run` command first `creates` -a writeable container layer over the specified image, and then -`starts` it using the specified command. That is, -`docker run` is equivalent to the API -`/containers/create` then -`/containers/(id)/start`. A stopped container can be -restarted with all its previous changes intact using -`docker start`. See `docker ps -a` -to view a list of all containers. +The `docker run` command can be used in combination with `docker commit` to +[*change the command that a container runs*](#commit-an-existing-container). -The `docker run` command can be used in combination -with `docker commit` to [*change the command that a -container runs*](#commit-an-existing-container). - -See [*Redirect Ports*](../../../use/port_redirection/#port-redirection) -for more detailed information about the `--expose`, -`-p`, `-P` and -`--link` parameters, and [*Link -Containers*](../../../use/working_with_links_names/#working-with-links-names) -for specific examples using `--link`. +See [*Redirect Ports*](/use/port_redirection/#port-redirection) +for more detailed information about the `--expose`, `-p`, `-P` and `--link` +parameters, and [*Link Containers*]( +/use/working_with_links_names/#working-with-links-names) for specific +examples using `--link`. ### Known Issues (run –volumes-from) -- [Issue 2702](https://github.com/dotcloud/docker/issues/2702): - "lxc-start: Permission denied - failed to mount" could indicate a - permissions problem with AppArmor. Please see the issue for a - workaround. +- [Issue 2702](https://github.com/dotcloud/docker/issues/2702): + "lxc-start: Permission denied - failed to mount" could indicate a + permissions problem with AppArmor. Please see the issue for a + workaround. ### Examples: $ sudo docker run --cidfile /tmp/docker_test.cid ubuntu echo "test" -This will create a container and print `test` to the -console. The `cidfile` flag makes Docker attempt to -create a new file and write the container ID to it. If the file exists -already, Docker will return an error. Docker will close this file when -`docker run` exits. +This will create a container and print `test` to the console. The `cidfile` +flag makes Docker attempt to create a new file and write the container ID to it. +If the file exists already, Docker will return an error. Docker will close this +file when `docker run` exits. 
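+
+The recorded ID can then be fed to later commands. As a small illustration
+(reusing the `cidfile` from the example above), the container's output could be
+fetched with:
+
+    $ sudo docker logs `cat /tmp/docker_test.cid`
+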
     $ sudo docker run -t -i --rm ubuntu bash
     root@bc338942ef20:/# mount -t tmpfs none /mnt
     mount: permission denied

-This will *not* work, because by default, most potentially dangerous
-kernel capabilities are dropped; including `cap_sys_admin`
-(which is required to mount filesystems). However, the
-`--privileged` flag will allow it to run:
+This will *not* work, because by default, most potentially dangerous kernel
+capabilities are dropped, including `cap_sys_admin` (which is required to mount
+filesystems). However, the `--privileged` flag will allow it to run:

     $ sudo docker run --privileged ubuntu bash
     root@50e3f57e16e6:/# mount -t tmpfs none /mnt
@@ -889,30 +879,27 @@ kernel capabilities are dropped; including `cap_sys_admin`
     Filesystem Size Used Avail Use% Mounted on
     none 1.9G 0 1.9G 0% /mnt

-The `--privileged` flag gives *all* capabilities to
-the container, and it also lifts all the limitations enforced by the
-`device` cgroup controller. In other words, the
-container can then do almost everything that the host can do. This flag
-exists to allow special use-cases, like running Docker within Docker.
+The `--privileged` flag gives *all* capabilities to the container, and it also
+lifts all the limitations enforced by the `device` cgroup controller. In other
+words, the container can then do almost everything that the host can do. This
+flag exists to allow special use-cases, like running Docker within Docker.

     $ sudo docker run -w /path/to/dir/ -i -t ubuntu pwd

-The `-w` lets the command being executed inside
-directory given, here `/path/to/dir/`. If the path
-does not exists it is created inside the container.
+The `-w` flag runs the command inside the given directory, here
+`/path/to/dir/`. If the path does not exist, it is created inside the container.

     $ sudo docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd

-The `-v` flag mounts the current working directory
-into the container. The `-w` lets the command being
-executed inside the current working directory, by changing into the
-directory to the value returned by `pwd`. So this
+The `-v` flag mounts the current working directory into the container. The `-w`
+flag runs the command inside the current working directory, by changing
+into the directory returned by `pwd`. So this
 combination executes the command using the container, but inside the
 current working directory.

     $ sudo docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash

-When the host directory of a bind-mounted volume doesn’t exist, Docker
+When the host directory of a bind-mounted volume doesn't exist, Docker
 will automatically create this directory on the host for you. In the
 example above, Docker will create the `/doesnt/exist`
 folder before starting your container.
@@ -920,49 +907,43 @@ folder before starting your container.

     $ sudo docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v ./static-docker:/usr/bin/docker busybox sh

 By bind-mounting the docker unix socket and statically linked docker
-binary (such as that provided by
-[https://get.docker.io](https://get.docker.io)), you give the container
-the full access to create and manipulate the host’s docker daemon.
+binary (such as that provided by [https://get.docker.io](
+https://get.docker.io)), you give the container full access to create and
+manipulate the host's docker daemon.

     $ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash

-This binds port `8080` of the container to port
-`80` on `127.0.0.1` of the host
-machine. 
[*Redirect
-Ports*](../../../use/port_redirection/#port-redirection) explains in
-detail how to manipulate ports in Docker.
+This binds port `8080` of the container to port `80` on `127.0.0.1` of the host
+machine. [*Redirect Ports*](/use/port_redirection/#port-redirection)
+explains in detail how to manipulate ports in Docker.

     $ sudo docker run --expose 80 ubuntu bash

-This exposes port `80` of the container for use
-within a link without publishing the port to the host system’s
-interfaces. [*Redirect
-Ports*](../../../use/port_redirection/#port-redirection) explains in
-detail how to manipulate ports in Docker.
+This exposes port `80` of the container for use within a link without publishing
+the port to the host system's interfaces. [*Redirect Ports*](
+/use/port_redirection/#port-redirection) explains in detail how to
+manipulate ports in Docker.

     $ sudo docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash

-This sets environmental variables in the container. For illustration all
-three flags are shown here. Where `-e`,
-`--env` take an environment variable and value, or
-if no "=" is provided, then that variable’s current value is passed
-through (i.e. $MYVAR1 from the host is set to $MYVAR1 in the
-container). All three flags, `-e`, `--env`
-and `--env-file` can be repeated.
+This sets environment variables in the container. For illustration, all three
+flags are shown here. `-e` and `--env` take an environment variable and
+value, or, if no "=" is provided, the variable's current value is passed
+through (i.e. $MYVAR1 from the host is set to $MYVAR1 in the container). All
+three flags, `-e`, `--env` and `--env-file`, can be repeated.

-Regardless of the order of these three flags, the `--env-file`
-are processed first, and then `-e`, `--env` flags. This way, the
-`-e` or `--env` will override variables as needed.
+Regardless of the order of these three flags, the `--env-file` is processed
+first, and then the `-e` and `--env` flags. This way, `-e` or `--env` will
+override variables as needed.

     $ cat ./env.list
     TEST_FOO=BAR
     $ sudo docker run --env TEST_FOO="This is a test" --env-file ./env.list busybox env | grep TEST_FOO
     TEST_FOO=This is a test

-The `--env-file` flag takes a filename as an
-argument and expects each line to be in the VAR=VAL format, mimicking
-the argument passed to `--env`. Comment lines need
-only be prefixed with `#`
+The `--env-file` flag takes a filename as an argument and expects each line
+to be in the VAR=VAL format, mimicking the argument passed to `--env`. Comment
+lines need only be prefixed with `#`.

 An example of a file passed with `--env-file`

@@ -991,48 +972,44 @@ This will create and run a new container with the container name being

     $ sudo docker run --link /redis:redis --name console ubuntu bash

-The `--link` flag will link the container named
-`/redis` into the newly created container with the
-alias `redis`. The new container can access the
-network and environment of the redis container via environment
-variables. The `--name` flag will assign the name
-`console` to the newly created container.
+The `--link` flag will link the container named `/redis` into the newly
+created container with the alias `redis`. The new container can access the
+network and environment of the redis container via environment variables.
+The `--name` flag will assign the name `console` to the newly created
+container. 
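+
+As a quick illustration (assuming the `/redis` container above is running), the
+environment variables that `--link` injects can be listed from inside the new
+container:
+
+    $ sudo docker run --link /redis:redis busybox env
+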
     $ sudo docker run --volumes-from 777f7dc92da7,ba8c0c54f0f2:ro -i -t ubuntu pwd

-The `--volumes-from` flag mounts all the defined
-volumes from the referenced containers. Containers can be specified by a
-comma separated list or by repetitions of the `--volumes-from`
-argument. The container ID may be optionally suffixed with
-`:ro` or `:rw` to mount the
-volumes in read-only or read-write mode, respectively. By default, the
-volumes are mounted in the same mode (read write or read only) as the
-reference container.
+The `--volumes-from` flag mounts all the defined volumes from the referenced
+containers. Containers can be specified by a comma separated list or by
+repetitions of the `--volumes-from` argument. The container ID may be
+optionally suffixed with `:ro` or `:rw` to mount the volumes in read-only
+or read-write mode, respectively. By default, the volumes are mounted in
+the same mode (read write or read only) as the reference container.

-The `-a` flag tells `docker run`
-to bind to the container’s stdin, stdout or stderr. This makes it
-possible to manipulate the output and input as needed.
+The `-a` flag tells `docker run` to bind to the container's stdin, stdout or
+stderr. This makes it possible to manipulate the output and input as needed.

     $ sudo echo "test" | docker run -i -a stdin ubuntu cat -

-This pipes data into a container and prints the container’s ID by
-attaching only to the container’s stdin.
+This pipes data into a container and prints the container's ID by attaching
+only to the container's stdin.

     $ sudo docker run -a stderr ubuntu echo test

-This isn’t going to print anything unless there’s an error because we’ve
-only attached to the stderr of the container. The container’s logs still
-store what’s been written to stderr and stdout.
+This isn't going to print anything unless there's an error because we've
+only attached to the stderr of the container. The container's logs still
+store what's been written to stderr and stdout.

     $ sudo cat somefile | docker run -i -a stdin mybuilder dobuild

 This is how piping a file into a container could be done for a build.
-The container’s ID will be printed after the build is done and the build
+The container's ID will be printed after the build is done and the build
 logs could be retrieved using `docker logs`. This is
 useful if you need to pipe a file or something else into a container and
-retrieve the container’s ID once the container has finished running.
+retrieve the container's ID once the container has finished running.

-#### A complete example
+**A complete example:**

     $ sudo docker run -d --name static static-web-files sh
     $ sudo docker run -d --expose=8098 --name riak riakserver
@@ -1043,45 +1020,33 @@ retrieve the container’s ID once the container has finished running.
 This example shows 5 containers that might be set up to test a web
 application change:

-1. Start a pre-prepared volume image `static-web-files`
-    (in the background) that has CSS, image and static HTML in
-    it, (with a `VOLUME` instruction in the
-    `Dockerfile` to allow the web server to use
-    those files);
-2. Start a pre-prepared `riakserver` image, give
-    the container name `riak` and expose port
-    `8098` to any containers that link to it;
-3. Start the `appserver` image, restricting its
-    memory usage to 100MB, setting two environment variables
-    `DEVELOPMENT` and `BRANCH`
-    and bind-mounting the current directory (`$(pwd)`
-) in the container in read-only mode as
-    `/app/bin`;
-4. 
Start the `webserver`, mapping port - `443` in the container to port `1443` - on the Docker server, setting the DNS server to - `dns.dev.org` and DNS search domain to - `dev.org`, creating a volume to put the log - files into (so we can access it from another container), then - importing the files from the volume exposed by the - `static` container, and linking to all exposed - ports from `riak` and `app`. - Lastly, we set the hostname to `web.sven.dev.org` - so its consistent with the pre-generated SSL certificate; -5. Finally, we create a container that runs - `tail -f access.log` using the logs volume from - the `web` container, setting the workdir to - `/var/log/httpd`. The `--rm` - option means that when the container exits, the container’s layer is - removed. +1. Start a pre-prepared volume image `static-web-files` (in the background) + that has CSS, image and static HTML in it, (with a `VOLUME` instruction in + the Dockerfile to allow the web server to use those files); +2. Start a pre-prepared `riakserver` image, give the container name `riak` and + expose port `8098` to any containers that link to it; +3. Start the `appserver` image, restricting its memory usage to 100MB, setting + two environment variables `DEVELOPMENT` and `BRANCH` and bind-mounting the + current directory (`$(pwd)`) in the container in read-only mode as `/app/bin`; +4. Start the `webserver`, mapping port `443` in the container to port `1443` on + the Docker server, setting the DNS server to `dns.dev.org` and DNS search + domain to `dev.org`, creating a volume to put the log files into (so we can + access it from another container), then importing the files from the volume + exposed by the `static` container, and linking to all exposed ports from + `riak` and `app`. Lastly, we set the hostname to `web.sven.dev.org` so its + consistent with the pre-generated SSL certificate; +5. Finally, we create a container that runs `tail -f access.log` using the logs + volume from the `web` container, setting the workdir to `/var/log/httpd`. The + `--rm` option means that when the container exits, the container's layer is + removed. -## `save` +## save + +Save an image to a tar archive (streamed to stdout by default) Usage: docker save IMAGE - Save an image to a tar archive (streamed to stdout by default) - - -o, --output="": Write to an file, instead of STDOUT + -o, --output="": Write to an file, instead of STDOUT Produces a tarred repository to the standard output stream. Contains all parent layers, and all tags + versions, or specified repo:tag. @@ -1098,65 +1063,65 @@ It is used to create a backup that can then be used with $ sudo docker save -o fedora-all.tar fedora $ sudo docker save -o fedora-latest.tar fedora:latest -## `search` +## search + +Search the docker index for images Usage: docker search TERM - Search the docker index for images - --no-trunc=false: Don᾿t truncate output -s, --stars=0: Only displays with at least xxx stars -t, --trusted=false: Only show trusted builds -See [*Find Public Images on the Central -Index*](../../../use/workingwithrepository/#searching-central-index) for +See [*Find Public Images on the Central Index*]( +/use/workingwithrepository/#searching-central-index) for more details on finding shared images from the commandline. 
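+
+For example, to list only images matching a term that have at least 3 stars
+(the term `ubuntu` here is just an illustration):
+
+    $ sudo docker search -s 3 ubuntu
+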
-## `start` +## start + +Start a stopped container Usage: docker start [OPTIONS] CONTAINER - Start a stopped container - -a, --attach=false: Attach container᾿s stdout/stderr and forward all signals to the process -i, --interactive=false: Attach container᾿s stdin -## `stop` +## stop + +Stop a running container (Send SIGTERM, and then SIGKILL after grace period) Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] - Stop a running container (Send SIGTERM, and then SIGKILL after grace period) - - -t, --time=10: Number of seconds to wait for the container to stop before killing it. + -t, --time=10: Number of seconds to wait for the container to stop before killing it. The main process inside the container will receive SIGTERM, and after a grace period, SIGKILL -## `tag` +## tag + +Tag an image into a repository Usage: docker tag [OPTIONS] IMAGE [REGISTRYHOST/][USERNAME/]NAME[:TAG] - Tag an image into a repository - - -f, --force=false: Force + -f, --force=false: Force You can group your images together using names and tags, and then upload -them to [*Share Images via -Repositories*](../../../use/workingwithrepository/#working-with-the-repository). +them to [*Share Images via Repositories*]( +/use/workingwithrepository/#working-with-the-repository). -## `top` +## top Usage: docker top CONTAINER [ps OPTIONS] - Lookup the running processes of a container +Lookup the running processes of a container -## `version` +## version Show the version of the Docker client, daemon, and latest released version. -## `wait` +## wait Usage: docker wait [OPTIONS] NAME - Block until a container stops, then print its exit code. +Block until a container stops, then print its exit code. diff --git a/docs/sources/reference/run.md b/docs/sources/reference/run.md index 236b8065b8..9de08ec1a6 100644 --- a/docs/sources/reference/run.md +++ b/docs/sources/reference/run.md @@ -2,67 +2,68 @@ page_title: Docker Run Reference page_description: Configure containers at runtime page_keywords: docker, run, configure, runtime -# [Docker Run Reference](#id2) +# Docker Run Reference **Docker runs processes in isolated containers**. When an operator executes `docker run`, she starts a process with its own file system, its own networking, and its own isolated process tree. -The [*Image*](../../terms/image/#image-def) which starts the process may +The [*Image*](/terms/image/#image-def) which starts the process may define defaults related to the binary to run, the networking to expose, and more, but `docker run` gives final control to -the operator who starts the container from the image. That’s the main -reason [*run*](../commandline/cli/#cli-run) has more options than any +the operator who starts the container from the image. That's the main +reason [*run*](/commandline/cli/#cli-run) has more options than any other `docker` command. -Every one of the [*Examples*](../../examples/#example-list) shows +Every one of the [*Examples*](/examples/#example-list) shows running containers, and so here we try to give more in-depth guidance. -## [General Form](#id3) +## General Form -As you’ve seen in the [*Examples*](../../examples/#example-list), the +As you`ve seen in the [*Examples*](/examples/#example-list), the basic run command takes this form: docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] To learn how to interpret the types of `[OPTIONS]`, -see [*Option types*](../commandline/cli/#cli-options). +see [*Option types*](/commandline/cli/#cli-options). The list of `[OPTIONS]` breaks down into two groups: -1. 
Settings exclusive to operators, including: - - Detached or Foreground running, - - Container Identification, - - Network settings, and - - Runtime Constraints on CPU and Memory - - Privileges and LXC Configuration +1. Settings exclusive to operators, including: -2. Setting shared between operators and developers, where operators can - override defaults developers set in images at build time. + - Detached or Foreground running, + - Container Identification, + - Network settings, and + - Runtime Constraints on CPU and Memory + - Privileges and LXC Configuration + +2. Setting shared between operators and developers, where operators can + override defaults developers set in images at build time. Together, the `docker run [OPTIONS]` give complete control over runtime behavior to the operator, allowing them to override all defaults set by the developer during `docker build` and nearly all the defaults set by the Docker runtime itself. -## [Operator Exclusive Options](#id4) +## Operator Exclusive Options Only the operator (the person executing `docker run`) can set the following options. -- [Detached vs Foreground](#detached-vs-foreground) - - [Detached (-d)](#detached-d) - - [Foreground](#foreground) -- [Container Identification](#container-identification) - - [Name (–name)](#name-name) - - [PID Equivalent](#pid-equivalent) -- [Network Settings](#network-settings) -- [Clean Up (–rm)](#clean-up-rm) -- [Runtime Constraints on CPU and + - [Detached vs Foreground](#detached-vs-foreground) + - [Detached (-d)](#detached-d) + - [Foreground](#foreground) + - [Container Identification](#container-identification) + - [Name (–name)](#name-name) + - [PID Equivalent](#pid-equivalent) + - [Network Settings](#network-settings) + - [Clean Up (–rm)](#clean-up-rm) + - [Runtime Constraints on CPU and Memory](#runtime-constraints-on-cpu-and-memory) -- [Runtime Privilege and LXC + - [Runtime Privilege and LXC Configuration](#runtime-privilege-and-lxc-configuration) -### [Detached vs Foreground](#id2) +## Detached vs Foreground When starting a Docker container, you must first decide if you want to run the container in the background in a "detached" mode or in the @@ -70,73 +71,70 @@ default foreground mode: -d=false: Detached mode: Run container in the background, print new container id -#### [Detached (-d)](#id3) +### Detached (-d) In detached mode (`-d=true` or just `-d`), all I/O should be done through network connections or shared volumes because the container is no longer listening to the commandline where you executed `docker run`. You can reattach to a detached container with `docker` -[*attach*](../commandline/cli/#cli-attach). If you choose to run a +[*attach*](commandline/cli/#attach). If you choose to run a container in the detached mode, then you cannot use the `--rm` option. -#### [Foreground](#id4) +### Foreground -In foreground mode (the default when `-d` is not -specified), `docker run` can start the process in -the container and attach the console to the process’s standard input, -output, and standard error. It can even pretend to be a TTY (this is -what most commandline executables expect) and pass along signals. All of -that is configurable: +In foreground mode (the default when `-d` is not specified), `docker run` +can start the process in the container and attach the console to the process's +standard input, output, and standard error. It can even pretend to be a TTY +(this is what most commandline executables expect) and pass along signals. 
All +of that is configurable: -a=[] : Attach to ``stdin``, ``stdout`` and/or ``stderr`` -t=false : Allocate a pseudo-tty --sig-proxy=true: Proxify all received signal to the process (even in non-tty mode) -i=false : Keep STDIN open even if not attached -If you do not specify `-a` then Docker will [attach -everything -(stdin,stdout,stderr)](https://github.com/dotcloud/docker/blob/75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797). -You can specify to which of the three standard streams -(`stdin`, `stdout`, -`stderr`) you’d like to connect instead, as in: +If you do not specify `-a` then Docker will [attach everything (stdin,stdout,stderr)]( +https://github.com/dotcloud/docker/blob/ +75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797). You can specify to which +of the three standard streams (`stdin`, `stdout`, `stderr`) you'd like to connect +instead, as in: docker run -a stdin -a stdout -i -t ubuntu /bin/bash -For interactive processes (like a shell) you will typically want a tty -as well as persistent standard input (`stdin`), so -you’ll use `-i -t` together in most interactive -cases. +For interactive processes (like a shell) you will typically want a tty as well as +persistent standard input (`stdin`), so you'll use `-i -t` together in most +interactive cases. -### [Container Identification](#id5) +## Container Identification -#### [Name (–name)](#id6) +### Name (–name) The operator can identify a container in three ways: - UUID long identifier ("f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778") - UUID short identifier ("f78375b1c487") -- Name ("evil\_ptolemy") +- Name ("evil_ptolemy") The UUID identifiers come from the Docker daemon, and if you do not assign a name to the container with `--name` then the daemon will also generate a random string name too. The name can become a handy way to add meaning to a container since you can use this name when defining -[*links*](../../use/working_with_links_names/#working-with-links-names) +[*links*](/use/working_with_links_names/#working-with-links-names) (or any other place you need to identify a container). This works for both background and foreground Docker containers. -#### [PID Equivalent](#id7) +### PID Equivalent And finally, to help with automation, you can have Docker write the container ID out to a file of your choosing. This is similar to how some -programs might write out their process ID to a file (you’ve seen them as +programs might write out their process ID to a file (you`ve seen them as PID files): --cidfile="": Write the container ID to the file -### [Network Settings](#id8) +## Network Settings -n=true : Enable networking for this container --dns=[] : Set custom dns servers for the container @@ -150,19 +148,19 @@ files or STDIN/STDOUT only. Your container will use the same DNS servers as the host by default, but you can override this with `--dns`. -### [Clean Up (–rm)](#id9) +## Clean Up (–rm) -By default a container’s file system persists even after the container +By default a container's file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term **foreground** processes, these container file -systems can really pile up. If instead you’d like Docker to +systems can really pile up. 
If instead you'd like Docker to **automatically clean up the container and remove the file system when the container exits**, you can add the `--rm` flag: --rm=false: Automatically remove the container when it exits (incompatible with -d) -### [Runtime Constraints on CPU and Memory](#id10) +## Runtime Constraints on CPU and Memory The operator can also adjust the performance parameters of the container: @@ -181,7 +179,7 @@ the same priority and get the same proportion of CPU cycles, but you can tell the kernel to give more shares of CPU time to one or more containers when you start them via Docker. -### [Runtime Privilege and LXC Configuration](#id11) +## Runtime Privilege and LXC Configuration --privileged=false: Give extended privileges to this container --lxc-conf=[]: (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" @@ -189,71 +187,63 @@ containers when you start them via Docker. By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. This is because by default a container is not allowed to access any devices, but a -"privileged" container is given access to all devices (see -[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go) -and documentation on [cgroups -devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)). +"privileged" container is given access to all devices (see [lxc-template.go]( +https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go) +and documentation on [cgroups devices]( +https://www.kernel.org/doc/Documentation/cgroups/devices.txt)). When the operator executes `docker run --privileged`, Docker will enable to access to all devices on the host as well as set some configuration in AppArmor to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with `--privileged` is available on the -[Docker -Blog](http://blog.docker.io/2013/09/docker-can-now-run-within-docker/). +[Docker Blog](http://blog.docker.io/2013/09/docker-can-now-run-within-docker/). -If the Docker daemon was started using the `lxc` -exec-driver (`docker -d --exec-driver=lxc`) then the -operator can also specify LXC options using one or more -`--lxc-conf` parameters. These can be new parameters -or override existing parameters from the -[lxc-template.go](https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go). -Note that in the future, a given host’s Docker daemon may not use LXC, -so this is an implementation-specific configuration meant for operators -already familiar with using LXC directly. +If the Docker daemon was started using the `lxc` exec-driver +(`docker -d --exec-driver=lxc`) then the operator can also specify LXC options +using one or more `--lxc-conf` parameters. These can be new parameters or +override existing parameters from the [lxc-template.go]( +https://github.com/dotcloud/docker/blob/master/execdriver/lxc/lxc_template.go). +Note that in the future, a given host's docker daemon may not use LXC, so this +is an implementation-specific configuration meant for operators already +familiar with using LXC directly. 
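+
+For example (assuming the daemon really was started with the `lxc`
+exec-driver), an operator could pin a container to the first two CPU cores:
+
+    $ sudo docker run --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" -i -t ubuntu bash
+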
-## Overriding `Dockerfile` Image Defaults +## Overriding Dockerfile Image Defaults -When a developer builds an image from a -[*Dockerfile*](../builder/#dockerbuilder) or when she commits it, the -developer can set a number of default parameters that take effect when -the image starts up as a container. +When a developer builds an image from a [*Dockerfile*](builder/#dockerbuilder) +or when she commits it, the developer can set a number of default parameters +that take effect when the image starts up as a container. -Four of the `Dockerfile` commands cannot be -overridden at runtime: `FROM, MAINTAINER, RUN`, and -`ADD`. Everything else has a corresponding override -in `docker run`. We’ll go through what the developer -might have set in each `Dockerfile` instruction and -how the operator can override that setting. +Four of the Dockerfile commands cannot be overridden at runtime: `FROM`, +`MAINTAINER`, `RUN`, and `ADD`. Everything else has a corresponding override +in `docker run`. We'll go through what the developer might have set in each +Dockerfile instruction and how the operator can override that setting. -- [CMD (Default Command or Options)](#cmd-default-command-or-options) -- [ENTRYPOINT (Default Command to Execute at - Runtime](#entrypoint-default-command-to-execute-at-runtime) -- [EXPOSE (Incoming Ports)](#expose-incoming-ports) -- [ENV (Environment Variables)](#env-environment-variables) -- [VOLUME (Shared Filesystems)](#volume-shared-filesystems) -- [USER](#user) -- [WORKDIR](#workdir) + - [CMD (Default Command or Options)](#cmd-default-command-or-options) + - [ENTRYPOINT (Default Command to Execute at Runtime]( + #entrypoint-default-command-to-execute-at-runtime) + - [EXPOSE (Incoming Ports)](#expose-incoming-ports) + - [ENV (Environment Variables)](#env-environment-variables) + - [VOLUME (Shared Filesystems)](#volume-shared-filesystems) + - [USER](#user) + - [WORKDIR](#workdir) -### [CMD (Default Command or Options)](#id12) +## CMD (Default Command or Options) Recall the optional `COMMAND` in the Docker commandline: docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] -This command is optional because the person who created the -`IMAGE` may have already provided a default -`COMMAND` using the `Dockerfile` -`CMD`. As the operator (the person running a -container from the image), you can override that `CMD` -just by specifying a new `COMMAND`. +This command is optional because the person who created the `IMAGE` may have +already provided a default `COMMAND` using the Dockerfile `CMD`. As the +operator (the person running a container from the image), you can override that +`CMD` just by specifying a new `COMMAND`. -If the image also specifies an `ENTRYPOINT` then the -`CMD` or `COMMAND` get appended -as arguments to the `ENTRYPOINT`. +If the image also specifies an `ENTRYPOINT` then the `CMD` or `COMMAND` get +appended as arguments to the `ENTRYPOINT`. 
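+
+For example, whatever default `CMD` the `ubuntu` image ships with, the operator
+can simply supply a different command (a trivial illustration):
+
+    $ sudo docker run ubuntu /bin/echo "this overrides the default CMD"
+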
-### [ENTRYPOINT (Default Command to Execute at Runtime](#id13) +## ENTRYPOINT (Default Command to Execute at Runtime --entrypoint="": Overwrite the default entrypoint set by the image @@ -276,13 +266,12 @@ or two examples of how to pass more parameters to that ENTRYPOINT: docker run -i -t --entrypoint /bin/bash example/redis -c ls -l docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help -### [EXPOSE (Incoming Ports)](#id14) +## EXPOSE (Incoming Ports) -The `Dockerfile` doesn’t give much control over -networking, only providing the `EXPOSE` instruction -to give a hint to the operator about what incoming ports might provide -services. The following options work with or override the -`Dockerfile`‘s exposed defaults: +The Dockerfile doesn't give much control over networking, only providing the +`EXPOSE` instruction to give a hint to the operator about what incoming ports +might provide services. The following options work with or override the +Dockerfile's exposed defaults: --expose=[]: Expose a port from the container without publishing it to your host @@ -293,40 +282,34 @@ services. The following options work with or override the (use 'docker port' to see the actual mapping) --link="" : Add link to another container (name:alias) -As mentioned previously, `EXPOSE` (and -`--expose`) make a port available **in** a container -for incoming connections. The port number on the inside of the container -(where the service listens) does not need to be the same number as the -port exposed on the outside of the container (where clients connect), so -inside the container you might have an HTTP service listening on port 80 -(and so you `EXPOSE 80` in the -`Dockerfile`), but outside the container the port -might be 42800. +As mentioned previously, `EXPOSE` (and `--expose`) make a port available **in** +a container for incoming connections. The port number on the inside of the +container (where the service listens) does not need to be the same number as the +port exposed on the outside of the container (where clients connect), so inside +the container you might have an HTTP service listening on port 80 (and so you +`EXPOSE 80` in the Dockerfile), but outside the container the port might be +42800. -To help a new client container reach the server container’s internal -port operator `--expose`‘d by the operator or -`EXPOSE`‘d by the developer, the operator has three -choices: start the server container with `-P` or -`-p,` or start the client container with -`--link`. +To help a new client container reach the server container's internal port +operator `--expose``d by the operator or `EXPOSE``d by the developer, the +operator has three choices: start the server container with `-P` or `-p,` or +start the client container with `--link`. -If the operator uses `-P` or `-p` -then Docker will make the exposed port accessible on the host -and the ports will be available to any client that can reach the host. -To find the map between the host ports and the exposed ports, use -`docker port`) +If the operator uses `-P` or `-p` then Docker will make the exposed port +accessible on the host and the ports will be available to any client that +can reach the host. To find the map between the host ports and the exposed +ports, use `docker port`) -If the operator uses `--link` when starting the new -client container, then the client container can access the exposed port -via a private networking interface. 
Docker will set some environment -variables in the client container to help indicate which interface and -port to use. +If the operator uses `--link` when starting the new client container, then the +client container can access the exposed port via a private networking interface. +Docker will set some environment variables in the client container to help +indicate which interface and port to use. -### [ENV (Environment Variables)](#id15) +## ENV (Environment Variables) -The operator can **set any environment variable** in the container by -using one or more `-e` flags, even overriding those -already defined by the developer with a Dockefile `ENV`: +The operator can **set any environment variable** in the container by using one +or more `-e` flags, even overriding those already defined by the developer with +a Dockefile `ENV`: $ docker run -e "deep=purple" --rm ubuntu /bin/bash -c export declare -x HOME="/" @@ -340,10 +323,10 @@ already defined by the developer with a Dockefile `ENV`: Similarly the operator can set the **hostname** with `-h`. -`--link name:alias` also sets environment variables, -using the *alias* string to define environment variables within the -container that give the IP and PORT information for connecting to the -service container. Let’s imagine we have a container running Redis: +`--link name:alias` also sets environment variables, using the *alias* string to +define environment variables within the container that give the IP and PORT +information for connecting to the service container. Let's imagine we have a +container running Redis: # Start the service container, named redis-name $ docker run -d --name redis-name dockerfiles/redis @@ -358,7 +341,7 @@ service container. Let’s imagine we have a container running Redis: $ docker port 4241164edf6f 6379 2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f -Yet we can get information about the Redis container’s exposed ports +Yet we can get information about the Redis container'sexposed ports with `--link`. Choose an alias that will form a valid environment variable! @@ -377,40 +360,36 @@ valid environment variable! declare -x SHLVL="1" declare -x container="lxc" -And we can use that information to connect from another container as a -client: +And we can use that information to connect from another container as a client: $ docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT' 172.17.0.32:6379> -### [VOLUME (Shared Filesystems)](#id16) +## VOLUME (Shared Filesystems) -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]. If "container-dir" is missing, then docker creates a new volume. --volumes-from="": Mount all volumes from the given container(s) -The volumes commands are complex enough to have their own documentation -in section [*Share Directories via -Volumes*](../../use/working_with_volumes/#volume-def). A developer can -define one or more `VOLUME`s associated with an -image, but only the operator can give access from one container to -another (or from a container to a volume mounted on the host). +The volumes commands are complex enough to have their own documentation in +section [*Share Directories via Volumes*](/use/working_with_volumes/#volume-def). 
+A developer can define one or more `VOLUME's associated with an image, but only the +operator can give access from one container to another (or from a container to a +volume mounted on the host). -### [USER](#id17) +## USER -The default user within a container is `root` (id = -0), but if the developer created additional users, those are accessible -too. The developer can set a default user to run the first process with -the `Dockerfile USER` command, but the operator can -override it +The default user within a container is `root` (id = 0), but if the developer +created additional users, those are accessible too. The developer can set a +default user to run the first process with the `Dockerfile USER` command, +but the operator can override it: -u="": Username or UID -### [WORKDIR](#id18) +## WORKDIR -The default working directory for running binaries within a container is -the root directory (`/`), but the developer can set -a different default with the `Dockerfile WORKDIR` -command. The operator can override this with: +The default working directory for running binaries within a container is the +root directory (`/`), but the developer can set a different default with the +Dockerfile `WORKDIR` command. The operator can override this with: -w="": Working directory inside the container diff --git a/docs/sources/terms.md b/docs/sources/terms.md index 59579d99a1..228b18fbd9 100644 --- a/docs/sources/terms.md +++ b/docs/sources/terms.md @@ -4,10 +4,10 @@ ## Contents: -- [File System](filesystem/) -- [Layers](layer/) -- [Image](image/) -- [Container](container/) -- [Registry](registry/) -- [Repository](repository/) + - [File System](filesystem/) + - [Layers](layer/) + - [Image](image/) + - [Container](container/) + - [Registry](registry/) + - [Repository](repository/) diff --git a/docs/sources/terms/container.md b/docs/sources/terms/container.md index 92a0265d99..5bedc3160e 100644 --- a/docs/sources/terms/container.md +++ b/docs/sources/terms/container.md @@ -6,22 +6,20 @@ page_keywords: containers, lxc, concepts, explanation, image, container ## Introduction -![](../../_images/docker-filesystems-busyboxrw.png) +![](/terms/images/docker-filesystems-busyboxrw.png) -Once you start a process in Docker from an -[*Image*](../image/#image-def), Docker fetches the image and its -[*Parent Image*](../image/#parent-image-def), and repeats the process -until it reaches the [*Base Image*](../image/#base-image-def). Then the -[*Union File System*](../layer/#ufs-def) adds a read-write layer on top. -That read-write layer, plus the information about its [*Parent -Image*](../image/#parent-image-def) and some additional information like -its unique id, networking configuration, and resource limits is called a -**container**. +Once you start a process in Docker from an [*Image*](image.md), Docker fetches +the image and its [*Parent Image*](image.md), and repeats the process until it +reaches the [*Base Image*](image.md/#base-image-def). Then the +[*Union File System*](layer.md) adds a read-write layer on top. That read-write +layer, plus the information about its [*Parent Image*](image.md) and some +additional information like its unique id, networking configuration, and +resource limits is called a **container**. ## Container State -Containers can change, and so they have state. A container may be -**running** or **exited**. +Containers can change, and so they have state. A container may be **running** or +**exited**. 
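+
+For example, `docker ps` lists only **running** containers, while
+`docker ps -a` also shows containers that have **exited**:
+
+    $ sudo docker ps -a
+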
When a container is running, the idea of a "container" also includes a tree of processes running on the CPU, isolated from the other processes @@ -33,9 +31,8 @@ processes restart from scratch (their memory state is **not** preserved in a container), but the file system is just as it was when the container was stopped. -You can promote a container to an [*Image*](../image/#image-def) with -`docker commit`. Once a container is an image, you -can use it as a parent for new containers. +You can promote a container to an [*Image*](image.md) with `docker commit`. +Once a container is an image, you can use it as a parent for new containers. ## Container IDs diff --git a/docs/sources/terms/filesystem.md b/docs/sources/terms/filesystem.md index 2038d009e3..5587e3c831 100644 --- a/docs/sources/terms/filesystem.md +++ b/docs/sources/terms/filesystem.md @@ -6,13 +6,13 @@ page_keywords: containers, files, linux ## Introduction -![](../../_images/docker-filesystems-generic.png) +![](/terms/images/docker-filesystems-generic.png) In order for a Linux system to run, it typically needs two [file systems](http://en.wikipedia.org/wiki/Filesystem): -1. boot file system (bootfs) -2. root file system (rootfs) +1. boot file system (bootfs) +2. root file system (rootfs) The **boot file system** contains the bootloader and the kernel. The user never makes any changes to the boot file system. In fact, soon @@ -22,10 +22,9 @@ initrd disk image. The **root file system** includes the typical directory structure we associate with Unix-like operating systems: -`/dev, /proc, /bin, /etc, /lib, /usr,` and -`/tmp` plus all the configuration files, binaries -and libraries required to run user applications (like bash, ls, and so -forth). +`/dev, /proc, /bin, /etc, /lib, /usr,` and `/tmp` plus all the configuration +files, binaries and libraries required to run user applications (like bash, +ls, and so forth). While there can be important kernel differences between different Linux distributions, the contents and organization of the root file system are @@ -33,4 +32,4 @@ usually what make your software packages dependent on one distribution versus another. Docker can help solve this problem by running multiple distributions at the same time. -![](../../_images/docker-filesystems-multiroot.png) +![](/terms/images/docker-filesystems-multiroot.png) diff --git a/docs/sources/terms/image.md b/docs/sources/terms/image.md index 721d4c954c..b10debcc6a 100644 --- a/docs/sources/terms/image.md +++ b/docs/sources/terms/image.md @@ -6,7 +6,7 @@ page_keywords: containers, lxc, concepts, explanation, image, container ## Introduction -![](../../_images/docker-filesystems-debian.png) +![](/terms/images/docker-filesystems-debian.png) In Docker terminology, a read-only [*Layer*](../layer/#layer-def) is called an **image**. An image never changes. @@ -14,14 +14,14 @@ called an **image**. An image never changes. Since Docker uses a [*Union File System*](../layer/#ufs-def), the processes think the whole file system is mounted read-write. But all the changes go to the top-most writeable layer, and underneath, the original -file in the read-only image is unchanged. Since images don’t change, +file in the read-only image is unchanged. Since images don't change, images do not have state. 
-![](../../_images/docker-filesystems-debianrw.png) +![](/terms/images/docker-filesystems-debianrw.png) ## Parent Image -![](../../_images/docker-filesystems-multilayer.png) +![](/terms/images/docker-filesystems-multilayer.png) Each image may depend on one more image which forms the layer beneath it. We sometimes say that the lower image is the **parent** of the upper diff --git a/docs/sources/terms/layer.md b/docs/sources/terms/layer.md index 7665467aae..b4b2ea4b7a 100644 --- a/docs/sources/terms/layer.md +++ b/docs/sources/terms/layer.md @@ -20,7 +20,7 @@ file system *over* the read-only file system. In fact there may be multiple read-only file systems stacked on top of each other. We think of each one of these file systems as a **layer**. -![](../../_images/docker-filesystems-multilayer.png) +![](/terms/images/docker-filesystems-multilayer.png) At first, the top read-write layer has nothing in it, but any time a process creates a file, this happens in the top layer. And if something diff --git a/docs/sources/terms/registry.md b/docs/sources/terms/registry.md index 0d5af2c65d..bb3209ebac 100644 --- a/docs/sources/terms/registry.md +++ b/docs/sources/terms/registry.md @@ -6,9 +6,9 @@ page_keywords: containers, lxc, concepts, explanation, image, repository, contai ## Introduction -A Registry is a hosted service containing -[*repositories*](../repository/#repository-def) of -[*images*](../image/#image-def) which responds to the Registry API. +A Registry is a hosted service containing [*repositories*]( +../repository/#repository-def) of [*images*](../image/#image-def) which +responds to the Registry API. The default registry can be accessed using a browser at [http://images.docker.io](http://images.docker.io) or using the @@ -16,5 +16,5 @@ The default registry can be accessed using a browser at ## Further Reading -For more information see [*Working with -Repositories*](../../use/workingwithrepository/#working-with-the-repository) +For more information see [*Working with Repositories*]( +../use/workingwithrepository/#working-with-the-repository) diff --git a/docs/sources/terms/repository.md b/docs/sources/terms/repository.md index 7ccd69ad19..52760ac20d 100644 --- a/docs/sources/terms/repository.md +++ b/docs/sources/terms/repository.md @@ -13,26 +13,23 @@ server. Images can be associated with a repository (or multiple) by giving them an image name using one of three different commands: -1. At build time (e.g. `sudo docker build -t IMAGENAME` -), -2. When committing a container (e.g. - `sudo docker commit CONTAINERID IMAGENAME`) or -3. When tagging an image id with an image name (e.g. - `sudo docker tag IMAGEID IMAGENAME`). +1. At build time (e.g. `sudo docker build -t IMAGENAME`), +2. When committing a container (e.g. + `sudo docker commit CONTAINERID IMAGENAME`) or +3. When tagging an image id with an image name (e.g. + `sudo docker tag IMAGEID IMAGENAME`). A Fully Qualified Image Name (FQIN) can be made up of 3 parts: `[registry_hostname[:port]/][user_name/](repository_name:version_tag)` -`username` and `registry_hostname` -default to an empty string. When `registry_hostname` -is an empty string, then `docker push` -will push to `index.docker.io:80`. +`username` and `registry_hostname` default to an empty string. When +`registry_hostname` is an empty string, then `docker push` will push to +`index.docker.io:80`. 
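+
+For example (using the placeholder names from the format above), an image can be
+tagged with a fully qualified name that points at a private registry and then
+pushed there:
+
+    $ sudo docker tag IMAGEID registry_hostname:5000/user_name/repository_name:version_tag
+    $ sudo docker push registry_hostname:5000/user_name/repository_name
+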
If you create a new repository which you want to share, you will need to -set at least the `user_name`, as the ‘default’ blank -`user_name` prefix is reserved for official Docker -images. +set at least the `user_name`, as the `default` blank `user_name` prefix is +reserved for official Docker images. -For more information see [*Working with -Repositories*](../../use/workingwithrepository/#working-with-the-repository) +For more information see [*Working with Repositories*]( +../use/workingwithrepository/#working-with-the-repository) diff --git a/docs/sources/toctree.md b/docs/sources/toctree.md index e837c7e3af..ec1832fc21 100644 --- a/docs/sources/toctree.md +++ b/docs/sources/toctree.md @@ -6,12 +6,12 @@ page_keywords: todo, docker, documentation, installation, usage, examples, contr This documentation has the following resources: -- [Installation](../installation/) -- [Use](../use/) -- [Examples](../examples/) -- [Reference Manual](../reference/) -- [Contributing](../contributing/) -- [Glossary](../terms/) -- [Articles](../articles/) -- [FAQ](../faq/) + - [Installation](../installation/) + - [Use](../use/) + - [Examples](../examples/) + - [Reference Manual](../reference/) + - [Contributing](../contributing/) + - [Glossary](../terms/) + - [Articles](../articles/) + - [FAQ](../faq/) diff --git a/docs/sources/use.md b/docs/sources/use.md index ce4a51025c..5b2524361e 100644 --- a/docs/sources/use.md +++ b/docs/sources/use.md @@ -2,12 +2,12 @@ ## Contents: -- [First steps with Docker](basics/) -- [Share Images via Repositories](workingwithrepository/) -- [Redirect Ports](port_redirection/) -- [Configure Networking](networking/) -- [Automatically Start Containers](host_integration/) -- [Share Directories via Volumes](working_with_volumes/) -- [Link Containers](working_with_links_names/) -- [Link via an Ambassador Container](ambassador_pattern_linking/) -- [Using Puppet](puppet/) \ No newline at end of file + - [First steps with Docker](basics/) + - [Share Images via Repositories](workingwithrepository/) + - [Redirect Ports](port_redirection/) + - [Configure Networking](networking/) + - [Automatically Start Containers](host_integration/) + - [Share Directories via Volumes](working_with_volumes/) + - [Link Containers](working_with_links_names/) + - [Link via an Ambassador Container](ambassador_pattern_linking/) + - [Using Puppet](puppet/) \ No newline at end of file diff --git a/docs/sources/use/ambassador_pattern_linking.md b/docs/sources/use/ambassador_pattern_linking.md index 685d155917..a04dbdffc0 100644 --- a/docs/sources/use/ambassador_pattern_linking.md +++ b/docs/sources/use/ambassador_pattern_linking.md @@ -62,8 +62,7 @@ linking to the local redis ambassador. 
 ## How it works

 The following example shows what the `svendowideit/ambassador`
-container does automatically (with a tiny amount of
-`sed`)
+container does automatically (with a tiny amount of `sed`).

 On the docker host (192.168.1.52) that redis will run on:

@@ -82,8 +81,8 @@ On the docker host (192.168.1.52) that redis will run on:
     # add redis ambassador
     $ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 busybox sh

-in the redis\_ambassador container, you can see the linked redis
-containers’s env
+In the redis_ambassador container, you can see the linked redis
+container's env:

     $ env
     REDIS_PORT=tcp://172.17.0.136:6379
diff --git a/docs/sources/use/basics.md b/docs/sources/use/basics.md
index e283a9dec8..bbe967cc7c 100644
--- a/docs/sources/use/basics.md
+++ b/docs/sources/use/basics.md
@@ -17,7 +17,7 @@ like `/var/lib/docker/repositories: permission denied` you may have an
 incomplete docker installation or insufficient privileges to access
 Docker on your machine.

-Please refer to [*Installation*](../../installation/#installation-list)
+Please refer to [*Installation*](/installation/#installation-list)
 for installation instructions.

 ## Download a pre-built image
@@ -37,7 +37,7 @@ cache.
 > characters of the full image ID - which can be found using
 > `docker inspect` or `docker images --no-trunc=true`

-**If you’re using OS X** then you shouldn’t use `sudo`.
+**If you're using OS X** then you shouldn't use `sudo`.

 ## Running an interactive shell

@@ -75,9 +75,9 @@ following format: `tcp://[host][:port]` or

 For example:

-- `tcp://host:4243` -\> tcp connection on
+- `tcp://host:4243` -> tcp connection on
   host:4243
-- `unix://path/to/socket` -\> unix socket located
+- `unix://path/to/socket` -> unix socket located
   at `path/to/socket`

 `-H`, when empty, will default to the same value as
@@ -170,7 +170,6 @@ will be stored (as a diff). See which images you already have using the

 You now have a image state from which you can create new instances.

-Read more about [*Share Images via
-Repositories*](../workingwithrepository/#working-with-the-repository) or
-continue to the complete [*Command
-Line*](../../reference/commandline/cli/#cli)
+Read more about [*Share Images via Repositories*](
+../workingwithrepository/#working-with-the-repository) or
+continue to the complete [*Command Line*](/reference/commandline/cli/#cli)
diff --git a/docs/sources/use/chef.md b/docs/sources/use/chef.md
index b35391dca5..5145107a38 100644
--- a/docs/sources/use/chef.md
+++ b/docs/sources/use/chef.md
@@ -6,13 +6,13 @@ page_keywords: chef, installation, usage, docker, documentation

 > **Note**:
 > Please note this is a community contributed installation path. The only
-> ‘official’ installation is using the
-> [*Ubuntu*](../../installation/ubuntulinux/#ubuntu-linux) installation
+> 'official' installation is using the
+> [*Ubuntu*](/installation/ubuntulinux/#ubuntu-linux) installation
 > path. This version may sometimes be out of date.

 ## Requirements

-To use this guide you’ll need a working installation of
+To use this guide you'll need a working installation of
 [Chef](http://www.getchef.com/). This cookbook supports a variety of
 operating systems.
diff --git a/docs/sources/use/host_integration.md b/docs/sources/use/host_integration.md index 0aa0dc8314..370c00e20a 100644 --- a/docs/sources/use/host_integration.md +++ b/docs/sources/use/host_integration.md @@ -5,8 +5,7 @@ page_keywords: systemd, upstart, supervisor, docker, documentation, host integra # Automatically Start Containers You can use your Docker containers with process managers like -`upstart`, `systemd` and -`supervisor`. +`upstart`, `systemd` and `supervisor`. ## Introduction @@ -27,7 +26,7 @@ docker. ## Sample Upstart Script -In this example we’ve already created a container to run Redis with +In this example We've already created a container to run Redis with `--name redis_server`. To create an upstart script for our container, we create a file named `/etc/init/redis.conf` and place the following into @@ -42,7 +41,7 @@ it: /usr/bin/docker start -a redis_server end script -Next, we have to configure docker so that it’s run with the option +Next, we have to configure docker so that it's run with the option `-r=false`. Run the following command: $ sudo sh -c "echo 'DOCKER_OPTS=\"-r=false\"' > /etc/default/docker" diff --git a/docs/sources/use/networking.md b/docs/sources/use/networking.md index 3dfca0cb94..2249ca42cd 100644 --- a/docs/sources/use/networking.md +++ b/docs/sources/use/networking.md @@ -10,10 +10,10 @@ Docker uses Linux bridge capabilities to provide network connectivity to containers. The `docker0` bridge interface is managed by Docker for this purpose. When the Docker daemon starts it : -- creates the `docker0` bridge if not present -- searches for an IP address range which doesn’t overlap with an existing route -- picks an IP in the selected range -- assigns this IP to the `docker0` bridge + - creates the `docker0` bridge if not present + - searches for an IP address range which doesn't overlap with an existing route + - picks an IP in the selected range + - assigns this IP to the `docker0` bridge @@ -47,7 +47,7 @@ is dedicated to the 52f811c5d3d6 container. ## How to use a specific IP address range Docker will try hard to find an IP range that is not used by the host. -Even though it works for most cases, it’s not bullet-proof and sometimes +Even though it works for most cases, it's not bullet-proof and sometimes you need to have more control over the IP addressing scheme. For this purpose, Docker allows you to manage the `docker0` @@ -56,10 +56,10 @@ parameter. In this scenario: -- ensure Docker is stopped -- create your own bridge (`bridge0` for example) -- assign a specific IP to this bridge -- start Docker with the `-b=bridge0` parameter + - ensure Docker is stopped + - create your own bridge (`bridge0` for example) + - assign a specific IP to this bridge + - start Docker with the `-b=bridge0` parameter @@ -107,14 +107,12 @@ In this scenario: ## Container intercommunication -The value of the Docker daemon’s `icc` parameter +The value of the Docker daemon's `icc` parameter determines whether containers can communicate with each other over the bridge network. -- The default, `-icc=true` allows containers to - communicate with each other. -- `-icc=false` means containers are isolated from - each other. + - The default, `-icc=true` allows containers to communicate with each other. + - `-icc=false` means containers are isolated from each other. Docker uses `iptables` under the hood to either accept or drop communication between containers. @@ -125,7 +123,7 @@ Well. Things get complicated here. 
The `vethXXXX` interface is the host side of a point-to-point link between the host and the corresponding container; -the other side of the link is the container’s `eth0` +the other side of the link is the container's `eth0` interface. This pair (host `vethXXX` and container `eth0`) are connected like a tube. Everything that comes in one side will come out the other side. @@ -135,6 +133,6 @@ ip link command) and the namespaces infrastructure. ## I want more -Jérôme Petazzoni has create `pipework` to connect -together containers in arbitrarily complex scenarios : +Jérôme Petazzoni has create `pipework` to connect together containers in +arbitrarily complex scenarios: [https://github.com/jpetazzo/pipework](https://github.com/jpetazzo/pipework) diff --git a/docs/sources/use/port_redirection.md b/docs/sources/use/port_redirection.md index a85234f48f..ef0e644ace 100644 --- a/docs/sources/use/port_redirection.md +++ b/docs/sources/use/port_redirection.md @@ -31,22 +31,19 @@ containers, Docker provides the linking mechanism. To bind all the exposed container ports to the host automatically, use `docker run -P `. The mapped host ports will be auto-selected from a pool of unused ports (49000..49900), and -you will need to use `docker ps`, -`docker inspect ` or -`docker port ` to determine -what they are. +you will need to use `docker ps`, `docker inspect ` or +`docker port ` to determine what they are. ## Binding a port to a host interface To bind a port of the container to a specific interface of the host -system, use the `-p` parameter of the -`docker run` command: +system, use the `-p` parameter of the `docker run` command: # General syntax docker run -p [([:[host_port]])|():][/udp] When no host interface is provided, the port is bound to all available -interfaces of the host machine (aka INADDR\_ANY, or 0.0.0.0).When no +interfaces of the host machine (aka INADDR_ANY, or 0.0.0.0). When no host port is provided, one is dynamically allocated. The possible combinations of options for TCP port are the following: @@ -68,9 +65,9 @@ combinations described for TCP work. Here is only one example: # Bind UDP port 5353 of the container to UDP port 53 on 127.0.0.1 of the host machine. docker run -p 127.0.0.1:53:5353/udp -The command `docker port` lists the interface and -port on the host machine bound to a given container port. It is useful -when using dynamically allocated ports: +The command `docker port` lists the interface and port on the host machine +bound to a given container port. It is useful when using dynamically allocated +ports: # Bind to a dynamically allocated port docker run -p 127.0.0.1::8080 --name dyn-bound @@ -84,29 +81,22 @@ when using dynamically allocated ports: Communication between two containers can also be established in a docker-specific way called linking. -To briefly present the concept of linking, let us consider two -containers: `server`, containing the service, and -`client`, accessing the service. Once -`server` is running, `client` is -started and links to server. Linking sets environment variables in -`client` giving it some information about -`server`. In this sense, linking is a method of -service discovery. +To briefly present the concept of linking, let us consider two containers: +`server`, containing the service, and `client`, accessing the service. Once +`server` is running, `client` is started and links to server. Linking sets +environment variables in `client` giving it some information about `server`. +In this sense, linking is a method of service discovery. 
-Let us now get back to our topic of interest; communication between the -two containers. We mentioned that the tricky part about this -communication was that the IP address of `server` -was not fixed. Therefore, some of the environment variables are going to -be used to inform `client` about this IP address. -This process called exposure, is possible because `client` -is started after `server` has been -started. +Let us now get back to our topic of interest; communication between the two +containers. We mentioned that the tricky part about this communication was that +the IP address of `server` was not fixed. Therefore, some of the environment +variables are going to be used to inform `client` about this IP address. This +process called exposure, is possible because `client` is started after `server` +has been started. -Here is a full example. On `server`, the port of -interest is exposed. The exposure is done either through the -`--expose` parameter to the `docker run` -command, or the `EXPOSE` build command in -a Dockerfile: +Here is a full example. On `server`, the port of interest is exposed. The +exposure is done either through the `--expose` parameter to the `docker run` +command, or the `EXPOSE` build command in a Dockerfile: # Expose port 80 docker run --expose 80 --name server @@ -116,8 +106,7 @@ The `client` then links to the `server`: # Link docker run --name client --link server:linked-server -`client` locally refers to `server` -as `linked-server`. The following +`client` locally refers to `server` as `linked-server`. The following environment variables, among others, are available on `client`: # The default protocol, ip, and port of the service running in the container @@ -129,9 +118,7 @@ environment variables, among others, are available on `client`: LINKED-SERVER_PORT_80_TCP_ADDR=172.17.0.8 LINKED-SERVER_PORT_80_TCP_PORT=80 -This tells `client` that a service is running on -port 80 of `server` and that `server` -is accessible at the IP address 172.17.0.8 +This tells `client` that a service is running on port 80 of `server` and that +`server` is accessible at the IP address 172.17.0.8 -Note: Using the `-p` parameter also exposes the -port.. +Note: Using the `-p` parameter also exposes the port. diff --git a/docs/sources/use/puppet.md b/docs/sources/use/puppet.md index 55f16dd5bc..c1ac95f4ab 100644 --- a/docs/sources/use/puppet.md +++ b/docs/sources/use/puppet.md @@ -4,15 +4,15 @@ page_keywords: puppet, installation, usage, docker, documentation # Using Puppet -> *Note:* Please note this is a community contributed installation path. The only -> ‘official’ installation is using the -> [*Ubuntu*](../../installation/ubuntulinux/#ubuntu-linux) installation +> *Note:* Please note this is a community contributed installation path. The +> only `official` installation is using the +> [*Ubuntu*](/installation/ubuntulinux/#ubuntu-linux) installation > path. This version may sometimes be out of date. ## Requirements -To use this guide you’ll need a working installation of Puppet from -[Puppetlabs](https://www.puppetlabs.com) . +To use this guide you'll need a working installation of Puppet from +[Puppetlabs](https://puppetlabs.com) . The module also currently uses the official PPA so only works with Ubuntu. @@ -26,7 +26,7 @@ installed using the built-in module tool. 
    puppet module install garethr/docker

It can also be found on
-[GitHub](https://www.github.com/garethr/garethr-docker) if you would
+[GitHub](https://github.com/garethr/garethr-docker) if you would
rather download the source.

## Usage
diff --git a/docs/sources/use/working_with_links_names.md b/docs/sources/use/working_with_links_names.md
index 67ca8004f1..40260feabf 100644
--- a/docs/sources/use/working_with_links_names.md
+++ b/docs/sources/use/working_with_links_names.md
@@ -6,19 +6,18 @@ page_keywords: Examples, Usage, links, linking, docker, documentation, examples,

## Introduction

-From version 0.6.5 you are now able to `name` a
-container and `link` it to another container by
-referring to its name. This will create a parent -\> child relationship
-where the parent container can see selected information about its child.
+From version 0.6.5 you are now able to `name` a container and `link` it to
+another container by referring to its name. This will create a parent -> child
+relationship where the parent container can see selected information about its
+child.

## Container Naming

New in version v0.6.5.

-You can now name your container by using the `--name`
-flag. If no name is provided, Docker will automatically
-generate a name. You can see this name using the `docker ps`
-command.
+You can now name your container by using the `--name` flag. If no name is
+provided, Docker will automatically generate a name. You can see this name
+using the `docker ps` command.

    # format is "sudo docker run --name "
    $ sudo docker run --name test ubuntu /bin/bash

@@ -33,52 +32,45 @@ command.

New in version v0.6.5.

Links allow containers to discover and securely communicate with each
-other by using the flag `-link name:alias`.
-Inter-container communication can be disabled with the daemon flag
-`-icc=false`. With this flag set to
-`false`, Container A cannot access Container B
-unless explicitly allowed via a link. This is a huge win for securing
-your containers. When two containers are linked together Docker creates
-a parent child relationship between the containers. The parent container
-will be able to access information via environment variables of the
-child such as name, exposed ports, IP and other selected environment
-variables.
+other by using the flag `-link name:alias`. Inter-container communication
+can be disabled with the daemon flag `-icc=false`. With this flag set to
+`false`, Container A cannot access Container B unless explicitly allowed via
+a link. This is a huge win for securing your containers. When two containers
+are linked together, Docker creates a parent-child relationship between the
+containers. The parent container will be able to access information via
+environment variables of the child such as name, exposed ports, IP and other
+selected environment variables.

-When linking two containers Docker will use the exposed ports of the
-container to create a secure tunnel for the parent to access. If a
-database container only exposes port 8080 then the linked container will
-only be allowed to access port 8080 and nothing else if inter-container
-communication is set to false.
+When linking two containers, Docker will use the exposed ports of the container
+to create a secure tunnel for the parent to access. If a database container
+only exposes port 8080, then the linked container will only be allowed to
+access port 8080 and nothing else if inter-container communication is set to
+false.
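For instance, to make linking the only allowed path between containers, you could run the daemon with `-icc=false`. The snippet below is only a sketch: it reuses the `DOCKER_OPTS` pattern from the upstart example earlier in this patch, assumes the Debian/Ubuntu packaging, and overwrites any existing `/etc/default/docker`:

    # disable inter-container communication daemon-wide,
    # then restart the daemon to pick up the new option
    $ sudo sh -c "echo 'DOCKER_OPTS=\"-icc=false\"' > /etc/default/docker"
    $ sudo service docker restart

After that, only containers joined by an explicit `-link` can reach each other's exposed ports.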
-For example, there is an image called `crosbymichael/redis` -that exposes the port 6379 and starts the Redis server. Let’s -name the container as `redis` based on that image -and run it as daemon. +For example, there is an image called `crosbymichael/redis` that exposes the +port 6379 and starts the Redis server. Let's name the container as `redis` +based on that image and run it as daemon. $ sudo docker run -d -name redis crosbymichael/redis -We can issue all the commands that you would expect using the name -`redis`; start, stop, attach, using the name for our -container. The name also allows us to link other containers into this -one. +We can issue all the commands that you would expect using the name `redis`; +start, stop, attach, using the name for our container. The name also allows +us to link other containers into this one. -Next, we can start a new web application that has a dependency on Redis -and apply a link to connect both containers. If you noticed when running -our Redis server we did not use the `-p` flag to -publish the Redis port to the host system. Redis exposed port 6379 and -this is all we need to establish a link. +Next, we can start a new web application that has a dependency on Redis and +apply a link to connect both containers. If you noticed when running our Redis +server we did not use the `-p` flag to publish the Redis port to the host +system. Redis exposed port 6379 and this is all we need to establish a link. $ sudo docker run -t -i -link redis:db -name webapp ubuntu bash -When you specified `-link redis:db` you are telling -Docker to link the container named `redis` into this -new container with the alias `db`. Environment -variables are prefixed with the alias so that the parent container can -access network and environment information from the containers that are +When you specified `-link redis:db` you are telling Docker to link the +container named `redis` into this new container with the alias `db`. +Environment variables are prefixed with the alias so that the parent container +can access network and environment information from the containers that are linked into it. -If we inspect the environment variables of the second container, we -would see all the information about the child container. +If we inspect the environment variables of the second container, we would see +all the information about the child container. $ root@4c01db0b339c:/# env @@ -98,20 +90,20 @@ would see all the information about the child container. _=/usr/bin/env root@4c01db0b339c:/# -Accessing the network information along with the environment of the -child container allows us to easily connect to the Redis service on the -specific IP and port in the environment. +Accessing the network information along with the environment of the child +container allows us to easily connect to the Redis service on the specific +IP and port in the environment. > **Note**: > These Environment variables are only set for the first process in the > container. Similarly, some daemons (such as `sshd`) > will scrub them when spawning shells for connection. -You can work around this by storing the initial `env` -in a file, or looking at `/proc/1/environ`. +You can work around this by storing the initial `env` in a file, or looking +at `/proc/1/environ`. -Running `docker ps` shows the 2 containers, and the -`webapp/db` alias name for the Redis container. +Running `docker ps` shows the 2 containers, and the `webapp/db` alias name for +the Redis container. 
$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES diff --git a/docs/sources/use/working_with_volumes.md b/docs/sources/use/working_with_volumes.md index e95d3786b1..5817309e62 100644 --- a/docs/sources/use/working_with_volumes.md +++ b/docs/sources/use/working_with_volumes.md @@ -8,23 +8,23 @@ page_keywords: Examples, Usage, volume, docker, documentation, examples A *data volume* is a specially-designated directory within one or more containers that bypasses the [*Union File -System*](../../terms/layer/#ufs-def) to provide several useful features +System*](/terms/layer/#ufs-def) to provide several useful features for persistent or shared data: -- **Data volumes can be shared and reused between containers:** - This is the feature that makes data volumes so powerful. You can - use it for anything from hot database upgrades to custom backup or - replication tools. See the example below. -- **Changes to a data volume are made directly:** - Without the overhead of a copy-on-write mechanism. This is good for - very large files. -- **Changes to a data volume will not be included at the next commit:** - Because they are not recorded as regular filesystem changes in the - top layer of the [*Union File System*](../../terms/layer/#ufs-def) -- **Volumes persist until no containers use them:** - As they are a reference counted resource. The container does not need to be - running to share its volumes, but running it can help protect it - against accidental removal via `docker rm`. + - **Data volumes can be shared and reused between containers:** + This is the feature that makes data volumes so powerful. You can + use it for anything from hot database upgrades to custom backup or + replication tools. See the example below. + - **Changes to a data volume are made directly:** + Without the overhead of a copy-on-write mechanism. This is good for + very large files. + - **Changes to a data volume will not be included at the next commit:** + Because they are not recorded as regular filesystem changes in the + top layer of the [*Union File System*](/terms/layer/#ufs-def) + - **Volumes persist until no containers use them:** + As they are a reference counted resource. The container does not need to be + running to share its volumes, but running it can help protect it + against accidental removal via `docker rm`. Each container can have zero or more data volumes. @@ -82,8 +82,8 @@ Interestingly, you can mount the volumes that came from the $ docker run -t -i -rm -volumes-from client1 -name client2 ubuntu bash This allows you to abstract the actual data source from users of that -data, similar to -[*ambassador\_pattern\_linking*](../ambassador_pattern_linking/#ambassador-pattern-linking). +data, similar to [*Ambassador Pattern Linking*]( +../ambassador_pattern_linking/#ambassador-pattern-linking). If you remove containers that mount volumes, including the initial DATA container, or the middleman, the volumes will not be deleted until there @@ -117,40 +117,34 @@ New in version v0.5.0. ### Note for OS/X users and remote daemon users: -OS/X users run `boot2docker` to create a minimalist -virtual machine running the docker daemon. That virtual machine then -launches docker commands on behalf of the OS/X command line. The means -that `host directories` refer to directories in the -`boot2docker` virtual machine, not the OS/X -filesystem. +OS/X users run `boot2docker` to create a minimalist virtual machine running +the docker daemon. 
That virtual machine then launches docker commands on
+behalf of the OS/X command line. This means that `host directories` refer to
+directories in the `boot2docker` virtual machine, not the OS/X filesystem.

-Similarly, anytime when the docker daemon is on a remote machine, the
-`host directories` always refer to directories on
-the daemon's machine.
+Similarly, whenever the docker daemon is on a remote machine, the
+`host directories` always refer to directories on the daemon's machine.

### Backup, restore, or migrate data volumes

-You cannot back up volumes using `docker export`,
-`docker save` and `docker cp`
-because they are external to images. Instead you can use
-`--volumes-from` to start a new container that can
-access the data-container's volume. For example:
+You cannot back up volumes using `docker export`, `docker save` and `docker cp`
+because they are external to images. Instead you can use `--volumes-from` to
+start a new container that can access the data-container's volume. For example:

    $ sudo docker run -rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data

-- `-rm` - remove the container when it exits
-- `--volumes-from DATA` - attach to the volumes
-  shared by the `DATA` container
-- `-v $(pwd):/backup` - bind mount the current
-  directory into the container; to write the tar file to
-- `busybox` - a small simpler image - good for
-  quick maintenance
-- `tar cvf /backup/backup.tar /data` - creates an
-  uncompressed tar file of all the files in the `/data`
-  directory
+ - `-rm`:
+   remove the container when it exits
+ - `--volumes-from DATA`:
+   attach to the volumes shared by the `DATA` container
+ - `-v $(pwd):/backup`:
+   bind mount the current directory into the container, where the tar file
+   will be written
+ - `busybox`:
+   a small, simple image - good for quick maintenance
+ - `tar cvf /backup/backup.tar /data`:
+   creates an uncompressed tar file of all the files in the `/data` directory

-Then to restore to the same container, or another that you’ve made
-elsewhere:
+Then to restore to the same container, or another that you've made elsewhere:

    # create a new data container
    $ sudo docker run -v /data -name DATA2 busybox true
@@ -167,12 +161,11 @@ restore testing using your preferred tools.

## Known Issues

-- [Issue 2702](https://github.com/dotcloud/docker/issues/2702):
+ - [Issue 2702](https://github.com/dotcloud/docker/issues/2702):
   "lxc-start: Permission denied - failed to mount" could indicate a
   permissions problem with AppArmor. Please see the issue for a
   workaround.
-- [Issue 2528](https://github.com/dotcloud/docker/issues/2528): the
+ - [Issue 2528](https://github.com/dotcloud/docker/issues/2528): the
   busybox container is used to make the resulting container as small
   and simple as possible - whenever you need to interact with the data
   in the volume you mount it into another container.
-
diff --git a/docs/sources/use/workingwithrepository.md b/docs/sources/use/workingwithrepository.md
index c71aa60e10..2ffca34ce5 100644
--- a/docs/sources/use/workingwithrepository.md
+++ b/docs/sources/use/workingwithrepository.md
@@ -7,8 +7,8 @@ page_keywords: repo, repositories, usage, pull image, push image, image, documen

## Introduction

A *repository* is a shareable collection of tagged
-[*images*](../../terms/image/#image-def) that together create the file
-systems for containers. The repository’s name is a label that indicates
+[*images*](/terms/image/#image-def) that together create the file
+systems for containers.
The repository's name is a label that indicates the provenance of the repository, i.e. who created it and where the original copy is located. @@ -19,7 +19,7 @@ the home of "top-level" repositories and the Central Index. This registry may also include public "user" repositories. Docker is not only a tool for creating and managing your own -[*containers*](../../terms/container/#container-def) – **Docker is also +[*containers*](/terms/container/#container-def) – **Docker is also a tool for sharing**. The Docker project provides a Central Registry to host public repositories, namespaced by user, and a Central Index which provides user authentication and search over all the public @@ -44,17 +44,15 @@ they really help people get started quickly! You could also use control of who accesses your images, but we will only refer to public repositories in these examples. -- Top-level repositories can easily be recognized by **not** having a - `/` (slash) in their name. These repositories - can generally be trusted. -- User repositories always come in the form of - `/`. This is what your - published images will look like if you push to the public Central - Registry. -- Only the authenticated user can push to their *username* namespace - on the Central Registry. -- User images are not checked, it is therefore up to you whether or - not you trust the creator of this image. +- Top-level repositories can easily be recognized by **not** having a + `/` (slash) in their name. These repositories can generally be trusted. +- User repositories always come in the form of `/`. + This is what your published images will look like if you push to the public + Central Registry. +- Only the authenticated user can push to their *username* namespace + on the Central Registry. +- User images are not checked, it is therefore up to you whether or not you + trust the creator of this image. ## Find Public Images on the Central Index @@ -79,9 +77,9 @@ There you can see two example results: `centos` and `slantview/centos-chef-solo`. The second result shows that it comes from the public repository of a user, `slantview/`, while the first result -(`centos`) doesn’t explicitly list a repository so +(`centos`) doesn't explicitly list a repository so it comes from the trusted Central Repository. The `/` -character separates a user’s repository and the image name. +character separates a user's repository and the image name. Once you have found the image name, you can download it: @@ -91,7 +89,7 @@ Once you have found the image name, you can download it: 539c0211cd76: Download complete What can you do with that image? Check out the -[*Examples*](../../examples/#example-list) and, when you’re ready with +[*Examples*](/examples/#example-list) and, when you're ready with your own image, come back here to learn how to share it. ## Contributing to the Central Registry @@ -109,13 +107,13 @@ namespace for your public repositories. If your username is available then `docker` will also prompt you to enter a password and your e-mail address. It will -then automatically log you in. Now you’re ready to commit and push your +then automatically log you in. Now you're ready to commit and push your own images! ## Committing a Container to a Named Image When you make changes to an existing image, those changes get saved to a -container’s file system. You can then promote that container to become +container's file system. You can then promote that container to become an image by making a `commit`. 
In addition to converting the container to an image, this is also your
opportunity to name the image, specifically a name that includes your
user name from
@@ -146,17 +144,13 @@ when you push a commit.

### To setup a trusted build

1. Create a [Docker Index account](https://index.docker.io/) and login.
-2. Link your GitHub account through the `Link Accounts`
-   menu.
+2. Link your GitHub account through the `Link Accounts` menu.
3. [Configure a Trusted build](https://index.docker.io/builds/).
-4. Pick a GitHub project that has a `Dockerfile`
-   that you want to build.
-5. Pick the branch you want to build (the default is the
-   `master` branch).
+4. Pick a GitHub project that has a `Dockerfile` that you want to build.
+5. Pick the branch you want to build (the default is the `master` branch).
6. Give the Trusted Build a name.
7. Assign an optional Docker tag to the Build.
-8. Specify where the `Dockerfile` is located. The
-   default is `/`.
+8. Specify where the `Dockerfile` is located. The default is `/`.

Once the Trusted Build is configured it will automatically trigger a
build, and in a few minutes, if there are no errors, you will see your
@@ -168,22 +162,20 @@ If you want to see the status of your Trusted Builds you can go to your
index, and it will show you the status of your builds, and the build
history.

-Once you’ve created a Trusted Build you can deactivate or delete it. You
-cannot however push to a Trusted Build with the `docker push`
-command. You can only manage it by committing code to your
-GitHub repository.
+Once you've created a Trusted Build, you can deactivate or delete it. You
+cannot, however, push to a Trusted Build with the `docker push` command.
+You can only manage it by committing code to your GitHub repository.

You can create multiple Trusted Builds per repository and configure them
-to point to specific `Dockerfile`‘s or Git branches.
+to point to specific `Dockerfile`s or Git branches.

## Private Registry

Private registries and private shared repositories are only possible by
-hosting [your own
-registry](https://github.com/dotcloud/docker-registry). To push or pull
-to a repository on your own registry, you must prefix the tag with the
-address of the registry's host (a `.` or
-`:` is used to identify a host), like this:
+hosting [your own registry](https://github.com/dotcloud/docker-registry).
+To push or pull to a repository on your own registry, you must prefix the
+tag with the address of the registry's host (a `.` or `:` is used to identify
+a host), like this:

    # Tag to create a repository with the full registry location.
    # The location (e.g. localhost.localdomain:5000) becomes
@@ -193,7 +185,7 @@ address of the registry’s host (a `.` or

    # Push the new repository to its home location on localhost
    sudo docker push localhost.localdomain:5000/repo_name

-Once a repository has your registry’s host name as part of the tag, you
+Once a repository has your registry's host name as part of the tag, you
can push and pull it like any other repository, but it will **not** be
searchable (or indexed at all) in the Central Index, and there will be
no user name checking performed. Your registry will function completely
@@ -203,8 +195,8 @@ independently from the Central Index.
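To complete the round trip, the same repository can be retrieved from your registry with a plain `docker pull`. This is only a sketch, reusing the hypothetical `localhost.localdomain:5000` host and `repo_name` from the tag/push commands above:

    # Pull the repository back from your own registry
    sudo docker pull localhost.localdomain:5000/repo_name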
See also -[Docker Blog: How to use your own -registry](http://blog.docker.io/2013/07/how-to-use-your-own-registry/) +[Docker Blog: How to use your own registry]( +http://blog.docker.io/2013/07/how-to-use-your-own-registry/) ## Authentication File @@ -212,11 +204,11 @@ The authentication is stored in a json file, `.dockercfg` located in your home directory. It supports multiple registry urls. -`docker login` will create the -"[https://index.docker.io/v1/](https://index.docker.io/v1/)" key. +`docker login` will create the "[https://index.docker.io/v1/]( +https://index.docker.io/v1/)" key. -`docker login https://my-registry.com` will create -the "[https://my-registry.com](https://my-registry.com)" key. +`docker login https://my-registry.com` will create the +"[https://my-registry.com](https://my-registry.com)" key. For example: