Merge branch 'master' into bump_v0.9.0

Docker-DCO-1.1-Signed-off-by: Michael Crosby <michael@crosbymichael.com> (github: crosbymichael)
Michael Crosby 2014-03-07 15:37:29 -08:00
commit ccdf27e502
178 changed files with 7913 additions and 2053 deletions

.gitignore

@@ -22,3 +22,4 @@ bundles/
.git/
vendor/pkg/
pyenv
Vagrantfile

@@ -334,6 +334,7 @@ Wes Morgan <cap10morgan@gmail.com>
Will Dietz <w@wdtz.org>
William Delanoue <william.delanoue@gmail.com>
Will Rouesnel <w.rouesnel@gmail.com>
Will Weaver <monkey@buildingbananas.com>
Xiuming Chen <cc@cxm.cc>
Yang Bai <hamo.by@gmail.com>
Yurii Rashkovskii <yrashk@gmail.com>

@@ -88,7 +88,7 @@ curl -o .git/hooks/pre-commit https://raw.github.com/edsrzf/gofmt-git-hook/maste
Pull requests descriptions should be as clear as possible and include a
reference to all the issues that they address.
Pull requests mustn't contain commits from other users or branches.
Pull requests must not contain commits from other users or branches.
Code review comments may be added to your pull request. Discuss, then make the
suggested modifications and push additional commits to your feature branch. Be
@@ -117,7 +117,7 @@ to indicate acceptance.
A change requires LGTMs from an absolute majority of the maintainers of each
component affected. For example, if a change affects docs/ and registry/, it
needs an absolute majority from the maintainers of docs/ AND, separately, an
absolute majority of the maintainers of registry
absolute majority of the maintainers of registry.
For more details see [MAINTAINERS.md](hack/MAINTAINERS.md)
@@ -170,10 +170,15 @@ curl -o .git/hooks/prepare-commit-msg https://raw.github.com/dotcloud/docker/mas
* Note: the above script expects to find your GitHub user name in ``git config --get github.user``
#### Small patch exception
There are several exceptions to the signing requirement. Currently these are:
* Your patch fixes spelling or grammar errors.
* Your patch is a single line change to documentation.
If you have any questions, please refer to the FAQ in the [docs](http://docs.docker.io)
### How can I become a maintainer?
* Step 1: learn the component inside out

@@ -62,7 +62,7 @@ RUN cd /usr/local/lvm2 && ./configure --enable-static_link && make device-mapper
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Install Go
RUN curl -s https://go.googlecode.com/files/go1.2.src.tar.gz | tar -v -C /usr/local -xz
RUN curl -s https://go.googlecode.com/files/go1.2.1.src.tar.gz | tar -v -C /usr/local -xz
ENV PATH /usr/local/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/dotcloud/docker/vendor
RUN cd /usr/local/go/src && ./make.bash --no-clean 2>&1
@@ -87,6 +87,7 @@ RUN git config --global user.email 'docker-dummy@example.com'
VOLUME /var/lib/docker
WORKDIR /go/src/github.com/dotcloud/docker
ENV DOCKER_BUILDTAGS apparmor
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]

@@ -1,9 +1,7 @@
Solomon Hykes <solomon@dotcloud.com> (@shykes)
Guillaume Charmes <guillaume@dotcloud.com> (@creack)
Victor Vieux <victor@dotcloud.com> (@vieux)
Victor Vieux <vieux@docker.com> (@vieux)
Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
.travis.yml: Tianon Gravi <admwiggin@gmail.com> (@tianon)
api.go: Victor Vieux <victor@dotcloud.com> (@vieux)
Dockerfile: Tianon Gravi <admwiggin@gmail.com> (@tianon)
Makefile: Tianon Gravi <admwiggin@gmail.com> (@tianon)
Vagrantfile: Cristian Staretu <cristian.staretu@gmail.com> (@unclejack)

@@ -3,7 +3,7 @@
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD)
DOCKER_IMAGE := docker:$(GIT_BRANCH)
DOCKER_DOCS_IMAGE := docker-docs:$(GIT_BRANCH)
DOCKER_RUN_DOCKER := docker run -rm -i -t -privileged -e TESTFLAGS -v "$(CURDIR)/bundles:/go/src/github.com/dotcloud/docker/bundles" "$(DOCKER_IMAGE)"
DOCKER_RUN_DOCKER := docker run --rm -i -t --privileged -e TESTFLAGS -v "$(CURDIR)/bundles:/go/src/github.com/dotcloud/docker/bundles" "$(DOCKER_IMAGE)"
default: binary
@@ -17,10 +17,10 @@ cross: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary cross
docs: docs-build
docker run -rm -i -t -p 8000:8000 "$(DOCKER_DOCS_IMAGE)"
docker run --rm -i -t -p 8000:8000 "$(DOCKER_DOCS_IMAGE)"
docs-shell: docs-build
docker run -rm -i -t -p 8000:8000 "$(DOCKER_DOCS_IMAGE)" bash
docker run --rm -i -t -p 8000:8000 "$(DOCKER_DOCS_IMAGE)" bash
test: build
$(DOCKER_RUN_DOCKER) hack/make.sh test test-integration
@@ -32,10 +32,10 @@ shell: build
$(DOCKER_RUN_DOCKER) bash
build: bundles
docker build -rm -t "$(DOCKER_IMAGE)" .
docker build -t "$(DOCKER_IMAGE)" .
docs-build:
docker build -rm -t "$(DOCKER_DOCS_IMAGE)" docs
docker build -t "$(DOCKER_DOCS_IMAGE)" docs
bundles:
mkdir bundles

README.md

@@ -4,19 +4,19 @@ Docker: the Linux container engine
Docker is an open source project to pack, ship and run any application
as a lightweight container
Docker containers are both *hardware-agnostic* and
*platform-agnostic*. This means that they can run anywhere, from your
laptop to the largest EC2 compute instance and everything in between -
and they don't require that you use a particular language, framework
or packaging system. That makes them great building blocks for
deploying and scaling web apps, databases and backend services without
depending on a particular stack or provider.
Docker containers are both *hardware-agnostic* and *platform-agnostic*.
This means that they can run anywhere, from your laptop to the largest
EC2 compute instance and everything in between - and they don't require
that you use a particular language, framework or packaging system. That
makes them great building blocks for deploying and scaling web apps,
databases and backend services without depending on a particular stack
or provider.
Docker is an open-source implementation of the deployment engine which
powers [dotCloud](http://dotcloud.com), a popular
Platform-as-a-Service. It benefits directly from the experience
accumulated over several years of large-scale operation and support of
hundreds of thousands of applications and databases.
powers [dotCloud](http://dotcloud.com), a popular Platform-as-a-Service.
It benefits directly from the experience accumulated over several years
of large-scale operation and support of hundreds of thousands of
applications and databases.
![Docker L](docs/theme/docker/static/img/dockerlogo-h.png "Docker")
@@ -24,10 +24,10 @@ hundreds of thousands of applications and databases.
A common method for distributing applications and sandboxing their
execution is to use virtual machines, or VMs. Typical VM formats are
VMWare's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In
theory these formats should allow every developer to automatically
package their application into a "machine" for easy distribution and
deployment. In practice, that almost never happens, for a few reasons:
VMWare's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In theory
these formats should allow every developer to automatically package
their application into a "machine" for easy distribution and deployment.
In practice, that almost never happens, for a few reasons:
* *Size*: VMs are very large which makes them impractical to store
and transfer.
@@ -47,39 +47,37 @@ deployment. In practice, that almost never happens, for a few reasons:
service discovery.
By contrast, Docker relies on a different sandboxing method known as
*containerization*. Unlike traditional virtualization,
containerization takes place at the kernel level. Most modern
operating system kernels now support the primitives necessary for
containerization, including Linux with [openvz](http://openvz.org),
*containerization*. Unlike traditional virtualization, containerization
takes place at the kernel level. Most modern operating system kernels
now support the primitives necessary for containerization, including
Linux with [openvz](http://openvz.org),
[vserver](http://linux-vserver.org) and more recently
[lxc](http://lxc.sourceforge.net), Solaris with
[zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc)
and FreeBSD with
[Jails](http://www.freebsd.org/doc/handbook/jails.html).
Docker builds on top of these low-level primitives to offer developers
a portable format and runtime environment that solves all 4
problems. Docker containers are small (and their transfer can be
optimized with layers), they have basically zero memory and cpu
overhead, they are completely portable and are designed from the
ground up with an application-centric design.
Docker builds on top of these low-level primitives to offer developers a
portable format and runtime environment that solves all 4 problems.
Docker containers are small (and their transfer can be optimized with
layers), they have basically zero memory and cpu overhead, they are
completely portable and are designed from the ground up with an
application-centric design.
The best part: because ``docker`` operates at the OS level, it can
still be run inside a VM!
The best part: because Docker operates at the OS level, it can still be
run inside a VM!
## Plays well with others
Docker does not require that you buy into a particular programming
language, framework, packaging system or configuration language.
Is your application a Unix process? Does it use files, tcp
connections, environment variables, standard Unix streams and
command-line arguments as inputs and outputs? Then ``docker`` can run
it.
Is your application a Unix process? Does it use files, tcp connections,
environment variables, standard Unix streams and command-line arguments
as inputs and outputs? Then Docker can run it.
Can your application's build be expressed as a sequence of such
commands? Then ``docker`` can build it.
commands? Then Docker can build it.
## Escape dependency hell
@@ -126,14 +124,11 @@ build command inherits the result of the previous commands, the
Here's a typical Docker build process:
```bash
from ubuntu:12.10
run apt-get update
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python-pip
run pip install django
run DEBIAN_FRONTEND=noninteractive apt-get install -q -y curl
run curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
run cd helloflask-master && pip install -r requirements.txt
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -q -y python python-pip curl
RUN curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
RUN cd helloflask-master && pip install -r requirements.txt
```
Note that Docker doesn't care *how* dependencies are built - as long
@@ -143,22 +138,25 @@ as they can be built by running a Unix command in a container.
Getting started
===============
Docker can be installed on your local machine as well as servers - both bare metal and virtualized.
It is available as a binary on most modern Linux systems, or as a VM on Windows, Mac and other systems.
Docker can be installed on your local machine as well as servers - both
bare metal and virtualized. It is available as a binary on most modern
Linux systems, or as a VM on Windows, Mac and other systems.
We also offer an interactive tutorial for quickly learning the basics of using Docker.
For up-to-date install instructions and online tutorials, see the [Getting Started page](http://www.docker.io/gettingstarted/).
We also offer an interactive tutorial for quickly learning the basics of
using Docker.
For up-to-date install instructions and online tutorials, see the
[Getting Started page](http://www.docker.io/gettingstarted/).
Usage examples
==============
Docker can be used to run short-lived commands, long-running daemons (app servers, databases etc.),
interactive shell sessions, etc.
Docker can be used to run short-lived commands, long-running daemons
(app servers, databases etc.), interactive shell sessions, etc.
You can find a [list of real-world examples](http://docs.docker.io/en/latest/examples/) in the documentation.
You can find a [list of real-world
examples](http://docs.docker.io/en/latest/examples/) in the
documentation.
Under the hood
--------------
@@ -170,13 +168,7 @@ Under the hood, Docker is built on the following components:
and
[namespacing](http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part)
capabilities of the Linux kernel;
* [AUFS](http://aufs.sourceforge.net/aufs.html), a powerful union
filesystem with copy-on-write capabilities;
* The [Go](http://golang.org) programming language;
* [lxc](http://lxc.sourceforge.net/), a set of convenience scripts to
simplify the creation of Linux containers.
* The [Go](http://golang.org) programming language.
Contributing to Docker
======================
@@ -187,7 +179,6 @@ started [here](CONTRIBUTING.md).
They are probably not perfect, please let us know if anything feels
wrong or incomplete.
### Legal
*Brought to you courtesy of our legal counsel. For more context,

@@ -1 +1 @@
0.8.1
0.8.1-dev

Vagrantfile

@@ -1,206 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
BOX_NAME = ENV['BOX_NAME'] || "ubuntu"
BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64.box"
VF_BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64_vmware_fusion.box"
AWS_BOX_URI = ENV['BOX_URI'] || "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
AWS_REGION = ENV['AWS_REGION'] || "us-east-1"
AWS_AMI = ENV['AWS_AMI'] || "ami-69f5a900"
AWS_INSTANCE_TYPE = ENV['AWS_INSTANCE_TYPE'] || 't1.micro'
SSH_PRIVKEY_PATH = ENV['SSH_PRIVKEY_PATH']
PRIVATE_NETWORK = ENV['PRIVATE_NETWORK']
# Boolean that forwards the Docker dynamic ports 49000-49900
# See http://docs.docker.io/en/latest/use/port_redirection/ for more
# $ FORWARD_DOCKER_PORTS=1 vagrant [up|reload]
FORWARD_DOCKER_PORTS = ENV['FORWARD_DOCKER_PORTS']
VAGRANT_RAM = ENV['VAGRANT_RAM'] || 512
VAGRANT_CORES = ENV['VAGRANT_CORES'] || 1
# You may also provide a comma-separated list of ports
# for Vagrant to forward. For example:
# $ FORWARD_PORTS=8080,27017 vagrant [up|reload]
FORWARD_PORTS = ENV['FORWARD_PORTS']
# A script to upgrade from the 12.04 kernel to the raring backport kernel (3.8)
# and install docker.
$script = <<SCRIPT
# The username to add to the docker group will be passed as the first argument
# to the script. If nothing is passed, default to "vagrant".
user="$1"
if [ -z "$user" ]; then
user=vagrant
fi
# Enable memory cgroup and swap accounting
sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"/g' /etc/default/grub
update-grub
# Adding an apt gpg key is idempotent.
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
# Creating the docker.list file is idempotent, but it may overwrite desired
# settings if it already exists. This could be solved with md5sum but it
# doesn't seem worth it.
echo 'deb http://get.docker.io/ubuntu docker main' > \
/etc/apt/sources.list.d/docker.list
# Update remote package metadata. 'apt-get update' is idempotent.
apt-get update -q
# Install docker. 'apt-get install' is idempotent.
apt-get install -q -y lxc-docker
usermod -a -G docker "$user"
tmp=`mktemp -q` && {
# Only install the backport kernel, don't bother upgrading if the backport is
# already installed. We want parse the output of apt so we need to save it
# with 'tee'. NOTE: The installation of the kernel will trigger dkms to
# install vboxguest if needed.
apt-get install -q -y --no-upgrade linux-image-generic-lts-raring | \
tee "$tmp"
# Parse the number of installed packages from the output
NUM_INST=`awk '$2 == "upgraded," && $4 == "newly" { print $3 }' "$tmp"`
rm "$tmp"
}
# If the number of installed packages is greater than 0, we want to reboot (the
# backport kernel was installed but is not running).
if [ "$NUM_INST" -gt 0 ];
then
echo "Rebooting down to activate new kernel."
echo "/vagrant will not be mounted. Use 'vagrant halt' followed by"
echo "'vagrant up' to ensure /vagrant is mounted."
shutdown -r now
fi
SCRIPT
# We need to install the virtualbox guest additions *before* we do the normal
# docker installation. As such this script is prepended to the common docker
# install script above. This allows the install of the backport kernel to
# trigger dkms to build the virtualbox guest module install.
$vbox_script = <<VBOX_SCRIPT + $script
# Install the VirtualBox guest additions if they aren't already installed.
if [ ! -d /opt/VBoxGuestAdditions-4.3.6/ ]; then
# Update remote package metadata. 'apt-get update' is idempotent.
apt-get update -q
# Kernel Headers and dkms are required to build the vbox guest kernel
# modules.
apt-get install -q -y linux-headers-generic-lts-raring dkms
echo 'Downloading VBox Guest Additions...'
wget -cq http://dlc.sun.com.edgesuite.net/virtualbox/4.3.6/VBoxGuestAdditions_4.3.6.iso
echo "95648fcdb5d028e64145a2fe2f2f28c946d219da366389295a61fed296ca79f0 VBoxGuestAdditions_4.3.6.iso" | sha256sum --check || exit 1
mount -o loop,ro /home/vagrant/VBoxGuestAdditions_4.3.6.iso /mnt
/mnt/VBoxLinuxAdditions.run --nox11
umount /mnt
fi
VBOX_SCRIPT
Vagrant::Config.run do |config|
# Setup virtual machine box. This VM configuration code is always executed.
config.vm.box = BOX_NAME
config.vm.box_url = BOX_URI
# Use the specified private key path if it is specified and not empty.
if SSH_PRIVKEY_PATH
config.ssh.private_key_path = SSH_PRIVKEY_PATH
end
config.ssh.forward_agent = true
end
# Providers were added on Vagrant >= 1.1.0
#
# NOTE: The vagrant "vm.provision" appends its arguments to a list and executes
# them in order. If you invoke "vm.provision :shell, :inline => $script"
# twice then vagrant will run the script two times. Unfortunately when you use
# providers and the override argument to set up provisioners (like the vbox
# guest extensions) they 1) don't replace the other provisioners (they append
# to the end of the list) and 2) you can't control the order the provisioners
# are executed (you can only append to the list). If you want the virtualbox
# only script to run before the other script, you have to jump through a lot of
# hoops.
#
# Here is my only repeatable solution: make one script that is common ($script)
# and another script that is the virtual box guest *prepended* to the common
# script. Only ever use "vm.provision" *one time* per provider. That means
# every single provider has an override, and every single one configures
# "vm.provision". Much saddness, but such is life.
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
config.vm.provider :aws do |aws, override|
username = "ubuntu"
override.vm.box_url = AWS_BOX_URI
override.vm.provision :shell, :inline => $script, :args => username
aws.access_key_id = ENV["AWS_ACCESS_KEY"]
aws.secret_access_key = ENV["AWS_SECRET_KEY"]
aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
override.ssh.username = username
aws.region = AWS_REGION
aws.ami = AWS_AMI
aws.instance_type = AWS_INSTANCE_TYPE
end
config.vm.provider :rackspace do |rs, override|
override.vm.provision :shell, :inline => $script
rs.username = ENV["RS_USERNAME"]
rs.api_key = ENV["RS_API_KEY"]
rs.public_key_path = ENV["RS_PUBLIC_KEY"]
rs.flavor = /512MB/
rs.image = /Ubuntu/
end
config.vm.provider :vmware_fusion do |f, override|
override.vm.box_url = VF_BOX_URI
override.vm.synced_folder ".", "/vagrant", disabled: true
override.vm.provision :shell, :inline => $script
f.vmx["displayName"] = "docker"
end
config.vm.provider :virtualbox do |vb, override|
override.vm.provision :shell, :inline => $vbox_script
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
vb.customize ["modifyvm", :id, "--memory", VAGRANT_RAM]
vb.customize ["modifyvm", :id, "--cpus", VAGRANT_CORES]
end
end
# If this is a version 1 config, virtualbox is the only option. A version 2
# config would have already been set in the above provider section.
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
config.vm.provision :shell, :inline => $vbox_script
end
# Setup port forwarding per loaded environment variables
forward_ports = FORWARD_DOCKER_PORTS.nil? ? [] : [*49153..49900]
forward_ports += FORWARD_PORTS.split(',').map{|i| i.to_i } if FORWARD_PORTS
if forward_ports.any?
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
forward_ports.each do |port|
config.vm.forward_port port, port
end
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
forward_ports.each do |port|
config.vm.network :forwarded_port, :host => port, :guest => port, auto_correct: true
end
end
end
if !PRIVATE_NETWORK.nil?
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
config.vm.network :hostonly, PRIVATE_NETWORK
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
config.vm.network "private_network", ip: PRIVATE_NETWORK
end
end

api/MAINTAINERS

@@ -0,0 +1 @@
Victor Vieux <vieux@docker.com> (@vieux)

@@ -37,8 +37,15 @@ import (
"time"
)
var funcMap = template.FuncMap{
"json": func(v interface{}) string {
a, _ := json.Marshal(v)
return string(a)
},
}
var (
ErrConnectionRefused = errors.New("Can't connect to docker daemon. Is 'docker -d' running on this host?")
ErrConnectionRefused = errors.New("Cannot connect to the Docker daemon. Is 'docker -d' running on this host?")
)
func (cli *DockerCli) getMethod(name string) (func(...string) error, bool) {
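Note on the funcMap added in the hunk above: the CmdInspect hunk further down wires it in via `.Funcs(funcMap)`, which lets `--format` templates render a whole sub-structure as JSON instead of one field at a time. A minimal, self-contained sketch of the same mechanism, using only the standard library (illustrative, not the Docker code itself):

```go
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Same shape as the funcMap in the hunk above: marshal any value to JSON.
	funcMap := template.FuncMap{
		"json": func(v interface{}) string {
			a, _ := json.Marshal(v)
			return string(a)
		},
	}
	// A format string like the ones passed to `docker inspect --format`
	// can now emit a nested map as a single JSON string.
	tmpl := template.Must(template.New("").Funcs(funcMap).Parse("{{json .Config}}\n"))
	data := map[string]interface{}{
		"Config": map[string]string{"Hostname": "c0ffee", "Image": "ubuntu:12.04"},
	}
	tmpl.Execute(os.Stdout, data) // prints: {"Hostname":"c0ffee","Image":"ubuntu:12.04"}
}
```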
@@ -138,7 +145,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
tag := cmd.String([]string{"t", "-tag"}, "", "Repository name (and optionally a tag) to be applied to the resulting image in case of success")
suppressOutput := cmd.Bool([]string{"q", "-quiet"}, false, "Suppress the verbose output generated by the containers")
noCache := cmd.Bool([]string{"#no-cache", "-no-cache"}, false, "Do not use cache when building the image")
rm := cmd.Bool([]string{"#rm", "-rm"}, false, "Remove intermediate containers after a successful build")
rm := cmd.Bool([]string{"#rm", "-rm"}, true, "Remove intermediate containers after a successful build")
if err := cmd.Parse(args); err != nil {
return nil
}
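A note on the `[]string{"q", "-quiet"}`-style name lists used throughout this file (an inference from how the names appear in this diff, not something the diff documents): each stored name appears to gain one leading dash when displayed, so `"q"` renders as `-q` and `"-quiet"` as `--quiet`, while a `#` prefix marks a deprecated alias that is still parsed but hidden from help. This is the same single-dash to double-dash migration behind the Makefile's `-rm` → `--rm` edits above; and with `rm` now defaulting to true in the hunk just shown, the Makefile's `docker build -rm` lines could drop the flag entirely. A toy rendering of that convention:

```go
package main

import (
	"fmt"
	"strings"
)

// renderFlagNames expands mflag-style name lists as this diff appears to
// use them. Assumption: "#"-prefixed names are deprecated aliases, and
// every stored name gets exactly one extra leading dash when printed.
func renderFlagNames(names []string) []string {
	out := make([]string, 0, len(names))
	for _, n := range names {
		deprecated := strings.HasPrefix(n, "#")
		n = strings.TrimPrefix(n, "#")
		flag := "-" + n // "q" -> "-q", "-quiet" -> "--quiet"
		if deprecated {
			flag += " (deprecated)"
		}
		out = append(out, flag)
	}
	return out
}

func main() {
	fmt.Println(renderFlagNames([]string{"#rm", "-rm"})) // [-rm (deprecated) --rm]
}
```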
@@ -226,9 +233,9 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
var username, password, email string
cmd.StringVar(&username, []string{"u", "-username"}, "", "username")
cmd.StringVar(&password, []string{"p", "-password"}, "", "password")
cmd.StringVar(&email, []string{"e", "-email"}, "", "email")
cmd.StringVar(&username, []string{"u", "-username"}, "", "Username")
cmd.StringVar(&password, []string{"p", "-password"}, "", "Password")
cmd.StringVar(&email, []string{"e", "-email"}, "", "Email")
err := cmd.Parse(args)
if err != nil {
return nil
@@ -516,7 +523,7 @@ func (cli *DockerCli) CmdRestart(args ...string) error {
_, _, err := readBody(cli.call("POST", "/containers/"+name+"/restart?"+v.Encode(), nil, false))
if err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
encounteredError = fmt.Errorf("Error: failed to restart one or more containers")
encounteredError = fmt.Errorf("Error: failed to restart one or more containers")
} else {
fmt.Fprintf(cli.out, "%s\n", name)
}
@@ -556,7 +563,7 @@ func (cli *DockerCli) CmdStart(args ...string) error {
var tty bool
if *attach || *openStdin {
if cmd.NArg() > 1 {
return fmt.Errorf("Impossible to start and attach multiple containers at once.")
return fmt.Errorf("You cannot start and attach multiple containers at once.")
}
body, _, err := readBody(cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil, false))
@@ -640,7 +647,7 @@ func (cli *DockerCli) CmdInspect(args ...string) error {
var tmpl *template.Template
if *tmplStr != "" {
var err error
if tmpl, err = template.New("").Parse(*tmplStr); err != nil {
if tmpl, err = template.New("").Funcs(funcMap).Parse(*tmplStr); err != nil {
fmt.Fprintf(cli.err, "Template parsing error: %v\n", err)
return &utils.StatusError{StatusCode: 64,
Status: "Template parsing error: " + err.Error()}
@@ -780,7 +787,10 @@ func (cli *DockerCli) CmdPort(args ...string) error {
// 'docker rmi IMAGE' removes all images with the name IMAGE
func (cli *DockerCli) CmdRmi(args ...string) error {
cmd := cli.Subcmd("rmi", "IMAGE [IMAGE...]", "Remove one or more images")
var (
cmd = cli.Subcmd("rmi", "IMAGE [IMAGE...]", "Remove one or more images")
force = cmd.Bool([]string{"f", "-force"}, false, "Force")
)
if err := cmd.Parse(args); err != nil {
return nil
}
@@ -789,9 +799,14 @@ func (cli *DockerCli) CmdRmi(args ...string) error {
return nil
}
v := url.Values{}
if *force {
v.Set("force", "1")
}
var encounteredError error
for _, name := range cmd.Args() {
body, _, err := readBody(cli.call("DELETE", "/images/"+name, nil, false))
body, _, err := readBody(cli.call("DELETE", "/images/"+name+"?"+v.Encode(), nil, false))
if err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
encounteredError = fmt.Errorf("Error: failed to remove one or more images")
@@ -816,7 +831,7 @@ func (cli *DockerCli) CmdRmi(args ...string) error {
func (cli *DockerCli) CmdHistory(args ...string) error {
cmd := cli.Subcmd("history", "[OPTIONS] IMAGE", "Show the history of an image")
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "only show numeric IDs")
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only show numeric IDs")
noTrunc := cmd.Bool([]string{"#notrunc", "-no-trunc"}, false, "Don't truncate output")
if err := cmd.Parse(args); err != nil {
@@ -875,6 +890,7 @@ func (cli *DockerCli) CmdRm(args ...string) error {
cmd := cli.Subcmd("rm", "[OPTIONS] CONTAINER [CONTAINER...]", "Remove one or more containers")
v := cmd.Bool([]string{"v", "-volumes"}, false, "Remove the volumes associated to the container")
link := cmd.Bool([]string{"l", "#link", "-link"}, false, "Remove the specified link and not the underlying container")
force := cmd.Bool([]string{"f", "-force"}, false, "Force removal of running container")
if err := cmd.Parse(args); err != nil {
return nil
@@ -890,6 +906,9 @@ func (cli *DockerCli) CmdRm(args ...string) error {
if *link {
val.Set("link", "1")
}
if *force {
val.Set("force", "1")
}
var encounteredError error
for _, name := range cmd.Args() {
@@ -977,13 +996,13 @@ func (cli *DockerCli) CmdPush(args ...string) error {
cli.LoadConfigFile()
// Resolve the Repository name from fqn to endpoint + name
endpoint, _, err := registry.ResolveRepositoryName(name)
// Resolve the Repository name from fqn to hostname + name
hostname, _, err := registry.ResolveRepositoryName(name)
if err != nil {
return err
}
// Resolve the Auth config relevant for this server
authConfig := cli.configFile.ResolveAuthConfig(endpoint)
authConfig := cli.configFile.ResolveAuthConfig(hostname)
// If we're not using a custom registry, we know the restrictions
// applied to repository names and can warn the user in advance.
// Custom repositories can have different rules, and we must also
@@ -993,7 +1012,7 @@ func (cli *DockerCli) CmdPush(args ...string) error {
if username == "" {
username = "<user>"
}
return fmt.Errorf("Impossible to push a \"root\" repository. Please rename your repository in <user>/<repo> (ex: %s/%s)", username, name)
return fmt.Errorf("You cannot push a \"root\" repository. Please rename your repository in <user>/<repo> (ex: %s/%s)", username, name)
}
v := url.Values{}
@@ -1014,10 +1033,10 @@ func (cli *DockerCli) CmdPush(args ...string) error {
if err := push(authConfig); err != nil {
if strings.Contains(err.Error(), "Status 401") {
fmt.Fprintln(cli.out, "\nPlease login prior to push:")
if err := cli.CmdLogin(endpoint); err != nil {
if err := cli.CmdLogin(hostname); err != nil {
return err
}
authConfig := cli.configFile.ResolveAuthConfig(endpoint)
authConfig := cli.configFile.ResolveAuthConfig(hostname)
return push(authConfig)
}
return err
@@ -1042,8 +1061,8 @@ func (cli *DockerCli) CmdPull(args ...string) error {
*tag = parsedTag
}
// Resolve the Repository name from fqn to endpoint + name
endpoint, _, err := registry.ResolveRepositoryName(remote)
// Resolve the Repository name from fqn to hostname + name
hostname, _, err := registry.ResolveRepositoryName(remote)
if err != nil {
return err
}
@@ -1051,7 +1070,7 @@ func (cli *DockerCli) CmdPull(args ...string) error {
cli.LoadConfigFile()
// Resolve the Auth config relevant for this server
authConfig := cli.configFile.ResolveAuthConfig(endpoint)
authConfig := cli.configFile.ResolveAuthConfig(hostname)
v := url.Values{}
v.Set("fromImage", remote)
v.Set("tag", *tag)
@@ -1073,10 +1092,10 @@ func (cli *DockerCli) CmdPull(args ...string) error {
if err := pull(authConfig); err != nil {
if strings.Contains(err.Error(), "Status 401") {
fmt.Fprintln(cli.out, "\nPlease login prior to pull:")
if err := cli.CmdLogin(endpoint); err != nil {
if err := cli.CmdLogin(hostname); err != nil {
return err
}
authConfig := cli.configFile.ResolveAuthConfig(endpoint)
authConfig := cli.configFile.ResolveAuthConfig(hostname)
return pull(authConfig)
}
return err
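Both CmdPush and CmdPull above lean on `registry.ResolveRepositoryName` to split a fully qualified repository name into the hostname used for the auth-config lookup and the repository path; the `endpoint` → `hostname` rename reflects that the first return value is just the host. A rough sketch of that split under the usual heuristic (illustrative only, not the registry package's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRepositoryName guesses whether the first path segment names a
// registry host: it must contain a '.' or ':' (or be "localhost") to be
// treated as one; otherwise the default public index is assumed.
func splitRepositoryName(fqn string) (hostname, name string) {
	i := strings.Index(fqn, "/")
	if i == -1 || (!strings.ContainsAny(fqn[:i], ".:") && fqn[:i] != "localhost") {
		return "index.docker.io", fqn
	}
	return fqn[:i], fqn[i+1:]
}

func main() {
	fmt.Println(splitRepositoryName("ubuntu"))                             // index.docker.io ubuntu
	fmt.Println(splitRepositoryName("crosbymichael/dotfiles"))             // index.docker.io crosbymichael/dotfiles
	fmt.Println(splitRepositoryName("registry.example.com:5000/team/app")) // registry.example.com:5000 team/app
}
```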
@@ -1087,11 +1106,11 @@ func (cli *DockerCli) CmdImages(args ...string) error {
func (cli *DockerCli) CmdImages(args ...string) error {
cmd := cli.Subcmd("images", "[OPTIONS] [NAME]", "List images")
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "only show numeric IDs")
all := cmd.Bool([]string{"a", "-all"}, false, "show all images (by default filter out the intermediate images used to build)")
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only show numeric IDs")
all := cmd.Bool([]string{"a", "-all"}, false, "Show all images (by default filter out the intermediate images used to build)")
noTrunc := cmd.Bool([]string{"#notrunc", "-no-trunc"}, false, "Don't truncate output")
flViz := cmd.Bool([]string{"v", "#viz", "-viz"}, false, "output graph in graphviz format")
flTree := cmd.Bool([]string{"t", "#tree", "-tree"}, false, "output graph in tree format")
flViz := cmd.Bool([]string{"v", "#viz", "-viz"}, false, "Output graph in graphviz format")
flTree := cmd.Bool([]string{"t", "#tree", "-tree"}, false, "Output graph in tree format")
if err := cmd.Parse(args); err != nil {
return nil
@@ -1573,7 +1592,7 @@ func (cli *DockerCli) CmdAttach(args ...string) error {
}
if !container.State.Running {
return fmt.Errorf("Impossible to attach to a stopped container, start it first")
return fmt.Errorf("You cannot attach to a stopped container, start it first")
}
if container.Config.Tty && cli.isTerminal {
@@ -1668,7 +1687,7 @@ func (cli *DockerCli) CmdSearch(args ...string) error {
type ports []int
func (cli *DockerCli) CmdTag(args ...string) error {
cmd := cli.Subcmd("tag", "[OPTIONS] IMAGE REPOSITORY[:TAG]", "Tag an image into a repository")
cmd := cli.Subcmd("tag", "[OPTIONS] IMAGE [REGISTRYHOST/][USERNAME/]NAME[:TAG]", "Tag an image into a repository")
force := cmd.Bool([]string{"f", "#force", "-force"}, false, "Force")
if err := cmd.Parse(args); err != nil {
return nil
@@ -1681,7 +1700,7 @@ func (cli *DockerCli) CmdTag(args ...string) error {
var repository, tag string
if cmd.NArg() == 3 {
fmt.Fprintf(cli.err, "[DEPRECATED] The format 'IMAGE [REPOSITORY [TAG]]' as been deprecated. Please use IMAGE [REPOSITORY[:TAG]]\n")
fmt.Fprintf(cli.err, "[DEPRECATED] The format 'IMAGE [REPOSITORY [TAG]]' as been deprecated. Please use IMAGE [REGISTRYHOST/][USERNAME/]NAME[:TAG]]\n")
repository, tag = cmd.Arg(1), cmd.Arg(2)
} else {
repository, tag = utils.ParseRepositoryTag(cmd.Arg(1))
@@ -1729,10 +1748,10 @@ func (cli *DockerCli) CmdRun(args ...string) error {
var containerIDFile io.WriteCloser
if len(hostConfig.ContainerIDFile) > 0 {
if _, err := os.Stat(hostConfig.ContainerIDFile); err == nil {
return fmt.Errorf("cid file found, make sure the other container isn't running or delete %s", hostConfig.ContainerIDFile)
return fmt.Errorf("Container ID file found, make sure the other container isn't running or delete %s", hostConfig.ContainerIDFile)
}
if containerIDFile, err = os.Create(hostConfig.ContainerIDFile); err != nil {
return fmt.Errorf("failed to create the container ID file: %s", err)
return fmt.Errorf("Failed to create the container ID file: %s", err)
}
defer containerIDFile.Close()
}
@@ -1753,8 +1772,8 @@ func (cli *DockerCli) CmdRun(args ...string) error {
v.Set("fromImage", repos)
v.Set("tag", tag)
// Resolve the Repository name from fqn to endpoint + name
endpoint, _, err := registry.ResolveRepositoryName(repos)
// Resolve the Repository name from fqn to hostname + name
hostname, _, err := registry.ResolveRepositoryName(repos)
if err != nil {
return err
}
@@ -1763,7 +1782,7 @@ func (cli *DockerCli) CmdRun(args ...string) error {
cli.LoadConfigFile()
// Resolve the Auth config relevant for this server
authConfig := cli.configFile.ResolveAuthConfig(endpoint)
authConfig := cli.configFile.ResolveAuthConfig(hostname)
buf, err := json.Marshal(authConfig)
if err != nil {
return err
@@ -1793,7 +1812,7 @@ func (cli *DockerCli) CmdRun(args ...string) error {
if len(hostConfig.ContainerIDFile) > 0 {
if _, err = containerIDFile.Write([]byte(runResult.Get("Id"))); err != nil {
return fmt.Errorf("failed to write the container ID to the file: %s", err)
return fmt.Errorf("Failed to write the container ID to the file: %s", err)
}
}
@@ -2032,7 +2051,7 @@ func (cli *DockerCli) call(method, path string, data interface{}, passAuthInfo b
re := regexp.MustCompile("/+")
path = re.ReplaceAllString(path, "/")
req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", APIVERSION, path), params)
req, err := http.NewRequest(method, fmt.Sprintf("/v%s%s", APIVERSION, path), params)
if err != nil {
return nil, -1, err
}
@@ -2086,7 +2105,7 @@ func (cli *DockerCli) call(method, path string, data interface{}, passAuthInfo b
return nil, -1, err
}
if len(body) == 0 {
return nil, resp.StatusCode, fmt.Errorf("Error: request returned %s for api route and version %s, check if the server supports the requested api version", http.StatusText(resp.StatusCode), req.URL)
return nil, resp.StatusCode, fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(resp.StatusCode), req.URL)
}
return nil, resp.StatusCode, fmt.Errorf("Error: %s", bytes.TrimSpace(body))
}
@@ -2109,7 +2128,7 @@ func (cli *DockerCli) stream(method, path string, in io.Reader, out io.Writer, h
re := regexp.MustCompile("/+")
path = re.ReplaceAllString(path, "/")
req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", APIVERSION, path), in)
req, err := http.NewRequest(method, fmt.Sprintf("/v%s%s", APIVERSION, path), in)
if err != nil {
return err
}
@@ -2128,7 +2147,7 @@ func (cli *DockerCli) stream(method, path string, in io.Reader, out io.Writer, h
dial, err := net.Dial(cli.proto, cli.addr)
if err != nil {
if strings.Contains(err.Error(), "connection refused") {
return fmt.Errorf("Can't connect to docker daemon. Is 'docker -d' running on this host?")
return fmt.Errorf("Cannot connect to the Docker daemon. Is 'docker -d' running on this host?")
}
return err
}
@@ -2137,7 +2156,7 @@ func (cli *DockerCli) stream(method, path string, in io.Reader, out io.Writer, h
defer clientconn.Close()
if err != nil {
if strings.Contains(err.Error(), "connection refused") {
return fmt.Errorf("Can't connect to docker daemon. Is 'docker -d' running on this host?")
return fmt.Errorf("Cannot connect to the Docker daemon. Is 'docker -d' running on this host?")
}
return err
}
@@ -2173,7 +2192,7 @@ func (cli *DockerCli) hijack(method, path string, setRawTerminal bool, in io.Rea
re := regexp.MustCompile("/+")
path = re.ReplaceAllString(path, "/")
req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", APIVERSION, path), nil)
req, err := http.NewRequest(method, fmt.Sprintf("/v%s%s", APIVERSION, path), nil)
if err != nil {
return err
}
@@ -2184,7 +2203,7 @@ func (cli *DockerCli) hijack(method, path string, setRawTerminal bool, in io.Rea
dial, err := net.Dial(cli.proto, cli.addr)
if err != nil {
if strings.Contains(err.Error(), "connection refused") {
return fmt.Errorf("Can't connect to docker daemon. Is 'docker -d' running on this host?")
return fmt.Errorf("Cannot connect to the Docker daemon. Is 'docker -d' running on this host?")
}
return err
}

api/common.go

@@ -0,0 +1,44 @@
package api
import (
"fmt"
"github.com/dotcloud/docker/engine"
"github.com/dotcloud/docker/utils"
"mime"
"strings"
)
const (
APIVERSION = "1.10"
DEFAULTHTTPHOST = "127.0.0.1"
DEFAULTUNIXSOCKET = "/var/run/docker.sock"
)
func ValidateHost(val string) (string, error) {
host, err := utils.ParseHost(DEFAULTHTTPHOST, DEFAULTUNIXSOCKET, val)
if err != nil {
return val, err
}
return host, nil
}
//TODO remove, used on < 1.5 in getContainersJSON
func displayablePorts(ports *engine.Table) string {
result := []string{}
for _, port := range ports.Data {
if port.Get("IP") == "" {
result = append(result, fmt.Sprintf("%d/%s", port.GetInt("PublicPort"), port.Get("Type")))
} else {
result = append(result, fmt.Sprintf("%s:%d->%d/%s", port.Get("IP"), port.GetInt("PublicPort"), port.GetInt("PrivatePort"), port.Get("Type")))
}
}
return strings.Join(result, ", ")
}
func MatchesContentType(contentType, expectedType string) bool {
mimetype, _, err := mime.ParseMediaType(contentType)
if err != nil {
utils.Errorf("Error parsing media type: %s error: %s", contentType, err.Error())
}
return err == nil && mimetype == expectedType
}
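
One reason for the new `pkg/version` type used in the server handlers below: APIVERSION is now the string "1.10", and dotted versions cannot be compared as floats (as float64 values, 1.10 == 1.1 < 1.9, which is exactly where the old `version float64` handlers would break at this release). A minimal sketch of segment-wise comparison in the spirit of `LessThan`/`GreaterThanOrEqualTo` (illustrative, not the pkg/version source):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compareVersions compares dotted version strings numerically per segment,
// so "1.10" sorts after "1.9"; missing segments count as zero.
func compareVersions(a, b string) int {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) || i < len(bs); i++ {
		var ai, bi int
		if i < len(as) {
			ai, _ = strconv.Atoi(as[i])
		}
		if i < len(bs) {
			bi, _ = strconv.Atoi(bs[i])
		}
		switch {
		case ai < bi:
			return -1
		case ai > bi:
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersions("1.10", "1.9") > 0) // true: 1.10 is the newer API
	fmt.Println(compareVersions("1.7", "1.7"))      // 0: equal
}
```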

@@ -12,47 +12,28 @@ import (
"github.com/dotcloud/docker/engine"
"github.com/dotcloud/docker/pkg/listenbuffer"
"github.com/dotcloud/docker/pkg/systemd"
"github.com/dotcloud/docker/pkg/user"
"github.com/dotcloud/docker/pkg/version"
"github.com/dotcloud/docker/utils"
"github.com/gorilla/mux"
"io"
"io/ioutil"
"log"
"mime"
"net"
"net/http"
"net/http/pprof"
"os"
"regexp"
"strconv"
"strings"
"syscall"
"time"
)
// FIXME: move code common to client and server to common.go
const (
APIVERSION = 1.9
DEFAULTHTTPHOST = "127.0.0.1"
DEFAULTUNIXSOCKET = "/var/run/docker.sock"
)
var (
activationLock chan struct{}
)
func ValidateHost(val string) (string, error) {
host, err := utils.ParseHost(DEFAULTHTTPHOST, DEFAULTUNIXSOCKET, val)
if err != nil {
return val, err
}
return host, nil
}
type HttpApiFunc func(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error
func init() {
engine.Register("serveapi", ServeApi)
}
type HttpApiFunc func(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error
func hijackServer(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
conn, _, err := w.(http.Hijacker).Hijack()
@@ -133,28 +114,7 @@ func getBoolParam(value string) (bool, error) {
return ret, nil
}
//TODO remove, used on < 1.5 in getContainersJSON
func displayablePorts(ports *engine.Table) string {
result := []string{}
for _, port := range ports.Data {
if port.Get("IP") == "" {
result = append(result, fmt.Sprintf("%d/%s", port.GetInt("PublicPort"), port.Get("Type")))
} else {
result = append(result, fmt.Sprintf("%s:%d->%d/%s", port.Get("IP"), port.GetInt("PublicPort"), port.GetInt("PrivatePort"), port.Get("Type")))
}
}
return strings.Join(result, ", ")
}
func MatchesContentType(contentType, expectedType string) bool {
mimetype, _, err := mime.ParseMediaType(contentType)
if err != nil {
utils.Errorf("Error parsing media type: %s error: %s", contentType, err.Error())
}
return err == nil && mimetype == expectedType
}
func postAuth(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postAuth(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var (
authConfig, err = ioutil.ReadAll(r.Body)
job = eng.Job("auth")
@@ -177,13 +137,13 @@ func postAuth(eng *engine.Engine, version float64, w http.ResponseWriter, r *htt
return nil
}
func getVersion(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getVersion(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Set("Content-Type", "application/json")
eng.ServeHTTP(w, r)
return nil
}
func postContainersKill(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersKill(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@@ -201,7 +161,7 @@ func postContainersKill(eng *engine.Engine, version float64, w http.ResponseWrit
return nil
}
func getContainersExport(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getContainersExport(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@@ -213,7 +173,7 @@ func getContainersExport(eng *engine.Engine, version float64, w http.ResponseWri
return nil
}
func getImagesJSON(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getImagesJSON(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -227,7 +187,7 @@ func getImagesJSON(eng *engine.Engine, version float64, w http.ResponseWriter, r
job.Setenv("filter", r.Form.Get("filter"))
job.Setenv("all", r.Form.Get("all"))
if version >= 1.7 {
if version.GreaterThanOrEqualTo("1.7") {
streamJSON(job, w, false)
} else if outs, err = job.Stdout.AddListTable(); err != nil {
return err
@@ -237,7 +197,7 @@ func getImagesJSON(eng *engine.Engine, version float64, w http.ResponseWriter, r
return err
}
if version < 1.7 && outs != nil { // Convert to legacy format
if version.LessThan("1.7") && outs != nil { // Convert to legacy format
outsLegacy := engine.NewTable("Created", 0)
for _, out := range outs.Data {
for _, repoTag := range out.GetList("RepoTags") {
@@ -260,8 +220,8 @@ func getImagesJSON(eng *engine.Engine, version float64, w http.ResponseWriter, r
return nil
}
func getImagesViz(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version > 1.6 {
func getImagesViz(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version.GreaterThan("1.6") {
w.WriteHeader(http.StatusNotFound)
return fmt.Errorf("This is now implemented in the client.")
}
@@ -269,13 +229,13 @@ func getImagesViz(eng *engine.Engine, version float64, w http.ResponseWriter, r
return nil
}
func getInfo(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getInfo(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Set("Content-Type", "application/json")
eng.ServeHTTP(w, r)
return nil
}
func getEvents(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getEvents(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -286,7 +246,7 @@ func getEvents(eng *engine.Engine, version float64, w http.ResponseWriter, r *ht
return job.Run()
}
func getImagesHistory(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getImagesHistory(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@@ -300,7 +260,7 @@ func getImagesHistory(eng *engine.Engine, version float64, w http.ResponseWriter
return nil
}
func getContainersChanges(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getContainersChanges(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@@ -310,8 +270,8 @@ func getContainersChanges(eng *engine.Engine, version float64, w http.ResponseWr
return job.Run()
}
func getContainersTop(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version < 1.4 {
func getContainersTop(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version.LessThan("1.4") {
return fmt.Errorf("top was improved a lot since 1.3, Please upgrade your docker client.")
}
if vars == nil {
@@ -326,7 +286,7 @@ func getContainersTop(eng *engine.Engine, version float64, w http.ResponseWriter
return job.Run()
}
func getContainersJSON(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getContainersJSON(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -342,7 +302,7 @@ func getContainersJSON(eng *engine.Engine, version float64, w http.ResponseWrite
job.Setenv("before", r.Form.Get("before"))
job.Setenv("limit", r.Form.Get("limit"))
if version >= 1.5 {
if version.GreaterThanOrEqualTo("1.5") {
streamJSON(job, w, false)
} else if outs, err = job.Stdout.AddTable(); err != nil {
return err
@@ -350,7 +310,7 @@ func getContainersJSON(eng *engine.Engine, version float64, w http.ResponseWrite
if err = job.Run(); err != nil {
return err
}
if version < 1.5 { // Convert to legacy format
if version.LessThan("1.5") { // Convert to legacy format
for _, out := range outs.Data {
ports := engine.NewTable("", 0)
ports.ReadListFrom([]byte(out.Get("Ports")))
@@ -364,7 +324,7 @@ func getContainersJSON(eng *engine.Engine, version float64, w http.ResponseWrite
return nil
}
func postImagesTag(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postImagesTag(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -381,7 +341,7 @@ func postImagesTag(eng *engine.Engine, version float64, w http.ResponseWriter, r
return nil
}
func postCommit(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postCommit(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -410,7 +370,7 @@ func postCommit(eng *engine.Engine, version float64, w http.ResponseWriter, r *h
}
// Creates an image from Pull or from Import
func postImagesCreate(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postImagesCreate(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -430,9 +390,6 @@ func postImagesCreate(eng *engine.Engine, version float64, w http.ResponseWriter
authConfig = &auth.AuthConfig{}
}
}
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
if image != "" { //pull
metaHeaders := map[string][]string{}
for k, v := range r.Header {
@@ -441,7 +398,7 @@ func postImagesCreate(eng *engine.Engine, version float64, w http.ResponseWriter
}
}
job = eng.Job("pull", r.Form.Get("fromImage"), tag)
job.SetenvBool("parallel", version > 1.3)
job.SetenvBool("parallel", version.GreaterThan("1.3"))
job.SetenvJson("metaHeaders", metaHeaders)
job.SetenvJson("authConfig", authConfig)
} else { //import
@@ -449,7 +406,7 @@ func postImagesCreate(eng *engine.Engine, version float64, w http.ResponseWriter
job.Stdin.Add(r.Body)
}
if version > 1.0 {
if version.GreaterThan("1.0") {
job.SetenvBool("json", true)
streamJSON(job, w, true)
} else {
@@ -459,14 +416,14 @@ func postImagesCreate(eng *engine.Engine, version float64, w http.ResponseWriter
if !job.Stdout.Used() {
return err
}
sf := utils.NewStreamFormatter(version > 1.0)
sf := utils.NewStreamFormatter(version.GreaterThan("1.0"))
w.Write(sf.FormatError(err))
}
return nil
}
func getImagesSearch(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getImagesSearch(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -498,19 +455,15 @@ func getImagesSearch(eng *engine.Engine, version float64, w http.ResponseWriter,
return job.Run()
}
func postImagesInsert(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postImagesInsert(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
job := eng.Job("insert", vars["name"], r.Form.Get("url"), r.Form.Get("path"))
if version > 1.0 {
if version.GreaterThan("1.0") {
job.SetenvBool("json", true)
streamJSON(job, w, false)
} else {
@@ -520,14 +473,14 @@ func postImagesInsert(eng *engine.Engine, version float64, w http.ResponseWriter
if !job.Stdout.Used() {
return err
}
sf := utils.NewStreamFormatter(version > 1.0)
sf := utils.NewStreamFormatter(version.GreaterThan("1.0"))
w.Write(sf.FormatError(err))
}
return nil
}
func postImagesPush(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postImagesPush(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@@ -558,13 +511,10 @@ func postImagesPush(eng *engine.Engine, version float64, w http.ResponseWriter,
}
}
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
job := eng.Job("push", vars["name"])
job.SetenvJson("metaHeaders", metaHeaders)
job.SetenvJson("authConfig", authConfig)
if version > 1.0 {
if version.GreaterThan("1.0") {
job.SetenvBool("json", true)
streamJSON(job, w, true)
} else {
@@ -575,17 +525,17 @@ func postImagesPush(eng *engine.Engine, version float64, w http.ResponseWriter,
if !job.Stdout.Used() {
return err
}
sf := utils.NewStreamFormatter(version > 1.0)
sf := utils.NewStreamFormatter(version.GreaterThan("1.0"))
w.Write(sf.FormatError(err))
}
return nil
}
func getImagesGet(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getImagesGet(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
if version > 1.0 {
if version.GreaterThan("1.0") {
w.Header().Set("Content-Type", "application/x-tar")
}
job := eng.Job("image_export", vars["name"])
@@ -593,13 +543,13 @@ func getImagesGet(eng *engine.Engine, version float64, w http.ResponseWriter, r
return job.Run()
}
func postImagesLoad(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postImagesLoad(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
job := eng.Job("load")
job.Stdin.Add(r.Body)
return job.Run()
}
func postContainersCreate(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersCreate(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return nil
}
@@ -630,7 +580,7 @@ func postContainersCreate(eng *engine.Engine, version float64, w http.ResponseWr
return writeJSON(w, http.StatusCreated, out)
}
func postContainersRestart(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersRestart(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -646,7 +596,7 @@ func postContainersRestart(eng *engine.Engine, version float64, w http.ResponseW
return nil
}
func deleteContainers(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func deleteContainers(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -656,6 +606,7 @@ func deleteContainers(eng *engine.Engine, version float64, w http.ResponseWriter
job := eng.Job("container_delete", vars["name"])
job.Setenv("removeVolume", r.Form.Get("v"))
job.Setenv("removeLink", r.Form.Get("link"))
job.Setenv("forceRemove", r.Form.Get("force"))
if err := job.Run(); err != nil {
return err
}
@@ -663,7 +614,7 @@ func deleteContainers(eng *engine.Engine, version float64, w http.ResponseWriter
return nil
}
func deleteImages(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func deleteImages(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -672,12 +623,12 @@ func deleteImages(eng *engine.Engine, version float64, w http.ResponseWriter, r
}
var job = eng.Job("image_delete", vars["name"])
streamJSON(job, w, false)
job.SetenvBool("autoPrune", version > 1.1)
job.Setenv("force", r.Form.Get("force"))
return job.Run()
}
func postContainersStart(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersStart(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@@ -698,7 +649,7 @@ func postContainersStart(eng *engine.Engine, version float64, w http.ResponseWri
return nil
}
func postContainersStop(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersStop(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -714,7 +665,7 @@ func postContainersStop(eng *engine.Engine, version float64, w http.ResponseWrit
return nil
}
func postContainersWait(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersWait(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@@ -736,7 +687,7 @@ func postContainersWait(eng *engine.Engine, version float64, w http.ResponseWrit
return writeJSON(w, http.StatusOK, env)
}
func postContainersResize(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersResize(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -749,7 +700,7 @@ func postContainersResize(eng *engine.Engine, version float64, w http.ResponseWr
return nil
}
func postContainersAttach(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersAttach(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@@ -791,7 +742,7 @@ func postContainersAttach(eng *engine.Engine, version float64, w http.ResponseWr
fmt.Fprintf(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
if c.GetSubEnv("Config") != nil && !c.GetSubEnv("Config").GetBool("Tty") && version >= 1.6 {
if c.GetSubEnv("Config") != nil && !c.GetSubEnv("Config").GetBool("Tty") && version.GreaterThanOrEqualTo("1.6") {
errStream = utils.NewStdWriter(outStream, utils.Stderr)
outStream = utils.NewStdWriter(outStream, utils.Stdout)
} else {
@@ -814,7 +765,7 @@ func postContainersAttach(eng *engine.Engine, version float64, w http.ResponseWr
return nil
}
func wsContainersAttach(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func wsContainersAttach(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
@ -846,7 +797,7 @@ func wsContainersAttach(eng *engine.Engine, version float64, w http.ResponseWrit
return nil
}
func getContainersByName(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getContainersByName(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@ -856,7 +807,7 @@ func getContainersByName(eng *engine.Engine, version float64, w http.ResponseWri
return job.Run()
}
func getImagesByName(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func getImagesByName(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@ -866,8 +817,8 @@ func getImagesByName(eng *engine.Engine, version float64, w http.ResponseWriter,
return job.Run()
}
func postBuild(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version < 1.3 {
func postBuild(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version.LessThan("1.3") {
return fmt.Errorf("Multipart upload for build is no longer supported. Please upgrade your docker client.")
}
var (
@ -882,7 +833,7 @@ func postBuild(eng *engine.Engine, version float64, w http.ResponseWriter, r *ht
// Both headers will be parsed and sent along to the daemon, but if a non-empty
// ConfigFile is present, any value provided as an AuthConfig directly will
// be overridden. See BuildFile::CmdFrom for details.
if version < 1.9 && authEncoded != "" {
if version.LessThan("1.9") && authEncoded != "" {
authJson := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJson).Decode(authConfig); err != nil {
// for a pull it is not an error if no auth was given
@ -900,7 +851,7 @@ func postBuild(eng *engine.Engine, version float64, w http.ResponseWriter, r *ht
}
}
if version >= 1.8 {
if version.GreaterThanOrEqualTo("1.8") {
job.SetenvBool("json", true)
streamJSON(job, w, true)
} else {
@ -912,18 +863,20 @@ func postBuild(eng *engine.Engine, version float64, w http.ResponseWriter, r *ht
job.Setenv("q", r.FormValue("q"))
job.Setenv("nocache", r.FormValue("nocache"))
job.Setenv("rm", r.FormValue("rm"))
job.SetenvJson("authConfig", authConfig)
job.SetenvJson("configFile", configFile)
if err := job.Run(); err != nil {
if !job.Stdout.Used() {
return err
}
sf := utils.NewStreamFormatter(version >= 1.8)
sf := utils.NewStreamFormatter(version.GreaterThanOrEqualTo("1.8"))
w.Write(sf.FormatError(err))
}
return nil
}
func postContainersCopy(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func postContainersCopy(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
@ -946,7 +899,7 @@ func postContainersCopy(eng *engine.Engine, version float64, w http.ResponseWrit
}
job := eng.Job("container_copy", vars["name"], copyData.Get("Resource"))
streamJSON(job, w, false)
job.Stdout.Add(w)
if err := job.Run(); err != nil {
utils.Errorf("%s", err.Error())
if strings.Contains(err.Error(), "No such container") {
@ -956,7 +909,7 @@ func postContainersCopy(eng *engine.Engine, version float64, w http.ResponseWrit
return nil
}
func optionsHandler(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func optionsHandler(eng *engine.Engine, version version.Version, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.WriteHeader(http.StatusOK)
return nil
}
@ -966,7 +919,7 @@ func writeCorsHeaders(w http.ResponseWriter, r *http.Request) {
w.Header().Add("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT, OPTIONS")
}
func makeHttpHandler(eng *engine.Engine, logging bool, localMethod string, localRoute string, handlerFunc HttpApiFunc, enableCors bool, dockerVersion string) http.HandlerFunc {
func makeHttpHandler(eng *engine.Engine, logging bool, localMethod string, localRoute string, handlerFunc HttpApiFunc, enableCors bool, dockerVersion version.Version) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
// log the request
utils.Debugf("Calling %s %s", localMethod, localRoute)
@ -977,20 +930,20 @@ func makeHttpHandler(eng *engine.Engine, logging bool, localMethod string, local
if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
if len(userAgent) == 2 && userAgent[1] != dockerVersion {
if len(userAgent) == 2 && !dockerVersion.Equal(userAgent[1]) {
utils.Debugf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], dockerVersion)
}
}
version, err := strconv.ParseFloat(mux.Vars(r)["version"], 64)
if err != nil {
version := version.Version(mux.Vars(r)["version"])
if version == "" {
version = APIVERSION
}
if enableCors {
writeCorsHeaders(w, r)
}
if version == 0 || version > APIVERSION {
http.Error(w, fmt.Errorf("client and server don't have same version (client : %g, server: %g)", version, APIVERSION).Error(), http.StatusNotFound)
if version.GreaterThan(APIVERSION) {
http.Error(w, fmt.Errorf("client and server don't have same version (client : %s, server: %s)", version, APIVERSION).Error(), http.StatusNotFound)
return
}
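The float64-to-version.Version migration running through these handlers is what makes comparisons like GreaterThan("1.8") behave for multi-part versions, where float parsing breaks down (as a float, "1.10" sorts below "1.9"). A minimal sketch of such a type, assuming dot-separated numeric comparison; the method signatures are an assumption, not the pkg/version package's verbatim code:

package version

import (
    "strconv"
    "strings"
)

type Version string

// compare returns -1, 0 or 1 by comparing dot-separated numeric parts;
// a missing or non-numeric part compares as zero in this sketch.
func (v Version) compare(other Version) int {
    a, b := strings.Split(string(v), "."), strings.Split(string(other), ".")
    for i := 0; i < len(a) || i < len(b); i++ {
        var x, y int
        if i < len(a) {
            x, _ = strconv.Atoi(a[i])
        }
        if i < len(b) {
            y, _ = strconv.Atoi(b[i])
        }
        if x != y {
            if x < y {
                return -1
            }
            return 1
        }
    }
    return 0
}

func (v Version) LessThan(other Version) bool             { return v.compare(other) < 0 }
func (v Version) GreaterThan(other Version) bool          { return v.compare(other) > 0 }
func (v Version) GreaterThanOrEqualTo(other Version) bool { return v.compare(other) >= 0 }
func (v Version) Equal(other Version) bool                { return v.compare(other) == 0 }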
@ -1088,7 +1041,7 @@ func createRouter(eng *engine.Engine, logging, enableCors bool, dockerVersion st
localMethod := method
// build the handler function
f := makeHttpHandler(eng, logging, localMethod, localRoute, localFct, enableCors, dockerVersion)
f := makeHttpHandler(eng, logging, localMethod, localRoute, localFct, enableCors, version.Version(dockerVersion))
// add the new route
if localRoute == "" {
@ -1106,13 +1059,13 @@ func createRouter(eng *engine.Engine, logging, enableCors bool, dockerVersion st
// ServeRequest processes a single http request to the docker remote api.
// FIXME: refactor this to be part of Server and not require re-creating a new
// router each time. This requires first moving ListenAndServe into Server.
func ServeRequest(eng *engine.Engine, apiversion float64, w http.ResponseWriter, req *http.Request) error {
func ServeRequest(eng *engine.Engine, apiversion version.Version, w http.ResponseWriter, req *http.Request) error {
router, err := createRouter(eng, false, true, "")
if err != nil {
return err
}
// Insert APIVERSION into the request as a convenience
req.URL.Path = fmt.Sprintf("/v%g%s", apiversion, req.URL.Path)
req.URL.Path = fmt.Sprintf("/v%s%s", apiversion, req.URL.Path)
router.ServeHTTP(w, req)
return nil
}
@ -1127,6 +1080,11 @@ func ServeFd(addr string, handle http.Handler) error {
chErrors := make(chan error, len(ls))
// We don't want to start serving on these sockets until the
// "initserver" job has completed. Otherwise required handlers
// won't be ready.
<-activationLock
// Since ListenFD will return one or more sockets we have
// to create a go func to spawn off multiple serves
for i := range ls {
@ -1147,10 +1105,34 @@ func ServeFd(addr string, handle http.Handler) error {
return nil
}
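The activationLock wait above is plain channel gating: every listener goroutine blocks on a receive from a channel that is never sent on, and the daemon releases them all at once by closing it. A sketch of the pattern; the releasing function's name here is hypothetical:

package api

var activationLock = make(chan struct{})

// acceptConnections is a hypothetical stand-in for wherever the daemon
// signals that the "initserver" job has finished: closing the channel
// unblocks every goroutine parked on <-activationLock, exactly once.
func acceptConnections() {
    close(activationLock)
}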
func lookupGidByName(nameOrGid string) (int, error) {
groups, err := user.ParseGroupFilter(func(g *user.Group) bool {
return g.Name == nameOrGid || strconv.Itoa(g.Gid) == nameOrGid
})
if err != nil {
return -1, err
}
if groups != nil && len(groups) > 0 {
return groups[0].Gid, nil
}
return -1, fmt.Errorf("Group %s not found", nameOrGid)
}
func changeGroup(addr string, nameOrGid string) error {
gid, err := lookupGidByName(nameOrGid)
if err != nil {
return err
}
utils.Debugf("%s group found. gid: %d", nameOrGid, gid)
return os.Chown(addr, 0, gid)
}
// ListenAndServe sets up the required http.Server and gets it listening for
// each addr passed in and does protocol specific checking.
func ListenAndServe(proto, addr string, eng *engine.Engine, logging, enableCors bool, dockerVersion string) error {
func ListenAndServe(proto, addr string, eng *engine.Engine, logging, enableCors bool, dockerVersion string, socketGroup string) error {
r, err := createRouter(eng, logging, enableCors, dockerVersion)
if err != nil {
return err
}
@ -1181,19 +1163,14 @@ func ListenAndServe(proto, addr string, eng *engine.Engine, logging, enableCors
return err
}
groups, err := ioutil.ReadFile("/etc/group")
if err != nil {
return err
}
re := regexp.MustCompile("(^|\n)docker:.*?:([0-9]+)")
if gidMatch := re.FindStringSubmatch(string(groups)); gidMatch != nil {
gid, err := strconv.Atoi(gidMatch[2])
if err != nil {
return err
}
utils.Debugf("docker group found. gid: %d", gid)
if err := os.Chown(addr, 0, gid); err != nil {
return err
if socketGroup != "" {
if err := changeGroup(addr, socketGroup); err != nil {
if socketGroup == "docker" {
// if the user hasn't explicitly specified the group ownership, don't fail on errors.
utils.Debugf("Warning: could not chgrp %s to docker: %s", addr, err.Error())
} else {
return err
}
}
}
default:
@ -1221,7 +1198,7 @@ func ServeApi(job *engine.Job) engine.Status {
protoAddrParts := strings.SplitN(protoAddr, "://", 2)
go func() {
log.Printf("Listening for HTTP on %s (%s)\n", protoAddrParts[0], protoAddrParts[1])
chErrors <- ListenAndServe(protoAddrParts[0], protoAddrParts[1], job.Eng, job.GetenvBool("Logging"), job.GetenvBool("EnableCors"), job.Getenv("Version"))
chErrors <- ListenAndServe(protoAddrParts[0], protoAddrParts[1], job.Eng, job.GetenvBool("Logging"), job.GetenvBool("EnableCors"), job.Getenv("Version"), job.Getenv("SocketGroup"))
}()
}

View file

@ -2,12 +2,13 @@ package archive
import (
"bytes"
"code.google.com/p/go/src/pkg/archive/tar"
"compress/bzip2"
"compress/gzip"
"errors"
"fmt"
"github.com/dotcloud/docker/pkg/system"
"github.com/dotcloud/docker/utils"
"github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
"io"
"io/ioutil"
"os"
@ -165,6 +166,13 @@ func addTarFile(path, name string, tw *tar.Writer) error {
hdr.Devmajor = int64(major(uint64(stat.Rdev)))
hdr.Devminor = int64(minor(uint64(stat.Rdev)))
}
}
capability, _ := system.Lgetxattr(path, "security.capability")
if capability != nil {
hdr.Xattrs = make(map[string]string)
hdr.Xattrs["security.capability"] = string(capability)
}
if err := tw.WriteHeader(hdr); err != nil {
@ -251,6 +259,12 @@ func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader) e
return err
}
for key, value := range hdr.Xattrs {
if err := system.Lsetxattr(path, key, []byte(value), 0); err != nil {
return err
}
}
// There is no LChmod, so ignore mode for symlink. Also, this
// must happen after chown, as that can modify the file mode
if hdr.Typeflag != tar.TypeSymlink {
@ -262,11 +276,11 @@ func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader) e
ts := []syscall.Timespec{timeToTimespec(hdr.AccessTime), timeToTimespec(hdr.ModTime)}
// syscall.UtimesNano doesn't support a NOFOLLOW flag atm, and
if hdr.Typeflag != tar.TypeSymlink {
if err := UtimesNano(path, ts); err != nil {
if err := system.UtimesNano(path, ts); err != nil {
return err
}
} else {
if err := LUtimesNano(path, ts); err != nil {
if err := system.LUtimesNano(path, ts); err != nil {
return err
}
}
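The two xattr hunks above preserve file capabilities across a tar round-trip: addTarFile records security.capability into the header, and createTarFile restores it on extraction. The same idea in isolation, assuming pkg/system's Lgetxattr/Lsetxattr wrap lgetxattr(2)/lsetxattr(2) with the signatures used above; copyCapability is an illustrative helper, not part of the package:

package archive

import "github.com/dotcloud/docker/pkg/system"

// copyCapability carries the security.capability xattr (set by tools like
// setcap) from src to dst; Lgetxattr returns nil when the attribute is
// absent, so files without capabilities are a no-op.
func copyCapability(src, dst string) error {
    capability, _ := system.Lgetxattr(src, "security.capability")
    if capability == nil {
        return nil
    }
    return system.Lsetxattr(dst, "security.capability", capability, 0)
}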

View file

@ -2,8 +2,8 @@ package archive
import (
"bytes"
"code.google.com/p/go/src/pkg/archive/tar"
"fmt"
"github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
"io"
"io/ioutil"
"os"

View file

@ -1,9 +1,11 @@
package archive
import (
"code.google.com/p/go/src/pkg/archive/tar"
"bytes"
"fmt"
"github.com/dotcloud/docker/pkg/system"
"github.com/dotcloud/docker/utils"
"github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
"io"
"os"
"path/filepath"
@ -126,10 +128,11 @@ func Changes(layers []string, rw string) ([]Change, error) {
}
type FileInfo struct {
parent *FileInfo
name string
stat syscall.Stat_t
children map[string]*FileInfo
parent *FileInfo
name string
stat syscall.Stat_t
children map[string]*FileInfo
capability []byte
}
func (root *FileInfo) LookUp(path string) *FileInfo {
@ -200,7 +203,8 @@ func (info *FileInfo) addChanges(oldInfo *FileInfo, changes *[]Change) {
oldStat.Rdev != newStat.Rdev ||
// Don't look at size for dirs, its not a good measure of change
(oldStat.Size != newStat.Size && oldStat.Mode&syscall.S_IFDIR != syscall.S_IFDIR) ||
!sameFsTimeSpec(getLastModification(oldStat), getLastModification(newStat)) {
!sameFsTimeSpec(system.GetLastModification(oldStat), system.GetLastModification(newStat)) ||
bytes.Compare(oldChild.capability, newChild.capability) != 0 {
change := Change{
Path: newChild.path(),
Kind: ChangeModify,
@ -275,6 +279,8 @@ func collectFileInfo(sourceDir string) (*FileInfo, error) {
return err
}
info.capability, _ = system.Lgetxattr(path, "security.capability")
parent.children[info.name] = info
return nil

View file

@ -1,8 +1,8 @@
package archive
import (
"code.google.com/p/go/src/pkg/archive/tar"
"fmt"
"github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
"io"
"io/ioutil"
"os"

View file

@ -1,21 +0,0 @@
// +build !linux
package archive
import "syscall"
func getLastAccess(stat *syscall.Stat_t) syscall.Timespec {
return stat.Atimespec
}
func getLastModification(stat *syscall.Stat_t) syscall.Timespec {
return stat.Mtimespec
}
func LUtimesNano(path string, ts []syscall.Timespec) error {
return ErrNotImplemented
}
func UtimesNano(path string, ts []syscall.Timespec) error {
return ErrNotImplemented
}

View file

@ -2,7 +2,7 @@ package archive
import (
"bytes"
"code.google.com/p/go/src/pkg/archive/tar"
"github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
"io/ioutil"
)

View file

@ -252,50 +252,39 @@ func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, e
}
// this method matches an auth configuration to a server address or a URL
func (config *ConfigFile) ResolveAuthConfig(registry string) AuthConfig {
if registry == IndexServerAddress() || len(registry) == 0 {
func (config *ConfigFile) ResolveAuthConfig(hostname string) AuthConfig {
if hostname == IndexServerAddress() || len(hostname) == 0 {
// default to the index server
return config.Configs[IndexServerAddress()]
}
// if it's not the index server there are three cases:
//
// 1. a full config url -> it should be used as is
// 2. a full url, but with the wrong protocol
// 3. a hostname, with an optional port
//
// as there is only one auth entry which is fully qualified we need to start
// parsing and matching
swapProtocol := func(url string) string {
if strings.HasPrefix(url, "http:") {
return strings.Replace(url, "http:", "https:", 1)
}
if strings.HasPrefix(url, "https:") {
return strings.Replace(url, "https:", "http:", 1)
}
return url
// First try the happy case
if c, found := config.Configs[hostname]; found {
return c
}
resolveIgnoringProtocol := func(url string) AuthConfig {
if c, found := config.Configs[url]; found {
return c
convertToHostname := func(url string) string {
stripped := url
if strings.HasPrefix(url, "http://") {
stripped = strings.Replace(url, "http://", "", 1)
} else if strings.HasPrefix(url, "https://") {
stripped = strings.Replace(url, "https://", "", 1)
}
registrySwappedProtocol := swapProtocol(url)
// now try to match with the different protocol
if c, found := config.Configs[registrySwappedProtocol]; found {
return c
}
return AuthConfig{}
nameParts := strings.SplitN(stripped, "/", 2)
return nameParts[0]
}
// match both protocols as it could also be a server name like httpfoo
if strings.HasPrefix(registry, "http:") || strings.HasPrefix(registry, "https:") {
return resolveIgnoringProtocol(registry)
// Maybe they have a legacy config file; we will iterate the keys,
// converting them to the new format and testing each one
normalizedHostname := convertToHostname(hostname)
for registry, config := range config.Configs {
if registryHostname := convertToHostname(registry); registryHostname == normalizedHostname {
return config
}
}
url := "https://" + registry
if !strings.Contains(registry, "/") {
url = url + "/v1/"
}
return resolveIgnoringProtocol(url)
// When all else fails, return an empty auth config
return AuthConfig{}
}
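With the rewrite above, resolution is: exact key match first, then a hostname-normalized scan that tolerates legacy protocol-qualified keys. A hedged usage sketch, relying only on the exported shapes exercised by the test below; credentials are illustrative:

package main

import (
    "fmt"

    "github.com/dotcloud/docker/auth"
)

func main() {
    cf := &auth.ConfigFile{Configs: map[string]auth.AuthConfig{
        "registry.com": {Username: "u", Password: "p"},
    }}
    // Every spelling below normalizes to the hostname "registry.com",
    // so all of them should resolve to the same stored credentials.
    for _, addr := range []string{
        "registry.com",
        "https://registry.com/v1/",
        "http://registry.com/v1/",
        "registry.com/v1/",
    } {
        fmt.Println(addr, "->", cf.ResolveAuthConfig(addr).Username)
    }
}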

View file

@ -108,6 +108,7 @@ func TestResolveAuthConfigFullURL(t *testing.T) {
}
configFile.Configs["https://registry.example.com/v1/"] = registryAuth
configFile.Configs["http://localhost:8000/v1/"] = localAuth
configFile.Configs["registry.com"] = registryAuth
validRegistries := map[string][]string{
"https://registry.example.com/v1/": {
@ -122,6 +123,12 @@ func TestResolveAuthConfigFullURL(t *testing.T) {
"localhost:8000",
"localhost:8000/v1/",
},
"registry.com": {
"https://registry.com/v1/",
"http://registry.com/v1/",
"registry.com",
"registry.com/v1/",
},
}
for configKey, registries := range validRegistries {

View file

@ -110,13 +110,21 @@ func (b *buildFile) CmdFrom(name string) error {
b.config = image.Config
}
if b.config.Env == nil || len(b.config.Env) == 0 {
b.config.Env = append(b.config.Env, "HOME=/", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin")
b.config.Env = append(b.config.Env, "HOME=/", "PATH="+defaultPathEnv)
}
// Process ONBUILD triggers if they exist
if nTriggers := len(b.config.OnBuild); nTriggers != 0 {
fmt.Fprintf(b.errStream, "# Executing %d build triggers\n", nTriggers)
}
for n, step := range b.config.OnBuild {
splitStep := strings.Split(step, " ")
stepInstruction := strings.ToUpper(strings.Trim(splitStep[0], " "))
switch stepInstruction {
case "ONBUILD":
return fmt.Errorf("Source image contains forbidden chained `ONBUILD ONBUILD` trigger: %s", step)
case "MAINTAINER", "FROM":
return fmt.Errorf("Source image contains forbidden %s trigger: %s", stepInstruction, step)
}
if err := b.BuildStep(fmt.Sprintf("onbuild-%d", n), step); err != nil {
return err
}
@ -128,6 +136,14 @@ func (b *buildFile) CmdFrom(name string) error {
// The ONBUILD command declares a build instruction to be executed in any future build
// using the current image as a base.
func (b *buildFile) CmdOnbuild(trigger string) error {
splitTrigger := strings.Split(trigger, " ")
triggerInstruction := strings.ToUpper(strings.Trim(splitTrigger[0], " "))
switch triggerInstruction {
case "ONBUILD":
return fmt.Errorf("Chaining ONBUILD via `ONBUILD ONBUILD` isn't allowed")
case "MAINTAINER", "FROM":
return fmt.Errorf("%s isn't allowed as an ONBUILD trigger", triggerInstruction)
}
b.config.OnBuild = append(b.config.OnBuild, trigger)
return b.commit("", b.config.Cmd, fmt.Sprintf("ONBUILD %s", trigger))
}
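Both hunks above enforce the same rule from different directions: CmdOnbuild rejects forbidden triggers at declaration time, and CmdFrom rejects them again when replaying triggers inherited from a base image. The shared check, isolated as a standalone sketch (the helper name is illustrative):

package docker

import (
    "fmt"
    "strings"
)

// validateTrigger rejects an ONBUILD trigger whose first token is
// ONBUILD (no chaining), MAINTAINER or FROM, mirroring the checks above.
func validateTrigger(trigger string) error {
    instruction := strings.ToUpper(strings.Trim(strings.Split(trigger, " ")[0], " "))
    switch instruction {
    case "ONBUILD":
        return fmt.Errorf("Chaining ONBUILD via `ONBUILD ONBUILD` isn't allowed")
    case "MAINTAINER", "FROM":
        return fmt.Errorf("%s isn't allowed as an ONBUILD trigger", instruction)
    }
    return nil
}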

40
builtins/builtins.go Normal file
View file

@ -0,0 +1,40 @@
package builtins
import (
"github.com/dotcloud/docker/engine"
"github.com/dotcloud/docker"
"github.com/dotcloud/docker/api"
"github.com/dotcloud/docker/networkdriver/lxc"
)
func Register(eng *engine.Engine) {
daemon(eng)
remote(eng)
}
// remote: a RESTful api for cross-docker communication
func remote(eng *engine.Engine) {
eng.Register("serveapi", api.ServeApi)
}
// daemon: a default execution and storage backend for Docker on Linux,
// with the following underlying components:
//
// * Pluggable storage drivers including aufs, vfs, lvm and btrfs.
// * Pluggable execution drivers including lxc and chroot.
//
// In practice `daemon` still includes most core Docker components, including:
//
// * The reference registry client implementation
// * Image management
// * The build facility
// * Logging
//
// These components should be broken off into plugins of their own.
//
func daemon(eng *engine.Engine) {
eng.Register("initserver", docker.InitServer)
eng.Register("init_networkdriver", lxc.InitDriver)
eng.Register("version", docker.GetVersion)
}
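Callers never link against the daemon's internals directly; they register the builtins once and then dispatch named jobs through the engine. A hedged usage sketch (the root path is illustrative; the engine API shapes are the ones used elsewhere in this diff):

package main

import (
    "log"

    "github.com/dotcloud/docker/builtins"
    "github.com/dotcloud/docker/engine"
)

func main() {
    eng, err := engine.New("/var/lib/docker") // root path illustrative
    if err != nil {
        log.Fatal(err)
    }
    builtins.Register(eng)

    // "version" was registered by daemon(); any component can now run it
    // by name without importing the daemon package.
    if err := eng.Job("version").Run(); err != nil {
        log.Fatal(err)
    }
}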

View file

@ -25,6 +25,7 @@ type DaemonConfig struct {
BridgeIP string
InterContainerCommunication bool
GraphDriver string
ExecDriver string
Mtu int
DisableNetwork bool
}
@ -43,6 +44,7 @@ func DaemonConfigFromJob(job *engine.Job) *DaemonConfig {
DefaultIp: net.ParseIP(job.Getenv("DefaultIp")),
InterContainerCommunication: job.GetenvBool("InterContainerCommunication"),
GraphDriver: job.Getenv("GraphDriver"),
ExecDriver: job.Getenv("ExecDriver"),
}
if dns := job.GetenvList("Dns"); dns != nil {
config.Dns = dns

View file

@ -10,10 +10,8 @@ import (
"github.com/dotcloud/docker/graphdriver"
"github.com/dotcloud/docker/links"
"github.com/dotcloud/docker/nat"
"github.com/dotcloud/docker/pkg/term"
"github.com/dotcloud/docker/runconfig"
"github.com/dotcloud/docker/utils"
"github.com/kr/pty"
"io"
"io/ioutil"
"log"
@ -25,6 +23,8 @@ import (
"time"
)
const defaultPathEnv = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
var (
ErrNotATTY = errors.New("The PTY is not a file")
ErrNoTTY = errors.New("No PTY found")
@ -55,13 +55,13 @@ type Container struct {
HostsPath string
Name string
Driver string
ExecDriver string
command *execdriver.Command
stdout *utils.WriteBroadcaster
stderr *utils.WriteBroadcaster
stdin io.ReadCloser
stdinPipe io.WriteCloser
ptyMaster io.Closer
runtime *Runtime
@ -213,56 +213,6 @@ func (container *Container) generateEnvConfig(env []string) error {
return nil
}
func (container *Container) setupPty() error {
ptyMaster, ptySlave, err := pty.Open()
if err != nil {
return err
}
container.ptyMaster = ptyMaster
container.command.Stdout = ptySlave
container.command.Stderr = ptySlave
container.command.Console = ptySlave.Name()
// Copy the PTYs to our broadcasters
go func() {
defer container.stdout.CloseWriters()
utils.Debugf("startPty: begin of stdout pipe")
io.Copy(container.stdout, ptyMaster)
utils.Debugf("startPty: end of stdout pipe")
}()
// stdin
if container.Config.OpenStdin {
container.command.Stdin = ptySlave
container.command.SysProcAttr.Setctty = true
go func() {
defer container.stdin.Close()
utils.Debugf("startPty: begin of stdin pipe")
io.Copy(ptyMaster, container.stdin)
utils.Debugf("startPty: end of stdin pipe")
}()
}
return nil
}
func (container *Container) setupStd() error {
container.command.Stdout = container.stdout
container.command.Stderr = container.stderr
if container.Config.OpenStdin {
stdin, err := container.command.StdinPipe()
if err != nil {
return err
}
go func() {
defer stdin.Close()
utils.Debugf("start: begin of stdin pipe")
io.Copy(stdin, container.stdin)
utils.Debugf("start: end of stdin pipe")
}()
}
return nil
}
func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, stdout io.Writer, stderr io.Writer) chan error {
var cStdout, cStderr io.ReadCloser
@ -500,7 +450,7 @@ func (container *Container) Start() (err error) {
// Setup environment
env := []string{
"HOME=/",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PATH=" + defaultPathEnv,
"HOSTNAME=" + container.Config.Hostname,
}
@ -558,10 +508,10 @@ func (container *Container) Start() (err error) {
}
}
for _, elem := range container.Config.Env {
env = append(env, elem)
}
// because the env on the container can override certain default values
// we need to replace the 'env' keys where they match and append anything
// else.
env = utils.ReplaceOrAppendEnvValues(env, container.Config.Env)
if err := container.generateEnvConfig(env); err != nil {
return err
}
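The new comment above describes merge-by-key semantics rather than a blind append. One plausible implementation of the helper it refers to (a sketch of the described behavior, not necessarily the utils version):

package utils

import "strings"

// ReplaceOrAppendEnvValues merges override entries of the form KEY=VALUE
// into defaults: entries with a matching key replace the default in place,
// everything else is appended.
func ReplaceOrAppendEnvValues(defaults, overrides []string) []string {
    for _, o := range overrides {
        key := strings.SplitN(o, "=", 2)[0]
        replaced := false
        for i, d := range defaults {
            if strings.SplitN(d, "=", 2)[0] == key {
                defaults[i] = o
                replaced = true
                break
            }
        }
        if !replaced {
            defaults = append(defaults, o)
        }
    }
    return defaults
}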
@ -583,6 +533,7 @@ func (container *Container) Start() (err error) {
}
populateCommand(container)
container.command.Env = env
// Setup logging of stdout and stderr to disk
if err := container.runtime.LogToDisk(container.stdout, container.logPath("json"), "stdout"); err != nil {
@ -593,17 +544,6 @@ func (container *Container) Start() (err error) {
}
container.waitLock = make(chan struct{})
// Setuping pipes and/or Pty
var setup func() error
if container.Config.Tty {
setup = container.setupPty
} else {
setup = container.setupStd
}
if err := setup(); err != nil {
return err
}
callbackLock := make(chan struct{})
callback := func(command *execdriver.Command) {
container.State.SetRunning(command.Pid())
@ -838,18 +778,27 @@ func (container *Container) monitor(callback execdriver.StartCallback) error {
exitCode int
)
if container.command == nil {
// This happens when you have a GHOST container with lxc
populateCommand(container)
err = container.runtime.RestoreCommand(container)
} else {
exitCode, err = container.runtime.Run(container, callback)
}
pipes := execdriver.NewPipes(container.stdin, container.stdout, container.stderr, container.Config.OpenStdin)
exitCode, err = container.runtime.Run(container, pipes, callback)
if err != nil {
utils.Errorf("Error running container: %s", err)
}
if container.runtime.srv.IsRunning() {
container.State.SetStopped(exitCode)
// FIXME: there is a race condition here which causes this to fail during the unit tests.
// If another goroutine was waiting for Wait() to return before removing the container's root
// from the filesystem... At this point it may already have done so.
// This is because State.setStopped() has already been called, and has caused Wait()
// to return.
// FIXME: why are we serializing running state to disk in the first place?
//log.Printf("%s: Failed to dump configuration to the disk: %s", container.ID, err)
if err := container.ToDisk(); err != nil {
utils.Errorf("Error dumping container state to disk: %s\n", err)
}
}
// Cleanup
container.cleanup()
@ -858,23 +807,12 @@ func (container *Container) monitor(callback execdriver.StartCallback) error {
container.stdin, container.stdinPipe = io.Pipe()
}
container.State.SetStopped(exitCode)
if container.runtime != nil && container.runtime.srv != nil {
container.runtime.srv.LogEvent("die", container.ID, container.runtime.repositories.ImageName(container.Image))
}
close(container.waitLock)
// FIXME: there is a race condition here which causes this to fail during the unit tests.
// If another goroutine was waiting for Wait() to return before removing the container's root
// from the filesystem... At this point it may already have done so.
// This is because State.setStopped() has already been called, and has caused Wait()
// to return.
// FIXME: why are we serializing running state to disk in the first place?
//log.Printf("%s: Failed to dump configuration to the disk: %s", container.ID, err)
container.ToDisk()
return err
}
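monitor now hands the exec driver a single bundle of standard streams instead of wiring ptys itself. A sketch of what that bundle plausibly looks like, inferred from the NewPipes call above; the field layout is an assumption:

package execdriver

import "io"

// Pipes bundles the container's standard streams for an exec driver.
type Pipes struct {
    Stdin          io.ReadCloser
    Stdout, Stderr io.Writer
}

func NewPipes(stdin io.ReadCloser, stdout, stderr io.Writer, useStdin bool) *Pipes {
    p := &Pipes{Stdout: stdout, Stderr: stderr}
    if useStdin {
        p.Stdin = stdin // only wire stdin when the container keeps it open
    }
    return p
}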
@ -887,7 +825,6 @@ func (container *Container) cleanup() {
link.Disable()
}
}
if container.Config.OpenStdin {
if err := container.stdin.Close(); err != nil {
utils.Errorf("%s: Error close stdin: %s", container.ID, err)
@ -899,10 +836,9 @@ func (container *Container) cleanup() {
if err := container.stderr.CloseWriters(); err != nil {
utils.Errorf("%s: Error close stderr: %s", container.ID, err)
}
if container.ptyMaster != nil {
if err := container.ptyMaster.Close(); err != nil {
utils.Errorf("%s: Error closing Pty master: %s", container.ID, err)
if container.command != nil && container.command.Terminal != nil {
if err := container.command.Terminal.Close(); err != nil {
utils.Errorf("%s: Error closing terminal: %s", container.ID, err)
}
}
@ -994,11 +930,7 @@ func (container *Container) Wait() int {
}
func (container *Container) Resize(h, w int) error {
pty, ok := container.ptyMaster.(*os.File)
if !ok {
return fmt.Errorf("ptyMaster does not have Fd() method")
}
return term.SetWinsize(pty.Fd(), &term.Winsize{Height: uint16(h), Width: uint16(w)})
return container.command.Terminal.Resize(h, w)
}
func (container *Container) ExportRw() (archive.Archive, error) {
@ -1202,11 +1134,9 @@ func (container *Container) Exposes(p nat.Port) bool {
}
func (container *Container) GetPtyMaster() (*os.File, error) {
if container.ptyMaster == nil {
ttyConsole, ok := container.command.Terminal.(execdriver.TtyTerminal)
if !ok {
return nil, ErrNoTTY
}
if pty, ok := container.ptyMaster.(*os.File); ok {
return pty, nil
}
return nil, ErrNotATTY
return ttyConsole.Master(), nil
}
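Resize and GetPtyMaster above lean on the driver's terminal abstraction. The interfaces they imply, reconstructed from the call sites (a sketch, not the execdriver package's verbatim definitions):

package execdriver

import (
    "io"
    "os"
)

// Terminal is what every driver command carries: it can be closed on
// cleanup and resized on demand.
type Terminal interface {
    io.Closer
    Resize(height, width int) error
}

// TtyTerminal is the extra capability a tty-backed terminal exposes:
// access to the pty master, as used by GetPtyMaster above.
type TtyTerminal interface {
    Master() *os.File
}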

View file

@ -0,0 +1,257 @@
# docker.fish - docker completions for fish shell
#
# This file is generated by gen_docker_fish_completions.py from:
# https://github.com/barnybug/docker-fish-completion
#
# To install the completions:
# mkdir -p ~/.config/fish/completions
# cp docker.fish ~/.config/fish/completions
#
# Completion supported:
# - parameters
# - commands
# - containers
# - images
# - repositories
function __fish_docker_no_subcommand --description 'Test if docker has yet to be given the subcommand'
for i in (commandline -opc)
if contains -- $i attach build commit cp diff events export history images import info insert inspect kill load login logs port ps pull push restart rm rmi run save search start stop tag top version wait
return 1
end
end
return 0
end
function __fish_print_docker_containers --description 'Print a list of docker containers' -a select
switch $select
case running
docker ps -a --no-trunc | awk 'NR>1' | awk 'BEGIN {FS=" +"}; $5 ~ "^Up" {print $1 "\n" $(NF-1)}' | tr ',' '\n'
case stopped
docker ps -a --no-trunc | awk 'NR>1' | awk 'BEGIN {FS=" +"}; $5 ~ "^Exit" {print $1 "\n" $(NF-1)}' | tr ',' '\n'
case all
docker ps -a --no-trunc | awk 'NR>1' | awk 'BEGIN {FS=" +"}; {print $1 "\n" $(NF-1)}' | tr ',' '\n'
end
end
function __fish_print_docker_images --description 'Print a list of docker images'
docker images | awk 'NR>1' | grep -v '<none>' | awk '{print $1":"$2}'
end
function __fish_print_docker_repositories --description 'Print a list of docker repositories'
docker images | awk 'NR>1' | grep -v '<none>' | awk '{print $1}' | sort | uniq
end
# common options
complete -c docker -f -n '__fish_docker_no_subcommand' -s D -l debug -d 'Enable debug mode'
complete -c docker -f -n '__fish_docker_no_subcommand' -s H -l host -d 'tcp://host:port, unix://path/to/socket, fd://* or fd://socketfd to use in daemon mode. Multiple sockets can be specified'
complete -c docker -f -n '__fish_docker_no_subcommand' -l api-enable-cors -d 'Enable CORS headers in the remote API'
complete -c docker -f -n '__fish_docker_no_subcommand' -s b -l bridge -d "Attach containers to a pre-existing network bridge; use 'none' to disable container networking"
complete -c docker -f -n '__fish_docker_no_subcommand' -l bip -d "Use this CIDR notation address for the network bridge's IP, not compatible with -b"
complete -c docker -f -n '__fish_docker_no_subcommand' -s d -l daemon -d 'Enable daemon mode'
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns -d 'Force docker to use specific DNS servers'
complete -c docker -f -n '__fish_docker_no_subcommand' -s g -l graph -d 'Path to use as the root of the docker runtime'
complete -c docker -f -n '__fish_docker_no_subcommand' -l icc -d 'Enable inter-container communication'
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip -d 'Default IP address to use when binding container ports'
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip-forward -d 'Disable enabling of net.ipv4.ip_forward'
complete -c docker -f -n '__fish_docker_no_subcommand' -l iptables -d "Disable docker's addition of iptables rules"
complete -c docker -f -n '__fish_docker_no_subcommand' -l mtu -d 'Set the containers network MTU; if no value is provided: default to the default route MTU or 1500 if no default route is available'
complete -c docker -f -n '__fish_docker_no_subcommand' -s p -l pidfile -d 'Path to use for daemon PID file'
complete -c docker -f -n '__fish_docker_no_subcommand' -s r -l restart -d 'Restart previously running containers'
complete -c docker -f -n '__fish_docker_no_subcommand' -s s -l storage-driver -d 'Force the docker runtime to use a specific storage driver'
complete -c docker -f -n '__fish_docker_no_subcommand' -s v -l version -d 'Print version information and quit'
# subcommands
# attach
complete -c docker -f -n '__fish_docker_no_subcommand' -a attach -d 'Attach to a running container'
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l no-stdin -d 'Do not attach stdin'
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l sig-proxy -d 'Proxify all received signals to the process (even in non-tty mode)'
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -a '(__fish_print_docker_containers running)' -d "Container"
# build
complete -c docker -f -n '__fish_docker_no_subcommand' -a build -d 'Build a container from a Dockerfile'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l no-cache -d 'Do not use cache when building the image'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s q -l quiet -d 'Suppress verbose build output'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l rm -d 'Remove intermediate containers after a successful build'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s t -l tag -d 'Repository name (and optionally a tag) to be applied to the resulting image in case of success'
# commit
complete -c docker -f -n '__fish_docker_no_subcommand' -a commit -d "Create a new image from a container's changes"
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s a -l author -d 'Author (e.g. "John Hannibal Smith <hannibal@a-team.com>")'
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s m -l message -d 'Commit message'
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -l run -d 'Config automatically applied when the image is run. (ex: -run=\'{"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}\')'
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -a '(__fish_print_docker_containers all)' -d "Container"
# cp
complete -c docker -f -n '__fish_docker_no_subcommand' -a cp -d "Copy files/folders from the container's filesystem to the host path"
# diff
complete -c docker -f -n '__fish_docker_no_subcommand' -a diff -d "Inspect changes on a container's filesystem"
complete -c docker -A -f -n '__fish_seen_subcommand_from diff' -a '(__fish_print_docker_containers all)' -d "Container"
# events
complete -c docker -f -n '__fish_docker_no_subcommand' -a events -d 'Get real time events from the server'
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l since -d 'Show previously created events and then stream.'
# export
complete -c docker -f -n '__fish_docker_no_subcommand' -a export -d 'Stream the contents of a container as a tar archive'
complete -c docker -A -f -n '__fish_seen_subcommand_from export' -a '(__fish_print_docker_containers all)' -d "Container"
# history
complete -c docker -f -n '__fish_docker_no_subcommand' -a history -d 'Show the history of an image'
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -s q -l quiet -d 'only show numeric IDs'
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -a '(__fish_print_docker_images)' -d "Image"
# images
complete -c docker -f -n '__fish_docker_no_subcommand' -a images -d 'List images'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s a -l all -d 'show all images (by default filter out the intermediate images used to build)'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s q -l quiet -d 'only show numeric IDs'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s t -l tree -d 'output graph in tree format'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s v -l viz -d 'output graph in graphviz format'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -a '(__fish_print_docker_repositories)' -d "Repository"
# import
complete -c docker -f -n '__fish_docker_no_subcommand' -a import -d 'Create a new filesystem image from the contents of a tarball'
# info
complete -c docker -f -n '__fish_docker_no_subcommand' -a info -d 'Display system-wide information'
# insert
complete -c docker -f -n '__fish_docker_no_subcommand' -a insert -d 'Insert a file in an image'
complete -c docker -A -f -n '__fish_seen_subcommand_from insert' -a '(__fish_print_docker_images)' -d "Image"
# inspect
complete -c docker -f -n '__fish_docker_no_subcommand' -a inspect -d 'Return low-level information on a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -s f -l format -d 'Format the output using the given go template.'
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -a '(__fish_print_docker_images)' -d "Image"
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -a '(__fish_print_docker_containers running)' -d "Container"
# kill
complete -c docker -f -n '__fish_docker_no_subcommand' -a kill -d 'Kill a running container'
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -s s -l signal -d 'Signal to send to the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -a '(__fish_print_docker_containers running)' -d "Container"
# load
complete -c docker -f -n '__fish_docker_no_subcommand' -a load -d 'Load an image from a tar archive'
# login
complete -c docker -f -n '__fish_docker_no_subcommand' -a login -d 'Register or Login to the docker registry server'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s e -l email -d 'email'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s p -l password -d 'password'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s u -l username -d 'username'
# logs
complete -c docker -f -n '__fish_docker_no_subcommand' -a logs -d 'Fetch the logs of a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -s f -l follow -d 'Follow log output'
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -a '(__fish_print_docker_containers running)' -d "Container"
# port
complete -c docker -f -n '__fish_docker_no_subcommand' -a port -d 'Lookup the public-facing port which is NAT-ed to PRIVATE_PORT'
complete -c docker -A -f -n '__fish_seen_subcommand_from port' -a '(__fish_print_docker_containers running)' -d "Container"
# ps
complete -c docker -f -n '__fish_docker_no_subcommand' -a ps -d 'List containers'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s a -l all -d 'Show all containers. Only running containers are shown by default.'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l before-id -d 'Show only containers created before Id, including non-running ones.'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s l -l latest -d 'Show only the latest created container, including non-running ones.'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s n -d 'Show the n last created containers, including non-running ones.'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s q -l quiet -d 'Only display numeric IDs'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s s -l size -d 'Display sizes'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l since-id -d 'Show only containers created since Id, including non-running ones.'
# pull
complete -c docker -f -n '__fish_docker_no_subcommand' -a pull -d 'Pull an image or a repository from the docker registry server'
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -s t -l tag -d 'Download tagged image in repository'
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -a '(__fish_print_docker_images)' -d "Image"
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -a '(__fish_print_docker_repositories)' -d "Repository"
# push
complete -c docker -f -n '__fish_docker_no_subcommand' -a push -d 'Push an image or a repository to the docker registry server'
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -a '(__fish_print_docker_images)' -d "Image"
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -a '(__fish_print_docker_repositories)' -d "Repository"
# restart
complete -c docker -f -n '__fish_docker_no_subcommand' -a restart -d 'Restart a running container'
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -s t -l time -d 'Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default=10'
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -a '(__fish_print_docker_containers running)' -d "Container"
# rm
complete -c docker -f -n '__fish_docker_no_subcommand' -a rm -d 'Remove one or more containers'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s l -l link -d 'Remove the specified link and not the underlying container'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s v -l volumes -d 'Remove the volumes associated with the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -a '(__fish_print_docker_containers stopped)' -d "Container"
# rmi
complete -c docker -f -n '__fish_docker_no_subcommand' -a rmi -d 'Remove one or more images'
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -a '(__fish_print_docker_images)' -d "Image"
# run
complete -c docker -f -n '__fish_docker_no_subcommand' -a run -d 'Run a command in a new container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s P -l publish-all -d 'Publish all exposed ports to the host interfaces'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s a -l attach -d 'Attach to stdin, stdout or stderr.'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s c -l cpu-shares -d 'CPU shares (relative weight)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cidfile -d 'Write the container ID to the file'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s d -l detach -d 'Detached mode: Run container in the background, print new container id'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns -d 'Set custom dns servers'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s e -l env -d 'Set environment variables'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l entrypoint -d 'Overwrite the default entrypoint of the image'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l expose -d 'Expose a port from the container without publishing it to your host'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s h -l hostname -d 'Container host name'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s i -l interactive -d 'Keep stdin open even if not attached'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l link -d 'Add link to another container (name:alias)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l lxc-conf -d 'Add custom lxc options -lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s m -l memory -d 'Memory limit (format: <number><optional unit>, where unit = b, k, m or g)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s n -l networking -d 'Enable networking for this container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l name -d 'Assign a name to the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s p -l publish -d "Publish a container's port to the host (format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort) (use 'docker port' to see the actual mapping)"
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l privileged -d 'Give extended privileges to this container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l rm -d 'Automatically remove the container when it exits (incompatible with -d)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l sig-proxy -d 'Proxify all received signals to the process (even in non-tty mode)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s t -l tty -d 'Allocate a pseudo-tty'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s u -l user -d 'Username or UID'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s v -l volume -d 'Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l volumes-from -d 'Mount volumes from the specified container(s)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s w -l workdir -d 'Working directory inside the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -a '(__fish_print_docker_images)' -d "Image"
# save
complete -c docker -f -n '__fish_docker_no_subcommand' -a save -d 'Save an image to a tar archive'
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -a '(__fish_print_docker_images)' -d "Image"
# search
complete -c docker -f -n '__fish_docker_no_subcommand' -a search -d 'Search for an image in the docker index'
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -s s -l stars -d 'Only display results with at least xxx stars'
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -s t -l trusted -d 'Only show trusted builds'
# start
complete -c docker -f -n '__fish_docker_no_subcommand' -a start -d 'Start a stopped container'
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -s a -l attach -d "Attach container's stdout/stderr and forward all signals to the process"
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -s i -l interactive -d "Attach container's stdin"
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -a '(__fish_print_docker_containers stopped)' -d "Container"
# stop
complete -c docker -f -n '__fish_docker_no_subcommand' -a stop -d 'Stop a running container'
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -s t -l time -d 'Number of seconds to wait for the container to stop before killing it.'
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -a '(__fish_print_docker_containers running)' -d "Container"
# tag
complete -c docker -f -n '__fish_docker_no_subcommand' -a tag -d 'Tag an image into a repository'
complete -c docker -A -f -n '__fish_seen_subcommand_from tag' -s f -l force -d 'Force'
complete -c docker -A -f -n '__fish_seen_subcommand_from tag' -a '(__fish_print_docker_images)' -d "Image"
# top
complete -c docker -f -n '__fish_docker_no_subcommand' -a top -d 'Lookup the running processes of a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from top' -a '(__fish_print_docker_containers running)' -d "Container"
# version
complete -c docker -f -n '__fish_docker_no_subcommand' -a version -d 'Show the docker version information'
# wait
complete -c docker -f -n '__fish_docker_no_subcommand' -a wait -d 'Block until a container stops, then print its exit code'
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -a '(__fish_print_docker_containers running)' -d "Container"

View file

@ -6,6 +6,8 @@ After=network.target
[Service]
ExecStart=/usr/bin/docker -d
Restart=on-failure
LimitNOFILE=1048576
LimitNPROC=1048576
[Install]
WantedBy=multi-user.target

View file

@ -6,6 +6,8 @@ After=network.target
[Service]
ExecStart=/usr/bin/docker -d -H fd://
Restart=on-failure
LimitNOFILE=1048576
LimitNPROC=1048576
[Install]
WantedBy=multi-user.target

View file

@ -39,7 +39,7 @@ arch-chroot $ROOTFS /bin/sh -c "haveged -w 1024; pacman-key --init; pkill havege
arch-chroot $ROOTFS /bin/sh -c "ln -s /usr/share/zoneinfo/UTC /etc/localtime"
echo 'en_US.UTF-8 UTF-8' > $ROOTFS/etc/locale.gen
arch-chroot $ROOTFS locale-gen
arch-chroot $ROOTFS /bin/sh -c 'echo "Server = http://mirrors.kernel.org/archlinux/\$repo/os/\$arch" > /etc/pacman.d/mirrorlist'
arch-chroot $ROOTFS /bin/sh -c 'echo "Server = https://mirrors.kernel.org/archlinux/\$repo/os/\$arch" > /etc/pacman.d/mirrorlist'
# udev doesn't work in containers, rebuild /dev
DEV=$ROOTFS/dev

View file

@ -44,6 +44,8 @@ debianStable=wheezy
debianUnstable=sid
# this should match the name found at http://releases.ubuntu.com/
ubuntuLatestLTS=precise
# this should match the name found at http://releases.tanglu.org/
tangluLatest=aequorea
while getopts v:i:a:p:dst name; do
case "$name" in
@ -201,6 +203,17 @@ if [ -z "$strictDebootstrap" ]; then
s/ $suite-updates main/ ${suite}-security main/
" etc/apt/sources.list
;;
Tanglu)
# add the updates repository
if [ "$suite" = "$tangluLatest" ]; then
# ${suite}-updates only applies to stable Tanglu versions
sudo sed -i "p; s/ $suite main$/ ${suite}-updates main/" etc/apt/sources.list
fi
;;
SteamOS)
# add contrib and non-free
sudo sed -i "s/ $suite main$/ $suite main contrib non-free/" etc/apt/sources.list
;;
esac
fi
@ -248,6 +261,28 @@ else
fi
fi
;;
Tanglu)
if [ "$suite" = "$tangluLatest" ]; then
# tag latest
$docker tag $repo:$suite $repo:latest
fi
if [ -r etc/lsb-release ]; then
lsbRelease="$(. etc/lsb-release && echo "$DISTRIB_RELEASE")"
if [ "$lsbRelease" ]; then
# tag specific Tanglu version number, if available (1.0, 2.0, etc.)
$docker tag $repo:$suite $repo:$lsbRelease
fi
fi
;;
SteamOS)
if [ -r etc/lsb-release ]; then
lsbRelease="$(. etc/lsb-release && echo "$DISTRIB_RELEASE")"
if [ "$lsbRelease" ]; then
# tag specific SteamOS version number, if available (1.0, 2.0, etc.)
$docker tag $repo:$suite $repo:$lsbRelease
fi
fi
;;
esac
fi
fi

View file

@ -45,9 +45,17 @@ target=$(mktemp -d --tmpdir $(basename $0).XXXXXX)
set -x
for dev in console null zero urandom; do
/sbin/MAKEDEV -d "$target"/dev -x $dev
done
mkdir -m 755 "$target"/dev
mknod -m 600 "$target"/dev/console c 5 1
mknod -m 600 "$target"/dev/initctl p
mknod -m 666 "$target"/dev/full c 1 7
mknod -m 666 "$target"/dev/null c 1 3
mknod -m 666 "$target"/dev/ptmx c 5 2
mknod -m 666 "$target"/dev/random c 1 8
mknod -m 666 "$target"/dev/tty c 5 0
mknod -m 666 "$target"/dev/tty0 c 4 0
mknod -m 666 "$target"/dev/urandom c 1 9
mknod -m 666 "$target"/dev/zero c 1 5
yum -c "$yum_config" --installroot="$target" --setopt=tsflags=nodocs \
--setopt=group_package_types=mandatory -y groupinstall Core
@ -76,7 +84,7 @@ rm -rf "$target"/var/cache/ldconfig/*
version=
if [ -r "$target"/etc/redhat-release ]; then
version="$(sed 's/^[^0-9\]*\([0-9.]\+\).*$/\1/' /etc/redhat-release)"
version="$(sed 's/^[^0-9\]*\([0-9.]\+\).*$/\1/' "$target"/etc/redhat-release)"
fi
if [ -z "$version" ]; then

View file

@ -4,4 +4,7 @@
#
GH_USER=$(git config --get github.user)
SOB=$(git var GIT_AUTHOR_IDENT | sed -n "s/^\(.*>\).*$/Docker-DCO-1.1-Signed-off-by: \1 \(github: $GH_USER\)/p")
grep -qs "^$SOB" "$1" || echo "\n$SOB" >> "$1"
grep -qs "^$SOB" "$1" || {
echo
echo "$SOB"
} >> "$1"

View file

@ -31,7 +31,7 @@ stop on runlevel [!2345]
respawn
script
/usr/bin/docker -d -H=tcp://0.0.0.0:4243/
/usr/bin/docker -d -H=tcp://0.0.0.0:4243
end script
```

View file

@ -6,8 +6,8 @@ import (
"os"
"strings"
_ "github.com/dotcloud/docker"
"github.com/dotcloud/docker/api"
"github.com/dotcloud/docker/builtins"
"github.com/dotcloud/docker/dockerversion"
"github.com/dotcloud/docker/engine"
flag "github.com/dotcloud/docker/pkg/mflag"
@ -17,7 +17,7 @@ import (
)
func main() {
if selfPath := utils.SelfPath(); selfPath == "/sbin/init" || selfPath == "/.dockerinit" {
if selfPath := utils.SelfPath(); strings.Contains(selfPath, ".dockerinit") {
// Running in init mode
sysinit.SysInit()
return
@ -32,6 +32,7 @@ func main() {
bridgeIp = flag.String([]string{"#bip", "-bip"}, "", "Use this CIDR notation address for the network bridge's IP, not compatible with -b")
pidfile = flag.String([]string{"p", "-pidfile"}, "/var/run/docker.pid", "Path to use for daemon PID file")
flRoot = flag.String([]string{"g", "-graph"}, "/var/lib/docker", "Path to use as the root of the docker runtime")
flSocketGroup = flag.String([]string{"G", "-group"}, "docker", "Group to assign the unix socket specified by -H when running in daemon mode; use '' (the empty string) to disable setting of a group")
flEnableCors = flag.Bool([]string{"#api-enable-cors", "-api-enable-cors"}, false, "Enable CORS headers in the remote API")
flDns = opts.NewListOpts(opts.ValidateIp4Address)
flEnableIptables = flag.Bool([]string{"#iptables", "-iptables"}, true, "Disable docker's addition of iptables rules")
@ -39,8 +40,9 @@ func main() {
flDefaultIp = flag.String([]string{"#ip", "-ip"}, "0.0.0.0", "Default IP address to use when binding container ports")
flInterContainerComm = flag.Bool([]string{"#icc", "-icc"}, true, "Enable inter-container communication")
flGraphDriver = flag.String([]string{"s", "-storage-driver"}, "", "Force the docker runtime to use a specific storage driver")
flExecDriver = flag.String([]string{"e", "-exec-driver"}, "native", "Force the docker runtime to use a specific exec driver")
flHosts = opts.NewListOpts(api.ValidateHost)
flMtu = flag.Int([]string{"#mtu", "-mtu"}, 0, "Set the containers network MTU; if no value is provided: default to the default route MTU or 1500 if not default route is available")
flMtu = flag.Int([]string{"#mtu", "-mtu"}, 0, "Set the containers network MTU; if no value is provided: default to the default route MTU or 1500 if no default route is available")
)
flag.Var(&flDns, []string{"#dns", "-dns"}, "Force docker to use specific DNS servers")
flag.Var(&flHosts, []string{"H", "-host"}, "tcp://host:port, unix://path/to/socket, fd://* or fd://socketfd to use in daemon mode. Multiple sockets can be specified")
@ -77,10 +79,32 @@ func main() {
return
}
eng, err := engine.New(*flRoot)
// set up the TempDir to use a canonical path
tmp := os.TempDir()
realTmp, err := utils.ReadSymlinkedDirectory(tmp)
if err != nil {
log.Fatalf("Unable to get the full path to the TempDir (%s): %s", tmp, err)
}
os.Setenv("TMPDIR", realTmp)
// get the canonical path to the Docker root directory
root := *flRoot
var realRoot string
if _, err := os.Stat(root); err != nil && os.IsNotExist(err) {
realRoot = root
} else {
realRoot, err = utils.ReadSymlinkedDirectory(root)
if err != nil {
log.Fatalf("Unable to get the full path to root (%s): %s", root, err)
}
}
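Both canonicalization steps above lean on utils.ReadSymlinkedDirectory. A plausible implementation, assuming it resolves symlinks and insists the result is a directory (a sketch, not necessarily the shipped helper):

package utils

import (
    "fmt"
    "os"
    "path/filepath"
)

// ReadSymlinkedDirectory resolves any symlinks in path and verifies the
// canonical result is a directory.
func ReadSymlinkedDirectory(path string) (string, error) {
    realPath, err := filepath.Abs(path)
    if err != nil {
        return "", err
    }
    if realPath, err = filepath.EvalSymlinks(realPath); err != nil {
        return "", err
    }
    info, err := os.Stat(realPath)
    if err != nil {
        return "", err
    }
    if !info.IsDir() {
        return "", fmt.Errorf("canonical path %s is not a directory", realPath)
    }
    return realPath, nil
}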
eng, err := engine.New(realRoot)
if err != nil {
log.Fatal(err)
}
// Load builtins
builtins.Register(eng)
// load the daemon in the background so we can immediately start
// the http api so that connections don't fail while the daemon
// is booting
@ -88,7 +112,7 @@ func main() {
// Load plugin: httpapi
job := eng.Job("initserver")
job.Setenv("Pidfile", *pidfile)
job.Setenv("Root", *flRoot)
job.Setenv("Root", realRoot)
job.SetenvBool("AutoRestart", *flAutoRestart)
job.SetenvList("Dns", flDns.GetAll())
job.SetenvBool("EnableIptables", *flEnableIptables)
@ -98,6 +122,7 @@ func main() {
job.Setenv("DefaultIp", *flDefaultIp)
job.SetenvBool("InterContainerCommunication", *flInterContainerComm)
job.Setenv("GraphDriver", *flGraphDriver)
job.Setenv("ExecDriver", *flExecDriver)
job.SetenvInt("Mtu", *flMtu)
if err := job.Run(); err != nil {
log.Fatal(err)
@ -114,6 +139,7 @@ func main() {
job.SetenvBool("Logging", true)
job.SetenvBool("EnableCors", *flEnableCors)
job.Setenv("Version", dockerversion.VERSION)
job.Setenv("SocketGroup", *flSocketGroup)
if err := job.Run(); err != nil {
log.Fatal(err)
}

View file

@ -19,10 +19,24 @@ post-commit hooks. The "release" branch maps to the "latest"
documentation and the "master" branch maps to the "master"
documentation.
**Warning**: The "master" documentation may include features not yet
part of any official docker release. "Master" docs should be used only
for understanding bleeding-edge development and "latest" should be
used for the latest official release.
## Branches
**There are two branches related to editing docs**: ``master`` and a
``doc*`` branch (currently ``doc0.8.1``). You should normally edit
docs on the ``master`` branch. That way your fixes will automatically
get included in later releases, and docs maintainers can easily
cherry-pick your changes to bring over to the current docs branch. In
the rare case where your change is not forward-compatible, you can
base your change on the appropriate ``doc*`` branch.
Now that we have a ``doc*`` branch, we can keep the ``latest`` docs
up to date with fixes for any bugs found between ``docker`` code releases.
**Warning**: When *reading* the docs, the ``master`` documentation may
include features not yet part of any official docker
release. ``Master`` docs should be used only for understanding
bleeding-edge development and ``latest`` (which points to the ``doc*``
branch) should be used for the latest official release.
If you need to manually trigger a build of an existing branch, then
you can do that through the [readthedocs
@ -39,7 +53,7 @@ Getting Started
To edit and test the docs, you'll need to install the Sphinx tool and
its dependencies. There are two main ways to install this tool:
###Native Installation
### Native Installation
Install dependencies from `requirements.txt` file in your `docker/docs`
directory:
@ -48,7 +62,7 @@ directory:
* Mac OS X: `[sudo] pip-2.7 install -r docs/requirements.txt`
###Alternative Installation: Docker Container
### Alternative Installation: Docker Container
If you're running ``docker`` on your development machine then you may
find it easier and cleaner to use the docs Dockerfile. This installs Sphinx
@ -59,11 +73,16 @@ docs inside the container, even starting a simple HTTP server on port
In the ``docker`` source directory, run:
```make docs```
This is the equivalent to ``make clean server`` since each container starts clean.
This is the equivalent to ``make clean server`` since each container
starts clean.
Usage
-----
* Follow the contribution guidelines (``../CONTRIBUTING.md``)
# Contributing
## Normal Case:
* Follow the contribution guidelines ([see
``../CONTRIBUTING.md``](../CONTRIBUTING.md)).
* [Remember to sign your work!](../CONTRIBUTING.md#sign-your-work)
* Work in your own fork of the code, we accept pull requests.
* Change the ``.rst`` files with your favorite editor -- try to keep the
lines short and respect RST and Sphinx conventions.
@ -75,6 +94,20 @@ Usage
``make clean docs`` must complete without any warnings or errors.
## Special Case for RST Newbies:
If you want to write a new doc or make substantial changes to an
existing doc, but **you don't know RST syntax**, we will accept pull
requests in Markdown and plain text formats. We really want to
encourage people to share their knowledge and don't want the markup
syntax to be the obstacle. So when you make the Pull Request, please
note in your comment that you need RST markup assistance, and we'll
make the changes for you, and then we will make a pull request to your
pull request so that you can get all the changes and learn about the
markup. You still need to follow the
[``CONTRIBUTING``](../CONTRIBUTING) guidelines, so please sign your
commits.
Working using GitHub's file editor
----------------------------------
@ -82,6 +115,7 @@ Alternatively, for small changes and typos you might want to use
GitHub's built in file editor. It allows you to preview your changes
right online (though there can be some differences between GitHub
markdown and Sphinx RST). Just be careful not to create many commits.
And you must still [sign your work!](../CONTRIBUTING.md#sign-your-work)
Images
------
@ -93,8 +127,11 @@ exists.
Notes
-----
* For the template the css is compiled from less. When changes are needed they can be compiled using
lessc ``lessc main.less`` or watched using watch-lessc ``watch-lessc -i main.less -o main.css``
* For the template the css is compiled from less. When changes are
needed they can be compiled using
lessc ``lessc main.less`` or watched using watch-lessc ``watch-lessc -i main.less -o main.css``
Guides on using sphinx
----------------------
@ -106,7 +143,8 @@ Guides on using sphinx
Hello world
===========
This is.. (etc.)
This is a reference to :ref:`hello_world` and will work even if we
move the target to another file or change the title of the section.
```
The ``_hello_world:`` will make it possible to link to this position

View file

@ -13,8 +13,8 @@ The specific process will depend heavily on the Linux distribution you
want to package. We have some examples below, and you are encouraged
to submit pull requests to contribute new ones.
Getting Started
...............
Create a full image using tar
.............................
In general, you'll want to start with a working machine that is
running the distribution you'd like to package as a base image, though
@ -44,3 +44,22 @@ Docker GitHub Repo:
<https://github.com/dotcloud/docker/blob/master/contrib/mkimage-yum.sh>`_
* `Debian / Ubuntu
<https://github.com/dotcloud/docker/blob/master/contrib/mkimage-debootstrap.sh>`_
Creating a simple base image using ``scratch``
..............................................
There is a special repository in the Docker registry called ``scratch``, which
was created using an empty tar file::
$ tar cv --files-from /dev/null | docker import - scratch
which you can ``docker pull``. You can then use that image as the base for
your new minimal containers ``FROM``::
FROM scratch
ADD true-asm /true
CMD ["/true"]
The Dockerfile above is from the extremely minimal image -
`tianon/true <https://github.com/tianon/dockerfiles/tree/master/true>`_.
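Assuming the Dockerfile above is saved in an empty directory together with
the ``true-asm`` binary, you could build and run it roughly like this (a
sketch; the image name is illustrative)::

    $ sudo docker build -t yourname/true .
    $ sudo docker run yourname/true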

View file

@ -92,14 +92,6 @@ To execute the test cases, run this command:
sudo make test
Note: if you're running the tests in vagrant, you need to specify a dns entry in
the command (either edit the Makefile, or run the step manually):
.. code-block:: bash
sudo docker run -dns 8.8.8.8 -privileged -v `pwd`:/go/src/github.com/dotcloud/docker docker hack/make.sh test
If the tests are successful, then the tail of the output should look something like this:
.. code-block:: bash
@ -130,7 +122,10 @@ If the test are successful then the tail of the output should look something lik
PASS
ok github.com/dotcloud/docker/utils 0.017s
If ``$TESTFLAGS`` is set in the environment, it is passed as extra arguments to ``go test``.
You can use this to select certain tests to run, e.g.::

    TESTFLAGS='-run ^TestBuild$' make test
Step 6: Use Docker

View file

@ -55,7 +55,7 @@ The first two steps can be done as part of a Dockerfile, as follows.
FROM ubuntu
MAINTAINER Eystein Måløy Stenberg <eytein.stenberg@gmail.com>
RUN apt-get -y install wget lsb-release unzip
RUN apt-get -y install wget lsb-release unzip ca-certificates
# install latest CFEngine
RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -
@ -64,7 +64,7 @@ The first two steps can be done as part of a Dockerfile, as follows.
RUN apt-get install cfengine-community
# install cfe-docker process management policy
RUN wget --no-check-certificate https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/
RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/
RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/
RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/
RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip

View file

@ -2,11 +2,6 @@
:description: A simple hello world example with Docker
:keywords: docker, example, hello world
.. _examples:
Hello World
-----------
.. _running_examples:
Check your Docker install
@ -18,7 +13,7 @@ your Docker install, run the following command:
.. code-block:: bash
# Check that you have a working install
docker info
$ sudo docker info
If you get ``docker: command not found`` or something like
``/var/lib/docker/repositories: permission denied`` you may have an incomplete
@ -30,27 +25,28 @@ Please refer to :ref:`installation_list` for installation instructions.
.. _hello_world:
Hello World
===========
-----------
.. include:: example_header.inc
This is the most basic example available for using Docker.
Download the base image which is named ``ubuntu``:
Download the small base image named ``busybox``:
.. code-block:: bash
# Download an ubuntu image
sudo docker pull ubuntu
# Download a busybox image
$ sudo docker pull busybox
Alternatively to the ``ubuntu`` image, you can select ``busybox``, a bare
minimal Linux system. The images are retrieved from the Docker
repository.
The ``busybox`` image is a minimal Linux system. You can do the same
with any number of other images, such as ``debian``, ``ubuntu`` or ``centos``.
The images can be found and retrieved using the `Docker index`_.
.. _Docker index: http://index.docker.io
.. code-block:: bash
sudo docker run ubuntu /bin/echo hello world
$ sudo docker run busybox /bin/echo hello world
This command will run a simple ``echo`` command that echoes ``hello world`` back to the console over standard out.
@ -58,7 +54,7 @@ This command will run a simple ``echo`` command, that will echo ``hello world``
- **"sudo"** execute the following commands as user *root*
- **"docker run"** run a command in a new container
- **"ubuntu"** is the image we want to run the command inside of.
- **"busybox"** is the image we are running the command in.
- **"/bin/echo"** is the command we want to run in the container
- **"hello world"** is the input for the echo command
@ -73,8 +69,8 @@ See the example in action
<iframe width="560" height="400" frameborder="0"
sandbox="allow-same-origin allow-scripts"
srcdoc="<body><script type=&quot;text/javascript&quot;
src=&quot;https://asciinema.org/a/2603.js&quot;
id=&quot;asciicast-2603&quot; async></script></body>">
src=&quot;https://asciinema.org/a/7658.js&quot;
id=&quot;asciicast-7658&quot; async></script></body>">
</iframe>
----
@ -82,7 +78,7 @@ See the example in action
.. _hello_world_daemon:
Hello World Daemon
==================
------------------
.. include:: example_header.inc
@ -172,14 +168,14 @@ See the example in action
id=&quot;asciicast-2562&quot; async></script></body>">
</iframe>
The next example in the series is a :ref:`python_web_app` example, or
The next example in the series is a :ref:`nodejs_web_app` example, or
you could skip to any of the other examples:
* :ref:`python_web_app`
* :ref:`nodejs_web_app`
* :ref:`running_redis_service`
* :ref:`running_ssh_service`
* :ref:`running_couchdb_service`
* :ref:`postgresql_service`
* :ref:`mongodb_image`
* :ref:`python_web_app`

View file

@ -16,7 +16,6 @@ to more substantial services like those which you might find in production.
:maxdepth: 1
hello_world
python_web_app
nodejs_web_app
running_redis_service
running_ssh_service
@ -26,3 +25,4 @@ to more substantial services like those which you might find in production.
running_riak_service
using_supervisord
cfengine_process_management
python_web_app

View file

@ -9,109 +9,137 @@ Python Web App
.. include:: example_header.inc
The goal of this example is to show you how you can author your own
Docker images using a parent image, making changes to it, and then
saving the results as a new image. We will do that by making a simple
hello Flask web application image.
While using Dockerfiles is the preferred way to create maintainable
and repeatable images, it's useful to know how you can try things out
and then commit your live changes to an image.
**Steps:**
The goal of this example is to show you how you can modify your own
Docker images by making changes to a running
container, and then saving the results as a new image. We will do
that by making a simple 'hello world' Flask web application image.
Download the initial image
--------------------------
Download the ``shykes/pybuilder`` Docker image from the ``http://index.docker.io``
registry.
This image contains a ``buildapp`` script to download the web app and then ``pip install``
any required modules, and a ``runapp`` script that finds the ``app.py`` and runs it.
.. _`shykes/pybuilder`: https://github.com/shykes/pybuilder
.. code-block:: bash
sudo docker pull shykes/pybuilder
$ sudo docker pull shykes/pybuilder
We are downloading the ``shykes/pybuilder`` Docker image
.. note:: This container was built with a very old version of docker
(May 2013 - see `shykes/pybuilder`_ ), when the ``Dockerfile`` format was different,
but the image can still be used now.
Interactively make some modifications
-------------------------------------
We then start a new container running interactively using the image.
First, we set a ``URL`` variable that points to a tarball of a simple
helloflask web app, and then we run a command contained in the image called
``buildapp``, passing it the ``$URL`` variable. The container is
given a name ``pybuilder_run`` which we will use in the next steps.
While this example is simple, you could run any number of interactive commands,
try things out, and then exit when you're done.
.. code-block:: bash
URL=http://github.com/shykes/helloflask/archive/master.tar.gz
$ sudo docker run -i -t -name pybuilder_run shykes/pybuilder bash
We set a ``URL`` variable that points to a tarball of a simple helloflask web app
.. code-block:: bash
BUILD_JOB=$(sudo docker run -d -t shykes/pybuilder:latest /usr/local/bin/buildapp $URL)
Inside of the ``shykes/pybuilder`` image there is a command called
``buildapp``, we are running that command and passing the ``$URL`` variable
from step 2 to it, and running the whole thing inside of a new
container. The ``BUILD_JOB`` environment variable will be set with the new container ID.
.. code-block:: bash
sudo docker attach -sig-proxy=false $BUILD_JOB
$$ URL=http://github.com/shykes/helloflask/archive/master.tar.gz
$$ /usr/local/bin/buildapp $URL
[...]
$$ exit
While this container is running, we can attach to the new container to
see what is going on. The flag ``--sig-proxy`` set as ``false`` allows you to connect and
disconnect (Ctrl-C) to it without stopping the container.
.. code-block:: bash
sudo docker ps -a
List all Docker containers. If this container has already finished
running, it will still be listed here.
.. code-block:: bash
BUILD_IMG=$(sudo docker commit $BUILD_JOB _/builds/github.com/shykes/helloflask/master)
Commit the container to create a new image
------------------------------------------
Save the changes we just made in the container to a new image called
``_/builds/github.com/hykes/helloflask/master`` and save the image ID in
the ``BUILD_IMG`` variable name.
``/builds/github.com/shykes/helloflask/master``. You now have 3 different
ways to refer to the container: name ``pybuilder_run``, short-id ``c8b2e8228f11``, or
long-id ``c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9``.
.. code-block:: bash
WEB_WORKER=$(sudo docker run -d -p 5000 $BUILD_IMG /usr/local/bin/runapp)
$ sudo docker commit pybuilder_run /builds/github.com/shykes/helloflask/master
c8b2e8228f11b8b3e492cbf9a49923ae66496230056d61e07880dc74c5f495f9
Run the new image to start the web worker
-----------------------------------------
Use the new image to create a new container with
network port 5000 mapped to a local port
.. code-block:: bash
$ sudo docker run -d -p 5000 --name web_worker /builds/github.com/shykes/helloflask/master /usr/local/bin/runapp
- **"docker run -d "** run a command in a new container. We pass "-d"
so it runs as a daemon.
- **"-p 5000"** the web app is going to listen on this port, so it
must be mapped from the container to the host system.
- **"$BUILD_IMG"** is the image we want to run the command inside of.
- **/usr/local/bin/runapp** is the command which starts the web app.
Use the new image we just created and create a new container with
network port 5000, and return the container ID and store in the
``WEB_WORKER`` variable.
.. code-block:: bash
View the container logs
-----------------------
sudo docker logs $WEB_WORKER
* Running on http://0.0.0.0:5000/
View the logs for the new container using the ``WEB_WORKER`` variable, and
View the logs for the new ``web_worker`` container and
if everything worked as planned you should see the line ``Running on
http://0.0.0.0:5000/`` in the log output.
To exit the view without stopping the container, hit Ctrl-C, or open another
terminal and continue with the example while watching the result in the logs.
.. code-block:: bash
WEB_PORT=$(sudo docker port $WEB_WORKER 5000 | awk -F: '{ print $2 }')
$ sudo docker logs -f web_worker
* Running on http://0.0.0.0:5000/
See the webapp output
---------------------
Look up the public-facing port which is NAT-ed. Find the private port
used by the container and store it inside of the ``WEB_PORT`` variable.
.. code-block:: bash
# install curl if necessary, then ...
curl http://127.0.0.1:$WEB_PORT
Hello world!
Access the web app using the ``curl`` binary. If everything worked as planned you
should see the line ``Hello world!`` inside of your console.
**Video:**
.. code-block:: bash
See the example in action
$ WEB_PORT=$(sudo docker port web_worker 5000 | awk -F: '{ print $2 }')
.. raw:: html
# install curl if necessary, then ...
$ curl http://127.0.0.1:$WEB_PORT
Hello world!
<iframe width="720" height="400" frameborder="0"
sandbox="allow-same-origin allow-scripts"
srcdoc="<body><script type=&quot;text/javascript&quot;
src=&quot;https://asciinema.org/a/2573.js&quot;
id=&quot;asciicast-2573&quot; async></script></body>">
</iframe>
Continue to :ref:`running_ssh_service`.
Clean up example containers and images
--------------------------------------
.. code-block:: bash
$ sudo docker ps --all
List ``--all`` the Docker containers. If this container has already finished
running, it will still be listed here with a status of 'Exit 0'.
.. code-block:: bash
$ sudo docker stop web_worker
$ sudo docker rm web_worker pybuilder_run
$ sudo docker rmi /builds/github.com/shykes/helloflask/master shykes/pybuilder:latest
And now stop the running web worker, and delete the containers, so that we can
then delete the images that we used.

View file

@ -67,14 +67,14 @@ Once inside our freshly created container we need to install Redis to get the
apt-get -y install redis-server
service redis-server stop
Now we can test the connection. Firstly, let's look at the available environmental
variables in our web application container. We can use these to get the IP and port
of our ``redis`` container.
As we've used the ``--link redis:db`` option, Docker has created some environment
variables in our web application container.
.. code-block:: bash
env
. . .
env | grep DB_
# Should return something similar to this with your values
DB_NAME=/violet_wolf/db
DB_PORT_6379_TCP_PORT=6379
DB_PORT=tcp://172.17.0.33:6379
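For example, inside the web application container you could pull the IP and
port of the ``redis`` container out of ``DB_PORT`` with standard shell tools
(a sketch; the variable names are illustrative):

.. code-block:: bash

    DB_IP=$(echo $DB_PORT | cut -d/ -f3 | cut -d: -f1)
    DB_PORT_NUMBER=$(echo $DB_PORT | cut -d: -f3)
    echo "Connecting to redis at $DB_IP:$DB_PORT_NUMBER"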

View file

@ -0,0 +1,17 @@
# sshd
#
# VERSION 0.0.1
FROM ubuntu
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
# make sure the package repository is up to date
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' |chpasswd
EXPOSE 22
CMD /usr/sbin/sshd -D

View file

@ -1,5 +1,5 @@
:title: Running an SSH service
:description: A screencast of installing and running an sshd service
:description: Installing and running an sshd service
:keywords: docker, example, package installation, networking
.. _running_ssh_service:
@ -9,101 +9,41 @@ SSH Daemon Service
.. include:: example_header.inc
The following Dockerfile sets up an sshd service in a container that you can use
to connect to and inspect other containers' volumes, or to get quick access to a
test container.
**Video:**
.. literalinclude:: running_ssh_service.Dockerfile
I've created a little screencast to show how to create an SSHd service
and connect to it. It is something like 11 minutes and not entirely
smooth, but it gives you a good idea.
.. note::
This screencast was created before Docker version 0.5.2, so the
daemon is unprotected and available via a TCP port. When you run
through the same steps in a newer version of Docker, you will
need to add ``sudo`` in front of each ``docker`` command in order
to reach the daemon over its protected Unix socket.
.. raw:: html
<iframe width="815" height="450" frameborder="0"
sandbox="allow-same-origin allow-scripts"
srcdoc="<body><script type=&quot;text/javascript&quot;
src=&quot;https://asciinema.org/a/2637.js&quot;
id=&quot;asciicast-2637&quot; async></script></body>">
</iframe>
You can also get this sshd container by using:
Build the image using:
.. code-block:: bash
sudo docker pull dhrp/sshd
$ sudo docker build -rm -t eg_sshd .
The password is ``screencast``.
**Video's Transcription:**
Then run it. You can then use ``docker port`` to find out what host port the container's
port 22 is mapped to:
.. code-block:: bash
# Hello! We are going to try and install openssh on a container and run it as a service
# let's pull ubuntu to get a base ubuntu image.
$ docker pull ubuntu
# I had it so it was quick
# now let's connect using -i for interactive and with -t for terminal
# we execute /bin/bash to get a prompt.
$ docker run -i -t ubuntu /bin/bash
# yes! we are in!
# now lets install openssh
$ apt-get update
$ apt-get install openssh-server
# ok. lets see if we can run it.
$ which sshd
# we need to create privilege separation directory
$ mkdir /var/run/sshd
$ /usr/sbin/sshd
$ exit
# now let's commit it
# which container was it?
$ docker ps -a |more
$ docker commit a30a3a2f2b130749995f5902f079dc6ad31ea0621fac595128ec59c6da07feea dhrp/sshd
# I gave the name dhrp/sshd for the container
# now we can run it again
$ docker run -d dhrp/sshd /usr/sbin/sshd -D # D for daemon mode
# is it running?
$ docker ps
# yes!
# let's stop it
$ docker stop 0ebf7cec294755399d063f4b1627980d4cbff7d999f0bc82b59c300f8536a562
$ docker ps
# and reconnect, but now open a port to it
$ docker run -d -p 22 dhrp/sshd /usr/sbin/sshd -D
$ docker port b2b407cf22cf8e7fa3736fa8852713571074536b1d31def3fdfcd9fa4fd8c8c5 22
# it has now given us a port to connect to
# we have to connect using a public ip of our host
$ hostname
# *ifconfig* is deprecated, better use *ip addr show* now
$ ifconfig
$ ssh root@192.168.33.10 -p 49153
# Ah! forgot to set root passwd
$ docker commit b2b407cf22cf8e7fa3736fa8852713571074536b1d31def3fdfcd9fa4fd8c8c5 dhrp/sshd
$ docker ps -a
$ docker run -i -t dhrp/sshd /bin/bash
$ passwd
$ exit
$ docker commit 9e863f0ca0af31c8b951048ba87641d67c382d08d655c2e4879c51410e0fedc1 dhrp/sshd
$ docker run -d -p 22 dhrp/sshd /usr/sbin/sshd -D
$ docker port a0aaa9558c90cf5c7782648df904a82365ebacce523e4acc085ac1213bfe2206 22
# *ifconfig* is deprecated, better use *ip addr show* now
$ ifconfig
$ ssh root@192.168.33.10 -p 49154
# Thanks for watching, Thatcher thatcher@dotcloud.com
Update:
-------
$ sudo docker run -d -P -name test_sshd eg_sshd
$ sudo docker port test_sshd 22
0.0.0.0:49154
For Ubuntu 13.10 using stackbrew/ubuntu, you may need to do these additional steps:
And now you can ssh to port ``49154`` on the Docker daemon's host IP address
(``ip address`` or ``ifconfig`` can tell you that):
1. change /etc/pam.d/sshd, pam_loginuid line 'required' to 'optional'
2. echo LANG=\"en_US.UTF-8\" > /etc/default/locale
.. code-block:: bash
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
$$
Finally, clean up after your test by stopping and removing the container, and
then removing the image.
.. code-block:: bash
$ sudo docker stop test_sshd
$ sudo docker rm test_sshd
$ sudo docker rmi eg_sshd

View file

@ -25,9 +25,9 @@ Does Docker run on Mac OS X or Windows?
Not at this time, Docker currently only runs on Linux, but you can
use VirtualBox to run Docker in a virtual machine on your box, and
get the best of both worlds. Check out the
:ref:`macosx` and :ref:`windows` installation
guides.
get the best of both worlds. Check out the :ref:`macosx` and
:ref:`windows` installation guides. The small Linux distribution boot2docker
can be run inside virtual machines on these two operating systems.
How do containers compare to virtual machines?
..............................................
@ -189,10 +189,15 @@ How do I report a security issue with Docker?
You can learn about the project's security policy `here <http://www.docker.io/security/>`_
and report security issues to this `mailbox <mailto:security@docker.com>`_.
Why do I need to sign my commits to Docker with the DCO?
........................................................
Please read `our blog post <http://blog.docker.io/2014/01/docker-code-contributions-require-developer-certificate-of-origin/>`_ on the introduction of the DCO.
Can I help by adding some questions and answers?
................................................
Definitely! You can fork `the repo`_ and edit the documentation sources.
Definitely! You can fork `the repo`_ and edit the documentation sources.
Where can I find more answers?
@ -216,5 +221,4 @@ Where can I find more answers?
.. _Ask questions on Stackoverflow: http://stackoverflow.com/search?q=docker
.. _Join the conversation on Twitter: http://twitter.com/docker
Looking for something else to read? Check out the :ref:`hello_world` example.

View file

@ -17,13 +17,13 @@ Common use cases for Docker include:
- Deploying and scaling databases and backend services in a service-oriented environment.
- Building custom PaaS environments, either from scratch or as an extension of off-the-shelf platforms like OpenShift or Cloud Foundry.
Please note Docker is currently under heavy developement. It should not be used in production (yet).
Please note Docker is currently under heavy development. It should not be used in production (yet).
For a high-level overview of Docker, please see the `Introduction
<http://www.docker.io/learn_more/>`_. When you're ready to start working with
Docker, we have a `quick start <http://www.docker.io/gettingstarted>`_
and a more in-depth guide to :ref:`ubuntu_linux` and other
:ref:`installation_list` paths including prebuilt binaries,
Vagrant-created VMs, Rackspace and Amazon instances.
Rackspace and Amazon instances.
Enough reading! :ref:`Try it out! <running_examples>`

View file

@ -10,8 +10,7 @@ Amazon EC2
There are several ways to install Docker on AWS EC2:
* :ref:`amazonquickstart` or
* :ref:`amazonstandard` or
* :ref:`amazonvagrant`
* :ref:`amazonstandard`
**You'll need an** `AWS account <http://aws.amazon.com/>`_ **first, of course.**
@ -73,112 +72,4 @@ running Ubuntu. Just follow Step 1 from :ref:`amazonquickstart` to
pick an image (or use one of your own) and skip the step with the
*User Data*. Then continue with the :ref:`ubuntu_linux` instructions.
.. _amazonvagrant:
Use Vagrant
-----------
.. include:: install_unofficial.inc
And finally, if you prefer to work through Vagrant, you can install
Docker that way too. Vagrant 1.1 or higher is required.
1. Install vagrant from http://www.vagrantup.com/ (or use your package manager)
2. Install the vagrant aws plugin
::
vagrant plugin install vagrant-aws
3. Get the docker sources, this will give you the latest Vagrantfile.
::
git clone https://github.com/dotcloud/docker.git
4. Check your AWS environment.
Create a keypair specifically for EC2, give it a name and save it
to your disk. *I usually store these in my ~/.ssh/ folder*.
Check that your default security group has an inbound rule to
accept SSH (port 22) connections.
5. Inform Vagrant of your settings
Vagrant will read your access credentials from your environment, so
we need to set them there first. Make sure you have everything on
amazon aws setup so you can (manually) deploy a new image to EC2.
Note that where possible these variables are the same as those honored by
the ec2 api tools.
::
export AWS_ACCESS_KEY=xxx
export AWS_SECRET_KEY=xxx
export AWS_KEYPAIR_NAME=xxx
export SSH_PRIVKEY_PATH=xxx
export BOX_NAME=xxx
export AWS_REGION=xxx
export AWS_AMI=xxx
export AWS_INSTANCE_TYPE=xxx
The required environment variables are:
* ``AWS_ACCESS_KEY`` - The API key used to make requests to AWS
* ``AWS_SECRET_KEY`` - The secret key to make AWS API requests
* ``AWS_KEYPAIR_NAME`` - The name of the keypair used for this EC2 instance
* ``SSH_PRIVKEY_PATH`` - The path to the private key for the named
keypair, for example ``~/.ssh/docker.pem``
There are a number of optional environment variables:
* ``BOX_NAME`` - The name of the vagrant box to use. Defaults to
``ubuntu``.
* ``AWS_REGION`` - The aws region to spawn the vm in. Defaults to
``us-east-1``.
* ``AWS_AMI`` - The aws AMI to start with as a base. This must be
be an ubuntu 12.04 precise image. You must change this value if
``AWS_REGION`` is set to a value other than ``us-east-1``.
This is because AMIs are region specific. Defaults to ``ami-69f5a900``.
* ``AWS_INSTANCE_TYPE`` - The aws instance type. Defaults to ``t1.micro``.
You can check if they are set correctly by doing something like
::
echo $AWS_ACCESS_KEY
6. Do the magic!
::
vagrant up --provider=aws
If it stalls indefinitely on ``[default] Waiting for SSH to become
available...``, double check that your default security group on AWS
includes rights to SSH (port 22) to your container.
If you have an advanced AWS setup, you might want to have a look at
`vagrant-aws <https://github.com/mitchellh/vagrant-aws>`_.
7. Connect to your machine
.. code-block:: bash
vagrant ssh
8. Your first command
Now you are in the VM, run docker
.. code-block:: bash
sudo docker
Continue with the :ref:`hello_world` example.

View file

@ -26,10 +26,7 @@ Check runtime dependencies
To run properly, docker needs the following software to be installed at runtime:
- iproute2 version 3.5 or later (build after 2012-05-21), and
specifically the "ip" utility
- iptables version 1.4 or later
- The LXC utility scripts (http://lxc.sourceforge.net) version 0.8 or later
- Git version 1.7 or later
- XZ Utils 4.9 or later
@ -41,7 +38,7 @@ Docker in daemon mode has specific kernel requirements. For details,
check your distribution in :ref:`installation_list`.
Note that Docker also has a client mode, which can run on virtually
any linux kernel (it even builds on OSX!).
any Linux kernel (it even builds on OSX!).
Get the docker binary:

View file

@ -39,7 +39,7 @@ boot2docker
``docker`` daemon. It also takes care of the installation for the OS image
that is used for the job.
.. _GitHub page: https://github.com/steeve/boot2docker
.. _GitHub page: https://github.com/boot2docker/boot2docker
Open up a new terminal window, if you have not already.
@ -51,7 +51,7 @@ Run the following commands to get boot2docker:
cd ~/bin
# Get the file
curl https://raw.github.com/steeve/boot2docker/master/boot2docker > boot2docker
curl https://raw.github.com/boot2docker/boot2docker/master/boot2docker > boot2docker
# Mark it executable
chmod +x boot2docker
@ -126,7 +126,7 @@ with our containers as if they were running locally:
.. code-block:: bash
# vm must be powered off
for i in {4900..49900}; do
for i in {49000..49900}; do
VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port$i,tcp,,$i,,$i";
VBoxManage modifyvm "boot2docker-vm" --natpf1 "udp-port$i,udp,,$i,,$i";
done
@ -153,7 +153,7 @@ boot2docker:
See the GitHub page for `boot2docker`_.
.. _boot2docker: https://github.com/steeve/boot2docker
.. _boot2docker: https://github.com/boot2docker/boot2docker
If SSH complains about keys:
----------------------------

View file

@ -182,9 +182,12 @@ daemon will make the ownership of the Unix socket read/writable by the
*docker* group when the daemon starts. The ``docker`` daemon must
always run as the root user, but if you run the ``docker`` client as a user in
the *docker* group then you don't need to add ``sudo`` to all the
client commands.
client commands. As of 0.9.0, you can specify that a group other than ``docker``
should own the Unix socket with the ``-G`` option.
.. warning:: The *docker* group (or the group specified with ``-G``) is
root-equivalent.
.. warning:: The *docker* group is root-equivalent.
**Example:**
@ -217,15 +220,35 @@ To install the latest version of docker, use the standard ``apt-get`` method:
# install the latest
sudo apt-get install lxc-docker
Memory and Swap Accounting
^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to enable memory and swap accounting, you must add the following
command-line parameters to your kernel::
cgroup_enable=memory swapaccount=1
On systems using GRUB (which is the default for Ubuntu), you can add those
parameters by editing ``/etc/default/grub`` and extending
``GRUB_CMDLINE_LINUX``. Look for the following line::
GRUB_CMDLINE_LINUX=""
And replace it with the following one::
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
Then run ``update-grub``, and reboot.
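After the reboot you can verify that the parameters were picked up by the
kernel, e.g.:

.. code-block:: bash

    cat /proc/cmdline
    # should now contain: cgroup_enable=memory swapaccount=1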
Troubleshooting
^^^^^^^^^^^^^^^
On Linux Mint, the ``cgroups-lite`` package is not installed by default.
On Linux Mint, the ``cgroup-lite`` package is not installed by default.
Before Docker will work correctly, you will need to install this via:
.. code-block:: bash
sudo apt-get update && sudo apt-get install cgroups-lite
sudo apt-get update && sudo apt-get install cgroup-lite
.. _ufw:
@ -261,6 +284,64 @@ incoming connections on the Docker port (default 4243):
.. _installmirrors:
Docker and local DNS server warnings
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Systems which are running Ubuntu or an Ubuntu derivative on the desktop will
use `127.0.0.1` as the default nameserver in `/etc/resolv.conf`. NetworkManager
sets up dnsmasq to use the real DNS servers of the connection and sets up
`nameserver 127.0.0.1` in `/etc/resolv.conf`.
When starting containers on these desktop machines, users will see a warning:
.. code-block:: bash
WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers can't use it. Using default external servers : [8.8.8.8 8.8.4.4]
This warning is shown because the containers can't use the local DNS nameserver
and Docker will default to using an external nameserver.
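You can see which nameserver a container actually ends up with by checking
its ``resolv.conf``, for example:

.. code-block:: bash

    sudo docker run busybox cat /etc/resolv.conf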
This can be worked around by specifying a DNS server to be used by the Docker
daemon for the containers:
.. code-block:: bash
sudo nano /etc/default/docker
---
# Add:
DOCKER_OPTS="-dns 8.8.8.8"
# 8.8.8.8 could be replaced with a local DNS server, such as 192.168.1.1
# multiple DNS servers can be specified: -dns 8.8.8.8 -dns 192.168.1.1
The Docker daemon has to be restarted:
.. code-block:: bash
sudo restart docker
.. warning:: If you're doing this on a laptop which connects to various networks, make sure to choose a public DNS server.
An alternative solution involves disabling dnsmasq in NetworkManager by
following these steps:
.. code-block:: bash
sudo nano /etc/NetworkManager/NetworkManager.conf
----
# Change:
dns=dnsmasq
# to
#dns=dnsmasq
NetworkManager and Docker need to be restarted afterwards:
.. code-block:: bash
sudo restart network-manager
sudo restart docker
.. warning:: This might make DNS resolution slower on some networks.
Mirrors
^^^^^^^

View file

@ -1,223 +1,72 @@
:title: Installation on Windows
:description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Docker, Docker documentation, Windows, requirements, virtualbox, vagrant, git, ssh, putty, cygwin
:keywords: Docker, Docker documentation, Windows, requirements, virtualbox, boot2docker
.. _windows:
Windows
=======
Docker can run on Windows using a VM like VirtualBox. You then run
Linux within the VM.
Docker can run on Windows using a virtualization platform like VirtualBox. A Linux
distribution is run inside a virtual machine and that's where Docker will run.
Installation
------------
.. include:: install_header.inc
.. include:: install_unofficial.inc
1. Install virtualbox from https://www.virtualbox.org - or follow this `tutorial <http://www.slideshare.net/julienbarbier42/install-virtualbox-on-windows-7>`_.
1. Install virtualbox from https://www.virtualbox.org - or follow this tutorial__
2. Download the latest boot2docker.iso from https://github.com/boot2docker/boot2docker/releases.
.. __: http://www.slideshare.net/julienbarbier42/install-virtualbox-on-windows-7
3. Start VirtualBox.
2. Install vagrant from http://www.vagrantup.com - or follow this tutorial__
4. Create a new Virtual machine with the following settings:
.. __: http://www.slideshare.net/julienbarbier42/install-vagrant-on-windows-7
- `Name: boot2docker`
- `Type: Linux`
- `Version: Linux 2.6 (64 bit)`
- `Memory size: 1024 MB`
- `Hard drive: Do not add a virtual hard drive`
3. Install git with ssh from http://git-scm.com/downloads - or follow this tutorial__
5. Open the settings of the virtual machine:
.. __: http://www.slideshare.net/julienbarbier42/install-git-with-ssh-on-windows-7
5.1. go to Storage
5.2. click the empty slot below `Controller: IDE`
We recommend having at least 2Gb of free disk space and 2Gb of RAM (or more).
5.3. click the disc icon on the right of `IDE Secondary Master`
Opening a command prompt
------------------------
5.4. click `Choose a virtual CD/DVD disk file`
First open a cmd prompt. Press Windows key and then press “R”
key. This will open the RUN dialog box for you. Type “cmd” and press
Enter. Or you can click on Start, type “cmd” in the “Search programs
and files” field, and click on cmd.exe.
6. Browse to the path where you've saved the `boot2docker.iso`, select the `boot2docker.iso` and click open.
.. image:: images/win/_01.gif
:alt: Git install
:align: center
7. Click OK on the Settings dialog to save the changes and close the window.
This should open a cmd prompt window.
8. Start the virtual machine by clicking the green start button.
.. image:: images/win/_02.gif
:alt: run docker
:align: center
Alternatively, you can also use a Cygwin terminal, or Git Bash (or any
other command line program you are usually using). The next steps
would be the same.
.. _launch_ubuntu:
Launch an Ubuntu virtual server
-------------------------------
Lets download and run an Ubuntu image with docker binaries already
installed.
.. code-block:: bash
git clone https://github.com/dotcloud/docker.git
cd docker
vagrant up
.. image:: images/win/run_02_.gif
:alt: run docker
:align: center
Congratulations! You are running an Ubuntu server with docker
installed on it. You do not see it though, because it is running in
the background.
Log onto your Ubuntu server
---------------------------
Lets log into your Ubuntu server now. To do so you have two choices:
- Use Vagrant on Windows command prompt OR
- Use SSH
Using Vagrant on Windows Command Prompt
```````````````````````````````````````
Run the following command
.. code-block:: bash
vagrant ssh
You may see an error message starting with “`ssh` executable not
found”. In this case it means that you do not have SSH in your
PATH. If you do not have SSH in your PATH you can set it up with the
“set” command. For instance, if your ssh.exe is in the folder named
“C:\Program Files (x86)\Git\bin”, then you can run the following
command:
.. code-block:: bash
set PATH=%PATH%;C:\Program Files (x86)\Git\bin
.. image:: images/win/run_03.gif
:alt: run docker
:align: center
Using SSH
`````````
First step is to get the IP and port of your Ubuntu server. Simply run:
.. code-block:: bash
vagrant ssh-config
You should see an output with HostName and Port information. In this
example, HostName is 127.0.0.1 and port is 2222. And the User is
“vagrant”. The password is not shown, but it is also “vagrant”.
.. image:: images/win/ssh-config.gif
:alt: run docker
:align: center
You can now use this information for connecting via SSH to your
server. To do so you can:
- Use putty.exe OR
- Use SSH from a terminal
Use putty.exe
'''''''''''''
You can download putty.exe from this page
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Launch
putty.exe and simply enter the information you got from last step.
.. image:: images/win/putty.gif
:alt: run docker
:align: center
Open, and enter user = vagrant and password = vagrant.
.. image:: images/win/putty_2.gif
:alt: run docker
:align: center
SSH from a terminal
'''''''''''''''''''
You can also run this command on your favorite terminal (windows
prompt, cygwin, git-bash, …). Make sure to adapt the IP and port from
what you got from the vagrant ssh-config command.
.. code-block:: bash
ssh vagrant@127.0.0.1 -p 2222
Enter user = vagrant and password = vagrant.
.. image:: images/win/cygwin.gif
:alt: run docker
:align: center
Congratulations, you are now logged onto your Ubuntu Server, running
on top of your Windows machine !
9. The boot2docker virtual machine should boot now.
Running Docker
--------------
First you have to be root in order to run docker. Simply run the
following command:
boot2docker will log you in automatically so you can start using Docker right
away.
.. code-block:: bash
sudo su
You are now ready for the dockers “hello world” example. Run
Let's try the “hello world” example. Run
.. code-block:: bash
docker run busybox echo hello world
.. image:: images/win/run_04.gif
:alt: run docker
:align: center
This will download the small busybox image and print hello world.
All done!
Now you can continue with the :ref:`hello_world` example.
Observations
------------
Troubleshooting
---------------
Persistent storage
``````````````````
VM does not boot
````````````````
.. image:: images/win/ts_go_bios.JPG
If you run into this error message "The VM failed to remain in the
'running' state while attempting to boot", please check that your
computer has virtualization technology available and activated by
going to the BIOS. Here's an example for an HP computer (System
configuration / Device configuration)
.. image:: images/win/hp_bios_vm.JPG
On some machines the BIOS menu can only be accessed before startup.
To access BIOS in this scenario you should restart your computer and
press ESC/Enter when prompted to access the boot and BIOS controls. Typically
the option to allow virtualization is contained within the BIOS/Security menu.
Docker is not installed
```````````````````````
.. image:: images/win/ts_no_docker.JPG
If you run into this error message "The program 'docker' is currently
not installed", try deleting the docker folder and restart from
:ref:`launch_ubuntu`
The virtual machine created above lacks any persistent data storage. All images
and containers will be lost when shutting down or rebooting the VM.

View file

@ -3,3 +3,4 @@ This directory holds the authoritative specifications of APIs defined and implem
* The remote API by which a docker node can be queried over HTTP
* The registry API by which a docker node can download and upload container images for storage and sharing
* The index search API by which a docker node can search the public index for images to download
* The docker.io OAuth and accounts API which 3rd party services can use to access account information

Binary file not shown.


View file

@ -0,0 +1,308 @@
:title: docker.io Accounts API
:description: API Documentation for docker.io accounts.
:keywords: API, Docker, accounts, REST, documentation
======================
docker.io Accounts API
======================
.. contents:: Table of Contents
1. Endpoints
============
1.1 Get a single user
^^^^^^^^^^^^^^^^^^^^^
.. http:get:: /api/v1.1/users/:username/
Get profile info for the specified user.
:param username: username of the user whose profile info is being requested.
:reqheader Authorization: required authentication credentials of either type HTTP Basic or OAuth Bearer Token.
:statuscode 200: success, user data returned.
:statuscode 401: authentication error.
:statuscode 403: permission error, authenticated user must be the user whose data is being requested, OAuth access tokens must have ``profile_read`` scope.
:statuscode 404: the specified username does not exist.
**Example request**:
.. sourcecode:: http
GET /api/v1.1/users/janedoe/ HTTP/1.1
Host: www.docker.io
Accept: application/json
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{
"id": 2,
"username": "janedoe",
"url": "",
"date_joined": "2014-02-12T17:58:01.431312Z",
"type": "User",
"full_name": "Jane Doe",
"location": "San Francisco, CA",
"company": "Success, Inc.",
"profile_url": "https://docker.io/",
"gravatar_email": "jane.doe+gravatar@example.com",
"email": "jane.doe@example.com",
"is_active": true
}
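As a sketch, the same request could be made from the command line with
``curl`` (the credentials are illustrative):

.. code-block:: bash

    curl -u janedoe:password -H "Accept: application/json" \
        https://www.docker.io/api/v1.1/users/janedoe/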
1.2 Update a single user
^^^^^^^^^^^^^^^^^^^^^^^^
.. http:patch:: /api/v1.1/users/:username/
Update profile info for the specified user.
:param username: username of the user whose profile info is being updated.
:jsonparam string full_name: (optional) the new name of the user.
:jsonparam string location: (optional) the new location.
:jsonparam string company: (optional) the new company of the user.
:jsonparam string profile_url: (optional) the new profile url.
:jsonparam string gravatar_email: (optional) the new Gravatar email address.
:reqheader Authorization: required authentication credentials of either type HTTP Basic or OAuth Bearer Token.
:reqheader Content-Type: MIME Type of post data. JSON, url-encoded form data, etc.
:statuscode 200: success, user data updated.
:statuscode 400: post data validation error.
:statuscode 401: authentication error.
:statuscode 403: permission error, authenticated user must be the user whose data is being updated, OAuth access tokens must have ``profile_write`` scope.
:statuscode 404: the specified username does not exist.
**Example request**:
.. sourcecode:: http
PATCH /api/v1.1/users/janedoe/ HTTP/1.1
Host: www.docker.io
Accept: application/json
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
{
"location": "Private Island",
"profile_url": "http://janedoe.com/",
"company": "Retired",
}
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{
"id": 2,
"username": "janedoe",
"url": "",
"date_joined": "2014-02-12T17:58:01.431312Z",
"type": "User",
"full_name": "Jane Doe",
"location": "Private Island",
"company": "Retired",
"profile_url": "http://janedoe.com/",
"gravatar_email": "jane.doe+gravatar@example.com",
"email": "jane.doe@example.com",
"is_active": true
}
1.3 List email addresses for a user
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. http:get:: /api/v1.1/users/:username/emails/
List email info for the specified user.
:param username: username of the user whose email info is being requested.
:reqheader Authorization: required authentication credentials of either type HTTP Basic or OAuth Bearer Token.
:statuscode 200: success, email list returned.
:statuscode 401: authentication error.
:statuscode 403: permission error, authenticated user must be the user whose data is being requested, OAuth access tokens must have ``email_read`` scope.
:statuscode 404: the specified username does not exist.
**Example request**:
.. sourcecode:: http
GET /api/v1.1/users/janedoe/emails/ HTTP/1.1
Host: www.docker.io
Accept: application/json
Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"email": "jane.doe@example.com",
"verified": true,
"primary": true
}
]
1.4 Add email address for a user
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. http:post:: /api/v1.1/users/:username/emails/
Add a new email address to the specified user's account. The email address
must be verified separately; a confirmation email is not automatically sent.
:jsonparam string email: email address to be added.
:reqheader Authorization: required authentication credentials of either type HTTP Basic or OAuth Bearer Token.
:reqheader Content-Type: MIME Type of post data. JSON, url-encoded form data, etc.
:statuscode 201: success, new email added.
:statuscode 400: data validation error.
:statuscode 401: authentication error.
:statuscode 403: permission error, authenticated user must be the user whose data is being requested, OAuth access tokens must have ``email_write`` scope.
:statuscode 404: the specified username does not exist.
**Example request**:
.. sourcecode:: http
POST /api/v1.1/users/janedoe/emails/ HTTP/1.1
Host: www.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM
{
"email": "jane.doe+other@example.com"
}
**Example response**:
.. sourcecode:: http
HTTP/1.1 201 Created
Content-Type: application/json
{
"email": "jane.doe+other@example.com",
"verified": false,
"primary": false
}
1.5 Update an email address for a user
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. http:patch:: /api/v1.1/users/:username/emails/
Update an email address for the specified user to either verify an email
address or set it as the primary email for the user. You cannot use this
endpoint to un-verify an email address. You cannot use this endpoint to
unset the primary email, only set another as the primary.
:param username: username of the user whose email info is being updated.
:jsonparam string email: the email address to be updated.
:jsonparam boolean verified: (optional) whether the email address is verified, must be ``true`` or absent.
:jsonparam boolean primary: (optional) whether to set the email address as the primary email, must be ``true`` or absent.
:reqheader Authorization: required authentication credentials of either type HTTP Basic or OAuth Bearer Token.
:reqheader Content-Type: MIME Type of post data. JSON, url-encoded form data, etc.
:statuscode 200: success, user's email updated.
:statuscode 400: data validation error.
:statuscode 401: authentication error.
:statuscode 403: permission error, authenticated user must be the user whose data is being updated, OAuth access tokens must have ``email_write`` scope.
:statuscode 404: the specified username or email address does not exist.
**Example request**:
Once you have independently verified an email address.
.. sourcecode:: http
PATCH /api/v1.1/users/janedoe/emails/ HTTP/1.1
Host: www.docker.io
Accept: application/json
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
{
"email": "jane.doe+other@example.com",
"verified": true,
}
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{
"email": "jane.doe+other@example.com",
"verified": true,
"primary": false
}
1.6 Delete email address for a user
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. http:delete:: /api/v1.1/users/:username/emails/
Delete an email address from the specified user's account. You cannot
delete a user's primary email address.
:jsonparam string email: email address to be deleted.
:reqheader Authorization: required authentication credentials of either type HTTP Basic or OAuth Bearer Token.
:reqheader Content-Type: MIME Type of post data. JSON, url-encoded form data, etc.
:statuscode 204: success, email address removed.
:statuscode 400: validation error.
:statuscode 401: authentication error.
:statuscode 403: permission error, authenticated user must be the user whose data is being requested, OAuth access tokens must have ``email_write`` scope.
:statuscode 404: the specified username or email address does not exist.
**Example request**:
.. sourcecode:: http
DELETE /api/v1.1/users/janedoe/emails/ HTTP/1.1
Host: www.docker.io
Accept: application/json
Content-Type: application/json
Authorization: Bearer zAy0BxC1wDv2EuF3tGs4HrI6qJp6KoL7nM
{
"email": "jane.doe+other@example.com"
}
**Example response**:
.. sourcecode:: http
HTTP/1.1 204 NO CONTENT
Content-Length: 0

View file

@ -0,0 +1,253 @@
:title: docker.io OAuth API
:description: API Documentation for docker.io's OAuth flow.
:keywords: API, Docker, oauth, REST, documentation
===================
docker.io OAuth API
===================
.. contents:: Table of Contents
1. Brief introduction
=====================
Some docker.io API requests will require an access token to authenticate. To
get an access token for a user, that user must first grant your application
access to their docker.io account. In order for them to grant your application
access you must first register your application.
Before continuing, we encourage you to familiarize yourself with
`The OAuth 2.0 Authorization Framework <http://tools.ietf.org/html/rfc6749>`_.
*Also note that all OAuth interactions must take place over https connections*
2. Register Your Application
============================
You will need to register your application with docker.io before users will
be able to grant your application access to their account information. We
are currently only allowing applications selectively. To request registration
of your application send an email to support-accounts@docker.com with the
following information:
- The name of your application
- A description of your application and the service it will provide
to docker.io users.
- A callback URI that we will use for redirecting authorization requests to
your application. These are used in the step of getting an Authorization
Code. The domain name of the callback URI will be visible to the user when
they are requested to authorize your application.
When your application is approved you will receive a response from the
docker.io team with your ``client_id`` and ``client_secret`` which your
application will use in the steps of getting an Authorization Code and getting
an Access Token.
3. Endpoints
============
3.1 Get an Authorization Code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once you have registered, you are ready to start integrating docker.io accounts
into your application! The process is usually started by a user following a
link in your application to an OAuth Authorization endpoint.
.. http:get:: /api/v1.1/o/authorize/
Request that a docker.io user authorize your application. If the user is
not already logged in, they will be prompted to login. The user is then
presented with a form to authorize your application for the requested
access scope. On submission, the user will be redirected to the specified
``redirect_uri`` with an Authorization Code.
:query client_id: The ``client_id`` given to your application at
registration.
:query response_type: MUST be set to ``code``. This specifies that you
would like an Authorization Code returned.
:query redirect_uri: The URI to redirect back to after the user has
authorized your application. If omitted, the first of your registered
``response_uris`` is used. If included, it must be one of the URIs
which were submitted when registering your application.
:query scope: The extent of access permissions you are requesting.
Currently, the scope options are ``profile_read``, ``profile_write``,
``email_read``, and ``email_write``. Scopes must be separated by a
space. If omitted, the default scopes ``profile_read email_read`` are
used.
:query state: (Recommended) Used by your application to maintain state
between the authorization request and callback to protect against CSRF
attacks.
**Example Request**
Asking the user for authorization.
.. sourcecode:: http
GET /api/v1.1/o/authorize/?client_id=TestClientID&response_type=code&redirect_uri=https%3A//my.app/auth_complete/&scope=profile_read%20email_read&state=abc123 HTTP/1.1
Host: www.docker.io
**Authorization Page**
When the user follows a link, making the above GET request, they will be
asked to login to their docker.io account if they are not already and then
be presented with the following authorization prompt which asks the user
to authorize your application with a description of the requested scopes.
.. image:: _static/io_oauth_authorization_page.png
Once the user allows or denies your Authorization Request, they will be
redirected back to your application. Included in that request will be the
following query parameters:
``code``
The Authorization code generated by the docker.io authorization server.
Present it again to request an Access Token. This code expires in 60
seconds.
``state``
If the ``state`` parameter was present in the authorization request this
will be the exact value received from that request.
``error``
An error message in the event of the user denying the authorization or
some other kind of error with the request.
3.2 Get an Access Token
^^^^^^^^^^^^^^^^^^^^^^^
Once the user has authorized your application, a request will be made to your
application's specified ``redirect_uri`` which includes a ``code`` parameter
that you must then use to get an Access Token.
.. http:post:: /api/v1.1/o/token/
Submit your newly granted Authorization Code and your application's
credentials to receive an Access Token and Refresh Token. The code is valid
for 60 seconds and cannot be used more than once.
:reqheader Authorization: HTTP basic authentication using your
application's ``client_id`` and ``client_secret``
:form grant_type: MUST be set to ``authorization_code``
:form code: The authorization code received from the user's redirect
request.
:form redirect_uri: The same ``redirect_uri`` used in the authentication
request.
**Example Request**
Using an authorization code to get an access token.
.. sourcecode:: http
POST /api/v1.1/o/token/ HTTP/1.1
Host: www.docker.io
Authorization: Basic VGVzdENsaWVudElEOlRlc3RDbGllbnRTZWNyZXQ=
Accept: application/json
Content-Type: application/json
{
"grant_type": "code",
"code": "YXV0aG9yaXphdGlvbl9jb2Rl",
"redirect_uri": "https://my.app/auth_complete/"
}
**Example Response**
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
{
"username": "janedoe",
"user_id": 42,
"access_token": "t6k2BqgRw59hphQBsbBoPPWLqu6FmS",
"expires_in": 15552000,
"token_type": "Bearer",
"scope": "profile_read email_read",
"refresh_token": "hJDhLH3cfsUrQlT4MxA6s8xAFEqdgc"
}
In the case of an error, there will be a non-200 HTTP status and data
detailing the error.
3.3 Refresh a Token
^^^^^^^^^^^^^^^^^^^
Once the Access Token expires you can use your ``refresh_token`` to have
docker.io issue your application a new Access Token, if the user has not
revoked access from your application.
.. http:post:: /api/v1.1/o/token/
Submit your ``refresh_token`` and application's credentials to receive a
new Access Token and Refresh Token. The ``refresh_token`` can be used
only once.
:reqheader Authorization: HTTP basic authentication using your
application's ``client_id`` and ``client_secret``
:form grant_type: MUST be set to ``refresh_token``
:form refresh_token: The ``refresh_token`` which was issued to your
application.
:form scope: (optional) The scope of the access token to be returned.
Must not include any scope not originally granted by the user and if
omitted is treated as equal to the scope originally granted.
**Example Request**
Refreshing an access token.
.. sourcecode:: http
POST /api/v1.1/o/token/ HTTP/1.1
Host: www.docker.io
Authorization: Basic VGVzdENsaWVudElEOlRlc3RDbGllbnRTZWNyZXQ=
Accept: application/json
Content-Type: application/json
{
"grant_type": "refresh_token",
"refresh_token": "hJDhLH3cfsUrQlT4MxA6s8xAFEqdgc",
}
**Example Response**
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
{
"username": "janedoe",
"user_id": 42,
"access_token": "t6k2BqgRw59hphQBsbBoPPWLqu6FmS",
"expires_in": 15552000,
"token_type": "Bearer",
"scope": "profile_read email_read",
"refresh_token": "hJDhLH3cfsUrQlT4MxA6s8xAFEqdgc"
}
In the case of an error, the response will have a non-200 HTTP status code
and a body with data detailing the error.
4. Use an Access Token with the API
===================================
Many of the docker.io API requests will require an Authorization request
header field. Simply ensure you add this header with ``Bearer <access_token>``:
.. sourcecode:: http
GET /api/v1.1/resource HTTP/1.1
Host: docker.io
Authorization: Bearer 2YotnFZFEjr1zCsicMWpAA
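In Go, for example, attaching the header to a request might look like the fragment below (``accessToken`` is assumed to hold the token obtained in section 3.2, and ``net/http`` is assumed to be imported):

.. code-block:: go

    // Assumes accessToken holds the token obtained in section 3.2.
    req, _ := http.NewRequest("GET", "https://docker.io/api/v1.1/resource", nil)
    req.Header.Set("Authorization", "Bearer "+accessToken)
    resp, err := http.DefaultClient.Do(req)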

View file

@ -2,7 +2,7 @@
:description: API Documentation for Docker
:keywords: API, Docker, rcli, REST, documentation
.. COMMENT use http://pythonhosted.org/sphinxcontrib-httpdomain/ to
.. COMMENT use https://pythonhosted.org/sphinxcontrib-httpdomain/ to
.. document the REST API.
=================
@ -26,15 +26,36 @@ Docker Remote API
2. Versions
===========
The current version of the API is 1.9
The current version of the API is 1.10
Calling /images/<name>/insert is the same as calling
/v1.9/images/<name>/insert
/v1.10/images/<name>/insert
You can still call an old version of the api using
/v1.0/images/<name>/insert
v1.10
*****
Full Documentation
------------------
:doc:`docker_remote_api_v1.10`
What's new
----------
.. http:delete:: /images/(name)
**New!** You can now use the force parameter to force the deletion of an image,
even if it is tagged in multiple repositories.
.. http:delete:: /containers/(id)
**New!** You can now use the force parameter to force the deletion of a container,
even if it is currently running.
v1.9
****

File diff suppressed because it is too large Load diff

View file

@ -118,6 +118,7 @@ Create a container
"User":"",
"Memory":0,
"MemorySwap":0,
"CpuShares":0,
"AttachStdin":false,
"AttachStdout":true,
"AttachStderr":true,
@ -153,7 +154,15 @@ Create a container
"Warnings":[]
}
:jsonparam config: the container's configuration
:jsonparam Hostname: Container host name
:jsonparam User: Username or UID
:jsonparam Memory: Memory Limit in bytes
:jsonparam CpuShares: CPU shares (relative weight)
:jsonparam AttachStdin: 1/True/true or 0/False/false, attach to standard input. Default false
:jsonparam AttachStdout: 1/True/true or 0/False/false, attach to standard output. Default false
:jsonparam AttachStderr: 1/True/true or 0/False/false, attach to standard error. Default false
:jsonparam Tty: 1/True/true or 0/False/false, allocate a pseudo-tty. Default false
:jsonparam OpenStdin: 1/True/true or 0/False/false, keep stdin open even if not attached. Default false
:query name: Assign the specified name to the container. Must match ``/?[a-zA-Z0-9_-]+``.
:statuscode 201: no error
:statuscode 404: no such container
@ -394,7 +403,11 @@ Start a container
HTTP/1.1 204 No Content
Content-Type: text/plain
:jsonparam hostConfig: the container's host configuration (optional)
:jsonparam Binds: Create a bind mount to a directory or file with [host-path]:[container-path]:[rw|ro]. If a directory "container-path" is missing, then docker creates a new volume.
:jsonparam LxcConf: Map of custom lxc options
:jsonparam PortBindings: Expose ports from the container, optionally publishing them via the HostPort flag
:jsonparam PublishAllPorts: 1/True/true or 0/False/false, publish all exposed ports to the host interfaces. Default false
:jsonparam Privileged: 1/True/true or 0/False/false, give extended privileges to this container. Default false
:statuscode 204: no error
:statuscode 404: no such container
:statuscode 500: server error

View file

@ -118,6 +118,7 @@ Create a container
"User":"",
"Memory":0,
"MemorySwap":0,
"CpuShares":0,
"AttachStdin":false,
"AttachStdout":true,
"AttachStderr":true,
@ -153,7 +154,15 @@ Create a container
"Warnings":[]
}
:jsonparam config: the container's configuration
:jsonparam Hostname: Container host name
:jsonparam User: Username or UID
:jsonparam Memory: Memory Limit in bytes
:jsonparam CpuShares: CPU shares (relative weight)
:jsonparam AttachStdin: 1/True/true or 0/False/false, attach to standard input. Default false
:jsonparam AttachStdout: 1/True/true or 0/False/false, attach to standard output. Default false
:jsonparam AttachStderr: 1/True/true or 0/False/false, attach to standard error. Default false
:jsonparam Tty: 1/True/true or 0/False/false, allocate a pseudo-tty. Default false
:jsonparam OpenStdin: 1/True/true or 0/False/false, keep stdin open even if not attached. Default false
:query name: Assign the specified name to the container. Must match ``/?[a-zA-Z0-9_-]+``.
:statuscode 201: no error
:statuscode 404: no such container
@ -394,7 +403,11 @@ Start a container
HTTP/1.1 204 No Content
Content-Type: text/plain
:jsonparam hostConfig: the container's host configuration (optional)
:jsonparam Binds: Create a bind mount to a directory or file with [host-path]:[container-path]:[rw|ro]. If a directory "container-path" is missing, then docker creates a new volume.
:jsonparam LxcConf: Map of custom lxc options
:jsonparam PortBindings: Expose ports from the container, optionally publishing them via the HostPort flag
:jsonparam PublishAllPorts: 1/True/true or 0/False/false, publish all exposed ports to the host interfaces. Default false
:jsonparam Privileged: 1/True/true or 0/False/false, give extended privileges to this container. Default false
:statuscode 204: no error
:statuscode 404: no such container
:statuscode 500: server error
@ -993,12 +1006,12 @@ Search images
2.3 Misc
--------
Build an image from Dockerfile via stdin
****************************************
Build an image from Dockerfile
******************************
.. http:post:: /build
Build an image from Dockerfile via stdin
Build an image from Dockerfile using a POST body.
**Example request**:
@ -1032,6 +1045,7 @@ Build an image from Dockerfile via stdin
:query t: repository name (and optionally a tag) to be applied to the resulting image in case of success
:query q: suppress verbose build output
:query nocache: do not use the cache when building the image
:query rm: Remove intermediate containers after a successful build
:reqheader Content-type: should be set to ``"application/tar"``.
:reqheader X-Registry-Config: base64-encoded ConfigFile object
:statuscode 200: no error

View file

@ -15,4 +15,6 @@ Your programs and scripts can access Docker's functionality via these interfaces
index_api
docker_remote_api
remote_api_client_libraries
docker_io_oauth_api
docker_io_accounts_api

View file

@ -27,7 +27,7 @@ and we will add the libraries here.
| JavaScript (NodeJS) | docker.io | https://github.com/appersonlabs/docker.io | Active |
| | | Install via NPM: `npm install docker.io` | |
+----------------------+----------------+--------------------------------------------+----------+
| JavaScript | docker-js | https://github.com/dgoujard/docker-js | Active |
| JavaScript | docker-js | https://github.com/dgoujard/docker-js | Outdated |
+----------------------+----------------+--------------------------------------------+----------+
| JavaScript (Angular) | docker-cp | https://github.com/13W/docker-cp | Active |
| **WebUI** | | | |
@ -43,3 +43,5 @@ and we will add the libraries here.
+----------------------+----------------+--------------------------------------------+----------+
| PHP | Alvine | http://pear.alvine.io/ (alpha) | Active |
+----------------------+----------------+--------------------------------------------+----------+
| PHP | Docker-PHP | http://stage1.github.io/docker-php/ | Active |
+----------------------+----------------+--------------------------------------------+----------+

View file

@ -74,7 +74,7 @@ When you're done with your build, you're ready to look into
2. Format
=========
The Dockerfile format is quite simple:
Here is the format of the Dockerfile:
::
@ -466,6 +466,8 @@ For example you might add something like this:
ONBUILD RUN /usr/local/bin/python-build --dir /app/src
[...]
.. warning:: Chaining ONBUILD instructions using `ONBUILD ONBUILD` isn't allowed.
.. warning:: ONBUILD may not trigger FROM or MAINTAINER instructions.
.. _dockerfile_examples:

View file

@ -20,8 +20,12 @@ To list available commands, either run ``docker`` with no parameters or execute
.. _cli_options:
Types of Options
----------------
Options
-------
Single character commandline options can be combined, so rather than typing
``docker run -t -i --name test busybox sh``, you can write
``docker run -ti --name test busybox sh``.
Boolean
~~~~~~~
@ -67,6 +71,7 @@ Commands
Usage of docker:
-D, --debug=false: Enable debug mode
-H, --host=[]: Multiple tcp://host:port or unix://path/to/socket to bind in daemon mode, single connection otherwise. systemd socket activation can be used with fd://[socketfd].
-G, --group="docker": Group to assign the unix socket specified by -H when running in daemon mode; use '' (the empty string) to disable setting of a group
--api-enable-cors=false: Enable CORS headers in the remote API
-b, --bridge="": Attach containers to a pre-existing network bridge; use 'none' to disable container networking
--bip="": Use this CIDR notation address for the network bridge's IP, not compatible with -b
@ -79,8 +84,9 @@ Commands
-p, --pidfile="/var/run/docker.pid": Path to use for daemon PID file
-r, --restart=true: Restart previously running containers
-s, --storage-driver="": Force the docker runtime to use a specific storage driver
-e, --exec-driver="native": Force the docker runtime to use a specific exec driver
-v, --version=false: Print version information and quit
-mtu, --mtu=0: Set the containers network MTU; if no value is provided: default to the default route MTU or 1500 if not default route is available
--mtu=0: Set the containers network MTU; if no value is provided: default to the default route MTU or 1500 if no default route is available
The Docker daemon is the persistent process that manages containers. Docker uses the same binary for both the
daemon and client. To run the daemon you provide the ``-d`` flag.
@ -91,6 +97,8 @@ To set the DNS server for all Docker containers, use ``docker -d -dns 8.8.8.8``.
To run the daemon with debug output, use ``docker -d -D``.
To use lxc as the execution driver, use ``docker -d -e lxc``.
The docker client will also honor the ``DOCKER_HOST`` environment variable to set
the ``-H`` flag for the client.
@ -107,11 +115,15 @@ Using ``fd://`` will work perfectly for most setups but you can also specify ind
If the specified socket-activated files aren't found, then docker will exit.
You can find examples of using systemd socket activation with docker and systemd in the `docker source tree <https://github.com/dotcloud/docker/blob/master/contrib/init/systemd/socket-activation/>`_.
.. warning::
Docker and LXC do not support the use of softlinks for either the Docker data directory (``/var/lib/docker``) or for ``/tmp``.
If your system is likely to be set up in that way, you can use ``readlink -f`` to canonicalise the links:
Docker supports softlinks for the Docker data directory (``/var/lib/docker``) and for ``/tmp``.
TMPDIR and the data directory can be set like this:
``TMPDIR=$(readlink -f /tmp) /usr/local/bin/docker -d -D -g $(readlink -f /var/lib/docker) -H unix:// $EXPOSE_ALL > /var/lib/boot2docker/docker.log 2>&1``
::
TMPDIR=/mnt/disk2/tmp /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1
# or
export TMPDIR=/mnt/disk2/tmp
/usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// > /var/lib/boot2docker/docker.log 2>&1
.. _cli_attach:
@ -184,11 +196,11 @@ Examples:
Usage: docker build [OPTIONS] PATH | URL | -
Build a new container image from the source code at PATH
-t, --time="": Repository name (and optionally a tag) to be applied
-t, --tag="": Repository name (and optionally a tag) to be applied
to the resulting image in case of success.
-q, --quiet=false: Suppress the verbose output generated by the containers.
--no-cache: Do not use the cache when building the image.
--rm: Remove intermediate containers after a successful build
--rm=true: Remove intermediate containers after a successful build
The files at ``PATH`` or ``URL`` are called the "context" of the build. The
build process may refer to any of the files in the context, for example when
@ -229,6 +241,9 @@ Examples:
---> Running in 02071fceb21b
---> f52f38b7823e
Successfully built f52f38b7823e
Removing intermediate container 9c9e81692ae9
Removing intermediate container 02071fceb21b
This example specifies that the ``PATH`` is ``.``, and so all the files in
the local directory get tar'd and sent to the Docker daemon. The ``PATH``
@ -243,6 +258,9 @@ The transfer of context from the local machine to the Docker daemon is
what the ``docker`` client means when you see the "Uploading context"
message.
If you wish to keep the intermediate containers after the build is complete,
you must use ``--rm=false``. This does not affect the build cache.
.. code-block:: bash
@ -510,7 +528,7 @@ For example:
Show the history of an image
--no-trunc=false: Don't truncate output
-q, --quiet=false: only show numeric IDs
-q, --quiet=false: Only show numeric IDs
To see how the ``docker:latest`` image was built:
@ -557,11 +575,11 @@ To see how the ``docker:latest`` image was built:
List images
-a, --all=false: show all images (by default filter out the intermediate images used to build)
-a, --all=false: Show all images (by default filter out the intermediate images used to build)
--no-trunc=false: Don't truncate output
-q, --quiet=false: only show numeric IDs
--tree=false: output graph in tree format
--viz=false: output graph in graphviz format
-q, --quiet=false: Only show numeric IDs
--tree=false: Output graph in tree format
--viz=false: Output graph in graphviz format
Listing the most recently created images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -798,6 +816,19 @@ we ask for the ``HostPort`` field to get the public address.
$ sudo docker inspect -format='{{(index (index .NetworkSettings.Ports "8787/tcp") 0).HostPort}}' $INSTANCE_ID
Get config
..........
The ``.Field`` syntax doesn't work when the field contains JSON data,
but the template language's custom ``json`` function does. The ``.config``
section contains a complex JSON object, so to grab it as JSON you use the
``json`` function to convert the config object into JSON:
.. code-block:: bash
$ sudo docker inspect -format='{{json .config}}' $INSTANCE_ID
.. _cli_kill:
``kill``
@ -844,9 +875,9 @@ Known Issues (kill)
Register or Login to the docker registry server
-e, --email="": email
-p, --password="": password
-u, --username="": username
-e, --email="": Email
-p, --password="": Password
-u, --username="": Username
If you want to login to a private registry you can
specify this by adding the server name.
@ -917,6 +948,8 @@ Running ``docker ps`` showing 2 linked containers.
The last container is marked as a ``Ghost`` container. It is a container that was running when the docker daemon was restarted (upgraded, or ``-H`` settings changed). The container is still running, but as this docker daemon process is not able to manage it, you can't attach to it. To bring it out of ``Ghost`` status, you need to use ``docker kill`` or ``docker restart``.
``docker ps`` will show only running containers by default. To see all containers: ``docker ps -a``
.. _cli_pull:
``pull``
@ -962,7 +995,8 @@ The last container is marked as a ``Ghost`` container. It is a container that wa
Usage: docker rm [OPTIONS] CONTAINER
Remove one or more containers
--link="": Remove the link instead of the actual container
-l, --link="": Remove the link instead of the actual container
-f, --force=false: Force removal of running container
Known Issues (rm)
~~~~~~~~~~~~~~~~~
@ -1011,6 +1045,8 @@ containers will not be deleted.
Usage: docker rmi IMAGE [IMAGE...]
Remove one or more images
-f, --force=false: Force
Removing tagged images
~~~~~~~~~~~~~~~~~~~~~~
@ -1085,6 +1121,7 @@ The ``docker run`` command first ``creates`` a writeable container layer over
the specified image, and then ``starts`` it using the specified command. That
is, ``docker run`` is equivalent to the API ``/containers/create`` then
``/containers/(id)/start``.
Once the container is stopped, it still exists and can be started back up. Use ``docker ps -a`` to view a list of all containers.
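As an illustration of that equivalence, here is a rough Go sketch performing the two API calls directly (it assumes a daemon started with ``-H tcp://127.0.0.1:4243``, uses a minimal configuration, and abbreviates error handling; ``docker run`` itself does considerably more):

.. code-block:: go

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    const daemon = "http://127.0.0.1:4243" // assumed -H tcp:// address

    func main() {
        // Step 1: POST /containers/create with a minimal configuration.
        cfg, _ := json.Marshal(map[string]interface{}{
            "Image": "ubuntu",
            "Cmd":   []string{"/bin/true"},
        })
        resp, err := http.Post(daemon+"/containers/create", "application/json", bytes.NewReader(cfg))
        if err != nil {
            panic(err)
        }
        var created struct {
            Id string
        }
        json.NewDecoder(resp.Body).Decode(&created)
        resp.Body.Close()

        // Step 2: POST /containers/(id)/start with an empty host configuration.
        resp, err = http.Post(daemon+"/containers/"+created.Id+"/start", "application/json", bytes.NewReader([]byte("{}")))
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
        fmt.Println("started", created.Id)
    }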
The ``docker run`` command can be used in combination with ``docker commit`` to
:ref:`change the command that a container runs <cli_commit_examples>`.
@ -1212,7 +1249,7 @@ to the newly created container.
$ sudo docker run --volumes-from 777f7dc92da7,ba8c0c54f0f2:ro -i -t ubuntu pwd
The ``--volumes-from`` flag mounts all the defined volumes from the
referenced containers. Containers can be specified by a comma seperated
referenced containers. Containers can be specified by a comma separated
list or by repetitions of the ``--volumes-from`` argument. The container
ID may be optionally suffixed with ``:ro`` or ``:rw`` to mount the volumes in
read-only or read-write mode, respectively. By default, the volumes are mounted
@ -1301,7 +1338,7 @@ The main process inside the container will receive SIGTERM, and after a grace pe
::
Usage: docker tag [OPTIONS] IMAGE REPOSITORY[:TAG]
Usage: docker tag [OPTIONS] IMAGE [REGISTRYHOST/][USERNAME/]NAME[:TAG]
Tag an image into a repository

View file

@ -18,5 +18,7 @@ Contents:
layer
image
container
registry
repository

View file

@ -0,0 +1,16 @@
:title: Registry
:description: Definition of a Registry
:keywords: containers, lxc, concepts, explanation, image, repository, container
.. _registry_def:
Registry
==========
A Registry is a hosted service containing :ref:`repositories<repository_def>`
of :ref:`images<image_def>` which responds to the Registry API.
The default registry can be accessed using a browser at http://images.docker.io
or using the ``sudo docker search`` command.
For more information see :ref:`Working with Repositories<working_with_the_repository>`

View file

@ -0,0 +1,30 @@
:title: Repository
:description: Definition of a Repository
:keywords: containers, lxc, concepts, explanation, image, repository, container
.. _repository_def:
Repository
==========
A repository is a set of images, either on your local Docker server or
shared by pushing them to a :ref:`Registry<registry_def>` server.
Images can be associated with one or more repositories by giving them an
image name using one of three different commands:
1. At build time (e.g. ``sudo docker build -t IMAGENAME``),
2. When committing a container (e.g. ``sudo docker commit CONTAINERID IMAGENAME``) or
3. When tagging an image id with an image name (e.g. ``sudo docker tag IMAGEID IMAGENAME``).
A `Fully Qualified Image Name` (FQIN) can be made up of 3 parts:
``[registry_hostname[:port]/][user_name/](repository_name[:version_tag])``
``version_tag`` defaults to ``latest``; ``user_name`` and ``registry_hostname`` default to an empty string.
When ``registry_hostname`` is an empty string, ``docker push`` will push to ``index.docker.io:80``.
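As a rough illustration of those defaults, here is a small Go sketch (illustrative only; it is not code from this repository, and Docker's own parsing lives elsewhere) that splits an image name into its parts:

.. code-block:: go

    package main

    import (
        "fmt"
        "strings"
    )

    // splitFQIN splits an image name using the defaults described above:
    // version_tag falls back to "latest", and user_name and
    // registry_hostname fall back to the empty string.
    func splitFQIN(name string) (registry, user, repo, tag string) {
        tag = "latest"
        // A trailing ":xxx" with no "/" after it is a version tag.
        if i := strings.LastIndex(name, ":"); i != -1 && !strings.Contains(name[i+1:], "/") {
            name, tag = name[:i], name[i+1:]
        }
        parts := strings.Split(name, "/")
        switch len(parts) {
        case 3: // registry_hostname/user_name/repository_name
            registry, user, repo = parts[0], parts[1], parts[2]
        case 2: // a "." or ":" identifies the first part as a registry host
            if strings.ContainsAny(parts[0], ".:") {
                registry, repo = parts[0], parts[1]
            } else {
                user, repo = parts[0], parts[1]
            }
        default:
            repo = name
        }
        return
    }

    func main() {
        fmt.Println(splitFQIN("myregistry.example.com:5000/jane/webapp:v2"))
    }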
If you create a new repository which you want to share, you will need to set at least the
``user_name``, as the 'default' blank ``user_name`` prefix is reserved for official Docker images.
For more information see :ref:`Working with Repositories<working_with_the_repository>`

View file

@ -50,6 +50,7 @@ Running an interactive shell
# allocate a tty, attach stdin and stdout
# To detach the tty without exiting the shell,
# use the escape sequence Ctrl-p + Ctrl-q
# note: This will continue to exist in a stopped state once exited (see "docker ps -a")
sudo docker run -i -t ubuntu /bin/bash
.. _bind_docker:
@ -121,12 +122,38 @@ Starting a long-running worker process
sudo docker kill $JOB
Listing all running containers
------------------------------
Listing containers
------------------
.. code-block:: bash
sudo docker ps
sudo docker ps # Lists only running containers
sudo docker ps -a # Lists all containers
Controlling containers
----------------------
.. code-block:: bash
# Start a new container
JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
# Stop the container
docker stop $JOB
# Start the container
docker start $JOB
# Restart the container
docker restart $JOB
# SIGKILL a container
docker kill $JOB
# Remove a container
docker stop $JOB # Container must be stopped to remove it
docker rm $JOB
Bind a service on a TCP port
------------------------------

View file

@ -18,10 +18,11 @@ the docker daemon with the ``-r=false`` so that docker will not automatically
restart your containers when the host is restarted.
When you have finished setting up your image and are happy with your
running container, you may want to use a process manager to manage
running container, you can then attach a process manager to manage
it. When you run ``docker start -a``, docker will automatically attach
to the process and forward all signals so that the process manager can
detect when a container stops and correctly restart it.
to the running container, or start it if needed and forward all signals
so that the process manager can detect when a container stops and correctly
restart it.
Here are a few sample scripts for systemd and upstart to integrate with docker.
@ -29,9 +30,10 @@ Here are a few sample scripts for systemd and upstart to integrate with docker.
Sample Upstart Script
---------------------
In this example we've already created a container to run Redis with an id of
0a7e070b698b. To create an upstart script for our container, we create a file
named ``/etc/init/redis.conf`` and place the following into it:
In this example we've already created a container to run Redis with
``--name redis_server``. To create an upstart script for our container,
we create a file named ``/etc/init/redis.conf`` and place the following
into it:
.. code-block:: bash
@ -46,7 +48,7 @@ named ``/etc/init/redis.conf`` and place the following into it:
while [ ! -e $FILE ] ; do
inotifywait -t 2 -e create $(dirname $FILE)
done
/usr/bin/docker start -a 0a7e070b698b
/usr/bin/docker start -a redis_server
end script
Next, we have to configure docker so that it's run with the option ``-r=false``.
@ -69,8 +71,8 @@ Sample systemd Script
[Service]
Restart=always
ExecStart=/usr/bin/docker start -a 0a7e070b698b
ExecStop=/usr/bin/docker stop -t 2 0a7e070b698b
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server
[Install]
WantedBy=local.target

View file

@ -148,6 +148,6 @@ ip link command) and the namespaces infrastructure.
I want more
------------
Jérôme Petazzoni has create ``pipework`` to connect together
Jérôme Petazzoni has created ``pipework`` to connect together
containers in arbitrarily complex scenarios:
https://github.com/jpetazzo/pipework

View file

@ -85,7 +85,7 @@ dynamically allocated ports:
.. code-block:: bash
# Bind to a dynamically allocated port
docker run -p 127.0.0.1::8080 -name dyn-bound <image> <cmd>
docker run -p 127.0.0.1::8080 --name dyn-bound <image> <cmd>
# Lookup the actual port
docker port dyn-bound 8080
@ -121,7 +121,7 @@ Dockerfile:
.. code-block:: bash
# Expose port 80
docker run -expose 80 -name server <image> <cmd>
docker run -expose 80 --name server <image> <cmd>
The ``client`` then links to the ``server``:
@ -149,4 +149,4 @@ This tells ``client`` that a service is running on port 80 of
``server`` and that ``server`` is accessible at the IP address
172.17.0.8
Note: Using the ``-p`` parameter also exposes the port..
Note: Using the ``-p`` parameter also exposes the port.

View file

@ -59,7 +59,7 @@ inter-container communication is set to false.
For example, there is an image called ``crosbymichael/redis`` that exposes the
port 6379 and starts the Redis server. Let's name the container ``redis``
based on that image and run it as daemon.
based on that image and run it as a daemon.
.. code-block:: bash

View file

@ -7,10 +7,6 @@
Share Directories via Volumes
=============================
.. versionadded:: v0.3.0
Data volumes have been available since version 1 of the
:doc:`../reference/api/docker_remote_api`
A *data volume* is a specially-designated directory within one or more
containers that bypasses the :ref:`ufs_def` to provide several useful
features for persistent or shared data:
@ -24,9 +20,15 @@ features for persistent or shared data:
* **Changes to a data volume will not be included at the next commit**
because they are not recorded as regular filesystem changes in the
top layer of the :ref:`ufs_def`
* **Volumes persist until no containers use them** as they are a
reference-counted resource. The container does not need to be running to share its
volumes, but running it can help protect it against accidental removal
via ``docker rm``.
Each container can have zero or more data volumes.
.. versionadded:: v0.3.0
Getting Started
...............
@ -40,7 +42,7 @@ two new volumes::
This command will create a new container with two new volumes; it exits
instantly (``true`` is pretty much the smallest, simplest program that
you can run). Once created you can mount its volumes in any other
container using the ``-volumes-from`` option; irrespecive of whether the
container using the ``-volumes-from`` option; irrespective of whether the
container is running or not.
Or, you can use the VOLUME instruction in a Dockerfile to add one or more new
@ -50,7 +52,7 @@ volumes to any container created from that image::
# RUN-USING: docker run -name DATA data
FROM busybox
VOLUME ["/var/volume1", "/var/volume2"]
CMD ["/usr/bin/true"]
CMD ["/bin/true"]
Creating and mounting a Data Volume Container
---------------------------------------------
@ -80,7 +82,7 @@ similar to :ref:`ambassador_pattern_linking <ambassador_pattern_linking>`.
If you remove containers that mount volumes, including the initial DATA container,
or the middleman, the volumes will not be deleted until there are no containers still
referencing those volumes. This allows you to upgrade, or effectivly migrate data volumes
referencing those volumes. This allows you to upgrade, or effectively migrate data volumes
between containers.
Mount a Host Directory as a Container Volume:
@ -90,6 +92,7 @@ Mount a Host Directory as a Container Volume:
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
You must specify an absolute path for ``host-dir``.
If ``host-dir`` is missing from the command, then docker creates a new volume.
If ``host-dir`` is present but points to a non-existent directory on the host,
Docker will automatically create this directory and use it as the source of the
@ -118,6 +121,38 @@ directories`` refer to directories in the ``boot2docker`` virtual machine, not t
Similarly, anytime when the docker daemon is on a remote machine, the ``host directories`` always refer to directories on the daemon's machine.
Backup, restore, or migrate data volumes
----------------------------------------
You cannot back up volumes using ``docker export``, ``docker save``, or ``docker cp``,
because volumes are external to images.
Instead you can use ``--volumes-from`` to start a new container that can access the
data-container's volume. For example::
$ sudo docker run -rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
* ``-rm`` - remove the container when it exits
* ``--volumes-from DATA`` - attach to the volumes shared by the ``DATA`` container
* ``-v $(pwd):/backup`` - bind mount the current directory into the container, as the place to write the tar file to
* ``busybox`` - a small, simple image - good for quick maintenance
* ``tar cvf /backup/backup.tar /data`` - creates an uncompressed tar file of all the files in the ``/data`` directory
Then to restore to the same container, or another that you've made elsewhere::
# create a new data container
$ sudo docker run -v /data -name DATA2 busybox true
# untar the backup files into the new container's data volume
$ sudo docker run -rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
data/
data/sven.txt
# compare to the original container
$ sudo docker run -rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
sven.txt
You can use the basic techniques above to automate backup, migration and restore
testing using your preferred tools.
Known Issues
............

View file

@ -7,9 +7,9 @@
Share Images via Repositories
=============================
A *repository* is a hosted collection of tagged :ref:`images
<image_def>` that together create the file system for a container. The
repository's name is a tag that indicates the provenance of the
A *repository* is a shareable collection of tagged :ref:`images<image_def>`
that together create the file systems for containers. The
repository's name is a label that indicates the provenance of the
repository, i.e. who created it and where the original copy is
located.
@ -19,7 +19,7 @@ tag. The implicit registry is located at ``index.docker.io``, the home
of "top-level" repositories and the Central Index. This registry may
also include public "user" repositories.
So Docker is not only a tool for creating and managing your own
Docker is not only a tool for creating and managing your own
:ref:`containers <container_def>` -- **Docker is also a tool for
sharing**. The Docker project provides a Central Registry to host
public repositories, namespaced by user, and a Central Index which
@ -28,6 +28,12 @@ repositories. You can host your own Registry too! Docker acts as a
client for these services via ``docker search, pull, login`` and
``push``.
Local Repositories
------------------
Docker images which have been created and labeled on your local Docker server
need to be pushed to a Public or Private registry to be shared.
.. _using_public_repositories:
Public Repositories
@ -58,8 +64,8 @@ Find Public Images on the Central Index
---------------------------------------
You can search the Central Index `online <https://index.docker.io>`_
or by the CLI. Searching can find images by name, user name or
description:
or using the command line interface. Searching can find images by name, user
name or description:
.. code-block:: bash
@ -136,13 +142,13 @@ name for the image.
.. _image_push:
Pushing an image to its repository
----------------------------------
Pushing a repository to its registry
------------------------------------
In order to push an image to its repository you need to have committed
your container to a named image (see above)
In order to push a repository to its registry you need to have named an image,
or committed your container to a named image (see above).
Now you can commit this image to the repository designated by its name
Now you can push this repository to the registry designated by its name
or tag.
.. code-block:: bash
@ -156,7 +162,7 @@ Trusted Builds
--------------
Trusted Builds automate the building and updating of images from GitHub, directly
on docker.io servers. It works by adding a commit hook to your selected repository,
on ``docker.io`` servers. It works by adding a commit hook to your selected repository,
triggering a build and update when you push a commit.
To setup a trusted build
@ -180,21 +186,22 @@ If you want to see the status of your Trusted Builds you can go to your
`Trusted Builds page <https://index.docker.io/builds/>`_ on the Docker index,
and it will show you the status of your builds, and the build history.
Once you've created a Trusted Build you can deactive or delete it. You cannot
Once you've created a Trusted Build you can deactivate or delete it. You cannot
however push to a Trusted Build with the ``docker push`` command. You can only
manage it by committing code to your GitHub repository.
You can create multiple Trusted Builds per repository and configure them to
point to specific ``Dockerfile``'s or Git branches.
Private Repositories
--------------------
Private Registry
----------------
Right now (version 0.6), private repositories are only possible by
hosting `your own registry
Private registries and private shared repositories are
only possible by hosting `your own registry
<https://github.com/dotcloud/docker-registry>`_. To push or pull to a
repository on your own registry, you must prefix the tag with the
address of the registry's host, like this:
address of the registry's host (a ``.`` or ``:`` is used to identify a host),
like this:
.. code-block:: bash

View file

@ -3,6 +3,7 @@
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="google-site-verification" content="UxV66EKuPe87dgnH1sbrldrx6VsoWMrx5NjwkgUFxXI" />
<meta name="google-site-verification" content="XzwpAUE5-gjq6j2F0dDqiBYxCZpHd8uVYe5Fnyt3V8Q" />
<title>{{ meta['title'] if meta and meta['title'] else title }} - Docker Documentation</title>
<meta name="description" content="{{ meta['description'] if meta }}" />
<meta name="keywords" content="{{ meta['keywords'] if meta }}" />

View file

@ -428,6 +428,9 @@ dt:hover > a.headerlink {
float: right;
visibility: hidden;
}
h2, h3, h4, h5, h6 {
margin-top: 0.7em;
}
/* =====================================
Miscellaneous information
====================================== */

View file

@ -631,6 +631,10 @@ dt:hover > a.headerlink {
visibility: hidden;
}
h2, h3, h4, h5, h6 {
margin-top: 0.7em;
}
/* =====================================
Miscellaneous information
====================================== */

View file

@ -1,13 +1,14 @@
package engine
import (
"bufio"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
"log"
"os"
"path/filepath"
"runtime"
"sort"
"strings"
)
@ -28,6 +29,10 @@ func Register(name string, handler Handler) error {
return nil
}
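// unregister removes a handler from the global registry; the engine
// tests use it to clean up shared global state between test cases.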
func unregister(name string) {
delete(globalHandlers, name)
}
// The Engine is the core of Docker.
// It acts as a store for *containers*, and allows manipulation of these
// containers by executing *jobs*.
@ -84,19 +89,6 @@ func New(root string) (*Engine, error) {
return nil, err
}
// Docker makes some assumptions about the "absoluteness" of root
// ... so let's make sure it has no symlinks
if p, err := filepath.Abs(root); err != nil {
log.Fatalf("Unable to get absolute root (%s): %s", root, err)
} else {
root = p
}
if p, err := filepath.EvalSymlinks(root); err != nil {
log.Fatalf("Unable to canonicalize root (%s): %s", root, err)
} else {
root = p
}
eng := &Engine{
root: root,
handlers: make(map[string]Handler),
@ -105,6 +97,12 @@ func New(root string) (*Engine, error) {
Stderr: os.Stderr,
Stdin: os.Stdin,
}
eng.Register("commands", func(job *Job) Status {
for _, name := range eng.commands() {
job.Printf("%s\n", name)
}
return StatusOK
})
// Copy existing global handlers
for k, v := range globalHandlers {
eng.handlers[k] = v
@ -116,6 +114,17 @@ func (eng *Engine) String() string {
return fmt.Sprintf("%s|%s", eng.Root(), eng.id[:8])
}
// commands returns a list of all currently registered commands,
// sorted alphabetically.
func (eng *Engine) commands() []string {
names := make([]string, 0, len(eng.handlers))
for name := range eng.handlers {
names = append(names, name)
}
sort.Strings(names)
return names
}
// Job creates a new job which can later be executed.
// This function mimics `Command` from the standard os/exec package.
func (eng *Engine) Job(name string, args ...string) *Job {
@ -136,6 +145,48 @@ func (eng *Engine) Job(name string, args ...string) *Job {
return job
}
// ParseJob creates a new job from a text description using a shell-like syntax.
//
// The following syntax is used to parse `input`:
//
// * Words are separated using standard whitespaces as separators.
// * Quotes and backslashes are not interpreted.
// * Words of the form 'KEY=[VALUE]' are added to the job environment.
// * All other words are added to the job arguments.
//
// For example:
//
// job, _ := eng.ParseJob("VERBOSE=1 echo hello TEST=true world")
//
// The resulting job will have:
// job.Args={"echo", "hello", "world"}
// job.Env={"VERBOSE":"1", "TEST":"true"}
//
func (eng *Engine) ParseJob(input string) (*Job, error) {
// FIXME: use a full-featured command parser
scanner := bufio.NewScanner(strings.NewReader(input))
scanner.Split(bufio.ScanWords)
var (
cmd []string
env Env
)
for scanner.Scan() {
word := scanner.Text()
kv := strings.SplitN(word, "=", 2)
if len(kv) == 2 {
env.Set(kv[0], kv[1])
} else {
cmd = append(cmd, word)
}
}
if len(cmd) == 0 {
return nil, fmt.Errorf("empty command: '%s'", input)
}
job := eng.Job(cmd[0], cmd[1:]...)
job.Env().Init(&env)
return job, nil
}
func (eng *Engine) Logf(format string, args ...interface{}) (n int, err error) {
if os.Getenv("TEST") == "" {
prefixedFormat := fmt.Sprintf("[%s] %s\n", eng, strings.TrimRight(format, "\n"))

View file

@ -1,10 +1,12 @@
package engine
import (
"bytes"
"io/ioutil"
"os"
"path"
"path/filepath"
"strings"
"testing"
)
@ -16,6 +18,8 @@ func TestRegister(t *testing.T) {
if err := Register("dummy1", nil); err == nil {
t.Fatalf("Expecting error, got none")
}
// Register is global so let's cleanup to avoid conflicts
defer unregister("dummy1")
eng := newTestEngine(t)
@ -32,6 +36,7 @@ func TestRegister(t *testing.T) {
if err := eng.Register("dummy2", nil); err == nil {
t.Fatalf("Expecting error, got none")
}
defer unregister("dummy2")
}
func TestJob(t *testing.T) {
@ -48,6 +53,7 @@ func TestJob(t *testing.T) {
}
eng.Register("dummy2", h)
defer unregister("dummy2")
job2 := eng.Job("dummy2", "--level=awesome")
if job2.handler == nil {
@ -59,6 +65,24 @@ func TestJob(t *testing.T) {
}
}
func TestEngineCommands(t *testing.T) {
eng := newTestEngine(t)
defer os.RemoveAll(eng.Root())
handler := func(job *Job) Status { return StatusOK }
eng.Register("foo", handler)
eng.Register("bar", handler)
eng.Register("echo", handler)
eng.Register("die", handler)
var output bytes.Buffer
commands := eng.Job("commands")
commands.Stdout.Add(&output)
commands.Run()
expected := "bar\ncommands\ndie\necho\nfoo\n"
if result := output.String(); result != expected {
t.Fatalf("Unexpected output:\nExpected = %v\nResult = %v\n", expected, result)
}
}
func TestEngineRoot(t *testing.T) {
tmp, err := ioutil.TempDir("", "docker-test-TestEngineCreateDir")
if err != nil {
@ -114,3 +138,40 @@ func TestEngineLogf(t *testing.T) {
t.Fatalf("Test: Logf() should print at least as much as the input\ninput=%d\nprinted=%d", len(input), n)
}
}
func TestParseJob(t *testing.T) {
eng := newTestEngine(t)
defer os.RemoveAll(eng.Root())
// Verify that the resulting job dispatches to the right handler
var called bool
eng.Register("echo", func(job *Job) Status {
called = true
return StatusOK
})
input := "echo DEBUG=1 hello world VERBOSITY=42"
job, err := eng.ParseJob(input)
if err != nil {
t.Fatal(err)
}
if job.Name != "echo" {
t.Fatalf("Invalid job name: %v", job.Name)
}
if strings.Join(job.Args, ":::") != "hello:::world" {
t.Fatalf("Invalid job args: %v", job.Args)
}
if job.Env().Get("DEBUG") != "1" {
t.Fatalf("Invalid job env: %v", job.Env)
}
if job.Env().Get("VERBOSITY") != "42" {
t.Fatalf("Invalid job env: %v", job.Env)
}
if len(job.Env().Map()) != 2 {
t.Fatalf("Invalid job env: %v", job.Env)
}
if err := job.Run(); err != nil {
t.Fatal(err)
}
if !called {
t.Fatalf("Job was not called")
}
}

View file

@ -36,6 +36,13 @@ func (env *Env) Exists(key string) bool {
return exists
}
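// Init resets env to a copy of the values in src.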
func (env *Env) Init(src *Env) {
(*env) = make([]string, 0, len(*src))
for _, val := range *src {
(*env) = append((*env), val)
}
}
func (env *Env) GetBool(key string) (value bool) {
s := strings.ToLower(strings.Trim(env.Get(key), " \t"))
if s == "" || s == "0" || s == "no" || s == "false" || s == "none" {

View file

@ -74,7 +74,7 @@ func (job *Job) Run() error {
return err
}
if job.status != 0 {
return fmt.Errorf("%s: %s", job.Name, errorMessage)
return fmt.Errorf("%s", errorMessage)
}
return nil
}
@ -102,6 +102,10 @@ func (job *Job) String() string {
return fmt.Sprintf("%s.%s%s", job.Eng, job.CallString(), job.StatusString())
}
func (job *Job) Env() *Env {
return job.env
}
func (job *Job) EnvExists(key string) (value bool) {
return job.env.Exists(key)
}
@ -197,11 +201,14 @@ func (job *Job) Printf(format string, args ...interface{}) (n int, err error) {
}
func (job *Job) Errorf(format string, args ...interface{}) Status {
if format[len(format)-1] != '\n' {
format = format + "\n"
}
fmt.Fprintf(job.Stderr, format, args...)
return StatusErr
}
func (job *Job) Error(err error) Status {
fmt.Fprintf(job.Stderr, "%s", err)
fmt.Fprintf(job.Stderr, "%s\n", err)
return StatusErr
}

View file

@ -1,101 +0,0 @@
package chroot
import (
"fmt"
"github.com/dotcloud/docker/execdriver"
"github.com/dotcloud/docker/pkg/mount"
"os"
"os/exec"
"syscall"
)
const (
DriverName = "chroot"
Version = "0.1"
)
func init() {
execdriver.RegisterInitFunc(DriverName, func(args *execdriver.InitArgs) error {
if err := mount.ForceMount("proc", "proc", "proc", ""); err != nil {
return err
}
defer mount.ForceUnmount("proc")
cmd := exec.Command(args.Args[0], args.Args[1:]...)
cmd.Stderr = os.Stderr
cmd.Stdout = os.Stdout
cmd.Stdin = os.Stdin
return cmd.Run()
})
}
type driver struct {
}
func NewDriver() (*driver, error) {
return &driver{}, nil
}
func (d *driver) Run(c *execdriver.Command, startCallback execdriver.StartCallback) (int, error) {
params := []string{
"chroot",
c.Rootfs,
"/.dockerinit",
"-driver",
DriverName,
}
params = append(params, c.Entrypoint)
params = append(params, c.Arguments...)
var (
name = params[0]
arg = params[1:]
)
aname, err := exec.LookPath(name)
if err != nil {
aname = name
}
c.Path = aname
c.Args = append([]string{name}, arg...)
if err := c.Start(); err != nil {
return -1, err
}
if startCallback != nil {
startCallback(c)
}
err = c.Wait()
return getExitCode(c), err
}
// Return the exit code of the process
// if the process has not exited -1 will be returned
func getExitCode(c *execdriver.Command) int {
if c.ProcessState == nil {
return -1
}
return c.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
}
func (d *driver) Kill(p *execdriver.Command, sig int) error {
return p.Process.Kill()
}
func (d *driver) Restore(c *execdriver.Command) error {
panic("Not Implemented")
}
func (d *driver) Info(id string) execdriver.Info {
panic("Not implemented")
}
func (d *driver) Name() string {
return fmt.Sprintf("%s-%s", DriverName, Version)
}
func (d *driver) GetPidsForContainer(id string) ([]int, error) {
return nil, fmt.Errorf("Not supported")
}

View file

@ -2,6 +2,8 @@ package execdriver
import (
"errors"
"io"
"os"
"os/exec"
)
@ -49,6 +51,9 @@ type InitArgs struct {
Args []string
Mtu int
Driver string
Console string
Pipe int
Root string
}
// Driver specific information based on
@ -57,10 +62,21 @@ type Info interface {
IsRunning() bool
}
// Terminal is an interface for drivers to implement
// if they want to support Close and Resize calls from
// the core
type Terminal interface {
io.Closer
Resize(height, width int) error
}
type TtyTerminal interface {
Master() *os.File
}
type Driver interface {
Run(c *Command, startCallback StartCallback) (int, error) // Run executes the process and blocks until the process exits and returns the exit code
Run(c *Command, pipes *Pipes, startCallback StartCallback) (int, error) // Run executes the process and blocks until the process exits and returns the exit code
Kill(c *Command, sig int) error
Restore(c *Command) error // Wait and try to re-attach on an out of process command
Name() string // Driver name
Info(id string) Info // "temporary" hack (until we move state from core to plugins)
GetPidsForContainer(id string) ([]int, error) // Returns a list of pids for the given container.
@ -82,7 +98,6 @@ type Resources struct {
}
// Process wraps an os/exec.Cmd to add more metadata
// TODO: Rename to Command
type Command struct {
exec.Cmd `json:"-"`
@ -100,14 +115,13 @@ type Command struct {
Config []string `json:"config"` // generic values that specific drivers can consume
Resources *Resources `json:"resources"`
Console string `json:"-"`
Terminal Terminal `json:"-"` // standard or tty terminal
Console string `json:"-"` // dev/console path
ContainerPid int `json:"container_pid"` // the pid for the process inside a container
}
// Return the pid of the process
// If the process is nil -1 will be returned
func (c *Command) Pid() int {
if c.Process == nil {
return -1
}
return c.Process.Pid
return c.ContainerPid
}

View file

@ -76,7 +76,10 @@ func (d *driver) Name() string {
return fmt.Sprintf("%s-%s", DriverName, version)
}
func (d *driver) Run(c *execdriver.Command, startCallback execdriver.StartCallback) (int, error) {
func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallback execdriver.StartCallback) (int, error) {
if err := execdriver.SetTerminal(c, pipes); err != nil {
return -1, err
}
configPath, err := d.generateLXCConfig(c)
if err != nil {
return -1, err
@ -163,9 +166,11 @@ func (d *driver) Run(c *execdriver.Command, startCallback execdriver.StartCallba
}()
// Poll lxc for RUNNING status
if err := d.waitForStart(c, waitLock); err != nil {
pid, err := d.waitForStart(c, waitLock)
if err != nil {
return -1, err
}
c.ContainerPid = pid
if startCallback != nil {
startCallback(c)
@ -186,43 +191,39 @@ func getExitCode(c *execdriver.Command) int {
}
func (d *driver) Kill(c *execdriver.Command, sig int) error {
return d.kill(c, sig)
}
func (d *driver) Restore(c *execdriver.Command) error {
for {
output, err := exec.Command("lxc-info", "-n", c.ID).CombinedOutput()
if err != nil {
return err
}
if !strings.Contains(string(output), "RUNNING") {
return nil
}
time.Sleep(500 * time.Millisecond)
}
return KillLxc(c.ID, sig)
}
func (d *driver) version() string {
version := ""
if output, err := exec.Command("lxc-version").CombinedOutput(); err == nil {
outputStr := string(output)
if len(strings.SplitN(outputStr, ":", 2)) == 2 {
version = strings.TrimSpace(strings.SplitN(outputStr, ":", 2)[1])
var (
version string
output []byte
err error
)
if _, errPath := exec.LookPath("lxc-version"); errPath == nil {
output, err = exec.Command("lxc-version").CombinedOutput()
} else {
output, err = exec.Command("lxc-start", "--version").CombinedOutput()
}
if err == nil {
version = strings.TrimSpace(string(output))
if parts := strings.SplitN(version, ":", 2); len(parts) == 2 {
version = strings.TrimSpace(parts[1])
}
}
return version
}
func (d *driver) kill(c *execdriver.Command, sig int) error {
func KillLxc(id string, sig int) error {
var (
err error
output []byte
)
_, err = exec.LookPath("lxc-kill")
if err == nil {
output, err = exec.Command("lxc-kill", "-n", c.ID, strconv.Itoa(sig)).CombinedOutput()
output, err = exec.Command("lxc-kill", "-n", id, strconv.Itoa(sig)).CombinedOutput()
} else {
output, err = exec.Command("lxc-stop", "-k", "-n", c.ID, strconv.Itoa(sig)).CombinedOutput()
output, err = exec.Command("lxc-stop", "-k", "-n", id, strconv.Itoa(sig)).CombinedOutput()
}
if err != nil {
return fmt.Errorf("Err: %s Output: %s", err, output)
@ -230,7 +231,8 @@ func (d *driver) kill(c *execdriver.Command, sig int) error {
return nil
}
func (d *driver) waitForStart(c *execdriver.Command, waitLock chan struct{}) error {
// wait for the process to start and return its pid
func (d *driver) waitForStart(c *execdriver.Command, waitLock chan struct{}) (int, error) {
var (
err error
output []byte
@ -243,10 +245,7 @@ func (d *driver) waitForStart(c *execdriver.Command, waitLock chan struct{}) err
select {
case <-waitLock:
// If the process dies while waiting for it, just return
return nil
if c.ProcessState != nil && c.ProcessState.Exited() {
return nil
}
return -1, nil
default:
}
@ -254,19 +253,23 @@ func (d *driver) waitForStart(c *execdriver.Command, waitLock chan struct{}) err
if err != nil {
output, err = d.getInfo(c.ID)
if err != nil {
return err
return -1, err
}
}
if strings.Contains(string(output), "RUNNING") {
return nil
info, err := parseLxcInfo(string(output))
if err != nil {
return -1, err
}
if info.Running {
return info.Pid, nil
}
time.Sleep(50 * time.Millisecond)
}
return execdriver.ErrNotRunning
return -1, execdriver.ErrNotRunning
}
func (d *driver) getInfo(id string) ([]byte, error) {
return exec.Command("lxc-info", "-s", "-n", id).CombinedOutput()
return exec.Command("lxc-info", "-n", id).CombinedOutput()
}
type info struct {
@ -298,9 +301,8 @@ func (d *driver) Info(id string) execdriver.Info {
func (d *driver) GetPidsForContainer(id string) ([]int, error) {
pids := []int{}
// memory is chosen randomly, any cgroup used by docker works
subsystem := "memory"
// cpu is chosen because it is the only non-optional subsystem in cgroups
subsystem := "cpu"
cgroupRoot, err := cgroups.FindCgroupMountpoint(subsystem)
if err != nil {
return pids, err

50
execdriver/lxc/info.go Normal file
View file

@ -0,0 +1,50 @@
package lxc
import (
"bufio"
"errors"
"strconv"
"strings"
)
var (
ErrCannotParse = errors.New("cannot parse raw input")
)
type lxcInfo struct {
Running bool
Pid int
}
func parseLxcInfo(raw string) (*lxcInfo, error) {
if raw == "" {
return nil, ErrCannotParse
}
var (
err error
s = bufio.NewScanner(strings.NewReader(raw))
info = &lxcInfo{}
)
for s.Scan() {
text := s.Text()
if s.Err() != nil {
return nil, s.Err()
}
parts := strings.Split(text, ":")
if len(parts) < 2 {
continue
}
switch strings.TrimSpace(parts[0]) {
case "state":
info.Running = strings.TrimSpace(parts[1]) == "RUNNING"
case "pid":
info.Pid, err = strconv.Atoi(strings.TrimSpace(parts[1]))
if err != nil {
return nil, err
}
}
}
return info, nil
}

View file

@ -0,0 +1,36 @@
package lxc
import (
"testing"
)
func TestParseRunningInfo(t *testing.T) {
raw := `
state: RUNNING
pid: 50`
info, err := parseLxcInfo(raw)
if err != nil {
t.Fatal(err)
}
if !info.Running {
t.Fatal("info should return a running state")
}
if info.Pid != 50 {
t.Fatalf("info should have pid 50 got %d", info.Pid)
}
}
func TestEmptyInfo(t *testing.T) {
_, err := parseLxcInfo("")
if err == nil {
t.Fatal("error should not be nil")
}
}
func TestBadInfo(t *testing.T) {
_, err := parseLxcInfo("state")
if err != nil {
t.Fatal(err)
}
}

View file

@ -12,6 +12,7 @@ const LxcTemplate = `
lxc.network.type = veth
lxc.network.link = {{.Network.Bridge}}
lxc.network.name = eth0
lxc.network.mtu = {{.Network.Mtu}}
{{else}}
# network is disabled (-n=false)
lxc.network.type = empty

View file

@ -0,0 +1,90 @@
package native
import (
"fmt"
"github.com/dotcloud/docker/execdriver"
"github.com/dotcloud/docker/pkg/cgroups"
"github.com/dotcloud/docker/pkg/libcontainer"
"os"
)
// createContainer populates and configures the container type with the
// data provided by the execdriver.Command
func createContainer(c *execdriver.Command) *libcontainer.Container {
container := getDefaultTemplate()
container.Hostname = getEnv("HOSTNAME", c.Env)
container.Tty = c.Tty
container.User = c.User
container.WorkingDir = c.WorkingDir
container.Env = c.Env
if c.Network != nil {
container.Networks = []*libcontainer.Network{
{
Mtu: c.Network.Mtu,
Address: fmt.Sprintf("%s/%d", c.Network.IPAddress, c.Network.IPPrefixLen),
Gateway: c.Network.Gateway,
Type: "veth",
Context: libcontainer.Context{
"prefix": "veth",
"bridge": c.Network.Bridge,
},
},
}
}
container.Cgroups.Name = c.ID
if c.Privileged {
container.Capabilities = nil
container.Cgroups.DeviceAccess = true
container.Context["apparmor_profile"] = "unconfined"
}
if c.Resources != nil {
container.Cgroups.CpuShares = c.Resources.CpuShares
container.Cgroups.Memory = c.Resources.Memory
container.Cgroups.MemorySwap = c.Resources.MemorySwap
}
// check to see if we are running in ramdisk to disable pivot root
container.NoPivotRoot = os.Getenv("DOCKER_RAMDISK") != ""
return container
}
// getDefaultTemplate returns the docker default for
// the libcontainer configuration file
func getDefaultTemplate() *libcontainer.Container {
return &libcontainer.Container{
Capabilities: libcontainer.Capabilities{
libcontainer.GetCapability("SETPCAP"),
libcontainer.GetCapability("SYS_MODULE"),
libcontainer.GetCapability("SYS_RAWIO"),
libcontainer.GetCapability("SYS_PACCT"),
libcontainer.GetCapability("SYS_ADMIN"),
libcontainer.GetCapability("SYS_NICE"),
libcontainer.GetCapability("SYS_RESOURCE"),
libcontainer.GetCapability("SYS_TIME"),
libcontainer.GetCapability("SYS_TTY_CONFIG"),
libcontainer.GetCapability("MKNOD"),
libcontainer.GetCapability("AUDIT_WRITE"),
libcontainer.GetCapability("AUDIT_CONTROL"),
libcontainer.GetCapability("MAC_OVERRIDE"),
libcontainer.GetCapability("MAC_ADMIN"),
libcontainer.GetCapability("NET_ADMIN"),
},
Namespaces: libcontainer.Namespaces{
libcontainer.GetNamespace("NEWNS"),
libcontainer.GetNamespace("NEWUTS"),
libcontainer.GetNamespace("NEWIPC"),
libcontainer.GetNamespace("NEWPID"),
libcontainer.GetNamespace("NEWNET"),
},
Cgroups: &cgroups.Cgroup{
Parent: "docker",
DeviceAccess: false,
},
Context: libcontainer.Context{
"apparmor_profile": "docker-default",
},
}
}

251
execdriver/native/driver.go Normal file
View file

@ -0,0 +1,251 @@
package native
import (
"encoding/json"
"fmt"
"github.com/dotcloud/docker/execdriver"
"github.com/dotcloud/docker/pkg/cgroups"
"github.com/dotcloud/docker/pkg/libcontainer"
"github.com/dotcloud/docker/pkg/libcontainer/apparmor"
"github.com/dotcloud/docker/pkg/libcontainer/nsinit"
"github.com/dotcloud/docker/pkg/system"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"syscall"
)
const (
DriverName = "native"
Version = "0.1"
)
func init() {
execdriver.RegisterInitFunc(DriverName, func(args *execdriver.InitArgs) error {
var (
container *libcontainer.Container
ns = nsinit.NewNsInit(&nsinit.DefaultCommandFactory{}, &nsinit.DefaultStateWriter{args.Root})
)
f, err := os.Open(filepath.Join(args.Root, "container.json"))
if err != nil {
return err
}
if err := json.NewDecoder(f).Decode(&container); err != nil {
f.Close()
return err
}
f.Close()
cwd, err := os.Getwd()
if err != nil {
return err
}
syncPipe, err := nsinit.NewSyncPipeFromFd(0, uintptr(args.Pipe))
if err != nil {
return err
}
if err := ns.Init(container, cwd, args.Console, syncPipe, args.Args); err != nil {
return err
}
return nil
})
}
type driver struct {
root string
}
func NewDriver(root string) (*driver, error) {
if err := os.MkdirAll(root, 0700); err != nil {
return nil, err
}
if err := apparmor.InstallDefaultProfile(); err != nil {
return nil, err
}
return &driver{
root: root,
}, nil
}
func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallback execdriver.StartCallback) (int, error) {
if err := d.validateCommand(c); err != nil {
return -1, err
}
var (
term nsinit.Terminal
container = createContainer(c)
factory = &dockerCommandFactory{c: c, driver: d}
stateWriter = &dockerStateWriter{
callback: startCallback,
c: c,
dsw: &nsinit.DefaultStateWriter{filepath.Join(d.root, c.ID)},
}
ns = nsinit.NewNsInit(factory, stateWriter)
args = append([]string{c.Entrypoint}, c.Arguments...)
)
if err := d.createContainerRoot(c.ID); err != nil {
return -1, err
}
defer d.removeContainerRoot(c.ID)
if c.Tty {
term = &dockerTtyTerm{
pipes: pipes,
}
} else {
term = &dockerStdTerm{
pipes: pipes,
}
}
c.Terminal = term
if err := d.writeContainerFile(container, c.ID); err != nil {
return -1, err
}
return ns.Exec(container, term, args)
}
func (d *driver) Kill(p *execdriver.Command, sig int) error {
err := syscall.Kill(p.Process.Pid, syscall.Signal(sig))
d.removeContainerRoot(p.ID)
return err
}
func (d *driver) Info(id string) execdriver.Info {
return &info{
ID: id,
driver: d,
}
}
func (d *driver) Name() string {
return fmt.Sprintf("%s-%s", DriverName, Version)
}
// TODO: this can be improved with our driver
// there has to be a better way to do this
func (d *driver) GetPidsForContainer(id string) ([]int, error) {
pids := []int{}
subsystem := "devices"
cgroupRoot, err := cgroups.FindCgroupMountpoint(subsystem)
if err != nil {
return pids, err
}
cgroupDir, err := cgroups.GetThisCgroupDir(subsystem)
if err != nil {
return pids, err
}
filename := filepath.Join(cgroupRoot, cgroupDir, id, "tasks")
if _, err := os.Stat(filename); os.IsNotExist(err) {
filename = filepath.Join(cgroupRoot, cgroupDir, "docker", id, "tasks")
}
output, err := ioutil.ReadFile(filename)
if err != nil {
return pids, err
}
for _, p := range strings.Split(string(output), "\n") {
if len(p) == 0 {
continue
}
pid, err := strconv.Atoi(p)
if err != nil {
return pids, fmt.Errorf("Invalid pid '%s': %s", p, err)
}
pids = append(pids, pid)
}
return pids, nil
}
func (d *driver) writeContainerFile(container *libcontainer.Container, id string) error {
data, err := json.Marshal(container)
if err != nil {
return err
}
return ioutil.WriteFile(filepath.Join(d.root, id, "container.json"), data, 0655)
}
func (d *driver) createContainerRoot(id string) error {
return os.MkdirAll(filepath.Join(d.root, id), 0655)
}
func (d *driver) removeContainerRoot(id string) error {
return os.RemoveAll(filepath.Join(d.root, id))
}
func (d *driver) validateCommand(c *execdriver.Command) error {
// we need to check the Config of the command to make sure that we
// do not have any of the lxc-conf variables
for _, conf := range c.Config {
if strings.Contains(conf, "lxc") {
return fmt.Errorf("%s is not supported by the native driver", conf)
}
}
return nil
}
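// getEnv returns the value for key from a list of "KEY=VALUE" pairs,
// or the empty string when key is not present.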
func getEnv(key string, env []string) string {
for _, pair := range env {
// SplitN keeps any "=" characters inside the value intact.
parts := strings.SplitN(pair, "=", 2)
if len(parts) == 2 && parts[0] == key {
return parts[1]
}
}
return ""
}
type dockerCommandFactory struct {
c *execdriver.Command
driver *driver
}
// createCommand will return an exec.Cmd with the Cloneflags set to the proper namespaces
// defined on the container's configuration and use the current binary as the init with the
// args provided
func (d *dockerCommandFactory) Create(container *libcontainer.Container, console string, syncFile *os.File, args []string) *exec.Cmd {
// we need to join the rootfs because nsinit will set up the rootfs and chroot
initPath := filepath.Join(d.c.Rootfs, d.c.InitPath)
d.c.Path = initPath
d.c.Args = append([]string{
initPath,
"-driver", DriverName,
"-console", console,
"-pipe", "3",
"-root", filepath.Join(d.driver.root, d.c.ID),
"--",
}, args...)
// set this to nil so that when we set the clone flags anything else is reset
d.c.SysProcAttr = nil
system.SetCloneFlags(&d.c.Cmd, uintptr(nsinit.GetNamespaceFlags(container.Namespaces)))
d.c.ExtraFiles = []*os.File{syncFile}
d.c.Env = container.Env
d.c.Dir = d.c.Rootfs
return &d.c.Cmd
}
type dockerStateWriter struct {
dsw nsinit.StateWriter
c *execdriver.Command
callback execdriver.StartCallback
}
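// WritePid records the container's init pid on the command and, once
// the pid is known, fires the start callback.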
func (d *dockerStateWriter) WritePid(pid int) error {
d.c.ContainerPid = pid
err := d.dsw.WritePid(pid)
if d.callback != nil {
d.callback(d.c)
}
return err
}
func (d *dockerStateWriter) DeletePid() error {
return d.dsw.DeletePid()
}

21
execdriver/native/info.go Normal file
View file

@ -0,0 +1,21 @@
package native
import (
"os"
"path/filepath"
)
type info struct {
ID string
driver *driver
}
// IsRunning is determined by looking for the
// pid file for a container. If the file exists then the
// container is currently running
func (i *info) IsRunning() bool {
if _, err := os.Stat(filepath.Join(i.driver.root, i.ID, "pid")); err == nil {
return true
}
return false
}

42
execdriver/native/term.go Normal file
View file

@ -0,0 +1,42 @@
/*
These types are wrappers around the libcontainer Terminal interface so that
we can reuse the docker implementations where possible.
*/
package native
import (
"github.com/dotcloud/docker/execdriver"
"io"
"os"
"os/exec"
)
type dockerStdTerm struct {
execdriver.StdConsole
pipes *execdriver.Pipes
}
func (d *dockerStdTerm) Attach(cmd *exec.Cmd) error {
return d.AttachPipes(cmd, d.pipes)
}
func (d *dockerStdTerm) SetMaster(master *os.File) {
// do nothing
}
type dockerTtyTerm struct {
execdriver.TtyConsole
pipes *execdriver.Pipes
}
func (t *dockerTtyTerm) Attach(cmd *exec.Cmd) error {
go io.Copy(t.pipes.Stdout, t.MasterPty)
if t.pipes.Stdin != nil {
go io.Copy(t.MasterPty, t.pipes.Stdin)
}
return nil
}
func (t *dockerTtyTerm) SetMaster(master *os.File) {
t.MasterPty = master
}

23
execdriver/pipes.go Normal file
View file

@ -0,0 +1,23 @@
package execdriver
import (
"io"
)
// Pipes is a wrapper around a container's output for
// stdin, stdout, stderr
type Pipes struct {
Stdin io.ReadCloser
Stdout, Stderr io.Writer
}
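// NewPipes wires up stdout and stderr, and attaches stdin only when
// the container was created with stdin enabled.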
func NewPipes(stdin io.ReadCloser, stdout, stderr io.Writer, useStdin bool) *Pipes {
p := &Pipes{
Stdout: stdout,
Stderr: stderr,
}
if useStdin {
p.Stdin = stdin
}
return p
}

126
execdriver/termconsole.go Normal file
View file

@ -0,0 +1,126 @@
package execdriver
import (
"github.com/dotcloud/docker/pkg/term"
"github.com/kr/pty"
"io"
"os"
"os/exec"
)
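// SetTerminal attaches a tty-backed console or a plain std console to
// the command, depending on whether a tty was requested.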
func SetTerminal(command *Command, pipes *Pipes) error {
var (
term Terminal
err error
)
if command.Tty {
term, err = NewTtyConsole(command, pipes)
} else {
term, err = NewStdConsole(command, pipes)
}
if err != nil {
return err
}
command.Terminal = term
return nil
}
type TtyConsole struct {
MasterPty *os.File
SlavePty *os.File
}
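// NewTtyConsole opens a new pty pair, attaches the slave end to the
// command's stdio, and records the slave's name as the console path.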
func NewTtyConsole(command *Command, pipes *Pipes) (*TtyConsole, error) {
ptyMaster, ptySlave, err := pty.Open()
if err != nil {
return nil, err
}
tty := &TtyConsole{
MasterPty: ptyMaster,
SlavePty: ptySlave,
}
if err := tty.AttachPipes(&command.Cmd, pipes); err != nil {
tty.Close()
return nil, err
}
command.Console = tty.SlavePty.Name()
return tty, nil
}
func (t *TtyConsole) Master() *os.File {
return t.MasterPty
}
func (t *TtyConsole) Resize(h, w int) error {
return term.SetWinsize(t.MasterPty.Fd(), &term.Winsize{Height: uint16(h), Width: uint16(w)})
}
func (t *TtyConsole) AttachPipes(command *exec.Cmd, pipes *Pipes) error {
command.Stdout = t.SlavePty
command.Stderr = t.SlavePty
go func() {
if wb, ok := pipes.Stdout.(interface {
CloseWriters() error
}); ok {
defer wb.CloseWriters()
}
io.Copy(pipes.Stdout, t.MasterPty)
}()
if pipes.Stdin != nil {
command.Stdin = t.SlavePty
command.SysProcAttr.Setctty = true
go func() {
defer pipes.Stdin.Close()
io.Copy(t.MasterPty, pipes.Stdin)
}()
}
return nil
}
func (t *TtyConsole) Close() error {
t.SlavePty.Close()
return t.MasterPty.Close()
}
type StdConsole struct {
}
func NewStdConsole(command *Command, pipes *Pipes) (*StdConsole, error) {
std := &StdConsole{}
if err := std.AttachPipes(&command.Cmd, pipes); err != nil {
return nil, err
}
return std, nil
}
func (s *StdConsole) AttachPipes(command *exec.Cmd, pipes *Pipes) error {
command.Stdout = pipes.Stdout
command.Stderr = pipes.Stderr
if pipes.Stdin != nil {
stdin, err := command.StdinPipe()
if err != nil {
return err
}
go func() {
defer stdin.Close()
io.Copy(stdin, pipes.Stdin)
}()
}
return nil
}
func (s *StdConsole) Resize(h, w int) error {
// we do not need to resize a non-tty
return nil
}
func (s *StdConsole) Close() error {
// nothing to close here
return nil
}

View file

@ -257,6 +257,7 @@ func setupInitLayer(initLayer string) error {
"/etc/resolv.conf": "file",
"/etc/hosts": "file",
"/etc/hostname": "file",
"/dev/console": "file",
// "var/run": "dir",
// "var/lock": "dir",
} {

View file

@ -34,6 +34,10 @@ import (
"sync"
)
var (
ErrAufsNotSupported = fmt.Errorf("AUFS was not found in /proc/filesystems")
)
func init() {
graphdriver.Register("aufs", Init)
}
@ -100,7 +104,7 @@ func supportsAufs() error {
return nil
}
}
return fmt.Errorf("AUFS was not found in /proc/filesystems")
return ErrAufsNotSupported
}
func (a Driver) rootPath() string {

View file

@ -5,6 +5,7 @@ import (
"encoding/hex"
"fmt"
"github.com/dotcloud/docker/archive"
"github.com/dotcloud/docker/graphdriver"
"io/ioutil"
"os"
"path"
@ -15,15 +16,24 @@ var (
tmp = path.Join(os.TempDir(), "aufs-tests", "aufs")
)
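// testInit initializes the driver and skips the test (rather than
// failing it) when AUFS is not supported on the host.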
func testInit(dir string, t *testing.T) graphdriver.Driver {
d, err := Init(dir)
if err != nil {
if err == ErrAufsNotSupported {
t.Skip(err)
} else {
t.Fatal(err)
}
}
return d
}
func newDriver(t *testing.T) *Driver {
if err := os.MkdirAll(tmp, 0755); err != nil {
t.Fatal(err)
}
d, err := Init(tmp)
if err != nil {
t.Fatal(err)
}
d := testInit(tmp, t)
return d.(*Driver)
}
@ -32,10 +42,7 @@ func TestNewDriver(t *testing.T) {
t.Fatal(err)
}
d, err := Init(tmp)
if err != nil {
t.Fatal(err)
}
d := testInit(tmp, t)
defer os.RemoveAll(tmp)
if d == nil {
t.Fatalf("Driver should not be nil")
@ -74,12 +81,8 @@ func TestNewDriverFromExistingDir(t *testing.T) {
t.Fatal(err)
}
if _, err := Init(tmp); err != nil {
t.Fatal(err)
}
if _, err := Init(tmp); err != nil {
t.Fatal(err)
}
testInit(tmp, t)
testInit(tmp, t)
os.RemoveAll(tmp)
}

View file

@ -299,7 +299,7 @@ func TestInit(t *testing.T) {
}
}()
}()
// Put all tests in a funciton to make sure the garbage collection will
// Put all tests in a function to make sure the garbage collection will
// occur.
// Call GC to cleanup runtime.Finalizers

View file

@ -1,22 +1,24 @@
# The Docker maintainer manual
# The Docker Maintainer manual
## Introduction
Dear maintainer. Thank you for investing the time and energy to help make Docker as
useful as possible. Maintaining a project is difficult, sometimes unrewarding work.
Sure, you will get to contribute cool features to the project. But most of your time
will be spent reviewing, cleaning up, documenting, answering questions, justifying
design decisions - while everyone has all the fun! But remember - the quality of the
maintainers work is what distinguishes the good projects from the great.
So please be proud of your work, even the unglamourous parts, and encourage a culture
of appreciation and respect for *every* aspect of improving the project - not just the
hot new features.
Dear maintainer. Thank you for investing the time and energy to help
make Docker as useful as possible. Maintaining a project is difficult,
sometimes unrewarding work. Sure, you will get to contribute cool
features to the project. But most of your time will be spent reviewing,
cleaning up, documenting, answering questions, justifying design
decisions - while everyone has all the fun! But remember - the quality
of the maintainer's work is what distinguishes the good projects from the
great. So please be proud of your work, even the unglamorous parts,
and encourage a culture of appreciation and respect for *every* aspect
of improving the project - not just the hot new features.
This document is a manual for maintainers old and new. It explains what is expected of
maintainers, how they should work, and what tools are available to them.
This is a living document - if you see something out of date or missing, speak up!
This document is a manual for maintainers old and new. It explains what
is expected of maintainers, how they should work, and what tools are
available to them.
This is a living document - if you see something out of date or missing,
speak up!
## What are a maintainer's responsibilities?
@ -24,19 +26,26 @@ It is every maintainer's responsibility to:
* 1) Expose a clear roadmap for improving their component.
* 2) Deliver prompt feedback and decisions on pull requests.
* 3) Be available to anyone with questions, bug reports, criticism etc. on their component. This includes irc, github requests and the mailing list.
* 4) Make sure their component respects the philosophy, design and roadmap of the project.
* 3) Be available to anyone with questions, bug reports, criticism etc.
on their component. This includes IRC, GitHub requests and the mailing
list.
* 4) Make sure their component respects the philosophy, design and
roadmap of the project.
## How are decisions made?
Short answer: with pull requests to the docker repository.
Docker is an open-source project with an open design philosophy. This means that the repository is the source of truth for EVERY aspect of the project,
including its philosophy, design, roadmap and APIs. *If it's part of the project, it's in the repo. It's in the repo, it's part of the project.*
Docker is an open-source project with an open design philosophy. This
means that the repository is the source of truth for EVERY aspect of the
project, including its philosophy, design, roadmap and APIs. *If it's
part of the project, it's in the repo. It's in the repo, it's part of
the project.*
As a result, all decisions can be expressed as changes to the repository. An implementation change is a change to the source code. An API change is a change to
the API specification. A philosophy change is a change to the philosophy manifesto. And so on.
As a result, all decisions can be expressed as changes to the
repository. An implementation change is a change to the source code. An
API change is a change to the API specification. A philosophy change is
a change to the philosophy manifesto. And so on.
All decisions affecting docker, big and small, follow the same 3 steps:
@ -49,25 +58,36 @@ All decisions affecting docker, big and small, follow the same 3 steps:
## Who decides what?
So all decisions are pull requests, and the relevant maintainer makes the decision by accepting or refusing the pull request.
But how do we identify the relevant maintainer for a given pull request?
So all decisions are pull requests, and the relevant maintainer makes
the decision by accepting or refusing the pull request. But how do we
identify the relevant maintainer for a given pull request?
Docker follows the timeless, highly efficient and totally unfair system known as [Benevolent dictator for life](http://en.wikipedia.org/wiki/Benevolent_Dictator_for_Life),
with yours truly, Solomon Hykes, in the role of BDFL.
This means that all decisions are made by default by me. Since making every decision myself would be highly un-scalable, in practice decisions are spread across multiple maintainers.
Docker follows the timeless, highly efficient and totally unfair system
known as [Benevolent dictator for
life](http://en.wikipedia.org/wiki/Benevolent_Dictator_for_Life), with
yours truly, Solomon Hykes, in the role of BDFL. This means that all
decisions are made by default by Solomon. Since making every decision
himself would be highly unscalable, in practice decisions are spread
across multiple maintainers.
The relevant maintainer for a pull request is assigned in 3 steps:
* Step 1: Determine the subdirectory affected by the pull request. This might be src/registry, docs/source/api, or any other part of the repo.
* Step 1: Determine the subdirectory affected by the pull request. This
might be `src/registry`, `docs/source/api`, or any other part of the repo.
* Step 2: Find the MAINTAINERS file which affects this directory. If the directory itself does not have a MAINTAINERS file, work your way up the repo hierarchy until you find one.
* Step 2: Find the `MAINTAINERS` file which affects this directory. If the
directory itself does not have a `MAINTAINERS` file, work your way up
the repo hierarchy until you find one.
* Step 3: The first maintainer listed is the primary maintainer. The pull request is assigned to him. He may assign it to other listed maintainers, at his discretion.
* Step 3: The first maintainer listed is the primary maintainer. The
pull request is assigned to him. He may assign it to other listed
maintainers, at his discretion.
### I'm a maintainer, should I make pull requests too?
Yes. Nobody should ever push to master directly. All changes should be made through a pull request.
Yes. Nobody should ever push to master directly. All changes should be
made through a pull request.
### Who assigns maintainers?

View file

@ -1,65 +1,94 @@
Dear packager.
# Dear Packager,
If you are looking to make docker available on your favorite software distribution,
this document is for you. It summarizes the requirements for building and running
docker.
If you are looking to make Docker available on your favorite software
distribution, this document is for you. It summarizes the requirements for
building and running the Docker client and the Docker daemon.
## Getting started
## Getting Started
We really want to help you package Docker successfully. Before anything, a good first step
is to introduce yourself on the [docker-dev mailing list](https://groups.google.com/forum/?fromgroups#!forum/docker-dev)
, explain what you're trying to achieve, and tell us how we can help. Don't worry, we don't bite!
There might even be someone already working on packaging for the same distro!
We want to help you package Docker successfully. Before doing any packaging, a
good first step is to introduce yourself on the [docker-dev mailing
list](https://groups.google.com/d/forum/docker-dev), explain what you're trying
to achieve, and tell us how we can help. Don't worry, we don't bite! There might
even be someone already working on packaging for the same distro!
You can also join the IRC channel - #docker and #docker-dev on Freenode are both active and friendly.
You can also join the IRC channel - #docker and #docker-dev on Freenode are both
active and friendly.
## Package name
We like to refer to Tianon ("@tianon" on GitHub and "tianon" on IRC) as our
"Packagers Relations", since he's always working to make sure our packagers have
a good, healthy upstream to work with (both in our communication and in our
build scripts). If you're having any kind of trouble, feel free to ping him
directly. He also likes to keep track of what distributions we have packagers
for, so feel free to reach out to him even just to say "Hi!"
If possible, your package should be called "docker". If that name is already taken, a second
choice is "lxc-docker".
## Package Name
## Official build vs distro build
If possible, your package should be called "docker". If that name is already
taken, a second choice is "lxc-docker", but with the caveat that "LXC" is now an
optional dependency (as noted below). Another possible choice is "docker.io".
The Docker project maintains its own build and release toolchain. It is pretty neat and entirely
based on Docker (surprise!). This toolchain is the canonical way to build Docker, and the only
method supported by the development team. We encourage you to give it a try, and if the circumstances
## Official Build vs Distro Build
The Docker project maintains its own build and release toolchain. It is pretty
neat and entirely based on Docker (surprise!). This toolchain is the canonical
way to build Docker. We encourage you to give it a try, and if the circumstances
allow you to use it, we recommend that you do.
You might not be able to use the official build toolchain - usually because your distribution has a
toolchain and packaging policy of its own. We get it! Your house, your rules. The rest of this document
should give you the information you need to package Docker your way, without denaturing it in
the process.
You might not be able to use the official build toolchain - usually because your
distribution has a toolchain and packaging policy of its own. We get it! Your
house, your rules. The rest of this document should give you the information you
need to package Docker your way, without denaturing it in the process.
## System build dependencies
## Build Dependencies
To build docker, you will need the following system dependencies
To build Docker, you will need the following:
* An amd64 machine
* A recent version of git and mercurial
* Go version 1.2 or later
* A clean checkout of the source added to a valid [Go
workspace](http://golang.org/doc/code.html#Workspaces) under the path
*src/github.com/dotcloud/docker* (unless you plan to use `AUTO_GOPATH`,
explained in more detail below).
To build the Docker daemon, you will additionally need:
* An amd64/x86_64 machine running Linux
* SQLite version 3.7.9 or later
* libdevmapper version 1.02.68-cvs (2012-01-26) or later from lvm2 version 2.02.89 or later
* btrfs-progs version 3.8 or later (including commit e5cb128 from 2013-01-07) for the necessary btrfs headers
* A clean checkout of the source must be added to a valid Go [workspace](http://golang.org/doc/code.html#Workspaces)
under the path *src/github.com/dotcloud/docker*.
* libdevmapper version 1.02.68-cvs (2012-01-26) or later from lvm2 version
2.02.89 or later
* btrfs-progs version 3.8 or later (including commit e5cb128 from 2013-01-07)
for the necessary btrfs headers
## Go dependencies
Be sure to also check out Docker's Dockerfile for the most up-to-date list of
these build-time dependencies.
All Go dependencies are vendored under ./vendor. They are used by the official build,
so the source of truth for the current version is whatever is in ./vendor.
### Go Dependencies
To use the vendored dependencies, simply make sure the path to ./vendor is included in $GOPATH.
All Go dependencies are vendored under "./vendor". They are used by the official
build, so the source of truth for the current version of each dependency is
whatever is in "./vendor".
If you would rather package these dependencies yourself, take a look at ./hack/vendor.sh for an
easy-to-parse list of the exact version for each.
To use the vendored dependencies, simply make sure the path to "./vendor" is
included in `GOPATH` (or use `AUTO_GOPATH`, as explained below).
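As a concrete sketch (the paths are illustrative and assume a checkout
living under "/go/src/github.com/dotcloud/docker", as in Docker's own
Dockerfile; adjust them for your build environment):

```bash
# Illustrative only: the checkout itself must be reachable as
# $GOPATH/src/github.com/dotcloud/docker, and ./vendor is appended so
# the vendored packages resolve.
export GOPATH=/go:/go/src/github.com/dotcloud/docker/vendor
```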
NOTE: if you're not able to package the exact version (to the exact commit) of a
please get in touch so we can remediate! Who knows what discrepancies can be caused by even the
slightest deviation. We promise to do our best to make everybody happy.
If you would rather (or must, due to distro policy) package these dependencies
yourself, take a look at "./hack/vendor.sh" for an easy-to-parse list of the
exact version for each.
NOTE: if you're not able to package the exact version (to the exact commit) of a
given dependency, please get in touch so we can remediate! Who knows what
discrepancies can be caused by even the slightest deviation. We promise to do
our best to make everybody happy.
## Stripping Binaries
Please, please, please do not strip any compiled binaries. This is really important.
Please, please, please do not strip any compiled binaries. This is really
important.
In our own testing, stripping the resulting binaries sometimes results in a
binary that appears to work, but more often causes random panics, segfaults, and
other issues. Even if the binary appears to work, please don't strip.
See the following quotes from Dave Cheney, which explain this position better
from the upstream Golang perspective.
@ -94,79 +123,172 @@ from the upstream Golang perspective.
## Building Docker
To build the docker binary, run the following command with the source checkout as the
working directory:
Please use our build script ("./hack/make.sh") for all your compilation of
Docker. If there's something you need that it isn't doing, or something it could
be doing to make your life as a packager easier, please get in touch with Tianon
and help us rectify the situation. Chances are good that other packagers have
probably run into the same problems and a fix might already be in the works, but
none of us will know for sure unless you harass Tianon about it. :)
All the commands listed within this section should be run with the Docker source
checkout as the current working directory.
### `AUTO_GOPATH`
If you'd rather not be bothered with the hassles that setting up `GOPATH`
appropriately can be, and prefer to just get a "build that works", you should
add something similar to this to whatever script or process you're using to
build Docker:
```bash
export AUTO_GOPATH=1
```
This will cause the build scripts to set up a reasonable `GOPATH` that
automatically and properly includes both dotcloud/docker from the local
directory, and the local "./vendor" directory as necessary.
### `DOCKER_BUILDTAGS`
If you're building a binary that may need to be used on platforms that include
AppArmor, you will need to set `DOCKER_BUILDTAGS` as follows:
```bash
export DOCKER_BUILDTAGS='apparmor'
```
### Static Daemon
If it is feasible within the constraints of your distribution, you should
seriously consider packaging Docker as a single static binary. A good comparison
is Busybox, which is often packaged statically as a feature to enable mass
portability. Because of the unique way Docker operates, being similarly static
is a "feature".
To build a static Docker daemon binary, run the following command (first
ensuring that all the necessary libraries are available in static form for
linking - see the "Build Dependencies" section above, and the relevant lines
within Docker's own Dockerfile that set up our official build environment):
```bash
./hack/make.sh binary
```
This will create a static binary under *./bundles/$VERSION/binary/docker-$VERSION*, where
*$VERSION* is the contents of the file *./VERSION*.
This will create a static binary under
"./bundles/$VERSION/binary/docker-$VERSION", where "$VERSION" is the contents of
the file "./VERSION". This binary is usually installed somewhere like
"/usr/bin/docker".
You are encouraged to use ./hack/make.sh without modification. If you must absolutely write
your own script (are you really, really sure you need to? make.sh is really not that complicated),
then please take care to respect the following:
### Dynamic Daemon / Client-only Binary
* In *./hack/make.sh*: $LDFLAGS, $BUILDFLAGS, $VERSION and $GITCOMMIT
* In *./hack/make/binary*: the exact build command to run
If you are only interested in a Docker client binary, set `DOCKER_CLIENTONLY` to a non-empty value (which will prevent the extra step of compiling dockerinit) using something similar to the following:
You may be tempted to tweak these settings. In particular, being a rigorous maintainer, you may want
to disable static linking. Please don't! Docker *needs* to be statically linked to function properly.
You would do the users of your distro a disservice and "void the docker warranty" by changing the flags.
```bash
export DOCKER_CLIENTONLY=1
```
A good comparison is Busybox: all distros package it as a statically linked binary, because it just
makes sense. Docker is the same way.
If you *must* have a non-static Docker binary, please use:
If you need to (due to distro policy, distro library availability, or for other
reasons) create a dynamically compiled daemon binary, or if you are only
interested in creating a client binary for Docker, use something similar to the
following:
```bash
./hack/make.sh dynbinary
```
This will create *./bundles/$VERSION/dynbinary/docker-$VERSION* and *./bundles/$VERSION/binary/dockerinit-$VERSION*.
The first of these would usually be installed at */usr/bin/docker*, while the second must be installed
at */usr/libexec/docker/dockerinit*.
This will create "./bundles/$VERSION/dynbinary/docker-$VERSION", which for
client-only builds is the important file to grab and install as appropriate.
## Testing Docker
For daemon builds, you will also need to grab and install
"./bundles/$VERSION/dynbinary/dockerinit-$VERSION", which is created from the
minimal set of Docker's codebase that _must_ be compiled statically (and is thus
a pure static binary). The acceptable locations Docker will search for this file
are as follows (in order):
Before releasing your binary, make sure to run the tests! Run the following command with the source
checkout as the working directory:
* as "dockerinit" in the same directory as the daemon binary (ie, if docker is
installed at "/usr/bin/docker", then "/usr/bin/dockerinit" will be the first
place this file is searched for)
* "/usr/libexec/docker/dockerinit" or "/usr/local/libexec/docker/dockerinit"
([FHS 3.0 Draft](http://www.linuxbase.org/betaspecs/fhs/fhs.html#usrlibexec))
* "/usr/lib/docker/dockerinit" or "/usr/local/lib/docker/dockerinit" ([FHS
2.3](http://refspecs.linuxfoundation.org/FHS_2.3/fhs-2.3.html#USRLIBLIBRARIESFORPROGRAMMINGANDPA))
If (and please, only if) one of the paths above is insufficient due to distro
policy or similar issues, you may use the `DOCKER_INITPATH` environment variable
at compile-time as follows to set a different path for Docker to search:
```bash
./hack/make.sh test
export DOCKER_INITPATH=/usr/lib/docker.io/dockerinit
```
The test suite includes both live integration tests and unit tests, so you will need all runtime
dependencies to be installed (see below).
If you find yourself needing this, please don't hesitate to reach out to Tianon
to see if it would be reasonable or helpful to add more paths to Docker's list,
especially if there's a relevant standard worth referencing (such as the FHS).
The test suite will also download a small test container, so you will need internet connectivity.
Also, it goes without saying, but for the purposes of the daemon please consider
these two binaries ("docker" and "dockerinit") as if they were a single unit.
Mixing and matching can cause undesired consequences, and will fail to run
properly.
## Runtime dependencies
## System Dependencies
To run properly, docker needs the following software to be installed at runtime:
### Runtime Dependencies
To function properly, the Docker daemon needs the following software to be
installed and available at runtime:
* iproute2 version 3.5 or later (build after 2012-05-21), and specifically the "ip" utility
* iptables version 1.4 or later
* The LXC utility scripts (http://lxc.sourceforge.net) version 0.8 or later
* XZ Utils version 4.9 or later
Additionally, the Docker client needs the following software to be installed and
available at runtime:
* Git version 1.7 or later
* XZ Utils 4.9 or later
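As a quick, hedged sanity check of the dependencies above (the command
names, such as "lxc-start" for the LXC utility scripts, are assumptions,
and version checks are left out):

```bash
# Sketch: verify the runtime dependencies listed above are on $PATH.
for bin in ip iptables lxc-start xz git; do
	command -v "$bin" >/dev/null || echo "missing: $bin"
done
```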
## Kernel dependencies
### Kernel Requirements
Docker in daemon mode has specific kernel requirements. For details, see
http://docs.docker.io/en/latest/installation/kernel/
The Docker daemon has very specific kernel requirements. Most pre-packaged
kernels already include the necessary options enabled. If you are building your
own kernel, you will either need to discover the options necessary via trial and
error, or check out the [Gentoo
ebuild](https://github.com/tianon/docker-overlay/blob/master/app-emulation/docker/docker-9999.ebuild),
in which a list is maintained (and if there are any issues or discrepancies in
that list, please contact Tianon so they can be rectified).
Note that Docker also has a client mode, which can run on virtually any linux kernel (it even builds
on OSX!).
Note that in client mode, there are no specific kernel requirements, and that
the client will even run on alternative platforms such as Mac OS X / Darwin.
## Init script
### Optional Dependencies
Docker expects to run as a daemon at machine startup. Your package will need to include a script
for your distro's process supervisor of choice.
Some of Docker's features are activated by using optional command-line flags or
by having support for them in the kernel or userspace. A few examples include:
Docker should be run as root, with the following arguments:
* LXC execution driver (requires version 0.8 or later of the LXC utility scripts)
* AUFS graph driver (requires AUFS patches/support enabled in the kernel, and at
least the "auplink" utility from aufs-tools)
* experimental BTRFS graph driver (requires BTRFS support enabled in the kernel)
## Daemon Init Script
Docker expects to run as a daemon at machine startup. Your package will need to
include a script for your distro's process supervisor of choice. Be sure to
check out the "contrib/init" folder in case a suitable init script already
exists (and if one does not, contact Tianon about whether it might be
appropriate for your distro's init script to live there too!).
In general, Docker should be run as root, similar to the following:
```bash
docker -d
```
Generally, a `DOCKER_OPTS` variable of some kind is available for adding more
flags (such as changing the graph driver to use BTRFS, switching the location of
"/var/lib/docker", etc).
## Communicate
As a final note, please do feel free to reach out to Tianon at any time for
pretty much anything. He really does love hearing from our packagers and wants
to make sure we're not being a "hostile upstream". As should be a given, we
appreciate the work our packagers do to make sure we have broad distribution!

View file

@ -173,9 +173,13 @@ git push origin $VERSION
It's very important that we don't make the tag until after the official
release is uploaded to get.docker.io!
### 10. Go to github to merge the `bump_$VERSION` into release
### 10. Go to github to merge the `bump_$VERSION` branch into release
Merging the pull request to the release branch will automatically
Don't delete the leftover branch just yet, as we will need it for the next step.
### 11. Go to github to merge the `bump_$VERSION` branch into docs
Merging the pull request to the docs branch will automatically
update the documentation on the "latest" revision of the docs. You
should see the updated docs 5-10 minutes after the merge. The docs
will appear on http://docs.docker.io/. For more information about
@ -184,7 +188,7 @@ documentation releases, see `docs/README.md`.
Don't forget to push that pretty blue button to delete the leftover
branch afterwards!
### 11. Create a new pull request to merge release back into master
### 12. Create a new pull request to merge release back into master
```bash
git checkout master
@ -202,7 +206,7 @@ echo "https://github.com/dotcloud/docker/compare/master...merge_release_$VERSION
Again, get two maintainers to validate, then merge, then push that pretty
blue button to delete your branch.
### 12. Rejoice and Evangelize!
### 13. Rejoice and Evangelize!
Congratulations! You're done.

View file

@ -16,7 +16,7 @@ set -e
# in the Dockerfile at the root of the source. In other words:
# DO NOT CALL THIS SCRIPT DIRECTLY.
# - The right way to call this script is to invoke "make" from
# your checkout of the Docker repository.
# your checkout of the Docker repository.
# the Makefile will do a "docker build -t docker ." and then
# "docker run hack/make.sh" in the resulting container image.
#
@ -82,9 +82,23 @@ if [ ! "$GOPATH" ]; then
fi
# Use these flags when compiling the tests and final binary
LDFLAGS='-X github.com/dotcloud/docker/dockerversion.GITCOMMIT "'$GITCOMMIT'" -X github.com/dotcloud/docker/dockerversion.VERSION "'$VERSION'" -w'
LDFLAGS_STATIC='-X github.com/dotcloud/docker/dockerversion.IAMSTATIC true -linkmode external -extldflags "-lpthread -static -Wl,--unresolved-symbols=ignore-in-object-files"'
BUILDFLAGS='-tags netgo -a'
LDFLAGS='
-w
-X github.com/dotcloud/docker/dockerversion.GITCOMMIT "'$GITCOMMIT'"
-X github.com/dotcloud/docker/dockerversion.VERSION "'$VERSION'"
'
LDFLAGS_STATIC='-linkmode external'
EXTLDFLAGS_STATIC='-static'
BUILDFLAGS=( -a -tags "netgo $DOCKER_BUILDTAGS" )
# A few more flags that are specific just to building a completely-static binary (see hack/make/binary)
# PLEASE do not use these anywhere else.
EXTLDFLAGS_STATIC_DOCKER="$EXTLDFLAGS_STATIC -lpthread -Wl,--unresolved-symbols=ignore-in-object-files"
LDFLAGS_STATIC_DOCKER="
$LDFLAGS_STATIC
-X github.com/dotcloud/docker/dockerversion.IAMSTATIC true
-extldflags \"$EXTLDFLAGS_STATIC_DOCKER\"
"
HAVE_GO_TEST_COVER=
if \
@ -101,21 +115,32 @@ fi
#
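# go_test_dir runs the tests of a single directory, enabling coverage
# profiles when the installed Go toolchain supports -cover.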
go_test_dir() {
dir=$1
coverpkg=$2
testcover=()
if [ "$HAVE_GO_TEST_COVER" ]; then
# if our current go install has -cover, we want to use it :)
mkdir -p "$DEST/coverprofiles"
coverprofile="docker${dir#.}"
coverprofile="$DEST/coverprofiles/${coverprofile//\//-}"
testcover=( -cover -coverprofile "$coverprofile" )
testcover=( -cover -coverprofile "$coverprofile" $coverpkg )
fi
(
set -x
cd "$dir"
go test ${testcover[@]} -ldflags "$LDFLAGS" $BUILDFLAGS $TESTFLAGS
go test ${testcover[@]} -ldflags "$LDFLAGS" "${BUILDFLAGS[@]}" $TESTFLAGS
)
}
# This helper function walks the current directory looking for directories
# holding certain files ($1 parameter), and prints their paths on standard
# output, one per line.
find_dirs() {
find -not \( \
\( -wholename './vendor' -o -wholename './integration' -o -wholename './contrib' -o -wholename './pkg/mflag/example' \) \
-prune \
\) -name "$1" -print0 | xargs -0n1 dirname | sort -u
}
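# Example (illustrative): list every package directory containing tests:
#   find_dirs '*_test.go'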
bundle() {
bundlescript=$1
bundle=$(basename $bundlescript)

Some files were not shown because too many files have changed in this diff.