
Merge branch 'master' into dm

Conflicts:
	Dockerfile
	buildfile.go
	container.go
	hack/make/test
	runtime_test.go
	utils/utils.go
Guillaume J. Charmes, 11 years ago
Commit e9ee860c91
100 changed files with 2379 additions and 1301 deletions
  1. CHANGELOG.md (+78, -0)
  2. CONTRIBUTING.md (+4, -2)
  3. Dockerfile (+1, -1)
  4. VERSION (+1, -1)
  5. Vagrantfile (+2, -0)
  6. api.go (+2, -3)
  7. api_test.go (+28, -1)
  8. commands_test.go (+1, -0)
  9. container.go (+74, -53)
  10. contrib/MAINTAINERS (+0, -1)
  11. contrib/completion/bash/docker (+0, -0)
  12. contrib/completion/zsh/_docker (+242, -0)
  13. contrib/mkimage-arch.sh (+67, -0)
  14. contrib/vim-syntax/LICENSE (+22, -0)
  15. contrib/vim-syntax/README.md (+23, -0)
  16. contrib/vim-syntax/doc/dockerfile.txt (+18, -0)
  17. contrib/vim-syntax/ftdetect/dockerfile.vim (+1, -0)
  18. contrib/vim-syntax/syntax/dockerfile.vim (+22, -0)
  19. docs/README.md (+87, -25)
  20. docs/sources/api/docker_remote_api.rst (+0, -1)
  21. docs/sources/api/remote_api_client_libraries.rst (+3, -7)
  22. docs/sources/commandline/cli.rst (+630, -38)
  23. docs/sources/commandline/command/attach.rst (+0, -59)
  24. docs/sources/commandline/command/build.rst (+0, -65)
  25. docs/sources/commandline/command/commit.rst (+0, -52)
  26. docs/sources/commandline/command/cp.rst (+0, -14)
  27. docs/sources/commandline/command/diff.rst (+0, -13)
  28. docs/sources/commandline/command/events.rst (+0, -34)
  29. docs/sources/commandline/command/export.rst (+0, -13)
  30. docs/sources/commandline/command/history.rst (+0, -13)
  31. docs/sources/commandline/command/images.rst (+0, -26)
  32. docs/sources/commandline/command/import.rst (+0, -44)
  33. docs/sources/commandline/command/info.rst (+0, -13)
  34. docs/sources/commandline/command/insert.rst (+0, -23)
  35. docs/sources/commandline/command/inspect.rst (+0, -13)
  36. docs/sources/commandline/command/kill.rst (+0, -13)
  37. docs/sources/commandline/command/login.rst (+0, -24)
  38. docs/sources/commandline/command/logs.rst (+0, -13)
  39. docs/sources/commandline/command/port.rst (+0, -13)
  40. docs/sources/commandline/command/ps.rst (+0, -17)
  41. docs/sources/commandline/command/pull.rst (+0, -13)
  42. docs/sources/commandline/command/push.rst (+0, -13)
  43. docs/sources/commandline/command/restart.rst (+0, -13)
  44. docs/sources/commandline/command/rm.rst (+0, -13)
  45. docs/sources/commandline/command/rmi.rst (+0, -13)
  46. docs/sources/commandline/command/run.rst (+0, -85)
  47. docs/sources/commandline/command/search.rst (+0, -14)
  48. docs/sources/commandline/command/start.rst (+0, -13)
  49. docs/sources/commandline/command/stop.rst (+0, -15)
  50. docs/sources/commandline/command/tag.rst (+0, -15)
  51. docs/sources/commandline/command/top.rst (+0, -13)
  52. docs/sources/commandline/command/version.rst (+0, -7)
  53. docs/sources/commandline/command/wait.rst (+0, -13)
  54. docs/sources/commandline/docker_images.gif (+0, -0)
  55. docs/sources/commandline/index.rst (+2, -33)
  56. docs/sources/contributing/devenvironment.rst (+1, -1)
  57. docs/sources/examples/postgresql_service.rst (+22, -10)
  58. docs/sources/index.rst (+0, -2)
  59. docs/sources/installation/archlinux.rst (+0, -4)
  60. docs/sources/installation/gentoolinux.rst (+0, -5)
  61. docs/sources/installation/kernel.rst (+10, -12)
  62. docs/sources/installation/ubuntulinux.rst (+31, -24)
  63. docs/sources/terms/images/docker-filesystems-busyboxrw.png (BIN)
  64. docs/sources/terms/images/docker-filesystems-debian.png (BIN)
  65. docs/sources/terms/images/docker-filesystems-debianrw.png (BIN)
  66. docs/sources/terms/images/docker-filesystems-generic.png (BIN)
  67. docs/sources/terms/images/docker-filesystems-multilayer.png (BIN)
  68. docs/sources/terms/images/docker-filesystems-multiroot.png (BIN)
  69. docs/sources/terms/images/docker-filesystems.svg (+49, -183)
  70. docs/sources/use/builder.rst (+8, -2)
  71. hack/RELEASE-CHECKLIST.md (+17, -5)
  72. hack/infrastructure/README.md (+80, -2)
  73. hack/infrastructure/docker-ci.rst (+43, -2)
  74. hack/infrastructure/docker-ci/Dockerfile (+43, -0)
  75. hack/infrastructure/docker-ci/MAINTAINERS (+0, -0)
  76. hack/infrastructure/docker-ci/README.rst (+26, -0)
  77. hack/infrastructure/docker-ci/buildbot/README.rst (+1, -0)
  78. hack/infrastructure/docker-ci/buildbot/buildbot.conf (+0, -0)
  79. hack/infrastructure/docker-ci/buildbot/github.py (+19, -13)
  80. hack/infrastructure/docker-ci/buildbot/master.cfg (+26, -25)
  81. hack/infrastructure/docker-ci/buildbot/requirements.txt (+0, -0)
  82. hack/infrastructure/docker-ci/buildbot/setup.sh (+17, -7)
  83. hack/infrastructure/docker-ci/deployment.py (+155, -0)
  84. hack/infrastructure/docker-ci/docker-coverage/coverage-docker.sh (+32, -0)
  85. hack/infrastructure/docker-ci/docker-test/test_docker.sh (+35, -0)
  86. hack/infrastructure/docker-ci/functionaltests/test_index.py (+0, -0)
  87. hack/infrastructure/docker-ci/functionaltests/test_registry.sh (+26, -0)
  88. hack/infrastructure/docker-ci/nightlyrelease/Dockerfile (+37, -0)
  89. hack/infrastructure/docker-ci/nightlyrelease/dockerbuild (+45, -0)
  90. hack/infrastructure/docker-ci/nightlyrelease/release_credentials.json (+1, -0)
  91. hack/infrastructure/docker-ci/report/Dockerfile (+28, -0)
  92. hack/infrastructure/docker-ci/report/deployment.py (+130, -0)
  93. hack/infrastructure/docker-ci/report/report.py (+145, -0)
  94. network_proxy.go (+10, -2)
  95. runtime.go (+3, -1)
  96. runtime_test.go (+23, -0)
  97. term/term.go (+8, -0)
  98. testing/README.rst (+0, -58)
  99. testing/Vagrantfile (+0, -74)
  100. testing/buildbot/README.rst (+0, -1)

+ 78 - 0
CHANGELOG.md

@@ -1,5 +1,83 @@
 # Changelog
 
+## 0.6.4 (2013-10-16)
+- Runtime: Add cleanup of container when Start() fails
+- Testing: Catch errClosing error when TCP and UDP proxies are terminated
+- Testing: Add aggregated docker-ci email report
+- Testing: Remove a few errors in tests
+* Contrib: Reorganize contributed completion scripts to add zsh completion
+* Contrib: Add vim syntax highlighting for Dockerfiles from @honza
+* Runtime: Add better comments to utils/stdcopy.go
+- Testing: add cleanup to remove leftover containers
+* Documentation: Document how to edit and release docs
+* Documentation: Add initial draft of the Docker infrastructure doc
+* Contrib: Add mkimage-arch.sh
+- Builder: Abort build if mergeConfig returns an error and fix duplicate error message
+- Runtime: Remove error messages which are not actually errors
+* Testing: Only run certain tests with TESTFLAGS='-run TestName' make.sh
+* Testing: Prevent docker-ci to test closing PRs
+- Documentation: Minor updates to postgresql_service.rst
+* Testing: Add nightly release to docker-ci
+* Hack: Improve network performance for VirtualBox
+* Hack: Add vagrant user to the docker group
+* Runtime: Add utils.Errorf for error logging
+- Packaging: Remove deprecated packaging directory
+* Hack: Revamp install.sh to be usable by more people, and to use official install methods whenever possible (apt repo, portage tree, etc.)
+- Hack: Fix contrib/mkimage-debian.sh apt caching prevention
+* Documentation: Clarify LGTM process to contributors
+- Documentation: Small fixes to parameter names in docs for ADD command
+* Runtime: Record termination time in state.
+- Registry: Use correct auth config when logging in.
+- Documentation: Corrected error in the package name
+* Documentation: Document what `vagrant up` is actually doing
+- Runtime: Fix `docker rm` with volumes
+- Runtime: Use empty string so TempDir uses the OS's temp dir automatically
+- Runtime: Make sure to close the network allocators
+* Testing: Replace panic by log.Fatal in tests
++ Documentation: improve doc search results
+- Runtime: Fix some error cases where a HTTP body might not be closed
+* Hack: Add proper bash completion for "docker push"
+* Documentation: Add devenvironment link to CONTRIBUTING.md
+* Documentation: Cleanup whitespace in API 1.5 docs
+* Documentation: use angle brackets in MAINTAINER example email
+- Testing: Increase TestRunDetach timeout
+* Documentation: Fix help text for -v option
++ Hack: Added Dockerfile.tmLanguage to contrib
++ Runtime: Autorestart containers by default
+* Testing: Adding more tests around auth.ResolveAuthConfig
+* Hack: Configured FPM to make /etc/init/docker.conf a config file
+* Hack: Add xz utils as a runtime dep
+* Documentation: Add `apt-get install curl` to Ubuntu docs
+* Documentation: Remove Gentoo install notes about #1422 workaround
+* Documentation: Fix Ping endpoint documentation
+* Runtime: Bump vendor kr/pty to commit 3b1f6487b (syscall.O_NOCTTY)
+* Runtime: lxc: Allow set_file_cap capability in container
+* Documentation: Update archlinux.rst
+- Documentation: Fix ironic typo in changelog
+* Documentation: Add explanation for export restrictions
+* Hack: Add cleanup/refactor portion of #2010 for hack and Dockerfile updates
++ Documentation: Changes to a new style for the docs. Includes version switcher.
+* Documentation: Formatting, add information about multiline json
++ Hack: Add contrib/mkimage-centos.sh back (from #1621), and associated documentation link
+- Runtime: Fix panic with wrong dockercfg file
+- Runtime: Fix the attach behavior with -i
+* Documentation: Add .dockercfg doc
+- Runtime: Move run -rm to the cli only
+* Hack: Enable SSH Agent forwarding in Vagrant VM
++ Runtime: Add -rm to docker run for removing a container on exit
+* Documentation: Improve registry and index REST API documentation
+* Runtime: Split stdout stderr
+- Documentation: Replace deprecated upgrading reference to docker-latest.tgz, which hasn't been updated since 0.5.3
+* Documentation: Update Gentoo installation documentation now that we're in the portage tree proper
+- Registry: Fix the error message so it is the same as the regex
+* Runtime: Always create a new session for the container
+* Hack: Add several of the small make.sh fixes from #1920, and make the output more consistent and contributor-friendly
+* Documentation: Various command fixes in postgres example
+* Documentation: Cleanup and reorganize docs and tooling for contributors and maintainers
+- Documentation: Minor spelling correction of protocoll -> protocol
+* Hack: Several small tweaks/fixes for contrib/mkimage-debian.sh
++ Hack: Add @tianon to hack/MAINTAINERS
+
 ## 0.6.3 (2013-09-23)
 * Packaging: Update tar vendor dependency
 - Client: Fix detach issue

+ 4 - 2
CONTRIBUTING.md

@@ -59,8 +59,10 @@ Submit unit tests for your changes.  Go has a great test framework built in; use
 it! Take a look at existing tests for inspiration. Run the full test suite on
 your branch before submitting a pull request.
 
-Make sure you include relevant updates or additions to documentation when
-creating or modifying features.
+Update the documentation when creating or modifying features. Test
+your documentation changes for clarity, concision, and correctness, as
+well as a clean docmuent build. See ``docs/README.md`` for more
+information on building the docs and how docs get released.
 
 Write clean code. Universally formatted code promotes ease of writing, reading,
 and maintenance. Always run `go fmt` before committing your changes. Most

+ 1 - 1
Dockerfile

@@ -12,7 +12,7 @@
 #
 #
 # # Run the test suite:
-# docker run -privileged -lxc-conf=lxc.aa_profile=unconfined docker go test -v
+# docker run -privileged -lxc-conf=lxc.aa_profile=unconfined docker hack/make.sh test
 #
 # # Publish a release:
 # docker run -privileged -lxc-conf=lxc.aa_profile=unconfined \

+ 1 - 1
VERSION

@@ -1 +1 @@
-0.6.3-dev
+0.6.4-dev

+ 2 - 0
Vagrantfile

@@ -80,6 +80,8 @@ Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
   config.vm.provider :virtualbox do |vb|
     config.vm.box = BOX_NAME
     config.vm.box_url = BOX_URI
+    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
+    vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
   end
 end
 

+ 2 - 3
api.go

@@ -349,7 +349,7 @@ func postCommit(srv *Server, version float64, w http.ResponseWriter, r *http.Req
 		return err
 	}
 	config := &Config{}
-	if err := json.NewDecoder(r.Body).Decode(config); err != nil {
+	if err := json.NewDecoder(r.Body).Decode(config); err != nil && err != io.EOF {
 		utils.Errorf("%s", err)
 	}
 	repo := r.Form.Get("repo")
@@ -909,8 +909,7 @@ func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Requ
 	b := NewBuildFile(srv, utils.NewWriteFlusher(w), !suppressOutput, !noCache, rm)
 	id, err := b.Build(context)
 	if err != nil {
-		fmt.Fprintf(w, "Error build: %s\n", err)
-		return err
+		return fmt.Errorf("Error build: %s", err)
 	}
 	if repoName != "" {
 		srv.runtime.repositories.Set(repoName, tag, id, false)
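
The two api.go hunks above change how handler errors are surfaced: an empty request body (io.EOF from the JSON decoder) is no longer treated as a failure, and a build error is returned to the caller instead of being written straight into the response. Below is a minimal, hedged sketch of the empty-body-tolerant decode; the helper and config type are illustrative stand-ins, not Docker's actual API types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http/httptest"
	"strings"
)

// config is a hypothetical stand-in for the structure a handler
// optionally expects in the request body.
type config struct {
	Cmd []string `json:"Cmd"`
}

// decodeOptionalBody decodes a JSON body into cfg, but treats an empty
// body (io.EOF from Decode) as "no config supplied" rather than an
// error, mirroring the `err != nil && err != io.EOF` check in the hunk above.
func decodeOptionalBody(body io.Reader, cfg *config) error {
	if err := json.NewDecoder(body).Decode(cfg); err != nil && err != io.EOF {
		return fmt.Errorf("invalid config in request body: %s", err)
	}
	return nil
}

func main() {
	var cfg config

	// A request with a JSON body decodes normally.
	withBody := httptest.NewRequest("POST", "/commit", strings.NewReader(`{"Cmd":["echo","hi"]}`))
	err := decodeOptionalBody(withBody.Body, &cfg)
	fmt.Println(err, cfg.Cmd) // <nil> [echo hi]

	// An empty body yields io.EOF from the first Decode call, which is tolerated.
	empty := httptest.NewRequest("POST", "/commit", nil)
	fmt.Println(decodeOptionalBody(empty.Body, &cfg)) // <nil>
}
```

The same idea applies to any endpoint where the body is optional: only errors other than io.EOF indicate a genuinely malformed payload.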

+ 28 - 1
api_test.go

@@ -5,6 +5,7 @@ import (
 	"bufio"
 	"bufio"
 	"bytes"
 	"bytes"
 	"encoding/json"
 	"encoding/json"
+	"fmt"
 	"github.com/dotcloud/docker/utils"
 	"github.com/dotcloud/docker/utils"
 	"io"
 	"io"
 	"net"
 	"net"
@@ -12,6 +13,7 @@ import (
 	"net/http/httptest"
 	"net/http/httptest"
 	"os"
 	"os"
 	"path"
 	"path"
+	"strings"
 	"testing"
 	"testing"
 	"time"
 	"time"
 )
 )
@@ -40,6 +42,25 @@ func TestGetBoolParam(t *testing.T) {
 	}
 }
 
+func TesthttpError(t *testing.T) {
+	r := httptest.NewRecorder()
+
+	httpError(r, fmt.Errorf("No such method"))
+	if r.Code != http.StatusNotFound {
+		t.Fatalf("Expected %d, got %d", http.StatusNotFound, r.Code)
+	}
+
+	httpError(r, fmt.Errorf("This accound hasn't been activated"))
+	if r.Code != http.StatusForbidden {
+		t.Fatalf("Expected %d, got %d", http.StatusForbidden, r.Code)
+	}
+
+	httpError(r, fmt.Errorf("Some error"))
+	if r.Code != http.StatusInternalServerError {
+		t.Fatalf("Expected %d, got %d", http.StatusInternalServerError, r.Code)
+	}
+}
+
 func TestGetVersion(t *testing.T) {
 	var err error
 	runtime := mkRuntime(t)
@@ -244,7 +265,11 @@ func TestGetImagesJSON(t *testing.T) {
 		t.Fatalf("Error expected, received none")
 		t.Fatalf("Error expected, received none")
 	}
 	}
 
 
-	httpError(r4, err)
+	if !strings.HasPrefix(err.Error(), "Bad parameter") {
+		t.Fatalf("Error should starts with \"Bad parameter\"")
+	}
+	http.Error(r4, err.Error(), http.StatusBadRequest)
+
 	if r4.Code != http.StatusBadRequest {
 		t.Fatalf("%d Bad Request expected, received %d\n", http.StatusBadRequest, r4.Code)
 	}
@@ -784,6 +809,8 @@ func TestPostContainersStart(t *testing.T) {
 		t.Fatal(err)
 	}
 
+	req.Header.Set("Content-Type", "application/json")
+
 	r := httptest.NewRecorder()
 	if err := postContainersStart(srv, APIVERSION, r, req, map[string]string{"name": container.ID}); err != nil {
 		t.Fatal(err)
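
The api_test.go hunks above exercise the HTTP handlers directly: they build a request, hand it to the handler together with an httptest.ResponseRecorder, and assert on the recorded status code (note also the Content-Type header the start test now sets). A stand-alone sketch of that pattern, using a hypothetical handler rather than one of Docker's:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// pingHandler is a hypothetical handler, used only to illustrate the
// recorder-based testing pattern from api_test.go above.
func pingHandler(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("Content-Type") != "application/json" {
		http.Error(w, "expected a JSON request", http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	req := httptest.NewRequest("POST", "/ping", nil)
	req.Header.Set("Content-Type", "application/json")

	// The recorder captures the status code and body the handler writes,
	// so assertions can be made without starting a real server.
	rec := httptest.NewRecorder()
	pingHandler(rec, req)
	fmt.Println(rec.Code == http.StatusOK) // true
}
```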

+ 1 - 0
commands_test.go

@@ -545,6 +545,7 @@ func TestAttachDisconnect(t *testing.T) {
 
 // Expected behaviour: container gets deleted automatically after exit
 func TestRunAutoRemove(t *testing.T) {
+	t.Skip("Fixme. Skipping test for now, race condition")
 	stdout, stdoutPipe := io.Pipe()
 	cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
 	defer cleanup(globalRuntime)

+ 74 - 53
container.go

@@ -396,9 +396,9 @@ func (container *Container) startPty() error {
 	// Copy the PTYs to our broadcasters
 	go func() {
 		defer container.stdout.CloseWriters()
-		utils.Debugf("[startPty] Begin of stdout pipe")
+		utils.Debugf("startPty: begin of stdout pipe")
 		io.Copy(container.stdout, ptyMaster)
-		utils.Debugf("[startPty] End of stdout pipe")
+		utils.Debugf("startPty: end of stdout pipe")
 	}()
 
 	// stdin
@@ -407,9 +407,9 @@ func (container *Container) startPty() error {
 		container.cmd.SysProcAttr.Setctty = true
 		go func() {
 			defer container.stdin.Close()
-			utils.Debugf("[startPty] Begin of stdin pipe")
+			utils.Debugf("startPty: begin of stdin pipe")
 			io.Copy(ptyMaster, container.stdin)
-			utils.Debugf("[startPty] End of stdin pipe")
+			utils.Debugf("startPty: end of stdin pipe")
 		}()
 	}
 	if err := container.cmd.Start(); err != nil {
@@ -429,9 +429,9 @@ func (container *Container) start() error {
 		}
 		go func() {
 			defer stdin.Close()
-			utils.Debugf("Begin of stdin pipe [start]")
+			utils.Debugf("start: begin of stdin pipe")
 			io.Copy(stdin, container.stdin)
-			utils.Debugf("End of stdin pipe [start]")
+			utils.Debugf("start: end of stdin pipe")
 		}()
 	}
 	return container.cmd.Start()
@@ -448,8 +448,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 			errors <- err
 		} else {
 			go func() {
-				utils.Debugf("[start] attach stdin\n")
-				defer utils.Debugf("[end] attach stdin\n")
+				utils.Debugf("attach: stdin: begin")
+				defer utils.Debugf("attach: stdin: end")
 				// No matter what, when stdin is closed (io.Copy unblock), close stdout and stderr
 				if container.Config.StdinOnce && !container.Config.Tty {
 					defer cStdin.Close()
@@ -467,7 +467,7 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 					_, err = io.Copy(cStdin, stdin)
 				}
 				if err != nil {
-					utils.Errorf("[error] attach stdin: %s\n", err)
+					utils.Errorf("attach: stdin: %s", err)
 				}
 				// Discard error, expecting pipe error
 				errors <- nil
@@ -481,8 +481,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 		} else {
 			cStdout = p
 			go func() {
-				utils.Debugf("[start] attach stdout\n")
-				defer utils.Debugf("[end]  attach stdout\n")
+				utils.Debugf("attach: stdout: begin")
+				defer utils.Debugf("attach: stdout: end")
 				// If we are in StdinOnce mode, then close stdin
 				if container.Config.StdinOnce && stdin != nil {
 					defer stdin.Close()
@@ -491,8 +491,11 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 					defer stdinCloser.Close()
 				}
 				_, err := io.Copy(stdout, cStdout)
+				if err == io.ErrClosedPipe {
+					err = nil
+				}
 				if err != nil {
-					utils.Errorf("[error] attach stdout: %s\n", err)
+					utils.Errorf("attach: stdout: %s", err)
 				}
 				errors <- err
 			}()
@@ -502,9 +505,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 			if stdinCloser != nil {
 				defer stdinCloser.Close()
 			}
-
 			if cStdout, err := container.StdoutPipe(); err != nil {
-				utils.Errorf("Error stdout pipe")
+				utils.Errorf("attach: stdout pipe: %s", err)
 			} else {
 				io.Copy(&utils.NopWriter{}, cStdout)
 			}
@@ -517,8 +519,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 		} else {
 			cStderr = p
 			go func() {
-				utils.Debugf("[start] attach stderr\n")
-				defer utils.Debugf("[end]  attach stderr\n")
+				utils.Debugf("attach: stderr: begin")
+				defer utils.Debugf("attach: stderr: end")
 				// If we are in StdinOnce mode, then close stdin
 				if container.Config.StdinOnce && stdin != nil {
 					defer stdin.Close()
@@ -527,8 +529,11 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 					defer stdinCloser.Close()
 				}
 				_, err := io.Copy(stderr, cStderr)
+				if err == io.ErrClosedPipe {
+					err = nil
+				}
 				if err != nil {
-					utils.Errorf("[error] attach stderr: %s\n", err)
+					utils.Errorf("attach: stderr: %s", err)
 				}
 				errors <- err
 			}()
@@ -540,7 +545,7 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 			}
 
 			if cStderr, err := container.StderrPipe(); err != nil {
-				utils.Errorf("Error stdout pipe")
+				utils.Errorf("attach: stdout pipe: %s", err)
 			} else {
 				io.Copy(&utils.NopWriter{}, cStderr)
 			}
@@ -554,24 +559,29 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
 		if cStderr != nil {
 			defer cStderr.Close()
 		}
-		// FIXME: how do clean up the stdin goroutine without the unwanted side effect
+		// FIXME: how to clean up the stdin goroutine without the unwanted side effect
 		// of closing the passed stdin? Add an intermediary io.Pipe?
 		for i := 0; i < nJobs; i += 1 {
-			utils.Debugf("Waiting for job %d/%d\n", i+1, nJobs)
+			utils.Debugf("attach: waiting for job %d/%d", i+1, nJobs)
 			if err := <-errors; err != nil {
-				utils.Errorf("Job %d returned error %s. Aborting all jobs\n", i+1, err)
+				utils.Errorf("attach: job %d returned error %s, aborting all jobs", i+1, err)
 				return err
 			}
-			utils.Debugf("Job %d completed successfully\n", i+1)
+			utils.Debugf("attach: job %d completed successfully", i+1)
 		}
-		utils.Debugf("All jobs completed successfully\n")
+		utils.Debugf("attach: all jobs completed successfully")
 		return nil
 	})
 }
 
-func (container *Container) Start(hostConfig *HostConfig) error {
+func (container *Container) Start(hostConfig *HostConfig) (err error) {
 	container.State.Lock()
 	defer container.State.Unlock()
+	defer func() {
+		if err != nil {
+			container.cleanup()
+		}
+	}()
 
 	if hostConfig == nil { // in docker start of docker restart we want to reuse previous HostConfigFile
 		hostConfig, _ = container.ReadHostConfig()
@@ -824,7 +834,6 @@ func (container *Container) Start(hostConfig *HostConfig) error {
 
 	container.cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true}
 
-	var err error
 	if container.Config.Tty {
 		err = container.startPty()
 	} else {
@@ -870,9 +879,14 @@ func (container *Container) Output() (output []byte, err error) {
 	return output, err
 }
 
-// StdinPipe() returns a pipe connected to the standard input of the container's
-// active process.
-//
+// Container.StdinPipe returns a WriteCloser which can be used to feed data
+// to the standard input of the container's active process.
+// Container.StdoutPipe and Container.StderrPipe each return a ReadCloser
+// which can be used to retrieve the standard output (and error) generated
+// by the container's active process. The output (and error) are actually
+// copied and delivered to all StdoutPipe and StderrPipe consumers, using
+// a kind of "broadcaster".
+
 func (container *Container) StdinPipe() (io.WriteCloser, error) {
 	return container.stdinPipe, nil
 }
@@ -950,7 +964,7 @@ func (container *Container) allocateNetwork() error {
 }
 
 func (container *Container) releaseNetwork() {
-	if container.Config.NetworkDisabled {
+	if container.Config.NetworkDisabled || container.network == nil {
 		return
 	}
 	container.network.Release()
@@ -974,21 +988,24 @@ func (container *Container) waitLxc() error {
 
 func (container *Container) monitor(hostConfig *HostConfig) {
 	// Wait for the program to exit
-	utils.Debugf("Waiting for process")
 
 
-	// If the command does not exists, try to wait via lxc
+	// If the command does not exist, try to wait via lxc
+	// (This probably happens only for ghost containers, i.e. containers that were running when Docker started)
 	if container.cmd == nil {
+		utils.Debugf("monitor: waiting for container %s using waitLxc", container.ID)
 		if err := container.waitLxc(); err != nil {
 			// Discard the error as any signals or non 0 returns will generate an error
-			utils.Debugf("%s: Process: %s", container.ShortID(), err)
+			utils.Debugf("monitor: while waiting for container %s, waitLxc had a problem: %s", container.ShortID(), err)
 		}
 	} else {
+		utils.Debugf("monitor: waiting for container %s using cmd.Wait", container.ID)
 		if err := container.cmd.Wait(); err != nil {
-			// Discard the error as any signals or non 0 returns will generate an error
-			utils.Debugf("%s: Process: %s", container.ShortID(), err)
+			// Since non-zero exit status and signal terminations will cause err to be non-nil,
+			// we have to actually discard it. Still, log it anyway, just in case.
+			utils.Debugf("monitor: cmd.Wait reported exit status %s for container %s", err, container.ID)
 		}
 	}
-	utils.Debugf("Process finished")
+	utils.Debugf("monitor: container %s finished", container.ID)
 
 
 	exitCode := -1
 	exitCode := -1
 	if container.cmd != nil {
 	if container.cmd != nil {
@@ -1003,6 +1020,28 @@ func (container *Container) monitor(hostConfig *HostConfig) {
 	}
 
 	// Cleanup
+	container.cleanup()
+
+	// Re-create a brand new stdin pipe once the container exited
+	if container.Config.OpenStdin {
+		container.stdin, container.stdinPipe = io.Pipe()
+	}
+
+	// Release the lock
+	close(container.waitLock)
+
+	if err := container.ToDisk(); err != nil {
+		// FIXME: there is a race condition here which causes this to fail during the unit tests.
+		// If another goroutine was waiting for Wait() to return before removing the container's root
+		// from the filesystem... At this point it may already have done so.
+		// This is because State.setStopped() has already been called, and has caused Wait()
+		// to return.
+		// FIXME: why are we serializing running state to disk in the first place?
+		//log.Printf("%s: Failed to dump configuration to the disk: %s", container.ID, err)
+	}
+}
+
+func (container *Container) cleanup() {
 	container.releaseNetwork()
 	if container.Config.OpenStdin {
 		if err := container.stdin.Close(); err != nil {
@@ -1025,24 +1064,6 @@ func (container *Container) monitor(hostConfig *HostConfig) {
 	if err := container.Unmount(); err != nil {
 		log.Printf("%v: Failed to umount filesystem: %v", container.ID, err)
 	}
-
-	// Re-create a brand new stdin pipe once the container exited
-	if container.Config.OpenStdin {
-		container.stdin, container.stdinPipe = io.Pipe()
-	}
-
-	// Release the lock
-	close(container.waitLock)
-
-	if err := container.ToDisk(); err != nil {
-		// FIXME: there is a race condition here which causes this to fail during the unit tests.
-		// If another goroutine was waiting for Wait() to return before removing the container's root
-		// from the filesystem... At this point it may already have done so.
-		// This is because State.setStopped() has already been called, and has caused Wait()
-		// to return.
-		// FIXME: why are we serializing running state to disk in the first place?
-		//log.Printf("%s: Failed to dump configuration to the disk: %s", container.ID, err)
-	}
 }
 
 func (container *Container) kill() error {
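
The container.go changes above are mostly about teardown: Start gains a named error return plus a deferred call that cleans the container up only when startup fails, and the old inline teardown in monitor is factored into a reusable cleanup method. A small sketch of that named-return/deferred-cleanup idiom, with illustrative names rather than Docker's own:

```go
package main

import (
	"errors"
	"fmt"
)

// resource stands in for a container's network allocation, mounts and
// pipes; the type and its methods are hypothetical.
type resource struct{ released bool }

func (r *resource) cleanup() { r.released = true }

// start mirrors the pattern from Container.Start above: a named error
// return plus a deferred check, so any failure after partial setup
// releases whatever was already allocated.
func start(fail bool) (res *resource, err error) {
	res = &resource{}
	defer func() {
		if err != nil {
			res.cleanup()
		}
	}()

	if fail {
		return res, errors.New("simulated startup failure")
	}
	return res, nil
}

func main() {
	res, err := start(true)
	fmt.Println(err, res.released) // simulated startup failure true

	res, err = start(false)
	fmt.Println(err, res.released) // <nil> false
}
```

Because the deferred closure reads the named return value, it sees exactly the error the function is about to return, which is what lets the same cleanup path cover every early-return in a long setup function.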

+ 0 - 1
contrib/MAINTAINERS

@@ -1,2 +1 @@
 Tianon Gravi <admwiggin@gmail.com> (@tianon)
-Kawsar Saiyeed <kawsar.saiyeed@projiris.com> (@KSid)

+ 0 - 0
contrib/docker.bash → contrib/completion/bash/docker


+ 242 - 0
contrib/completion/zsh/_docker

@@ -0,0 +1,242 @@
+#compdef docker 
+#
+# zsh completion for docker (http://docker.io)
+#
+# version:  0.2.2
+# author:   Felix Riedel
+# license:  BSD License
+# github:   https://github.com/felixr/docker-zsh-completion
+#
+
+__parse_docker_list() {
+    sed -e '/^ID/d' -e 's/[ ]\{2,\}/|/g' -e 's/ \([hdwm]\)\(inutes\|ays\|ours\|eeks\)/\1/' | awk ' BEGIN {FS="|"} { printf("%s:%7s, %s\n", $1, $4, $2)}'
+}
+
+__docker_stoppedcontainers() {
+    local expl
+    declare -a stoppedcontainers 
+    stoppedcontainers=(${(f)"$(docker ps -a | grep --color=never 'Exit' |  __parse_docker_list )"})
+    _describe -t containers-stopped "Stopped Containers" stoppedcontainers 
+}
+
+__docker_runningcontainers() {
+    local expl
+    declare -a containers 
+
+    containers=(${(f)"$(docker ps | __parse_docker_list)"})
+    _describe -t containers-active "Running Containers" containers 
+}
+
+__docker_containers () {
+    __docker_stoppedcontainers 
+    __docker_runningcontainers
+}
+
+__docker_images () {
+    local expl
+    declare -a images
+    images=(${(f)"$(docker images | awk '(NR > 1){printf("%s\\:%s\n", $1,$2)}')"})
+    images=($images ${(f)"$(docker images | awk '(NR > 1){printf("%s:%-15s in %s\n", $3,$2,$1)}')"})
+    _describe -t docker-images "Images" images
+}
+
+__docker_tags() {
+    local expl
+    declare -a tags
+    tags=(${(f)"$(docker images | awk '(NR>1){print $2}'| sort | uniq)"})
+    _describe -t docker-tags "tags" tags
+}
+
+__docker_search() {
+    # declare -a dockersearch
+    local cache_policy
+    zstyle -s ":completion:${curcontext}:" cache-policy cache_policy
+    if [[ -z "$cache_policy" ]]; then
+        zstyle ":completion:${curcontext}:" cache-policy __docker_caching_policy 
+    fi
+
+    local searchterm cachename
+    searchterm="${words[$CURRENT]%/}"
+    cachename=_docker-search-$searchterm
+
+    local expl
+    local -a result 
+    if ( [[ ${(P)+cachename} -eq 0 ]] || _cache_invalid ${cachename#_} ) \
+        && ! _retrieve_cache ${cachename#_}; then
+        _message "Searching for ${searchterm}..."
+        result=(${(f)"$(docker search ${searchterm} | awk '(NR>2){print $1}')"})
+        _store_cache ${cachename#_} result
+    fi 
+    _wanted dockersearch expl 'Available images' compadd -a result 
+}
+
+__docker_caching_policy()
+{
+  # oldp=( "$1"(Nmh+24) )     # 24 hour
+  oldp=( "$1"(Nmh+1) )     # 24 hour
+  (( $#oldp ))
+}
+
+
+__docker_repositories () {
+    local expl
+    declare -a repos
+    repos=(${(f)"$(docker images | sed -e '1d' -e 's/[ ].*//' | sort | uniq)"})
+    _describe -t docker-repos "Repositories" repos
+}
+
+__docker_commands () {
+    # local -a  _docker_subcommands
+    local cache_policy
+
+    zstyle -s ":completion:${curcontext}:" cache-policy cache_policy
+    if [[ -z "$cache_policy" ]]; then
+        zstyle ":completion:${curcontext}:" cache-policy __docker_caching_policy 
+    fi
+
+    if ( [[ ${+_docker_subcommands} -eq 0 ]] || _cache_invalid docker_subcommands) \
+        && ! _retrieve_cache docker_subcommands; 
+    then
+        _docker_subcommands=(${${(f)"$(_call_program commands 
+        docker 2>&1 | sed -e '1,6d' -e '/^[ ]*$/d' -e 's/[ ]*\([^ ]\+\)\s*\([^ ].*\)/\1:\2/' )"}})
+        _docker_subcommands=($_docker_subcommands 'help:Show help for a command') 
+        _store_cache docker_subcommands _docker_subcommands
+    fi
+    _describe -t docker-commands "docker command" _docker_subcommands
+}
+
+__docker_subcommand () {
+    local -a _command_args
+    case "$words[1]" in
+        (attach|wait)
+            _arguments ':containers:__docker_runningcontainers'
+            ;;
+        (build)
+            _arguments \
+                '-t=-:repository:__docker_repositories' \
+                ':path or URL:_directories'
+            ;;
+        (commit)
+            _arguments \
+                ':container:__docker_containers' \
+                ':repository:__docker_repositories' \
+                ':tag: '
+            ;;
+        (diff|export|logs)
+            _arguments '*:containers:__docker_containers'
+            ;;
+        (history)
+            _arguments '*:images:__docker_images'
+            ;;
+        (images)
+            _arguments \
+                '-a[Show all images]' \
+                ':repository:__docker_repositories'
+            ;;
+        (inspect)
+            _arguments '*:containers:__docker_containers'
+            ;;
+        (history)
+            _arguments ':images:__docker_images'
+            ;;
+        (insert)
+            _arguments '1:containers:__docker_containers' \
+                       '2:URL:(http:// file://)' \
+                       '3:file:_files'
+            ;;
+        (kill)
+            _arguments '*:containers:__docker_runningcontainers'
+            ;;
+        (port)
+            _arguments '1:containers:__docker_runningcontainers'
+            ;;
+        (start)
+            _arguments '*:containers:__docker_stoppedcontainers'
+            ;;
+        (rm)
+            _arguments '-v[Remove the volumes associated to the container]' \
+                '*:containers:__docker_stoppedcontainers'
+            ;;
+        (rmi)
+            _arguments '-v[Remove the volumes associated to the container]' \
+                '*:images:__docker_images'
+            ;;
+        (top)
+            _arguments '1:containers:__docker_runningcontainers'
+            ;;
+        (restart|stop)
+            _arguments '-t=-[Number of seconds to try to stop for before killing the container]:seconds to before killing:(1 5 10 30 60)' \
+                '*:containers:__docker_runningcontainers'
+            ;;
+        (top)
+            _arguments ':containers:__docker_runningcontainers'
+            ;;
+        (ps)
+            _arguments '-a[Show all containers. Only running containers are shown by default]' \
+                '-h[Show help]' \
+                '-beforeId=-[Show only container created before Id, include non-running one]:containers:__docker_containers' \
+            '-n=-[Show n last created containers, include non-running one]:n:(1 5 10 25 50)'
+            ;;
+        (tag)
+            _arguments \
+                '-f[force]'\
+                ':image:__docker_images'\
+                ':repository:__docker_repositories' \
+                ':tag:__docker_tags'
+            ;;
+        (run)
+            _arguments \
+                '-a=-[Attach to stdin, stdout or stderr]:toggle:(true false)' \
+                '-c=-[CPU shares (relative weight)]:CPU shares: ' \
+                '-d[Detached mode: leave the container running in the background]' \
+                '*-dns=[Set custom dns servers]:dns server: ' \
+                '*-e=[Set environment variables]:environment variable: ' \
+                '-entrypoint=-[Overwrite the default entrypoint of the image]:entry point: ' \
+                '-h=-[Container host name]:hostname:_hosts' \
+                '-i[Keep stdin open even if not attached]' \
+                '-m=-[Memory limit (in bytes)]:limit: ' \
+                '*-p=-[Expose a container''s port to the host]:port:_ports' \
+                '-t=-[Allocate a pseudo-tty]:toggle:(true false)' \
+                '-u=-[Username or UID]:user:_users' \
+                '*-v=-[Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)]:volume: '\
+                '-volumes-from=-[Mount volumes from the specified container]:volume: ' \
+                '(-):images:__docker_images' \
+                '(-):command: _command_names -e' \
+                '*::arguments: _normal'
+                ;;
+        (pull|search)
+            _arguments ':name:__docker_search'
+            ;;
+        (help)
+            _arguments ':subcommand:__docker_commands'
+            ;;
+        (*)
+            _message 'Unknown sub command'
+    esac
+
+}
+
+_docker () {
+    local curcontext="$curcontext" state line
+    typeset -A opt_args
+
+    _arguments -C \
+      '-H=-[tcp://host:port to bind/connect to]:socket: ' \
+         '(-): :->command' \
+         '(-)*:: :->option-or-argument' 
+
+    if (( CURRENT == 1 )); then
+
+    fi
+    case $state in 
+        (command)
+            __docker_commands
+            ;;
+        (option-or-argument)
+            curcontext=${curcontext%:*:*}:docker-$words[1]:
+            __docker_subcommand 
+            ;;
+    esac
+}
+
+_docker "$@"

+ 67 - 0
contrib/mkimage-arch.sh

@@ -0,0 +1,67 @@
+#!/bin/bash
+# Generate a minimal filesystem for archlinux and load it into the local
+# docker as "archlinux"
+# requires root
+set -e
+
+PACSTRAP=$(which pacstrap)
+[ "$PACSTRAP" ] || {
+    echo "Could not find pacstrap. Run pacman -S arch-install-scripts"
+    exit 1
+}
+EXPECT=$(which expect)
+[ "$EXPECT" ] || {
+    echo "Could not find expect. Run pacman -S expect"
+    exit 1
+}
+
+ROOTFS=~/rootfs-arch-$$-$RANDOM
+mkdir $ROOTFS
+
+#packages to ignore for space savings
+PKGIGNORE=linux,jfsutils,lvm2,cryptsetup,groff,man-db,man-pages,mdadm,pciutils,pcmciautils,reiserfsprogs,s-nail,xfsprogs
+ 
+expect <<EOF
+  set timeout 60
+  set send_slow {1 1}
+  spawn pacstrap -c -d -G -i $ROOTFS base haveged --ignore $PKGIGNORE
+  expect {
+    "Install anyway?" { send n\r; exp_continue }
+    "(default=all)" { send \r; exp_continue }
+    "Proceed with installation?" { send "\r"; exp_continue }
+    "skip the above package" {send "y\r"; exp_continue }
+    "checking" { exp_continue }
+    "loading" { exp_continue }
+    "installing" { exp_continue }
+  }
+EOF
+
+arch-chroot $ROOTFS /bin/sh -c "haveged -w 1024; pacman-key --init; pkill haveged; pacman -Rs --noconfirm haveged; pacman-key --populate archlinux"
+arch-chroot $ROOTFS /bin/sh -c "ln -s /usr/share/zoneinfo/UTC /etc/localtime"
+cat > $ROOTFS/etc/locale.gen <<DELIM
+en_US.UTF-8 UTF-8
+en_US ISO-8859-1
+DELIM
+arch-chroot $ROOTFS locale-gen
+arch-chroot $ROOTFS /bin/sh -c 'echo "Server = http://mirrors.kernel.org/archlinux/\$repo/os/\$arch" > /etc/pacman.d/mirrorlist'
+
+# udev doesn't work in containers, rebuild /dev
+DEV=${ROOTFS}/dev
+mv ${DEV} ${DEV}.old
+mkdir -p ${DEV}
+mknod -m 666 ${DEV}/null c 1 3
+mknod -m 666 ${DEV}/zero c 1 5
+mknod -m 666 ${DEV}/random c 1 8
+mknod -m 666 ${DEV}/urandom c 1 9
+mkdir -m 755 ${DEV}/pts
+mkdir -m 1777 ${DEV}/shm
+mknod -m 666 ${DEV}/tty c 5 0
+mknod -m 600 ${DEV}/console c 5 1
+mknod -m 666 ${DEV}/tty0 c 4 0
+mknod -m 666 ${DEV}/full c 1 7
+mknod -m 600 ${DEV}/initctl p
+mknod -m 666 ${DEV}/ptmx c 5 2
+
+tar -C $ROOTFS -c . | docker import - archlinux
+docker run -i -t archlinux echo Success.
+rm -rf $ROOTFS

+ 22 - 0
contrib/vim-syntax/LICENSE

@@ -0,0 +1,22 @@
+Copyright (c) 2013 Honza Pokorny
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright
+   notice, this list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright
+   notice, this list of conditions and the following disclaimer in the
+   documentation and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

+ 23 - 0
contrib/vim-syntax/README.md

@@ -0,0 +1,23 @@
+dockerfile.vim
+==============
+
+Syntax highlighting for Dockerfiles
+
+Installation
+------------
+
+Via pathogen, the usual way...
+
+Features
+--------
+
+The syntax highlighting includes:
+
+* The directives (e.g. `FROM`)
+* Strings
+* Comments
+
+License
+-------
+
+BSD, short and sweet

+ 18 - 0
contrib/vim-syntax/doc/dockerfile.txt

@@ -0,0 +1,18 @@
+*dockerfile.txt*  Syntax highlighting for Dockerfiles
+
+Author: Honza Pokorny <http://honza.ca>
+License: BSD
+
+INSTALLATION                                                     *installation*
+
+Drop it on your Pathogen path and you're all set.
+
+FEATURES                                                             *features*
+
+The syntax highlighting includes:
+
+* The directives (e.g. FROM)
+* Strings
+* Comments
+
+ vim:tw=78:et:ft=help:norl:

+ 1 - 0
contrib/vim-syntax/ftdetect/dockerfile.vim

@@ -0,0 +1 @@
+au BufNewFile,BufRead Dockerfile set filetype=dockerfile

+ 22 - 0
contrib/vim-syntax/syntax/dockerfile.vim

@@ -0,0 +1,22 @@
+" dockerfile.vim - Syntax highlighting for Dockerfiles
+" Maintainer:   Honza Pokorny <http://honza.ca>
+" Version:      0.5
+
+
+if exists("b:current_syntax")
+    finish
+endif
+
+let b:current_syntax = "dockerfile"
+
+syntax case ignore
+
+syntax match dockerfileKeyword /\v^\s*(FROM|MAINTAINER|RUN|CMD|EXPOSE|ENV|ADD)\s/
+syntax match dockerfileKeyword /\v^\s*(ENTRYPOINT|VOLUME|USER|WORKDIR)\s/
+highlight link dockerfileKeyword Keyword
+
+syntax region dockerfileString start=/\v"/ skip=/\v\\./ end=/\v"/
+highlight link dockerfileString String
+
+syntax match dockerfileComment "\v^\s*#.*$"
+highlight link dockerfileComment Comment

+ 87 - 25
docs/README.md

@@ -1,38 +1,93 @@
 Docker Documentation
 ====================
 
-Documentation
--------------
-This is your definite place to contribute to the docker documentation. After each push to master the documentation
-is automatically generated and made available on [docs.docker.io](http://docs.docker.io)
-
-Each of the .rst files under sources reflects a page on the documentation. 
+Overview
+--------
 
-Installation
-------------
+The source for Docker documentation is here under ``sources/`` in the
+form of .rst files. These files use
+[reStructuredText](http://docutils.sourceforge.net/rst.html)
+formatting with [Sphinx](http://sphinx-doc.org/) extensions for
+structure, cross-linking and indexing.
+
+The HTML files are built and hosted on
+[readthedocs.org](https://readthedocs.org/projects/docker/), appearing
+via proxy on https://docs.docker.io. The HTML files update
+automatically after each change to the master or release branch of the
+[docker files on GitHub](https://github.com/dotcloud/docker) thanks to
+post-commit hooks. The "release" branch maps to the "latest"
+documentation and the "master" branch maps to the "master"
+documentation. 
+
+**Warning**: The "master" documentation may include features not yet
+part of any official docker release. "Master" docs should be used only
+for understanding bleeding-edge development and "latest" should be
+used for the latest official release.
+
+If you need to manually trigger a build of an existing branch, then
+you can do that through the [readthedocs
+interface](https://readthedocs.org/builds/docker/). If you would like
+to add new build targets, including new branches or tags, then you
+must contact one of the existing maintainers and get your
+readthedocs.org account added to the maintainers list, or just file an
+issue on GitHub describing the branch/tag and why it needs to be added
+to the docs, and one of the maintainers will add it for you.
+
+Getting Started
+---------------
+
+To edit and test the docs, you'll need to install the Sphinx tool and
+its dependencies. There are two main ways to install this tool:
+
+Native Installation
+...................
 
-* Work in your own fork of the code, we accept pull requests.
 * Install sphinx: `pip install sphinx`
-    * Mac OS X: `[sudo] pip-2.7 install sphinx`)
+    * Mac OS X: `[sudo] pip-2.7 install sphinx`
 * Install sphinx httpdomain contrib package: `pip install sphinxcontrib-httpdomain`
     * Mac OS X: `[sudo] pip-2.7 install sphinxcontrib-httpdomain`
 * If pip is not available you can probably install it using your favorite package manager as **python-pip**
 
+Alternative Installation: Docker Container
+..........................................
+
+If you're running ``docker`` on your development machine then you may
+find it easier and cleaner to use the Dockerfile. This installs Sphinx
+in a container, adds the local ``docs/`` directory and builds the HTML
+docs inside the container, even starting a simple HTTP server on port
+8000 so that you can connect and see your changes. Just run ``docker
+build .`` and run the resulting image. This is the equivalent to
+``make clean server`` since each container starts clean.
+
 Usage
 -----
-* Change the `.rst` files with your favorite editor to your liking.
-* Run `make docs` to clean up old files and generate new ones.
-* Your static website can now be found in the `_build` directory.
-* To preview what you have generated run `make server` and open http://localhost:8000/ in your favorite browser.
+* Follow the contribution guidelines (``../CONTRIBUTING.md``)
+* Work in your own fork of the code, we accept pull requests.
+* Change the ``.rst`` files with your favorite editor -- try to keep the
+  lines short and respect RST and Sphinx conventions. 
+* Run ``make clean docs`` to clean up old files and generate new ones,
+  or just ``make docs`` to update after small changes.
+* Your static website can now be found in the ``_build`` directory.
+* To preview what you have generated run ``make server`` and open
+  http://localhost:8000/ in your favorite browser.
+
+``make clean docs`` must complete without any warnings or errors.
 
 Working using GitHub's file editor
 ----------------------------------
-Alternatively, for small changes and typo's you might want to use GitHub's built in file editor. It allows
-you to preview your changes right online. Just be careful not to create many commits.
+
+Alternatively, for small changes and typos you might want to use
+GitHub's built in file editor. It allows you to preview your changes
+right online (though there can be some differences between GitHub
+markdown and Sphinx RST). Just be careful not to create many commits.
 
 Images
 ------
-When you need to add images, try to make them as small as possible (e.g. as gif).
+
+When you need to add images, try to make them as small as possible
+(e.g. as gif). Usually images should go in the same directory as the
+.rst file which references them, or in a subdirectory if one already
+exists.
 
 Notes
 -----
@@ -41,7 +96,7 @@ lessc ``lessc main.less`` or watched using watch-lessc ``watch-lessc -i main.les
 
 Guides on using sphinx
 ----------------------
-* To make links to certain pages create a link target like so:
+* To make links to certain sections create a link target like so:
 
   ```
     .. _hello_world:
@@ -52,7 +107,10 @@ Guides on using sphinx
     This is.. (etc.)
   ```
 
-  The ``_hello_world:`` will make it possible to link to this position (page and marker) from all other pages.
+  The ``_hello_world:`` will make it possible to link to this position
+  (page and section heading) from all other pages. See the [Sphinx
+  docs](http://sphinx-doc.org/markup/inline.html#role-ref) for more
+  information and examples.
 
 * Notes, warnings and alarms
 
@@ -68,13 +126,17 @@ Guides on using sphinx
 
 * Code examples
 
-  Start without $, so it's easy to copy and paste.
+  * Start without $, so it's easy to copy and paste.
+  * Use "sudo" with docker to ensure that your command is runnable
+    even if they haven't [used the *docker*
+    group](http://docs.docker.io/en/latest/use/basics/#why-sudo).
 
 Manpages
 --------
 
-* To make the manpages, simply run 'make man'. Please note there is a bug in spinx 1.1.3 which makes this fail.
-Upgrade to the latest version of sphinx.
-* Then preview the manpage by running `man _build/man/docker.1`, where _build/man/docker.1 is the path to the generated
-manfile
-* The manpages are also autogenerated by our hosted readthedocs here: http://docs-docker.dotcloud.com/projects/docker/downloads/
+* To make the manpages, run ``make man``. Please note there is a bug
+  in spinx 1.1.3 which makes this fail.  Upgrade to the latest version
+  of Sphinx.
+* Then preview the manpage by running ``man _build/man/docker.1``,
+  where ``_build/man/docker.1`` is the path to the generated manfile
+

+ 0 - 1
docs/sources/api/docker_remote_api.rst

@@ -22,7 +22,6 @@ Docker Remote API
 - Since API version 1.2, the auth configuration is now handled client
   side, so the client has to send the authConfig as POST in
   /images/(name)/push
-- Known client libraries may be found in :ref:`remote_api_client_libs`
 
 2. Versions
 ===========

+ 3 - 7
docs/sources/api/remote_api_client_libraries.rst

@@ -3,18 +3,14 @@
 :keywords: API, Docker, index, registry, REST, documentation, clients, Python, Ruby, Javascript, Erlang, Go
 
 
-.. _remote_api_client_libs:
-
 ==================================
 Docker Remote API Client Libraries
 ==================================
 
 These libraries have not been tested by the Docker Maintainers for
-compatibility with the :doc:`docker_remote_api`. Please file issues
-with the library owners.  If you find more library implementations,
-please list them in `Docker doc issues
-<https://github.com/dotcloud/docker/issues?direction=desc&labels=doc&sort=updated&state=open>`_
-or make a pull request, and we will add the libraries here.
+compatibility. Please file issues with the library owners.  If you
+find more library implementations, please list them in Docker doc bugs
+and we will add the libraries here.
 
 +----------------------+----------------+--------------------------------------------+
 | Language/Framework   | Name           | Repository                                 |

+ 630 - 38
docs/sources/commandline/cli.rst

@@ -4,11 +4,8 @@
 
 .. _cli:
 
-Overview
-======================
-
-Docker Usage
-~~~~~~~~~~~~~~~~~~
+Command Line Help
+-----------------
 
 To list available commands, either run ``docker`` with no parameters or execute
 ``docker help``::
@@ -21,71 +18,666 @@ To list available commands, either run ``docker`` with no parameters or execute
 
     ...
 
+.. _cli_attach:
+
+``attach``
+----------
+
+::
+
+    Usage: docker attach CONTAINER
+
+    Attach to a running container.
+
+You can detach from the container again (and leave it running) with
+``CTRL-c`` (for a quiet exit) or ``CTRL-\`` to get a stacktrace of
+the Docker client when it quits.
+
+To stop a container, use ``docker stop``
+
+To kill the container, use ``docker kill``
+
+.. _cli_attach_examples:
+ 
+Examples:
+~~~~~~~~~
+
+.. code-block:: bash
+
+     $ ID=$(sudo docker run -d ubuntu /usr/bin/top -b)
+     $ sudo docker attach $ID
+     top - 02:05:52 up  3:05,  0 users,  load average: 0.01, 0.02, 0.05
+     Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
+     Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
+     Mem:    373572k total,   355560k used,    18012k free,    27872k buffers
+     Swap:   786428k total,        0k used,   786428k free,   221740k cached
+
+     PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
+      1 root      20   0 17200 1116  912 R    0  0.3   0:00.03 top                
+
+      top - 02:05:55 up  3:05,  0 users,  load average: 0.01, 0.02, 0.05
+      Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
+      Cpu(s):  0.0%us,  0.2%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
+      Mem:    373572k total,   355244k used,    18328k free,    27872k buffers
+      Swap:   786428k total,        0k used,   786428k free,   221776k cached
+
+        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
+	    1 root      20   0 17208 1144  932 R    0  0.3   0:00.03 top                
+
+
+      top - 02:05:58 up  3:06,  0 users,  load average: 0.01, 0.02, 0.05
+      Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
+      Cpu(s):  0.2%us,  0.3%sy,  0.0%ni, 99.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
+      Mem:    373572k total,   355780k used,    17792k free,    27880k buffers
+      Swap:   786428k total,        0k used,   786428k free,   221776k cached
+
+      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
+           1 root      20   0 17208 1144  932 R    0  0.3   0:00.03 top                
+     ^C$ 
+     $ sudo docker stop $ID
+
+.. _cli_build:
+
+``build``
+---------
+
+::
+
+    Usage: docker build [OPTIONS] PATH | URL | -
+    Build a new container image from the source code at PATH
+      -t="": Repository name (and optionally a tag) to be applied to the resulting image in case of success.
+      -q=false: Suppress verbose build output.
+      -no-cache: Do not use the cache when building the image.
+      -rm: Remove intermediate containers after a successful build
+    When a single Dockerfile is given as URL, then no context is set. When a git repository is set as URL, the repository is used as context
+
+.. _cli_build_examples:
+
+Examples
+~~~~~~~~
+
+.. code-block:: bash
+
+    sudo docker build .
+
+This will read the ``Dockerfile`` from the current directory. It will
+also send any other files and directories found in the current
+directory to the ``docker`` daemon.
+
+The contents of this directory would be used by ``ADD`` commands found
+within the ``Dockerfile``.  This will send a lot of data to the
+``docker`` daemon if the current directory contains a lot of data.  If
+the absolute path is provided instead of ``.`` then only the files and
+directories required by the ADD commands from the ``Dockerfile`` will be
+added to the context and transferred to the ``docker`` daemon.
+
+.. code-block:: bash
+
+   sudo docker build -t vieux/apache:2.0 .
+
+This will build like the previous example, but it will then tag the
+resulting image. The repository name will be ``vieux/apache`` and the
+tag will be ``2.0``
+
+
+.. code-block:: bash
+
+    sudo docker build - < Dockerfile
+
+This will read a ``Dockerfile`` from *stdin* without context. Due to
+the lack of a context, no contents of any local directory will be sent
+to the ``docker`` daemon.  ``ADD`` doesn't work when running in this
+mode because the absence of the context provides no source files to
+copy to the container.
+
+
+.. code-block:: bash
+
+    sudo docker build github.com/creack/docker-firefox
+
+This will clone the GitHub repository and use the cloned repository as
+context. The ``Dockerfile`` at the root of the repository is used as the
+``Dockerfile`` for the build.  Note that you can specify an arbitrary git
+repository by using the ``git://`` scheme.
+
+
+.. _cli_commit:
+
+``commit``
+----------
+
+::
+
+    Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY [TAG]]
+
+    Create a new image from a container's changes
+
+      -m="": Commit message
+      -author="": Author (eg. "John Hannibal Smith <hannibal@a-team.com>"
+      -run="": Configuration to be applied when the image is launched with `docker run`. 
+               (ex: '{"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}')
+
+Full ``-run`` example (multi-line input is OK within single quotes ``'``):
+
+::
+
+  $ sudo docker commit -run='
+  {
+      "Entrypoint" : null,
+      "Privileged" : false,
+      "User" : "",
+      "VolumesFrom" : "",
+      "Cmd" : ["cat", "-e", "/etc/resolv.conf"],
+      "Dns" : ["8.8.8.8", "8.8.4.4"],
+      "MemorySwap" : 0,
+      "AttachStdin" : false,
+      "AttachStderr" : false,
+      "CpuShares" : 0,
+      "OpenStdin" : false,
+      "Volumes" : null,
+      "Hostname" : "122612f45831",
+      "PortSpecs" : ["22", "80", "443"],
+      "Image" : "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+      "Tty" : false,
+      "Env" : [
+         "HOME=/",
+         "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+      ],
+      "StdinOnce" : false,
+      "Domainname" : "",
+      "WorkingDir" : "/",
+      "NetworkDisabled" : false,
+      "Memory" : 0,
+      "AttachStdout" : false
+  }' $CONTAINER_ID
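+
+A simpler illustrative example, using only the ``-m`` flag (the container
+ID variable and the repository name below are placeholders):
+
+.. code-block:: bash
+
+    $ sudo docker commit -m "Added Apache" $CONTAINER_ID myname/apache v1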
+
+.. _cli_cp:
+
+``cp``
+------
+
+::
+
+    Usage: docker cp CONTAINER:RESOURCE HOSTPATH
+
+    Copy files/folders from the containers filesystem to the host
+    path.  Paths are relative to the root of the filesystem.
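+
+For example (the container ID and paths here are placeholders):
+
+.. code-block:: bash
+
+    $ sudo docker cp 7bb0e258aefe:/etc/hosts /tmp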
+
+.. _cli_diff:
+
+``diff``
+--------
+
+::
+
+    Usage: docker diff CONTAINER [OPTIONS]
+
+    Inspect changes on a container's filesystem
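+
+For example, after changing files inside a container you might see output
+along these lines, where ``A``, ``C`` and ``D`` mark added, changed and
+deleted paths (the container ID and paths are illustrative):
+
+.. code-block:: bash
+
+    $ sudo docker diff 7bb0e258aefe
+    C /etc
+    A /etc/apache2
+    D /tmp/somefile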
+
+.. _cli_events:
+
+``events``
+----------
+
+::
+
+    Usage: docker events
+
+    Get real time events from the server
+
+.. _cli_events_example:
+
+Examples
+~~~~~~~~
+
+You'll need two shells for this example.
+
+Shell 1: Listening for events
+.............................
+
+.. code-block:: bash
+    
+    $ sudo docker events
+
+Shell 2: Start and Stop a Container
+...................................
+
+.. code-block:: bash
+
+    $ sudo docker start 4386fb97867d
+    $ sudo docker stop 4386fb97867d
+
+Shell 1: (Again .. now showing events)
+......................................
+
+.. code-block:: bash
+
+    [2013-09-03 15:49:26 +0200 CEST] 4386fb97867d: (from 12de384bfb10) start
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die
+    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop
+
+
+.. _cli_export:
+
+``export``
+----------
+
+::
+
+    Usage: docker export CONTAINER
+
+    Export the contents of a filesystem as a tar archive
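+
+For example, to export a container to a local tarball (the container ID
+and filename are placeholders):
+
+.. code-block:: bash
+
+    $ sudo docker export 7bb0e258aefe > ubuntu.tar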
+
+.. _cli_history:
+
+``history``
+-----------
+
+::
+
+    Usage: docker history [OPTIONS] IMAGE
+
+    Show the history of an image
+
+.. _cli_images:
+
+``images``
+----------
+
+::
+
+    Usage: docker images [OPTIONS] [NAME]
+
+    List images
+
+      -a=false: show all images
+      -q=false: only show numeric IDs
+      -viz=false: output in graphviz format
+
+Displaying images visually
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+    sudo docker images -viz | dot -Tpng -o docker.png
+
+.. image:: docker_images.gif
+   :alt: Example inheritance graph of Docker images.
+
+.. _cli_import:
+
+``import``
+----------
+
+::
+
+    Usage: docker import URL|- [REPOSITORY [TAG]]
+
+    Create a new filesystem image from the contents of a tarball
+
+At this time, the URL must start with ``http`` and point to a single
+file archive (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) containing a
+root filesystem. If you would like to import from a local directory or
+archive, you can use the ``-`` parameter to take the data from
+standard in.
+
+Examples
+~~~~~~~~
+
+Import from a remote location
+.............................
+
+``$ sudo docker import http://example.com/exampleimage.tgz exampleimagerepo``
+
+Import from a local file
+........................
+
+Import to docker via pipe and standard in
+
+``$ cat exampleimage.tgz | sudo docker import - exampleimagelocal``
+
+Import from a local directory
+.............................
+
+``$ sudo tar -c . | docker import - exampleimagedir``
+
+Note the ``sudo`` in this example -- you must preserve the ownership of
+the files (especially root ownership) when archiving with tar. If you do
+not run tar as root (or via ``sudo``), the ownership might not be
+preserved.
+
+.. _cli_info:
+
+``info``
+--------
+
+::
+
+    Usage: docker info
+
+    Display system-wide information.
+
+.. _cli_insert:
+
+``insert``
+----------
+
+::
+
+    Usage: docker insert IMAGE URL PATH
+
+    Insert a file from URL in the IMAGE at PATH
+
+Examples
+~~~~~~~~
+
+Insert file from github
+.......................
+
+.. code-block:: bash
+
+    $ sudo docker insert 8283e18b24bc https://raw.github.com/metalivedev/django/master/postinstall /tmp/postinstall.sh
+
+.. _cli_inspect:
+
+``inspect``
+-----------
+
+::
+
+    Usage: docker inspect [OPTIONS] CONTAINER
+
+    Return low-level information on a container
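+
+The output is a JSON document describing the container's configuration
+and state. For example (the container ID is a placeholder):
+
+.. code-block:: bash
+
+    $ sudo docker inspect 7bb0e258aefe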
+
+.. _cli_kill:
+
+``kill``
+--------
+
+::
+
+    Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
+
+    Kill a running container
+
+.. _cli_login:
+
+``login``
+---------
+
+::
+
+    Usage: docker login [OPTIONS] [SERVER]
+
+    Register or Login to the docker registry server
+
+    -e="": email
+    -p="": password
+    -u="": username
+
+    If you want to login to a private registry you can
+    specify this by adding the server name.
+
+    example:
+    docker login localhost:8080
+
+
+.. _cli_logs:
+
+``logs``
+--------
+
+
+::
+
+    Usage: docker logs [OPTIONS] CONTAINER
+
+    Fetch the logs of a container
+
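+For instance, you can start a detached container that writes to stdout
+and then retrieve its output (a minimal sketch; the command run inside
+the container is arbitrary):
+
+.. code-block:: bash
+
+    $ ID=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done")
+    $ sudo docker logs $ID
+    hello world
+    hello world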
+
+.. _cli_port:
+
+``port``
+--------
+
+::
+
+    Usage: docker port [OPTIONS] CONTAINER PRIVATE_PORT
+
+    Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
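+
+For example, if a container was started with ``-p 80``, you can look up
+the host port that was allocated for it (the image name here is a
+placeholder):
+
+.. code-block:: bash
+
+    $ ID=$(sudo docker run -d -p 80 yourname/webserver)
+    $ sudo docker port $ID 80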
+
+
+.. _cli_ps:
+
+``ps``
+------
+
+::
+
+    Usage: docker ps [OPTIONS]
+
+    List containers
+
+      -a=false: Show all containers. Only running containers are shown by default.
+      -notrunc=false: Don't truncate output
+      -q=false: Only display numeric IDs
+
+.. _cli_pull:
+
+``pull``
+--------
+
+::
+
+    Usage: docker pull NAME
+
+    Pull an image or a repository from the registry
+
+
+.. _cli_push:
+
+``push``
+--------
+
+::
+
+    Usage: docker push NAME
+
+    Push an image or a repository to the registry
+
+
+.. _cli_restart:
+
+``restart``
+-----------
+
+::
+
+    Usage: docker restart [OPTIONS] NAME
+
+    Restart a running container
+
+.. _cli_rm:
+
+``rm``
+------
+
+::
+
+    Usage: docker rm [OPTIONS] CONTAINER
+
+    Remove one or more containers
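+
+One common pattern (shown here as a sketch) is to remove all stopped
+containers in one go by feeding the IDs from ``docker ps`` to
+``docker rm``:
+
+.. code-block:: bash
+
+    $ sudo docker rm `sudo docker ps -a -q`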
+
+.. _cli_rmi:
+
+``rmi``
+-------
+
+::
+
+    Usage: docker rmi IMAGE [IMAGE...]
+
+    Remove one or more images
+
+.. _cli_run:
+
+``run``
+-------
+
+::
+
+    Usage: docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
+
+    Run a command in a new container
+
+      -a=map[]: Attach to stdin, stdout or stderr.
+      -c=0: CPU shares (relative weight)
+      -cidfile="": Write the container ID to the file
+      -d=false: Detached mode: Run container in the background, print new container id
+      -e=[]: Set environment variables
+      -h="": Container host name
+      -i=false: Keep stdin open even if not attached
+      -privileged=false: Give extended privileges to this container
+      -m=0: Memory limit (in bytes)
+      -n=true: Enable networking for this container
+      -p=[]: Map a network port to the container
+      -rm=false: Automatically remove the container when it exits (incompatible with -d)
+      -t=false: Allocate a pseudo-tty
+      -u="": Username or UID
+      -dns=[]: Set custom dns servers for the container
+      -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]. If "container-dir" is missing, then docker creates a new volume.
+      -volumes-from="": Mount all volumes from the given container.
+      -entrypoint="": Overwrite the default entrypoint set by the image.
+      -w="": Working directory inside the container
+      -lxc-conf=[]: Add custom lxc options -lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
+
+Examples
+~~~~~~~~
+
+.. code-block:: bash
+
+    sudo docker run -cidfile /tmp/docker_test.cid ubuntu echo "test"
+
+This will create a container and print "test" to the console. The
+``cidfile`` flag makes docker attempt to create a new file and write the
+container ID to it. If the file exists already, docker will return an
+error. Docker will close this file when docker run exits.
+
+.. code-block:: bash
+
+   docker run mount -t tmpfs none /var/spool/squid
+
+This will *not* work, because by default, most potentially dangerous
+kernel capabilities are dropped; including ``cap_sys_admin`` (which is
+required to mount filesystems). However, the ``-privileged`` flag will
+allow it to run:
+
+.. code-block:: bash
+
+   docker run -privileged mount -t tmpfs none /var/spool/squid
+
+The ``-privileged`` flag gives *all* capabilities to the container,
+and it also lifts all the limitations enforced by the ``device``
+cgroup controller. In other words, the container can then do almost
+everything that the host can do. This flag exists to allow special
+use-cases, like running Docker within Docker.
+
+.. code-block:: bash
+
+   docker  run -w /path/to/dir/ -i -t  ubuntu pwd
+
+The ``-w`` flag runs the command inside the given directory, here
+``/path/to/dir/``. If the path does not exist, it is created inside the
+container.
+
+.. code-block:: bash
+
+   docker  run  -v `pwd`:`pwd` -w `pwd` -i -t  ubuntu pwd
+
+The ``-v`` flag mounts the current working directory into the container.
+The ``-w`` flag then runs the command inside that directory by changing
+into the path returned by ``pwd``. So this combination executes the
+command in the container, but inside the host's current working
+directory.
+
+.. _cli_search:
+
+``search``
+----------
+
+::
+
+    Usage: docker search TERM
 
 
+    Searches for the TERM parameter on the Docker index and prints out
+    a list of repositories that match.
 
 
-Available Commands
-~~~~~~~~~~~~~~~~~~
+.. _cli_start:
 
 
-.. include:: command/attach.rst
+``start``
+---------
 
 
-.. include:: command/build.rst
+::
 
 
-.. include:: command/commit.rst
+    Usage: docker start [OPTIONS] NAME
 
 
-.. include:: command/cp.rst
+    Start a stopped container
 
 
-.. include:: command/diff.rst
+.. _cli_stop:
 
 
-.. include:: command/events.rst
+``stop``
+--------
 
 
-.. include:: command/export.rst
+::
 
 
-.. include:: command/history.rst
+    Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
 
 
-.. include:: command/images.rst
+    Stop a running container
 
 
-.. include:: command/import.rst
+      -t=10: Number of seconds to wait for the container to stop before killing it.
 
 
-.. include:: command/info.rst
+.. _cli_tag:
 
 
-.. include:: command/insert.rst
+``tag``
+-------
 
 
-.. include:: command/inspect.rst
+::
 
 
-.. include:: command/kill.rst
+    Usage: docker tag [OPTIONS] IMAGE REPOSITORY [TAG]
 
 
-.. include:: command/login.rst
+    Tag an image into a repository
 
 
-.. include:: command/logs.rst
+      -f=false: Force
 
 
-.. include:: command/port.rst
+.. _cli_top:
 
 
-.. include:: command/ps.rst
+``top``
+-------
 
 
-.. include:: command/pull.rst
+::
 
 
-.. include:: command/push.rst
+    Usage: docker top CONTAINER
 
 
-.. include:: command/restart.rst
+    Lookup the running processes of a container
 
 
-.. include:: command/rm.rst
+.. _cli_version:
 
 
-.. include:: command/rmi.rst
+``version``
+-----------
 
 
-.. include:: command/run.rst
+Show the version of the docker client, daemon, and latest released version.
 
 
-.. include:: command/search.rst
 
 
-.. include:: command/start.rst
+.. _cli_wait:
 
 
-.. include:: command/stop.rst
+``wait``
+--------
 
 
-.. include:: command/tag.rst
+::
 
 
-.. include:: command/top.rst
+    Usage: docker wait [OPTIONS] NAME
 
 
-.. include:: command/version.rst
+    Block until a container stops, then print its exit code.
 
 
-.. include:: command/wait.rst
 
 
 
 

+ 0 - 59
docs/sources/commandline/command/attach.rst

@@ -1,59 +0,0 @@
-:title: Attach Command
-:description: Attach to a running container
-:keywords: attach, container, docker, documentation
-
-===========================================
-``attach`` -- Attach to a running container
-===========================================
-
-::
-
-    Usage: docker attach CONTAINER
-
-    Attach to a running container.
-
-You can detach from the container again (and leave it running) with
-``CTRL-c`` (for a quiet exit) or ``CTRL-\`` to get a stacktrace of
-the Docker client when it quits.
-
-To stop a container, use ``docker stop``
-
-To kill the container, use ``docker kill``
- 
-Examples:
----------
-
-.. code-block:: bash
-
-     $ ID=$(sudo docker run -d ubuntu /usr/bin/top -b)
-     $ sudo docker attach $ID
-     top - 02:05:52 up  3:05,  0 users,  load average: 0.01, 0.02, 0.05
-     Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
-     Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
-     Mem:    373572k total,   355560k used,    18012k free,    27872k buffers
-     Swap:   786428k total,        0k used,   786428k free,   221740k cached
-
-     PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
-      1 root      20   0 17200 1116  912 R    0  0.3   0:00.03 top                
-
-      top - 02:05:55 up  3:05,  0 users,  load average: 0.01, 0.02, 0.05
-      Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
-      Cpu(s):  0.0%us,  0.2%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
-      Mem:    373572k total,   355244k used,    18328k free,    27872k buffers
-      Swap:   786428k total,        0k used,   786428k free,   221776k cached
-
-        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
-	    1 root      20   0 17208 1144  932 R    0  0.3   0:00.03 top                
-
-
-      top - 02:05:58 up  3:06,  0 users,  load average: 0.01, 0.02, 0.05
-      Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
-      Cpu(s):  0.2%us,  0.3%sy,  0.0%ni, 99.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
-      Mem:    373572k total,   355780k used,    17792k free,    27880k buffers
-      Swap:   786428k total,        0k used,   786428k free,   221776k cached
-
-      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
-           1 root      20   0 17208 1144  932 R    0  0.3   0:00.03 top                
-     ^C$ 
-     $ sudo docker stop $ID
-

+ 0 - 65
docs/sources/commandline/command/build.rst

@@ -1,65 +0,0 @@
-:title: Build Command
-:description: Build a new image from the Dockerfile passed via stdin
-:keywords: build, docker, container, documentation
-
-================================================
-``build`` -- Build a container from a Dockerfile
-================================================
-
-::
-
-    Usage: docker build [OPTIONS] PATH | URL | -
-    Build a new container image from the source code at PATH
-      -t="": Repository name (and optionally a tag) to be applied to the resulting image in case of success.
-      -q=false: Suppress verbose build output.
-      -no-cache: Do not use the cache when building the image.
-      -rm: Remove intermediate containers after a successful build
-    When a single Dockerfile is given as URL, then no context is set. When a git repository is set as URL, the repository is used as context
-
-
-Examples
---------
-
-.. code-block:: bash
-
-    sudo docker build .
-
-This will read the ``Dockerfile`` from the current directory. It will
-also send any other files and directories found in the current
-directory to the ``docker`` daemon.
-
-The contents of this directory would be used by ``ADD`` commands found
-within the ``Dockerfile``.  This will send a lot of data to the
-``docker`` daemon if the current directory contains a lot of data.  If
-the absolute path is provided instead of ``.`` then only the files and
-directories required by the ADD commands from the ``Dockerfile`` will be
-added to the context and transferred to the ``docker`` daemon.
-
-.. code-block:: bash
-
-   sudo docker build -t vieux/apache:2.0 .
-
-This will build like the previous example, but it will then tag the
-resulting image. The repository name will be ``vieux/apache`` and the
-tag will be ``2.0``
-
-
-.. code-block:: bash
-
-    sudo docker build - < Dockerfile
-
-This will read a ``Dockerfile`` from *stdin* without context. Due to
-the lack of a context, no contents of any local directory will be sent
-to the ``docker`` daemon.  ``ADD`` doesn't work when running in this
-mode because the absence of the context provides no source files to
-copy to the container.
-
-
-.. code-block:: bash
-
-    sudo docker build github.com/creack/docker-firefox
-
-This will clone the Github repository and use it as context. The
-``Dockerfile`` at the root of the repository is used as
-``Dockerfile``.  Note that you can specify an arbitrary git repository
-by using the ``git://`` schema.

+ 0 - 52
docs/sources/commandline/command/commit.rst

@@ -1,52 +0,0 @@
-:title: Commit Command
-:description: Create a new image from a container's changes
-:keywords: commit, docker, container, documentation
-
-===========================================================
-``commit`` -- Create a new image from a container's changes
-===========================================================
-
-::
-
-    Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY [TAG]]
-
-    Create a new image from a container's changes
-
-      -m="": Commit message
-      -author="": Author (eg. "John Hannibal Smith <hannibal@a-team.com>"
-      -run="": Configuration to be applied when the image is launched with `docker run`. 
-               (ex: '{"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}')
-
-Full -run example (multiline is ok within a single quote ``'``)
-
-::
-
-  $ sudo docker commit -run='
-  {
-      "Entrypoint" : null,
-      "Privileged" : false,
-      "User" : "",
-      "VolumesFrom" : "",
-      "Cmd" : ["cat", "-e", "/etc/resolv.conf"],
-      "Dns" : ["8.8.8.8", "8.8.4.4"],
-      "MemorySwap" : 0,
-      "AttachStdin" : false,
-      "AttachStderr" : false,
-      "CpuShares" : 0,
-      "OpenStdin" : false,
-      "Volumes" : null,
-      "Hostname" : "122612f45831",
-      "PortSpecs" : ["22", "80", "443"],
-      "Image" : "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
-      "Tty" : false,
-      "Env" : [
-         "HOME=/",
-         "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
-      ],
-      "StdinOnce" : false,
-      "Domainname" : "",
-      "WorkingDir" : "/",
-      "NetworkDisabled" : false,
-      "Memory" : 0,
-      "AttachStdout" : false
-  }' $CONTAINER_ID

+ 0 - 14
docs/sources/commandline/command/cp.rst

@@ -1,14 +0,0 @@
-:title: Cp Command
-:description: Copy files/folders from the containers filesystem to the host path
-:keywords: cp, docker, container, documentation, copy
-
-============================================================================
-``cp`` -- Copy files/folders from the containers filesystem to the host path
-============================================================================
-
-::
-
-    Usage: docker cp CONTAINER:RESOURCE HOSTPATH
-
-    Copy files/folders from the containers filesystem to the host
-    path.  Paths are relative to the root of the filesystem.

+ 0 - 13
docs/sources/commandline/command/diff.rst

@@ -1,13 +0,0 @@
-:title: Diff Command
-:description: Inspect changes on a container's filesystem
-:keywords: diff, docker, container, documentation
-
-=======================================================
-``diff`` -- Inspect changes on a container's filesystem
-=======================================================
-
-::
-
-    Usage: docker diff CONTAINER [OPTIONS]
-
-    Inspect changes on a container's filesystem

+ 0 - 34
docs/sources/commandline/command/events.rst

@@ -1,34 +0,0 @@
-:title: Events Command
-:description: Get real time events from the server
-:keywords: events, docker, documentation
-
-=================================================================
-``events`` -- Get real time events from the server
-=================================================================
-
-::
-
-    Usage: docker events
-
-    Get real time events from the server
-
-Examples
---------
-
-Starting and stopping a container
-.................................
-
-.. code-block:: bash
-
-    $ sudo docker start 4386fb97867d
-    $ sudo docker stop 4386fb97867d
-
-In another shell
-
-.. code-block:: bash
-    
-    $ sudo docker events
-    [2013-09-03 15:49:26 +0200 CEST] 4386fb97867d: (from 12de384bfb10) start
-    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die
-    [2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop
-

+ 0 - 13
docs/sources/commandline/command/export.rst

@@ -1,13 +0,0 @@
-:title: Export Command
-:description: Export the contents of a filesystem as a tar archive
-:keywords: export, docker, container, documentation
-
-=================================================================
-``export`` -- Stream the contents of a container as a tar archive
-=================================================================
-
-::
-
-    Usage: docker export CONTAINER
-
-    Export the contents of a filesystem as a tar archive

+ 0 - 13
docs/sources/commandline/command/history.rst

@@ -1,13 +0,0 @@
-:title: History Command
-:description: Show the history of an image
-:keywords: history, docker, container, documentation
-
-===========================================
-``history`` -- Show the history of an image
-===========================================
-
-::
-
-    Usage: docker history [OPTIONS] IMAGE
-
-    Show the history of an image

+ 0 - 26
docs/sources/commandline/command/images.rst

@@ -1,26 +0,0 @@
-:title: Images Command
-:description: List images
-:keywords: images, docker, container, documentation
-
-=========================
-``images`` -- List images
-=========================
-
-::
-
-    Usage: docker images [OPTIONS] [NAME]
-
-    List images
-
-      -a=false: show all images
-      -q=false: only show numeric IDs
-      -viz=false: output in graphviz format
-
-Displaying images visually
---------------------------
-
-::
-
-    sudo docker images -viz | dot -Tpng -o docker.png
-
-.. image:: https://docs.docker.io/en/latest/_static/docker_images.gif

+ 0 - 44
docs/sources/commandline/command/import.rst

@@ -1,44 +0,0 @@
-:title: Import Command
-:description: Create a new filesystem image from the contents of a tarball
-:keywords: import, tarball, docker, url, documentation
-
-==========================================================================
-``import`` -- Create a new filesystem image from the contents of a tarball
-==========================================================================
-
-::
-
-    Usage: docker import URL|- [REPOSITORY [TAG]]
-
-    Create a new filesystem image from the contents of a tarball
-
-At this time, the URL must start with ``http`` and point to a single
-file archive (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) containing a
-root filesystem. If you would like to import from a local directory or
-archive, you can use the ``-`` parameter to take the data from
-standard in.
-
-Examples
---------
-
-Import from a remote location
-.............................
-
-``$ sudo docker import http://example.com/exampleimage.tgz exampleimagerepo``
-
-Import from a local file
-........................
-
-Import to docker via pipe and standard in
-
-``$ cat exampleimage.tgz | sudo docker import - exampleimagelocal``
-
-Import from a local directory
-.............................
-
-``$ sudo tar -c . | docker import - exampleimagedir``
-
-Note the ``sudo`` in this example -- you must preserve the ownership
-of the files (especially root ownership) during the archiving with
-tar. If you are not root (or sudo) when you tar, then the ownerships
-might not get preserved.

+ 0 - 13
docs/sources/commandline/command/info.rst

@@ -1,13 +0,0 @@
-:title: Info Command
-:description: Display system-wide information.
-:keywords: info, docker, information, documentation
-
-===========================================
-``info`` -- Display system-wide information
-===========================================
-
-::
-
-    Usage: docker info
-
-    Display system-wide information.

+ 0 - 23
docs/sources/commandline/command/insert.rst

@@ -1,23 +0,0 @@
-:title: Insert Command
-:description: Insert a file in an image
-:keywords: insert, image, docker, documentation
-
-==========================================================================
-``insert`` -- Insert a file in an image
-==========================================================================
-
-::
-
-    Usage: docker insert IMAGE URL PATH
-
-    Insert a file from URL in the IMAGE at PATH
-
-Examples
---------
-
-Insert file from github
-.......................
-
-.. code-block:: bash
-
-    $ sudo docker insert 8283e18b24bc https://raw.github.com/metalivedev/django/master/postinstall /tmp/postinstall.sh

+ 0 - 13
docs/sources/commandline/command/inspect.rst

@@ -1,13 +0,0 @@
-:title: Inspect Command
-:description: Return low-level information on a container
-:keywords: inspect, container, docker, documentation
-
-==========================================================
-``inspect`` -- Return low-level information on a container
-==========================================================
-
-::
-
-    Usage: docker inspect [OPTIONS] CONTAINER
-
-    Return low-level information on a container

+ 0 - 13
docs/sources/commandline/command/kill.rst

@@ -1,13 +0,0 @@
-:title: Kill Command
-:description: Kill a running container
-:keywords: kill, container, docker, documentation
-
-====================================
-``kill`` -- Kill a running container
-====================================
-
-::
-
-    Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
-
-    Kill a running container

+ 0 - 24
docs/sources/commandline/command/login.rst

@@ -1,24 +0,0 @@
-:title: Login Command
-:description: Register or Login to the docker registry server
-:keywords: login, docker, documentation
-
-============================================================
-``login`` -- Register or Login to the docker registry server
-============================================================
-
-::
-
-    Usage: docker login [OPTIONS] [SERVER]
-
-    Register or Login to the docker registry server
-
-    -e="": email
-    -p="": password
-    -u="": username
-
-    If you want to login to a private registry you can
-    specify this by adding the server name.
-
-    example:
-    docker login localhost:8080
-

+ 0 - 13
docs/sources/commandline/command/logs.rst

@@ -1,13 +0,0 @@
-:title: Logs Command
-:description: Fetch the logs of a container
-:keywords: logs, container, docker, documentation
-
-=========================================
-``logs`` -- Fetch the logs of a container
-=========================================
-
-::
-
-    Usage: docker logs [OPTIONS] CONTAINER
-
-    Fetch the logs of a container

+ 0 - 13
docs/sources/commandline/command/port.rst

@@ -1,13 +0,0 @@
-:title: Port Command
-:description: Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
-:keywords: port, docker, container, documentation
-
-=========================================================================
-``port`` -- Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
-=========================================================================
-
-::
-
-    Usage: docker port [OPTIONS] CONTAINER PRIVATE_PORT
-
-    Lookup the public-facing port which is NAT-ed to PRIVATE_PORT

+ 0 - 17
docs/sources/commandline/command/ps.rst

@@ -1,17 +0,0 @@
-:title: Ps Command
-:description: List containers
-:keywords: ps, docker, documentation, container
-
-=========================
-``ps`` -- List containers
-=========================
-
-::
-
-    Usage: docker ps [OPTIONS]
-
-    List containers
-
-      -a=false: Show all containers. Only running containers are shown by default.
-      -notrunc=false: Don't truncate output
-      -q=false: Only display numeric IDs

+ 0 - 13
docs/sources/commandline/command/pull.rst

@@ -1,13 +0,0 @@
-:title: Pull Command
-:description: Pull an image or a repository from the registry
-:keywords: pull, image, repo, repository, documentation, docker
-
-=========================================================================
-``pull`` -- Pull an image or a repository from the docker registry server
-=========================================================================
-
-::
-
-    Usage: docker pull NAME
-
-    Pull an image or a repository from the registry

+ 0 - 13
docs/sources/commandline/command/push.rst

@@ -1,13 +0,0 @@
-:title: Push Command
-:description: Push an image or a repository to the registry
-:keywords: push, docker, image, repository, documentation, repo
-
-=======================================================================
-``push`` -- Push an image or a repository to the docker registry server
-=======================================================================
-
-::
-
-    Usage: docker push NAME
-
-    Push an image or a repository to the registry

+ 0 - 13
docs/sources/commandline/command/restart.rst

@@ -1,13 +0,0 @@
-:title: Restart Command
-:description: Restart a running container
-:keywords: restart, container, docker, documentation
-
-==========================================
-``restart`` -- Restart a running container
-==========================================
-
-::
-
-    Usage: docker restart [OPTIONS] NAME
-
-    Restart a running container

+ 0 - 13
docs/sources/commandline/command/rm.rst

@@ -1,13 +0,0 @@
-:title: Rm Command
-:description: Remove a container
-:keywords: remove, container, docker, documentation, rm
-
-============================
-``rm`` -- Remove a container
-============================
-
-::
-
-    Usage: docker rm [OPTIONS] CONTAINER
-
-    Remove one or more containers

+ 0 - 13
docs/sources/commandline/command/rmi.rst

@@ -1,13 +0,0 @@
-:title: Rmi Command
-:description: Remove an image
-:keywords: rmi, remove, image, docker, documentation
-
-==========================
-``rmi`` -- Remove an image
-==========================
-
-::
-
-    Usage: docker rmi IMAGE [IMAGE...]
-
-    Remove one or more images

+ 0 - 85
docs/sources/commandline/command/run.rst

@@ -1,85 +0,0 @@
-:title: Run Command
-:description: Run a command in a new container
-:keywords: run, container, docker, documentation 
-
-===========================================
-``run`` -- Run a command in a new container
-===========================================
-
-::
-
-    Usage: docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
-
-    Run a command in a new container
-
-      -a=map[]: Attach to stdin, stdout or stderr.
-      -c=0: CPU shares (relative weight)
-      -cidfile="": Write the container ID to the file
-      -d=false: Detached mode: Run container in the background, print new container id
-      -e=[]: Set environment variables
-      -h="": Container host name
-      -i=false: Keep stdin open even if not attached
-      -privileged=false: Give extended privileges to this container
-      -m=0: Memory limit (in bytes)
-      -n=true: Enable networking for this container
-      -p=[]: Map a network port to the container
-      -rm=false: Automatically remove the container when it exits (incompatible with -d)
-      -t=false: Allocate a pseudo-tty
-      -u="": Username or UID
-      -dns=[]: Set custom dns servers for the container
-      -v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]. If "container-dir" is missing, then docker creates a new volume.
-      -volumes-from="": Mount all volumes from the given container.
-      -entrypoint="": Overwrite the default entrypoint set by the image.
-      -w="": Working directory inside the container
-      -lxc-conf=[]: Add custom lxc options -lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
-
-Examples
---------
-
-.. code-block:: bash
-
-    sudo docker run -cidfile /tmp/docker_test.cid ubuntu echo "test"
-
-This will create a container and print "test" to the console. The
-``cidfile`` flag makes docker attempt to create a new file and write the
-container ID to it. If the file exists already, docker will return an
-error. Docker will close this file when docker run exits.
-
-.. code-block:: bash
-
-   docker run mount -t tmpfs none /var/spool/squid
-
-This will *not* work, because by default, most potentially dangerous
-kernel capabilities are dropped; including ``cap_sys_admin`` (which is
-required to mount filesystems). However, the ``-privileged`` flag will
-allow it to run:
-
-.. code-block:: bash
-
-   docker run -privileged mount -t tmpfs none /var/spool/squid
-
-The ``-privileged`` flag gives *all* capabilities to the container,
-and it also lifts all the limitations enforced by the ``device``
-cgroup controller. In other words, the container can then do almost
-everything that the host can do. This flag exists to allow special
-use-cases, like running Docker within Docker.
-
-.. code-block:: bash
-
-   docker  run -w /path/to/dir/ -i -t  ubuntu pwd
-
-The ``-w`` lets the command being executed inside directory given, 
-here /path/to/dir/. If the path does not exists it is created inside the 
-container.
-
-.. code-block:: bash
-
-   docker  run  -v `pwd`:`pwd` -w `pwd` -i -t  ubuntu pwd
-
-The ``-v`` flag mounts the current working directory into the container. 
-The ``-w`` lets the command being executed inside the current 
-working directory, by changing into the directory to the value
-returned by ``pwd``. So this combination executes the command
-using the container, but inside the current working directory.
-
-

+ 0 - 14
docs/sources/commandline/command/search.rst

@@ -1,14 +0,0 @@
-:title: Search Command
-:description: Searches for the TERM parameter on the Docker index and prints out a list of repositories that match.
-:keywords: search, docker, image, documentation 
-
-===================================================================
-``search`` -- Search for an image in the docker index
-===================================================================
-
-::
-
-    Usage: docker search TERM
-
-    Searches for the TERM parameter on the Docker index and prints out
-    a list of repositories that match.

+ 0 - 13
docs/sources/commandline/command/start.rst

@@ -1,13 +0,0 @@
-:title: Start Command
-:description: Start a stopped container
-:keywords: start, docker, container, documentation
-
-======================================
-``start`` -- Start a stopped container
-======================================
-
-::
-
-    Usage: docker start [OPTIONS] NAME
-
-    Start a stopped container

+ 0 - 15
docs/sources/commandline/command/stop.rst

@@ -1,15 +0,0 @@
-:title: Stop Command
-:description: Stop a running container
-:keywords: stop, container, docker, documentation
-
-====================================
-``stop`` -- Stop a running container
-====================================
-
-::
-
-    Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
-
-    Stop a running container
-
-      -t=10: Number of seconds to wait for the container to stop before killing it.

+ 0 - 15
docs/sources/commandline/command/tag.rst

@@ -1,15 +0,0 @@
-:title: Tag Command
-:description: Tag an image into a repository
-:keywords: tag, docker, image, repository, documentation, repo
-
-=========================================
-``tag`` -- Tag an image into a repository
-=========================================
-
-::
-
-    Usage: docker tag [OPTIONS] IMAGE REPOSITORY [TAG]
-
-    Tag an image into a repository
-
-      -f=false: Force

+ 0 - 13
docs/sources/commandline/command/top.rst

@@ -1,13 +0,0 @@
-:title: Top Command
-:description: Lookup the running processes of a container
-:keywords: top, docker, container, documentation
-
-=======================================================
-``top`` -- Lookup the running processes of a container
-=======================================================
-
-::
-
-    Usage: docker top CONTAINER
-
-    Lookup the running processes of a container

+ 0 - 7
docs/sources/commandline/command/version.rst

@@ -1,7 +0,0 @@
-:title: Version Command
-:description: 
-:keywords: version, docker, documentation
-
-==================================================
-``version`` -- Show the docker version information
-==================================================

+ 0 - 13
docs/sources/commandline/command/wait.rst

@@ -1,13 +0,0 @@
-:title: Wait Command
-:description: Block until a container stops, then print its exit code.
-:keywords: wait, docker, container, documentation
-
-===================================================================
-``wait`` -- Block until a container stops, then print its exit code
-===================================================================
-
-::
-
-    Usage: docker wait [OPTIONS] NAME
-
-    Block until a container stops, then print its exit code.

+ 0 - 0
docs/sources/static_files/docker_images.gif → docs/sources/commandline/docker_images.gif


+ 2 - 33
docs/sources/commandline/index.rst

@@ -1,6 +1,6 @@
 :title: Commands
 :title: Commands
-:description: -- todo: change me
-:keywords: todo, commands, command line, help, docker, documentation
+:description: docker command line interface
+:keywords: commands, command line, help, docker
 
 
 
 
 Commands
 Commands
@@ -12,34 +12,3 @@ Contents:
   :maxdepth: 1
   :maxdepth: 1
 
 
   cli
   cli
-  attach  <command/attach>
-  build   <command/build>
-  commit  <command/commit>
-  cp      <command/cp>
-  diff    <command/diff>
-  events  <command/events>
-  export  <command/export>
-  history <command/history>
-  images  <command/images>
-  import  <command/import>
-  info    <command/info>
-  insert  <command/insert>
-  inspect <command/inspect>
-  kill    <command/kill>
-  login   <command/login>
-  logs    <command/logs>
-  port    <command/port>
-  ps      <command/ps>
-  pull    <command/pull>
-  push    <command/push>
-  restart <command/restart>
-  rm      <command/rm>
-  rmi     <command/rmi>
-  run     <command/run>
-  search  <command/search>
-  start   <command/start>
-  stop    <command/stop>
-  tag     <command/tag>
-  top     <command/top>
-  version <command/version>
-  wait    <command/wait>

+ 1 - 1
docs/sources/contributing/devenvironment.rst

@@ -124,7 +124,7 @@ You can run an interactive session in the newly built container:
 
 
 
 
 
 
-.. note:: The binary is availalbe outside the container in the directory  ``./bundles/<version>-dev/binary/``.
+.. note:: The binary is available outside the container in the directory  ``./bundles/<version>-dev/binary/``. You can swap your host docker executable with this binary for live testing - for example, on ubuntu: ``sudo service docker stop ; sudo cp $(which docker) $(which docker)_ ; sudo cp ./bundles/<version>-dev/binary/docker-<version>-dev $(which docker);sudo service docker start``.
 
 
 
 
 **Need More Help?**
 **Need More Help?**

+ 22 - 10
docs/sources/examples/postgresql_service.rst

@@ -43,8 +43,8 @@ Install ``python-software-properties``.
 
 
 .. code-block:: bash
 .. code-block:: bash
 
 
-    apt-get install python-software-properties
-    apt-get install software-properties-common
+    apt-get -y install python-software-properties
+    apt-get -y install software-properties-common
 
 
 Add Pitti's PostgreSQL repository. It contains the most recent stable release
 Add Pitti's PostgreSQL repository. It contains the most recent stable release
 of PostgreSQL i.e. ``9.2``.
 of PostgreSQL i.e. ``9.2``.
@@ -77,7 +77,8 @@ role.
 
 
 Adjust PostgreSQL configuration so that remote connections to the
 Adjust PostgreSQL configuration so that remote connections to the
 database are possible. Make sure that inside
 database are possible. Make sure that inside
-``/etc/postgresql/9.2/main/pg_hba.conf`` you have following line:
+``/etc/postgresql/9.2/main/pg_hba.conf`` you have the following line (you
+will need to install an editor first, e.g. ``apt-get install vim``):
 
 
 .. code-block:: bash
 .. code-block:: bash
 
 
@@ -90,9 +91,17 @@ uncomment ``listen_addresses`` so it is as follows:
 
 
     listen_addresses='*'
     listen_addresses='*'
 
 
-*Note:* this PostgreSQL setup is for development only purposes. Refer
-to PostgreSQL documentation how to fine-tune these settings so that it
-is enough secure.
+.. note::
+
+    This PostgreSQL setup is for development purposes only. Refer to
+    the PostgreSQL documentation for how to fine-tune these settings so
+    that they are secure enough.
+
+Exit.
+
+.. code-block:: bash
+
+    exit
 
 
 Create an image and assign it a name. ``<container_id>`` is in the
 Create an image and assign it a name. ``<container_id>`` is in the
 Bash prompt; you can also locate it using ``docker ps -a``.
 Bash prompt; you can also locate it using ``docker ps -a``.
@@ -111,7 +120,9 @@ Finally, run PostgreSQL server via ``docker``.
         -D /var/lib/postgresql/9.2/main \
         -D /var/lib/postgresql/9.2/main \
         -c config_file=/etc/postgresql/9.2/main/postgresql.conf')
         -c config_file=/etc/postgresql/9.2/main/postgresql.conf')
 
 
-Connect the PostgreSQL server using ``psql``.
+Connect to the PostgreSQL server using ``psql`` (you will need postgres
+installed on the machine; for Ubuntu, use something like
+``sudo apt-get install postgresql``).
 
 
 .. code-block:: bash
 .. code-block:: bash
 
 
@@ -128,7 +139,7 @@ As before, create roles or databases if needed.
     docker=# CREATE DATABASE foo OWNER=docker;
     docker=# CREATE DATABASE foo OWNER=docker;
     CREATE DATABASE
     CREATE DATABASE
 
 
-Additionally, publish there your newly created image on Docker Index.
+Additionally, publish your newly created image on Docker Index.
 
 
 .. code-block:: bash
 .. code-block:: bash
 
 
@@ -149,10 +160,11 @@ container starts.
 
 
 .. code-block:: bash
 .. code-block:: bash
 
 
-    sudo docker commit <container_id> <your username> postgresql -run='{"Cmd": \
+    sudo docker commit -run='{"Cmd": \
       ["/bin/su", "postgres", "-c", "/usr/lib/postgresql/9.2/bin/postgres -D \
       ["/bin/su", "postgres", "-c", "/usr/lib/postgresql/9.2/bin/postgres -D \
       /var/lib/postgresql/9.2/main -c \
       /var/lib/postgresql/9.2/main -c \
-      config_file=/etc/postgresql/9.2/main/postgresql.conf"], "PortSpecs": ["5432"]}'
+      config_file=/etc/postgresql/9.2/main/postgresql.conf"], "PortSpecs": ["5432"]}' \
+      <container_id> <your username>/postgresql
 
 
 From now on, just type ``docker run <your username>/postgresql`` and
 From now on, just type ``docker run <your username>/postgresql`` and
 PostgreSQL should automatically start.
 PostgreSQL should automatically start.

+ 0 - 2
docs/sources/index.rst

@@ -2,8 +2,6 @@
 :description: An overview of the Docker Documentation
 :description: An overview of the Docker Documentation
 :keywords: containers, lxc, concepts, explanation
 :keywords: containers, lxc, concepts, explanation
 
 
-.. image:: https://www.docker.io/static/img/linked/dockerlogo-horizontal.png
-
 Introduction
 Introduction
 ------------
 ------------
 
 

+ 0 - 4
docs/sources/installation/archlinux.rst

@@ -19,10 +19,6 @@ The lxc-docker-git package will build from the current master branch.
 Dependencies
 Dependencies
 ------------
 ------------
 
 
-.. versionchanged:: v0.7
-   This section may need to be updated since Docker no longer depends
-   on AUFS. Please see :ref:`kernel`.
-
 Docker depends on several packages which are specified as dependencies in
 Docker depends on several packages which are specified as dependencies in
 either AUR package.
 either AUR package.
 
 

+ 0 - 5
docs/sources/installation/gentoolinux.rst

@@ -25,11 +25,6 @@ properly installing and using the overlay can be found in `the overlay README
 Installation
 Installation
 ^^^^^^^^^^^^
 ^^^^^^^^^^^^
 
 
-.. versionchanged:: v0.7
-   This section may need to be updated since Docker no longer depends
-   on AUFS. Please see :ref:`kernel`.
-
-
 The package should properly pull in all the necessary dependencies and prompt
 The package should properly pull in all the necessary dependencies and prompt
 for all necessary kernel options.  For the most straightforward installation
 for all necessary kernel options.  For the most straightforward installation
 experience, use ``sys-kernel/aufs-sources`` as your kernel sources.  If you
 experience, use ``sys-kernel/aufs-sources`` as your kernel sources.  If you

+ 10 - 12
docs/sources/installation/kernel.rst

@@ -11,7 +11,7 @@ In short, Docker has the following kernel requirements:
 
 
 - Linux version 3.8 or above.
 - Linux version 3.8 or above.
 
 
-- `Device Mapper support <http://www.sourceware.org/dm/>`_.
+- `AUFS support <http://aufs.sourceforge.net/>`_.
 
 
 - Cgroups and namespaces must be enabled.
 - Cgroups and namespaces must be enabled.
 
 
@@ -48,17 +48,15 @@ detects something older than 3.8.
 See issue `#407 <https://github.com/dotcloud/docker/issues/407>`_ for details.
 See issue `#407 <https://github.com/dotcloud/docker/issues/407>`_ for details.
 
 
 
 
-Device Mapper support
----------------------
+AUFS support
+------------
 
 
-The `Device Mapper <http://www.sourceware.org/dm/>`_ replaces the
-previous Docker dependency on AUFS and has been in the kernel since
-2.6.9, so the device-mapper module is more broadly-supported across
-Linux distributions. Docker uses `thin-provisioning
-<https://github.com/mirrors/linux/blob/master/Documentation/device-mapper/thin-provisioning.txt>`_
-to provide a :ref:`unioning file system <ufs_def>`. If you'd like to
-check for the presence of the device-mapper module, please see the
-`LVM-HOWTO. <http://www.tldp.org/HOWTO/LVM-HOWTO/builddmmod.html>`_
+Docker currently relies on AUFS, a unioning filesystem.
+While AUFS is included in the kernels built by the Debian and Ubuntu
+distributions, it is not part of the standard kernel. This means that if
+you decide to roll your own kernel, you will have to patch your
+kernel tree to add AUFS. The process is documented on the
+`AUFS webpage <http://aufs.sourceforge.net/>`_.
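+
+If you want to check whether the kernel you are currently running
+already has AUFS support, you can look for it in ``/proc/filesystems``
+(a quick sketch; if AUFS is built as a module but not loaded, it may not
+be listed until the module is loaded):
+
+.. code-block:: bash
+
+    grep aufs /proc/filesystems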
 
 
 
 
 Cgroups and namespaces
 Cgroups and namespaces
@@ -71,7 +69,7 @@ to run LXC containers. Note that 2.6.32 has some documented issues regarding
 network namespace setup and teardown; those issues are not a risk if you
 network namespace setup and teardown; those issues are not a risk if you
 run containers in a private environment, but can lead to denial-of-service
 run containers in a private environment, but can lead to denial-of-service
 attacks if you want to run untrusted code in your containers. For more details,
 attacks if you want to run untrusted code in your containers. For more details,
-see `LP#720095 <https://bugs.launchpad.net/ubuntu/+source/linux/+bug/720095>`_.
+see `LP#720095 <https://bugs.launchpad.net/ubuntu/+source/linux/+bug/720095>`_.
 
 
 Kernels 2.6.38, and every version since 3.2, have been deployed successfully
 Kernels 2.6.38, and every version since 3.2, have been deployed successfully
 to run containerized production workloads. Feature-wise, there is no huge
 to run containerized production workloads. Feature-wise, there is no huge

+ 31 - 24
docs/sources/installation/ubuntulinux.rst

@@ -7,6 +7,11 @@
 Ubuntu Linux
 Ubuntu Linux
 ============
 ============
 
 
+.. warning::
+
+   These instructions have changed for 0.6. If you are upgrading from
+   an earlier version, you will need to follow them again.
+
 .. include:: install_header.inc
 .. include:: install_header.inc
 
 
 Right now, the officially supported distribution are:
 Right now, the officially supported distribution are:
@@ -14,10 +19,10 @@ Right now, the officially supported distribution are:
 - :ref:`ubuntu_precise`
 - :ref:`ubuntu_precise`
 - :ref:`ubuntu_raring`
 - :ref:`ubuntu_raring`
 
 
-Docker has the following dependencies (read more in :ref:`kernel`):
+Docker has the following dependencies:
 
 
-* Linux kernel 3.8 
-* Device-mapper module
+* Linux kernel 3.8 (read more about :ref:`kernel`)
+* AUFS file system support (we are working on BTRFS support as an alternative)
 
 
 Please read :ref:`ufw`, if you plan to use `UFW (Uncomplicated
 Please read :ref:`ufw`, if you plan to use `UFW (Uncomplicated
 Firewall) <https://help.ubuntu.com/community/UFW>`_
 Firewall) <https://help.ubuntu.com/community/UFW>`_
@@ -37,12 +42,12 @@ Dependencies
 
 
 Due to a bug in LXC, docker works best on the 3.8 kernel. Precise
 Due to a bug in LXC, docker works best on the 3.8 kernel. Precise
 comes with a 3.2 kernel, so we need to upgrade it. The kernel you'll
 comes with a 3.2 kernel, so we need to upgrade it. The kernel you'll
-install when following these steps comes with device-mapper built
-in. We also include the generic headers to enable packages that depend
-on them, like ZFS and the VirtualBox guest additions. If you didn't
-install the headers for your "precise" kernel, then you can skip these
-headers for the "raring" kernel. But it is safer to include them if
-you're not sure.
+install when following these steps comes with AUFS built in. We also
+include the generic headers to enable packages that depend on them,
+like ZFS and the VirtualBox guest additions. If you didn't install the
+headers for your "precise" kernel, then you can skip these headers for
+the "raring" kernel. But it is safer to include them if you're not
+sure.
 
 
 
 
 .. code-block:: bash
 .. code-block:: bash
@@ -58,7 +63,8 @@ you're not sure.
 Installation
 Installation
 ------------
 ------------
 
 
-.. versionchanged:: v0.6
+.. warning::
+
    These instructions have changed for 0.6. If you are upgrading from
    These instructions have changed for 0.6. If you are upgrading from
    an earlier version, you will need to follow them again.
    an earlier version, you will need to follow them again.
 
 
@@ -100,19 +106,13 @@ Ubuntu Raring 13.04 (64 bit)
 Dependencies
 Dependencies
 ------------
 ------------
 
 
-.. versionchanged:: v0.7
-   Starting in 0.7 you no longer need to add support for AUFS.
-   We now use the device-mapper module instead, and this module
-   is included with kernels since kernel version 2.6
+**AUFS filesystem support**
 
 
-Ubuntu Raring already comes with the 3.8 kernel, so we don't need to
-install it. However, not all systems have AUFS filesystem support
-enabled, so if you're on a Docker version before 0.7, then we need to
-install it.
+Ubuntu Raring already comes with the 3.8 kernel, so we don't need to install it. However, not all systems
+have AUFS filesystem support enabled, so we need to install it.
 
 
 .. code-block:: bash
 .. code-block:: bash
 
 
-   # Only required for versions before v0.7
    sudo apt-get update
    sudo apt-get update
    sudo apt-get install linux-image-extra-`uname -r`
    sudo apt-get install linux-image-extra-`uname -r`
 
 
@@ -122,9 +122,8 @@ Installation
 
 
 Docker is available as a Debian package, which makes installation easy.
 Docker is available as a Debian package, which makes installation easy.
 
 
-*Please note that these instructions have changed for 0.6. If you are
-upgrading from an earlier version, you will need to follow them
-again.*
+*Please note that these instructions have changed for 0.6. If you are upgrading from an earlier version, you will need
+to follow them again.*
 
 
 .. code-block:: bash
 .. code-block:: bash
 
 
@@ -161,8 +160,8 @@ Verify it worked
 Docker and UFW
 Docker and UFW
 ^^^^^^^^^^^^^^
 ^^^^^^^^^^^^^^
 
 
-Docker uses a bridge to manage container networking, and by default
-UFW drops all `forwarding`. A first step is to enable forwarding:
+Docker uses a bridge to manage container networking. By default, UFW
+drops all `forwarding`, so a first step is to enable forwarding:
 
 
 .. code-block:: bash
 .. code-block:: bash
 
 
@@ -180,3 +179,11 @@ Then reload UFW:
    sudo ufw reload
    sudo ufw reload
 
 
 
 
+UFW's default set of rules denies all `incoming` traffic, so if you want
+to be able to reach your containers from another host, you should allow
+incoming connections on the docker port (default 4243):
+
+.. code-block:: bash
+
+   sudo ufw allow 4243/tcp
+

BIN
docs/sources/terms/images/docker-filesystems-busyboxrw.png


BIN
docs/sources/terms/images/docker-filesystems-debian.png


BIN
docs/sources/terms/images/docker-filesystems-debianrw.png


BIN
docs/sources/terms/images/docker-filesystems-generic.png


BIN
docs/sources/terms/images/docker-filesystems-multilayer.png


BIN
docs/sources/terms/images/docker-filesystems-multiroot.png


+ 49 - 183
docs/sources/terms/images/docker-filesystems.svg

@@ -9,15 +9,15 @@
    xmlns="http://www.w3.org/2000/svg"
    xmlns="http://www.w3.org/2000/svg"
    xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
    xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
    xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
    xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
-   inkscape:version="0.48.2 r9819"
-   version="1.1"
-   id="svg2"
-   height="600"
-   width="800"
-   sodipodi:docname="docker-filesystems.svg"
-   inkscape:export-filename="/Users/arothfusz/src/metalivedev/docker/docs/sources/terms/images/docker-filesystems-debianrw.png"
+   inkscape:export-ydpi="90"
    inkscape:export-xdpi="90"
    inkscape:export-xdpi="90"
-   inkscape:export-ydpi="90">
+   inkscape:export-filename="/Users/arothfusz/src/metalivedev/docker/docs/sources/terms/images/docker-filesystems-multiroot.png"
+   sodipodi:docname="docker-filesystems.svg"
+   width="800"
+   height="600"
+   id="svg2"
+   version="1.1"
+   inkscape:version="0.48.2 r9819">
  <sodipodi:namedview
     id="base"
     pagecolor="#ffffff"
@@ -25,15 +25,15 @@
     borderopacity="1.0"
     inkscape:pageopacity="0.0"
     inkscape:pageshadow="2"
-     inkscape:zoom="1.3983333"
-     inkscape:cx="406.90609"
-     inkscape:cy="305.75331"
+     inkscape:zoom="0.82666667"
+     inkscape:cx="236.08871"
+     inkscape:cy="300"
     inkscape:document-units="px"
-     inkscape:current-layer="layer4"
+     inkscape:current-layer="layer2"
     showgrid="false"
     width="800px"
-     inkscape:window-width="1513"
-     inkscape:window-height="1057"
+     inkscape:window-width="1327"
+     inkscape:window-height="714"
     inkscape:window-x="686"
     inkscape:window-y="219"
     inkscape:window-maximized="0"
@@ -98,32 +98,6 @@
  </sodipodi:namedview>
  <defs
     id="defs4">
-    <marker
-       inkscape:stockid="DotS"
-       orient="auto"
-       refY="0.0"
-       refX="0.0"
-       id="DotS"
-       style="overflow:visible">
-      <path
-         id="path8167"
-         d="M -2.5,-1.0 C -2.5,1.7600000 -4.7400000,4.0 -7.5,4.0 C -10.260000,4.0 -12.5,1.7600000 -12.5,-1.0 C -12.5,-3.7600000 -10.260000,-6.0 -7.5,-6.0 C -4.7400000,-6.0 -2.5,-3.7600000 -2.5,-1.0 z "
-         style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;marker-end:none"
-         transform="scale(0.2) translate(7.4, 1)" />
-    </marker>
-    <marker
-       inkscape:stockid="Arrow1Send"
-       orient="auto"
-       refY="0.0"
-       refX="0.0"
-       id="Arrow1Send"
-       style="overflow:visible;">
-      <path
-         id="path8114"
-         d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
-         style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
-         transform="scale(0.2) rotate(180) translate(6,0)" />
-    </marker>
    <inkscape:perspective
       sodipodi:type="inkscape:persp3d"
       inkscape:vp_x="-406.34117 : 522.93291 : 1"
@@ -175,7 +149,7 @@
        <dc:format>image/svg+xml</dc:format>
        <dc:type
           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
+        <dc:title />
      </cc:Work>
    </rdf:RDF>
  </metadata>
@@ -321,146 +295,69 @@
         inkscape:connector-curvature="0" />
    </g>
    <g
-       id="text5882"
-       style="font-size:40px;font-style:normal;font-variant:normal;font-weight:500;font-stretch:normal;text-align:start;line-height:100%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Futura;-inkscape-font-specification:Futura Medium"
-       transform="matrix(0.91165875,0,0,0.91165875,15.751943,37.18624)">
-      <path
-         id="path6606"
-         d="m 140.73219,317.85782 c 0,0 -0.0873,5.55779 -0.0873,5.55779 -0.43141,-1.42414 -0.8212,-2.45406 -1.16959,-3.09096 -0.34195,-0.64508 -0.74791,-1.10195 -1.21764,-1.37091 -0.73495,-0.42077 -1.35274,-0.22179 -1.85425,0.59516 -0.50072,0.8157 -0.76397,2.04625 -0.79035,3.69288 -0.027,1.68388 0.18359,3.20381 0.63233,4.56135 0.45531,1.3634 1.05012,2.26362 1.78525,2.69967 0.46986,0.27871 0.88868,0.30856 1.25622,0.089 0.35613,-0.21432 0.7802,-0.7894 1.27246,-1.72611 0,0 -0.0868,5.52838 -0.0868,5.52838 -0.81383,0.40222 -1.61851,0.36385 -2.41401,-0.1133 -1.30744,-0.78421 -2.38182,-2.33577 -3.22555,-4.6516 -0.84074,-2.3204 -1.2385,-4.81309 -1.19538,-7.48315 0.0431,-2.66913 0.51891,-4.66899 1.42938,-6.00318 0.91312,-1.33803 2.01599,-1.64334 3.31108,-0.91052 0.83569,0.4729 1.6205,1.34754 2.35409,2.62548"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6608"
-         d="m 150.82647,340.96288 c -0.0131,0.87258 -0.0398,1.62863 -0.08,2.2681 -0.0343,0.65574 -0.0795,1.22195 -0.13563,1.69858 -0.16635,1.3031 -0.4689,2.32489 -0.90726,3.06494 -0.82564,1.43101 -1.932,1.71716 -3.31563,0.86419 -1.16285,-0.71687 -2.10629,-1.96199 -2.83232,-3.7338 -0.74705,-1.81878 -1.16014,-3.95285 -1.24083,-6.40479 0,0 2.00268,1.20784 2.00268,1.20784 0.0632,0.94707 0.17997,1.71862 0.35049,2.31467 0.39832,1.39237 0.99665,2.33265 1.79618,2.8202 1.4796,0.90224 2.25122,-0.55518 2.30914,-4.37435 0,0 0.039,-2.57027 0.039,-2.57027 -0.83169,1.24408 -1.7708,1.55059 -2.81606,0.92363 -1.18403,-0.71018 -2.1363,-2.18905 -2.85893,-4.43399 -0.72607,-2.26692 -1.06707,-4.76012 -1.02458,-7.48376 0.0413,-2.64808 0.42623,-4.69673 1.15596,-6.14902 0.78629,-1.53742 1.80238,-1.95542 3.05093,-1.24893 1.09844,0.62158 2.00193,2.00023 2.70881,4.13858 0,0 0.0315,-2.0745 0.0315,-2.0745 0,0 2.03744,1.15472 2.03744,1.15472 0,0 -0.27084,18.01795 -0.27084,18.01795 m -1.82524,-9.899 c 0.0271,-1.78724 -0.17989,-3.34746 -0.62034,-4.67924 -0.44532,-1.35802 -1.02683,-2.24176 -1.74376,-2.65223 -0.76277,-0.43668 -1.37235,-0.18699 -1.8298,0.74724 -0.41344,0.83211 -0.63299,2.08993 -0.65905,3.7745 -0.0257,1.65998 0.15163,3.13736 0.53239,4.43335 0.41543,1.41916 1.00715,2.35725 1.77622,2.81342 0.77111,0.4574 1.39129,0.22902 1.85949,-0.68705 0.43103,-0.82529 0.65946,-2.07564 0.68485,-3.74999"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6610"
-         d="m 153.70796,324.42452 c 0,0 2.07042,1.17341 2.07042,1.17341 0,0 -0.0245,1.66019 -0.0245,1.66019 0.39413,-0.60992 0.74171,-0.98397 1.04259,-1.12171 0.30754,-0.14708 0.66877,-0.10336 1.08391,0.13153 0.55236,0.31258 1.12528,1.01078 1.71893,2.09589 0,0 -1.0196,3.40534 -1.0196,3.40534 -0.38912,-0.81646 -0.77267,-1.33268 -1.1507,-1.54932 -1.13735,-0.65174 -1.73043,0.7925 -1.78256,4.33201 0,0 -0.14218,9.6527 -0.14218,9.6527 0,0 -2.07191,-1.24083 -2.07191,-1.24083 0,0 0.27555,-18.53921 0.27555,-18.53921"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6612"
-         d="m 160.27101,337.45789 c 0.0391,-2.70842 0.54498,-4.74049 1.51987,-6.09991 0.97778,-1.36344 2.15372,-1.65855 3.5304,-0.87956 1.38947,0.78627 2.56023,2.42413 3.50989,4.91738 0.94008,2.49467 1.39183,5.1616 1.35276,7.99504 -0.0395,2.86061 -0.56066,4.96949 -1.56131,6.32281 -1.0039,1.33239 -2.21202,1.57282 -3.62161,0.72734 -1.39028,-0.83389 -2.53635,-2.49756 -3.4408,-4.98756 -0.90174,-2.45583 -1.33071,-5.11919 -1.2892,-7.99554 m 2.16535,1.33944 c -0.0269,1.88171 0.19777,3.51537 0.67464,4.90251 0.4902,1.41007 1.15096,2.36244 1.98335,2.85618 0.84109,0.49891 1.5228,0.34654 2.04392,-0.45912 0.52192,-0.80691 0.79607,-2.13484 0.82183,-3.9825 0.0257,-1.84723 -0.20719,-3.46972 -0.69809,-4.86589 -0.49622,-1.41023 -1.16202,-2.35371 -1.99632,-2.83141 -0.8193,-0.46907 -1.48923,-0.29318 -2.01085,0.52575 -0.52082,0.81767 -0.79344,2.10208 -0.81848,3.85448"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6614"
-         d="m 174.85055,336.40707 c 0,0 -0.14891,11.02965 -0.14891,11.02965 -0.043,3.18616 0.56349,5.15225 1.82345,5.89961 1.26523,0.75049 1.92109,-0.47689 1.96361,-3.68352 0,0 0.14715,-11.10042 0.14715,-11.10042 0,0 2.25188,1.27625 2.25188,1.27625 0,0 -0.1474,11.24227 -0.1474,11.24227 -0.0204,1.55606 -0.13518,2.84223 -0.34422,3.85793 -0.20068,0.89949 -0.53701,1.61728 -1.00855,2.15311 -0.77732,0.85998 -1.74991,0.93856 -2.91578,0.23927 -1.15481,-0.69267 -2.09623,-1.91466 -2.82609,-3.66448 -0.44717,-1.07494 -0.75841,-2.16712 -0.93417,-3.27725 -0.17053,-1.00046 -0.24376,-2.37932 -0.21978,-4.13682 0,0 0.15125,-11.08673 0.15125,-11.08673 0,0 2.20756,1.25113 2.20756,1.25113"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6616"
-         d="m 185.65723,373.02651 c 0,0 -2.29305,-1.41345 -2.29305,-1.41345 0,0 0.38988,-30.15992 0.38988,-30.15992 0,0 2.29008,1.2979 2.29008,1.2979 0,0 -0.0272,2.12995 -0.0272,2.12995 0.92995,-1.26701 1.97448,-1.57442 3.13494,-0.91778 1.38588,0.78424 2.51926,2.43305 3.39754,4.94975 0.89517,2.51915 1.32518,5.30287 1.28785,8.34596 -0.0365,2.97631 -0.52463,5.1866 -1.46238,6.6271 -0.92815,1.42724 -2.07686,1.72823 -3.44344,0.90856 -1.17539,-0.70501 -2.21154,-2.25474 -3.10963,-4.64542 0,0 -0.16457,12.87735 -0.16457,12.87735 m 5.77736,-17.14782 c 0.0235,-1.89456 -0.22075,-3.58905 -0.73212,-5.08175 -0.51718,-1.50737 -1.17874,-2.49094 -1.98384,-2.95192 -0.85099,-0.48721 -1.54632,-0.30376 -2.0871,0.54835 -0.53995,0.85081 -0.82161,2.21398 -0.8456,4.09085 -0.0235,1.8371 0.21948,3.50705 0.72966,5.01163 0.50462,1.47645 1.18211,2.46768 2.03354,2.97271 0.80551,0.47781 1.47892,0.2782 2.01926,-0.6008 0.55503,-0.8722 0.84399,-2.20234 0.8662,-3.98907"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6618"
-         d="m 202.46252,355.50616 c 0,0 -2.02148,0.89514 -2.02148,0.89514 -0.29922,-1.41522 -0.68019,-2.25477 -1.14267,-2.51956 -0.22034,-0.12613 -0.41028,-0.0902 -0.56989,0.10775 -0.15938,0.18417 -0.24166,0.49511 -0.2469,0.93289 -0.009,0.76622 0.4431,1.79204 1.35891,3.08034 1.26537,1.79779 2.1171,3.27615 2.55169,4.43073 0.43524,1.15634 0.64426,2.49403 0.62654,4.01197 -0.0227,1.94615 -0.41602,3.35262 -1.17861,4.21763 -0.73862,0.80711 -1.61833,0.90353 -2.63781,0.29204 -1.73902,-1.04308 -2.94495,-3.41075 -3.62361,-7.09984 0,0 2.05404,-0.62372 2.05404,-0.62372 0.27175,1.12496 0.48061,1.86197 0.62642,2.21031 0.28474,0.69301 0.62973,1.15991 1.03516,1.40039 0.81243,0.48191 1.2279,0.008 1.24483,-1.42137 0.01,-0.82472 -0.29531,-1.77886 -0.91422,-2.8608 -0.23929,-0.37501 -0.47847,-0.74288 -0.71753,-1.1036 -0.23888,-0.36043 -0.48103,-0.72949 -0.72645,-1.10718 -0.68603,-1.0613 -1.16577,-2.00109 -1.44019,-2.82057 -0.34953,-1.04091 -0.51619,-2.21537 -0.50033,-3.52419 0.021,-1.73109 0.34475,-2.98724 0.97217,-3.7698 0.64272,-0.77593 1.41455,-0.9098 2.31657,-0.3994 1.33289,0.75425 2.31172,2.64351 2.93336,5.67084"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6620"
-         d="m 206.7322,368.8215 c 0,0 2.17482,2.92292 2.17482,2.92292 0,0 -3.25769,10.03818 -3.25769,10.03818 0,0 -1.63779,-2.31835 -1.63779,-2.31835 0,0 2.72066,-10.64275 2.72066,-10.64275"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6622"
-         d="m 219.42398,361.66906 c 0,0 2.57673,1.46036 2.57673,1.46036 0,0 -0.0201,1.91673 -0.0201,1.91673 0.91913,-1.14976 1.94712,-1.40461 3.08517,-0.76065 1.31212,0.7425 2.33071,2.0892 3.05315,4.04145 0.62517,1.66947 0.92345,3.98491 0.89383,6.94449 0,0 -0.12896,12.8786 -0.12896,12.8786 0,0 -2.63579,-1.57852 -2.63579,-1.57852 0,0 0.11901,-11.68793 0.11901,-11.68793 0.021,-2.06076 -0.11926,-3.57344 -0.42047,-4.53768 -0.29303,-0.97307 -0.83657,-1.68661 -1.62938,-2.14052 -0.85992,-0.4923 -1.47441,-0.3147 -1.8449,0.5312 -0.36236,0.8353 -0.55694,2.54574 -0.58402,5.13257 0,0 -0.10498,10.02852 -0.10498,10.02852 0,0 -2.57927,-1.54467 -2.57927,-1.54467 0,0 0.21995,-20.68395 0.21995,-20.68395"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6624"
-         d="m 241.04019,373.92004 c 0,0 2.7519,1.55964 2.7519,1.55964 0,0 -0.19485,21.48192 -0.19485,21.48192 0,0 -2.75486,-1.64983 -2.75486,-1.64983 0,0 0.0207,-2.24294 0.0207,-2.24294 -1.14067,1.23372 -2.35327,1.46311 -3.63673,0.69329 -1.61336,-0.96771 -2.93104,-2.82378 -3.95624,-5.56468 -1.0134,-2.77024 -1.50298,-5.76236 -1.47132,-8.98244 0.0311,-3.16101 0.57268,-5.49827 1.62704,-7.016 1.05738,-1.52205 2.37014,-1.84168 3.94157,-0.95249 1.36204,0.77074 2.57909,2.48663 3.64983,5.15233 0,0 0.0229,-2.4788 0.0229,-2.4788 m -6.51523,6.88348 c -0.0196,2.02998 0.26215,3.85849 0.84614,5.48754 0.60094,1.65624 1.36906,2.76273 2.30543,3.31815 1.00329,0.59513 1.82204,0.44851 2.45479,-0.44233 0.63421,-0.93629 0.96099,-2.40859 0.97953,-4.41531 0.0185,-2.0062 -0.27757,-3.84035 -0.88736,-5.50031 -0.60884,-1.62738 -1.4072,-2.72292 -2.39382,-3.28783 -0.92783,-0.5312 -1.70779,-0.33537 -2.34104,0.58521 -0.62437,0.93812 -0.94533,2.3559 -0.96367,4.25488"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6626"
-         d="m 247.28249,377.45786 c 0,0 2.79111,1.58186 2.79111,1.58186 0,0 -0.0173,2.0015 -0.0173,2.0015 0.54896,-0.70124 1.01679,-1.12175 1.40319,-1.26085 0.41199,-0.13991 0.92736,-0.035 1.5466,0.31541 1.38478,0.78361 2.47507,2.48167 3.2682,5.09697 0.91546,-1.64577 2.14323,-2.03484 3.68688,-1.16137 2.82364,1.59782 4.22073,5.4225 4.17382,11.46769 0,0 -0.10635,13.69844 -0.10635,13.69844 0,0 -2.92456,-1.75145 -2.92456,-1.75145 0,0 0.0975,-12.25817 0.0975,-12.25817 0.0168,-2.11284 -0.11722,-3.69057 -0.40182,-4.73289 -0.2928,-1.06127 -0.78133,-1.78748 -1.46479,-2.17881 -0.79285,-0.45392 -1.37569,-0.25984 -1.74962,0.58089 -0.36513,0.84484 -0.55712,2.42916 -0.57621,4.75404 0,0 -0.0931,11.32679 -0.0931,11.32679 0,0 -2.86491,-1.71574 -2.86491,-1.71574 0,0 0.10135,-12.06241 0.10135,-12.06241 0.0326,-3.88232 -0.5811,-6.18348 -1.83765,-6.90292 -0.79343,-0.45426 -1.37725,-0.25868 -1.75259,0.58536 -0.36667,0.84805 -0.55973,2.40692 -0.5794,4.67769 0,0 -0.0971,11.20768 -0.0971,11.20768 0,0 -2.79419,-1.67338 -2.79419,-1.67338 0,0 0.19107,-21.59633 0.19107,-21.59633"
-         inkscape:connector-curvature="0" />
+       id="text3655"
+       style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Arial;-inkscape-font-specification:Arial"
+       transform="matrix(0.67123869,0,0,0.67123869,53.68199,126.56876)">
      <path
-         id="path6628"
-         d="m 279.65651,408.40996 c 0,0 -9.35262,-5.46708 -9.35262,-5.46708 0.0658,1.88322 0.39914,3.5481 1.00079,4.99605 0.60288,1.43532 1.3839,2.43821 2.34415,3.00779 0.74867,0.44409 1.3715,0.5038 1.86778,0.1781 0.48795,-0.33142 1.04752,-1.14891 1.67897,-2.45366 0,0 2.55498,3.97748 2.55498,3.97748 -0.40621,0.92857 -0.83351,1.67996 -1.28179,2.25436 -0.44769,0.55823 -0.92541,0.95765 -1.43302,1.1985 -0.50684,0.22504 -1.05235,0.28654 -1.63634,0.18485 -0.58306,-0.10154 -1.21336,-0.35566 -1.89062,-0.76188 -1.93548,-1.16091 -3.47356,-3.1605 -4.61915,-5.9959 -1.14136,-2.84047 -1.69708,-6.03236 -1.67022,-9.58212 0.0266,-3.51768 0.60754,-6.04329 1.74528,-7.58141 1.14969,-1.50706 2.66598,-1.73029 4.55393,-0.662 1.91602,1.08423 3.4311,3.00239 4.54025,5.75717 1.10404,2.74368 1.64484,5.98707 1.61956,9.72443 0,0 -0.0219,1.22533 -0.0219,1.22533 m -3.09311,-6.08665 c -0.40196,-3.01984 -1.40751,-4.98881 -3.01161,-5.90725 -0.36437,-0.2086 -0.70688,-0.30457 -1.02762,-0.28808 -0.32036,10e-4 -0.61454,0.10951 -0.88262,0.32487 -0.25895,0.205 -0.48296,0.50671 -0.67208,0.9051 -0.18905,0.39823 -0.33441,0.89057 -0.43613,1.47709 0,0 6.03006,3.48827 6.03006,3.48827"
+         id="path3662"
+         d="m 132.8684,367.78607 c 0,0 0.71572,-54.35962 0.71572,-54.35962 0,0 2.66242,1.51122 2.66242,1.51122 0,0 -0.71153,54.62187 -0.71153,54.62187 0,0 -2.66661,-1.77347 -2.66661,-1.77347"
         inkscape:connector-curvature="0" />
      <path
-         id="path6630"
-         d="m 290.89618,406.10951 c 0,0 -2.60986,0.84507 -2.60986,0.84507 -0.39883,-1.64607 -0.89865,-2.6404 -1.49908,-2.98417 -0.28602,-0.16374 -0.53138,-0.13935 -0.73616,0.073 -0.20459,0.19661 -0.30843,0.54502 -0.31163,1.04533 -0.006,0.87567 0.58768,2.08933 1.78293,3.64497 1.65243,2.17083 2.76745,3.93987 3.33988,5.30109 0.57339,1.3636 0.85526,2.91459 0.84485,4.65146 -0.0134,2.22693 -0.51132,3.80158 -1.49202,4.72174 -0.94985,0.85582 -2.08785,0.8847 -3.41208,0.0904 -2.25746,-1.35404 -3.83527,-4.17302 -4.74203,-8.45342 0,0 2.64901,-0.5274 2.64901,-0.5274 0.36052,1.31152 0.63659,2.17366 0.82798,2.58546 0.37389,0.81887 0.82391,1.38461 1.35034,1.69686 1.05517,0.6259 1.589,0.12188 1.59915,-1.51318 0.006,-0.94315 -0.39683,-2.06297 -1.20658,-3.35724 -0.31265,-0.45076 -0.62507,-0.89328 -0.93724,-1.32757 -0.3119,-0.43388 -0.62807,-0.8779 -0.94849,-1.33201 -0.89555,-1.27548 -1.52325,-2.39318 -1.88454,-3.35478 -0.46008,-1.22133 -0.68496,-2.57895 -0.67514,-4.07396 0.013,-1.97731 0.421,-3.38511 1.2253,-4.22509 0.82431,-0.83127 1.82106,-0.91717 2.9918,-0.25471 1.73084,0.97944 3.01357,3.22759 3.84361,6.74814"
+         id="path3664"
+         d="m 137.92667,371.15014 c 0,0 6.14809,-16.99741 6.14809,-16.99741 0,0 -5.19986,-22.51479 -5.19986,-22.51479 0,0 3.39897,2.02031 3.39897,2.02031 0,0 2.36954,10.8944 2.36954,10.8944 0.44814,2.07993 0.80843,3.81608 1.08051,5.20679 0.47284,-1.39022 0.90795,-2.61465 1.30519,-3.67276 0,0 2.89882,-7.87895 2.89882,-7.87895 0,0 3.37501,2.00607 3.37501,2.00607 0,0 -5.97372,15.60005 -5.97372,15.60005 0,0 5.92178,25.52797 5.92178,25.52797 0,0 -3.4783,-2.3133 -3.4783,-2.3133 0,0 -3.23409,-14.8189 -3.23409,-14.8189 0,0 -0.8528,-3.95585 -0.8528,-3.95585 0,0 -4.46772,13.08538 -4.46772,13.08538 0,0 -3.29142,-2.18901 -3.29142,-2.18901"
         inkscape:connector-curvature="0" />
      <path
-         id="path6632"
-         d="m 297.65906,442.06529 c 0,0 -3.20358,-1.97471 -3.20358,-1.97471 0,0 0.20811,-35.77949 0.20811,-35.77949 0,0 3.19718,1.812 3.19718,1.812 0,0 -0.0142,2.52779 -0.0142,2.52779 1.27959,-1.39527 2.73004,-1.63675 4.35368,-0.71802 1.94048,1.09806 3.53992,3.19592 4.79412,6.29791 1.27855,3.10878 1.91078,6.47749 1.89303,10.09851 -0.0174,3.54182 -0.67438,6.1143 -1.9679,7.71245 -1.27962,1.58277 -2.87888,1.79589 -4.7933,0.64761 -1.64534,-0.98688 -3.10516,-2.96019 -4.38135,-5.91477 0,0 -0.0858,15.29072 -0.0858,15.29072 m 7.85777,-19.67674 c 0.0115,-2.25271 -0.34836,-4.29981 -1.07845,-6.13877 -0.7381,-1.8565 -1.67131,-3.10657 -2.79825,-3.75181 -1.19059,-0.68165 -2.15647,-0.54701 -2.89952,0.40095 -0.74169,0.94624 -1.11828,2.53234 -1.13081,4.76009 -0.0123,2.18065 0.34427,4.19544 1.07079,6.04698 0.71872,1.81803 1.67298,3.08105 2.86454,3.78783 1.12784,0.66901 2.06441,0.51547 2.80811,-0.4635 0.76448,-0.96932 1.1527,-2.51718 1.16359,-4.64177"
+         id="path3666"
+         d="m 166.82131,374.91047 c 0,0 2.93572,2.79373 2.93572,2.79373 -0.37761,4.62343 -1.24922,7.86985 -2.61073,9.73548 -1.34456,1.83887 -2.96947,2.11901 -4.86973,0.85217 -2.3637,-1.5758 -4.23108,-4.67579 -5.61088,-9.29124 -1.36166,-4.61024 -1.99867,-10.32878 -1.91636,-17.16995 0.0532,-4.42099 0.40174,-8.10179 1.04648,-11.0477 0.64585,-2.95094 1.59765,-4.88106 2.85839,-5.78928 1.27692,-0.93132 2.65738,-0.95975 4.14303,-0.0791 1.88674,1.11849 3.42575,3.18947 4.61182,6.21733 1.19146,3.01472 1.93755,6.74983 2.23475,11.20086 0,0 -2.92082,-0.72724 -2.92082,-0.72724 -0.24353,-2.97398 -0.70922,-5.3811 -1.39599,-7.22057 -0.67412,-1.8282 -1.50208,-3.03683 -2.48268,-3.62779 -1.47568,-0.88924 -2.68418,-0.33926 -3.629,1.6424 -0.94184,1.95024 -1.44412,5.64763 -1.50886,11.09862 -0.0657,5.53171 0.32577,9.83698 1.17652,12.92095 0.85352,3.09406 1.99526,5.11378 3.42833,6.05501 1.15583,0.75914 2.13411,0.54393 2.93293,-0.65009 0.80075,-1.19694 1.32691,-3.50191 1.57708,-6.91359"
         inkscape:connector-curvature="0" />
      <path
-         id="path6634"
-         d="m 323.2836,420.53147 c 0,0 3.47131,1.96737 3.47131,1.96737 0,0 -0.0913,24.20953 -0.0913,24.20953 0,0 -3.47621,-2.08183 -3.47621,-2.08183 0,0 0.0101,-2.52677 0.0101,-2.52677 -1.42466,1.29952 -2.94623,1.46107 -4.5632,0.49121 -2.0315,-1.21851 -3.69729,-3.41306 -5.00188,-6.57948 -1.28946,-3.19719 -1.92379,-6.60219 -1.90669,-10.2229 0.0168,-3.55411 0.6789,-6.14271 1.98946,-7.77092 1.31476,-1.63341 2.9588,-1.89468 4.93677,-0.77545 1.71533,0.97067 3.25611,2.9957 4.62048,6.08088 0,0 0.0111,-2.79164 0.0111,-2.79164 m -8.13002,7.25454 c -0.0103,2.28371 0.35592,4.36415 1.09987,6.24394 0.76557,1.91218 1.73725,3.21882 2.91655,3.91833 1.26404,0.74979 2.29142,0.64933 3.08006,-0.30463 0.79033,-1.00527 1.19056,-2.63908 1.19956,-4.89951 0.009,-2.25977 -0.37624,-4.34987 -1.15431,-6.26753 -0.77646,-1.8804 -1.78697,-3.17567 -3.02974,-3.88724 -1.16829,-0.66888 -2.14556,-0.50772 -2.93343,0.4805 -0.77645,1.00843 -1.16894,2.57986 -1.17856,4.71614"
+         id="path3668"
+         d="m 172.97661,394.46064 c 0,0 0.0905,-8.17492 0.0905,-8.17492 0,0 3.48861,2.27245 3.48861,2.27245 0,0 -0.0895,8.22327 -0.0895,8.22327 -0.0329,3.02363 -0.28765,5.30542 -0.76375,6.84314 -0.47577,1.56243 -1.21303,2.51325 -2.20987,2.85324 0,0 -0.81311,-3.65386 -0.81311,-3.65386 0.65091,-0.22881 1.13685,-0.89297 1.45702,-1.99285 0.32015,-1.07418 0.51068,-2.8142 0.57137,-5.21909 0,0 -1.73124,-1.15138 -1.73124,-1.15138"
         inkscape:connector-curvature="0" />
      <path
-         id="path6636"
-         d="m 343.1806,432.87546 c 0,0 -0.0207,7.59105 -0.0207,7.59105 -0.82209,-2.06753 -1.55806,-3.5828 -2.2085,-4.54818 -0.63859,-0.97439 -1.38872,-1.70786 -2.24976,-2.20087 -1.34618,-0.77073 -2.46323,-0.66272 -3.35327,0.32045 -0.88816,0.98111 -1.33524,2.59173 -1.34264,4.83391 -0.008,2.29306 0.41155,4.42679 1.25895,6.40429 0.86,1.98854 1.96259,3.38313 3.30976,4.18222 0.86167,0.51112 1.62477,0.66763 2.28865,0.46844 0.64349,-0.19527 1.40214,-0.86623 2.27653,-2.01442 0,0 -0.0206,7.55467 -0.0206,7.55467 -1.47401,0.32368 -2.94087,0.0461 -4.40046,-0.82938 -2.39574,-1.43698 -4.38377,-3.85601 -5.96995,-7.25233 -1.57902,-3.3978 -2.35945,-6.90733 -2.34662,-10.53875 0.0128,-3.62973 0.82463,-6.23223 2.44014,-7.81337 1.62183,-1.58731 3.61521,-1.71555 5.98635,-0.37384 1.53207,0.86696 2.98302,2.2714 4.35206,4.21611"
+         id="path3670"
+         d="m 204.77784,410.06983 c -1.27022,1.55778 -2.48568,2.44071 -3.64678,2.65261 -1.1447,0.21934 -2.36657,-0.10529 -3.66459,-0.97064 -2.13127,-1.42084 -3.74779,-3.67649 -4.85717,-6.76514 -1.10483,-3.1041 -1.63719,-6.47275 -1.60031,-10.11391 0.0216,-2.13477 0.25062,-3.94364 0.6874,-5.42825 0.44957,-1.50612 1.02226,-2.57799 1.71876,-3.21526 0.71002,-0.63098 1.50367,-0.94896 2.38159,-0.95288 0.64759,0.017 1.6255,0.25355 2.93681,0.71095 2.68835,0.95136 4.68535,1.32634 5.97773,1.11825 0.0222,-1.02578 0.0346,-1.67832 0.0372,-1.95765 0.0289,-3.07178 -0.26872,-5.42898 -0.8919,-7.06976 -0.84101,-2.21749 -2.10184,-3.83086 -3.77761,-4.84085 -1.55688,-0.93829 -2.71034,-1.00947 -3.46489,-0.21839 -0.74047,0.76925 -1.30109,2.5996 -1.68287,5.49061 0,0 -3.16708,-2.94172 -3.16708,-2.94172 0.31864,-2.91383 0.81734,-5.11515 1.49696,-6.60484 0.6812,-1.51989 1.65517,-2.41342 2.92464,-2.67921 1.27473,-0.29431 2.75127,0.0544 4.43259,1.05105 1.67794,0.99472 3.04366,2.25211 4.09313,3.7721 1.05306,1.52531 1.82526,3.12483 2.31452,4.79681 0.49033,1.64692 0.82696,3.5698 1.00937,5.76792 0.10151,1.36012 0.13673,3.72492 0.1056,7.09479 0,0 -0.0935,10.11679 -0.0935,10.11679 -0.0653,7.05995 -0.0372,11.58025 0.0844,13.55797 0.13448,1.95911 0.40887,3.94126 0.8236,5.94773 0,0 -3.55349,-2.3633 -3.55349,-2.3633 -0.33594,-1.80359 -0.5439,-3.78856 -0.62416,-5.9558 m -0.12224,-17.05427 c -1.23154,0.34731 -3.06331,0.14247 -5.48491,-0.60924 -1.36335,-0.41924 -2.32581,-0.53009 -2.89103,-0.33412 -0.56424,0.19568 -1.00286,0.73389 -1.31639,1.61435 -0.31298,0.85222 -0.4758,1.92485 -0.48867,3.21853 -0.0197,1.98221 0.29058,3.84732 0.93197,5.59804 0.65498,1.76261 1.62279,3.0659 2.90625,3.90947 1.27641,0.83893 2.42209,0.96176 3.43544,0.36456 1.01669,-0.62694 1.7731,-1.89094 2.26739,-3.79238 0.3778,-1.47261 0.58252,-3.87376 0.61388,-7.20158 0,0 0.0261,-2.76763 0.0261,-2.76763"
         inkscape:connector-curvature="0" />
      <path
-         id="path6638"
-         d="m 361.38277,456.18302 c 0,0 -11.61841,-6.79154 -11.61841,-6.79154 0.0937,2.10935 0.5168,3.99418 1.27044,5.65627 0.75525,1.64844 1.72826,2.82675 2.92056,3.53398 0.92985,0.55156 1.70154,0.66407 2.31408,0.33617 0.60228,-0.33496 1.29017,-1.20853 2.06401,-2.62217 0,0 3.19414,4.63876 3.19414,4.63876 -0.49766,1.00943 -1.02257,1.81886 -1.57461,2.42855 -0.55134,0.59157 -1.14088,1.00311 -1.76844,1.23492 -0.6266,0.21415 -1.3021,0.24242 -2.02622,0.0853 -0.72285,-0.15689 -1.50505,-0.48791 -2.34622,-0.99245 -2.40289,-1.44126 -4.31837,-3.78974 -5.75319,-7.04227 -1.42904,-3.25657 -2.13636,-6.86223 -2.12627,-10.82528 0.01,-3.92704 0.71123,-6.70704 2.10712,-8.34548 1.4113,-1.60346 3.28493,-1.74728 5.62792,-0.42151 2.37927,1.34637 4.26874,3.59818 5.66153,6.75839 1.38693,3.14867 2.07913,6.8173 2.07261,10.99851 0,0 -0.019,1.36991 -0.019,1.36991 m -3.87464,-7.03474 c -0.51794,-3.40669 -1.77639,-5.68028 -3.76842,-6.82083 -0.45233,-0.25896 -0.87683,-0.39044 -1.27361,-0.39467 -0.39636,-0.0213 -0.7596,0.079 -1.08982,0.30074 -0.31897,0.21081 -0.59406,0.53216 -0.82535,0.96399 -0.23118,0.43165 -0.40773,0.97154 -0.52969,1.61975 0,0 7.48689,4.33102 7.48689,4.33102"
+         id="path3672"
+         d="m 226.91498,430.33317 c 0,0 0.056,-6.79135 0.056,-6.79135 -1.69979,4.12585 -3.95958,5.23997 -6.76691,3.36841 -1.23125,-0.82083 -2.37518,-2.1017 -3.4326,-3.84047 -1.04088,-1.72429 -1.81148,-3.52427 -2.31374,-5.40182 -0.48827,-1.89422 -0.82487,-4.02954 -1.01034,-6.40682 -0.12775,-1.59592 -0.17698,-4.02489 -0.14772,-7.28678 0,0 0.25063,-27.95019 0.25063,-27.95019 0,0 3.47921,2.068 3.47921,2.068 0,0 -0.22098,25.15376 -0.22098,25.15376 -0.0353,4.02044 0.0122,6.77614 0.14272,8.26649 0.20297,2.17003 0.65699,4.07445 1.36316,5.71471 0.70804,1.61546 1.59303,2.77268 2.65633,3.47053 1.06676,0.70016 2.07587,0.76801 3.02668,0.20066 0.95364,-0.59783 1.63329,-1.79901 2.03728,-3.60358 0.41794,-1.82668 0.64337,-4.71043 0.67595,-8.64861 0,0 0.20406,-24.67831 0.20406,-24.67831 0,0 3.62583,2.15515 3.62583,2.15515 0,0 -0.37466,46.37229 -0.37466,46.37229 0,0 -3.25092,-2.16207 -3.25092,-2.16207"
         inkscape:connector-curvature="0" />
      <path
-         id="path6640"
-         d="m 375.33222,454.42534 c 0,0 -3.24135,0.75716 -3.24135,0.75716 -0.50708,-1.87388 -1.13523,-3.02442 -1.88397,-3.45311 -0.35661,-0.20414 -0.66139,-0.19456 -0.91447,0.0285 -0.25291,0.20547 -0.37962,0.58828 -0.38022,1.14856 -0.001,0.98062 0.74437,2.3836 2.24068,4.21417 2.06978,2.55457 3.46913,4.62025 4.19079,6.18906 0.72304,1.57188 1.0845,3.33245 1.08333,5.2798 -0.002,2.49684 -0.61084,4.22538 -1.82536,5.18303 -1.17621,0.88754 -2.59171,0.83341 -4.24376,-0.15749 -2.81466,-1.68825 -4.79321,-4.96501 -5.94752,-9.82659 0,0 3.28603,-0.39417 3.28603,-0.39417 0.4567,1.49615 0.80551,2.48269 1.04611,2.9584 0.47014,0.94553 1.03329,1.61321 1.68981,2.00263 1.31626,0.78076 1.97681,0.25583 1.97841,-1.5764 9e-4,-1.05686 -0.50748,-2.34215 -1.52312,-3.85296 -0.39175,-0.52835 -0.78309,-1.04739 -1.17404,-1.55713 -0.39058,-0.50922 -0.78648,-1.03004 -1.18769,-1.56244 -1.12117,-1.49473 -1.90829,-2.79267 -2.36339,-3.89608 -0.57945,-1.40133 -0.86772,-2.93798 -0.86551,-4.61137 0.003,-2.21318 0.50007,-3.7603 1.49317,-4.64338 1.01827,-0.87212 2.25607,-0.89716 3.71556,-0.0713 2.15876,1.22159 3.7697,3.83387 4.82651,7.84113"
+         id="path3674"
+         d="m 236.84818,436.9394 c 0,0 0.31458,-40.68866 0.31458,-40.68866 0,0 -3.27066,-1.97443 -3.27066,-1.97443 0,0 0.0485,-6.13244 0.0485,-6.13244 0,0 3.26986,1.94357 3.26986,1.94357 0,0 0.0384,-4.9718 0.0384,-4.9718 0.0242,-3.13718 0.17313,-5.39171 0.44675,-6.76504 0.37445,-1.8466 1.0157,-3.14492 1.92523,-3.8952 0.92597,-0.77365 2.21207,-0.69593 3.86256,0.23811 1.06731,0.60412 2.24898,1.54093 3.54628,2.81271 0,0 -0.62418,6.66996 -0.62418,6.66996 -0.78934,-0.75385 -1.53564,-1.33338 -2.23919,-1.73932 -1.15067,-0.66373 -1.96603,-0.6152 -2.44858,0.14318 -0.48194,0.75751 -0.73333,2.55103 -0.75467,5.38196 0,0 -0.0327,4.33654 -0.0327,4.33654 0,0 4.35398,2.58795 4.35398,2.58795 0,0 -0.0456,6.23957 -0.0456,6.23957 0,0 -4.35509,-2.62908 -4.35509,-2.62908 0,0 -0.30843,40.92114 -0.30843,40.92114 0,0 -3.72704,-2.47872 -3.72704,-2.47872"
         inkscape:connector-curvature="0" />
      <path
-         id="path6642"
-         d="m 382.4733,472.39188 c 0,0 3.57821,4.20615 3.57821,4.20615 0,0 -5.09505,12.25901 -5.09505,12.25901 0,0 -2.69055,-3.32451 -2.69055,-3.32451 0,0 4.20739,-13.14065 4.20739,-13.14065"
+         id="path3676"
+         d="m 246.46465,429.05307 c 0,0 3.81968,1.1922 3.81968,1.1922 0.19276,3.35392 0.7721,6.20708 1.74012,8.56243 0.98544,2.37207 2.3721,4.14723 4.16469,5.32459 1.81668,1.19318 3.17579,1.3205 4.07171,0.37548 0.89826,-0.97786 1.35491,-2.50699 1.36833,-4.58524 0.012,-1.86394 -0.37148,-3.58214 -1.14903,-5.15206 -0.54183,-1.08052 -1.89103,-2.87259 -4.03793,-5.36553 -2.87017,-3.33767 -4.84719,-5.88768 -5.94667,-7.66691 -1.08128,-1.7942 -1.8993,-3.82568 -2.45597,-6.09572 -0.54119,-2.28674 -0.80303,-4.59245 -0.78627,-6.91984 0.0153,-2.11796 0.25669,-3.93345 0.72469,-5.44816 0.48302,-1.53765 1.12853,-2.66509 1.93745,-3.38209 0.60808,-0.56866 1.4316,-0.86027 2.47213,-0.87408 1.05827,-0.0353 2.19002,0.30354 3.396,1.01839 1.82428,1.08147 3.42677,2.57943 4.80442,4.49544 1.39816,1.9329 2.42778,4.04798 3.08549,6.34283 0.65928,2.26923 1.10658,5.05898 1.34104,8.36831 0,0 -3.93498,-1.30965 -3.93498,-1.30965 -0.1613,-2.60573 -0.66572,-4.86818 -1.51169,-6.78511 -0.82908,-1.90296 -2.01211,-3.31622 -3.54556,-4.24034 -1.80214,-1.08596 -3.08681,-1.24118 -3.85989,-0.47117 -0.77146,0.76845 -1.16235,1.97686 -1.17391,3.62665 -0.007,1.05006 0.14407,2.09235 0.45452,3.12753 0.31055,1.06635 0.80269,2.09487 1.47721,3.08626 0.38829,0.54294 1.53561,1.95069 3.44979,4.23261 2.78949,3.29205 4.7444,5.79841 5.85003,7.50277 1.12436,1.68881 2.00304,3.68747 2.63416,5.99522 0.63237,2.3125 0.94024,4.88426 0.92265,7.71231 -0.0173,2.76736 -0.43134,5.12235 -1.24099,7.06139 -0.79291,1.91427 -1.93089,3.05649 -3.41056,3.42835 -1.47342,0.33983 -3.12755,-0.1039 -4.95957,-1.32524 -3.01245,-2.00831 -5.28496,-4.82452 -6.83171,-8.44857 -1.52498,-3.59708 -2.47979,-8.05614 -2.86938,-13.38305"
         inkscape:connector-curvature="0" />
      <path
-         id="path6644"
-         d="m 140.00428,341.66899 c 0,0 1.98726,1.21371 1.98726,1.21371 0,0 -0.49335,31.63684 -0.49335,31.63684 0,0 -1.98954,-1.3266 -1.98954,-1.3266 0,0 0.03,-1.90736 0.03,-1.90736 -0.80225,1.10306 -1.69144,1.32715 -2.66659,0.67603 -1.15652,-0.77223 -2.09838,-2.31317 -2.82752,-4.62013 -0.72071,-2.33341 -1.05811,-4.87017 -1.01379,-7.61462 0.0433,-2.68198 0.45371,-4.68023 1.23263,-5.99786 0.77527,-1.3368 1.73102,-1.64695 2.86928,-0.92583 0.99176,0.62833 1.86836,2.06186 2.62891,4.30398 0,0 0.24272,-15.43816 0.24272,-15.43816 m -4.99633,19.34403 c -0.0277,1.72908 0.16551,3.2789 0.58017,4.65088 0.42669,1.3943 0.97861,2.31597 1.65641,2.76401 0.72594,0.47988 1.32231,0.3336 1.78822,-0.44059 0.46715,-0.81268 0.7144,-2.07294 0.74123,-3.77964 0.0268,-1.70631 -0.17615,-3.25863 -0.60831,-4.65544 -0.43183,-1.36954 -1.00516,-2.28245 -1.71922,-2.73966 -0.67178,-0.4301 -1.24097,-0.2449 -1.70827,0.55397 -0.46108,0.81379 -0.70432,2.02891 -0.73023,3.64647"
+         id="path3678"
+         d="m 267.46509,458.46409 c 0,0 10.16276,-64.44628 10.16276,-64.44628 0,0 3.35985,1.90154 3.35985,1.90154 0,0 -10.22211,64.7453 -10.22211,64.7453 0,0 -3.3005,-2.20056 -3.3005,-2.20056"
         inkscape:connector-curvature="0" />
      <path
-         id="path6646"
-         d="m 152.1579,373.24915 c 0,0 -6.32462,-4.12591 -6.32462,-4.12591 0.0306,1.54667 0.24537,2.90308 0.64471,4.07015 0.40014,1.15626 0.92482,1.94948 1.57467,2.37904 0.50644,0.33478 0.92997,0.35873 1.2702,0.0712 0.33453,-0.29185 0.7214,-0.98572 1.16076,-2.08239 0,0 1.70573,3.16264 1.70573,3.16264 -0.28298,0.77848 -0.57896,1.41254 -0.88792,1.90231 -0.30848,0.47658 -0.63623,0.82411 -0.98318,1.04274 -0.34637,0.20569 -0.7179,0.2787 -1.11446,0.21929 -0.39605,-0.0593 -0.82321,-0.24212 -1.28133,-0.54802 -1.31008,-0.87476 -2.34391,-2.45642 -3.10428,-4.74285 -0.75791,-2.2922 -1.11342,-4.89799 -1.06823,-7.82163 0.0448,-2.89745 0.4608,-4.99866 1.24949,-6.3068 0.79619,-1.28235 1.83223,-1.52102 3.11092,-0.71095 1.29646,0.82138 2.31265,2.34183 3.04575,4.5633 0.72927,2.21121 1.07181,4.85293 1.02605,7.9214 0,0 -0.0243,1.00652 -0.0243,1.00652 m -2.05577,-4.87681 c -0.25041,-2.46552 -0.91969,-4.04554 -2.00503,-4.74047 -0.24665,-0.1579 -0.47931,-0.22409 -0.69803,-0.19851 -0.21837,0.0129 -0.41975,0.113 -0.60417,0.30016 -0.17813,0.17832 -0.33319,0.43491 -0.46521,0.76974 -0.13196,0.33473 -0.23486,0.74527 -0.30871,1.23166 0,0 4.08115,2.63742 4.08115,2.63742"
+         id="path3680"
+         d="m 287.73074,470.77961 c 0,0 -3.98413,-2.64971 -3.98413,-2.64971 0,0 0.36657,-69.26132 0.36657,-69.26132 0,0 4.28286,2.431 4.28286,2.431 0,0 -0.12574,24.80354 -0.12574,24.80354 1.84841,-3.43804 4.20286,-4.3171 7.07399,-2.61515 1.5995,0.94822 3.11282,2.48894 4.53901,4.62548 1.44866,2.12297 2.63509,4.62828 3.55675,7.51533 0.94101,2.87289 1.67339,6.11301 2.19582,9.71903 0.52331,3.61258 0.77764,7.29172 0.76223,11.03361 -0.0367,8.8888 -1.19889,15.02735 -3.47692,18.39523 -2.26525,3.34891 -4.9514,3.97742 -8.04813,1.91293 -3.05429,-2.0362 -5.42013,-6.12345 -7.11007,-12.2502 0,0 -0.0322,6.34023 -0.0322,6.34023 m 0.0826,-25.6991 c -0.0308,6.05748 0.36263,10.70405 1.18198,13.94323 1.3439,5.31484 3.18967,8.7503 5.54452,10.29694 1.92772,1.26611 3.60983,0.72174 5.04245,-1.64447 1.43781,-2.407 2.17299,-6.89882 2.20167,-13.46572 0.0293,-6.72399 -0.63702,-12.10528 -1.99483,-16.13506 -1.33586,-4.00333 -2.96003,-6.57643 -4.86901,-7.72687 -1.91517,-1.15407 -3.57055,-0.50907 -4.97003,1.92406 -1.39445,2.39298 -2.10547,6.6592 -2.13675,12.80789"
         inkscape:connector-curvature="0" />
      <path
-         id="path6648"
-         d="m 155.68268,365.07829 c 0,0 2.19041,12.03556 2.19041,12.03556 0,0 2.50679,-9.05551 2.50679,-9.05551 0,0 2.39127,1.5171 2.39127,1.5171 0,0 -5.05552,17.06458 -5.05552,17.06458 0,0 -4.35815,-23.03691 -4.35815,-23.03691 0,0 2.3252,1.47518 2.3252,1.47518"
+         id="path3682"
+         d="m 322.12463,485.58433 c 0,0 0.65936,8.40758 0.65936,8.40758 -1.33673,-0.35442 -2.52804,-0.88064 -3.57528,-1.5781 -1.70425,-1.13503 -3.01872,-2.52454 -3.94739,-4.16917 -0.92628,-1.6404 -1.57435,-3.40805 -1.9457,-5.30454 -0.37079,-1.92713 -0.54592,-5.5546 -0.52573,-10.88197 0,0 0.114,-30.08386 0.114,-30.08386 0,0 -3.36894,-2.03377 -3.36894,-2.03377 0,0 0.0272,-6.84805 0.0272,-6.84805 0,0 3.36786,2.00182 3.36786,2.00182 0,0 0.0489,-12.91135 0.0489,-12.91135 0,0 4.63253,-2.66881 4.63253,-2.66881 0,0 -0.065,18.3241 -0.065,18.3241 0,0 4.72675,2.80952 4.72675,2.80952 0,0 -0.023,6.96866 -0.023,6.96866 0,0 -4.72829,-2.85438 -4.72829,-2.85438 0,0 -0.10923,30.77205 -0.10923,30.77205 -0.009,2.54809 0.0632,4.23726 0.21665,5.06728 0.17091,0.8418 0.43796,1.59732 0.80137,2.26677 0.38115,0.6815 0.92028,1.25067 1.61806,1.70755 0.52419,0.34326 1.21588,0.67931 2.07599,1.00867"
         inkscape:connector-curvature="0" />
      <path
-         id="path6650"
-         d="m 166.36109,371.85301 c 0,0 -0.26715,19.06666 -0.26715,19.06666 0,0 -2.15509,-1.43699 -2.15509,-1.43699 0,0 0.26879,-18.99589 0.26879,-18.99589 0,0 2.15345,1.36622 2.15345,1.36622 m -2.36584,-9.45257 c 0.0108,-0.76492 0.15673,-1.34165 0.43785,-1.73047 0.28135,-0.3891 0.61313,-0.46607 0.99556,-0.23045 0.38929,0.23991 0.71762,0.72434 0.98483,1.45361 0.26764,0.71701 0.396,1.47309 0.38488,2.26779 -0.0111,0.79482 -0.15836,1.38651 -0.44155,1.77477 -0.27655,0.39197 -0.60963,0.46583 -0.99903,0.22208 -0.38889,-0.24338 -0.71605,-0.72983 -0.98168,-1.45901 -0.26536,-0.72842 -0.39225,-1.49437 -0.38086,-2.29832"
+         id="path3684"
+         d="m 326.68371,496.68588 c 0,0 0.16352,-53.31935 0.16352,-53.31935 0,0 4.33405,2.57612 4.33405,2.57612 0,0 -0.0231,8.11168 -0.0231,8.11168 1.12479,-3.12783 2.15869,-5.02087 3.10122,-5.67423 0.96285,-0.64401 2.01732,-0.62746 3.16426,0.0524 1.66273,0.98571 3.35799,2.97819 5.08643,5.98483 0,0 -1.73463,7.50163 -1.73463,7.50163 -1.20956,-2.06252 -2.41678,-3.45673 -3.62177,-4.18598 -1.07402,-0.64988 -2.03784,-0.62407 -2.89238,0.075 -0.85268,0.66393 -1.46157,1.94671 -1.82782,3.84834 -0.54904,2.90043 -0.82874,6.26858 -0.83955,10.10792 0,0 -0.0793,28.13461 -0.0793,28.13461 0,0 -4.83103,-3.21295 -4.83103,-3.21295"
         inkscape:connector-curvature="0" />
      <path
-         id="path6652"
-         d="m 176.56016,379.16638 c 0,0 -0.0795,5.95169 -0.0795,5.95169 -0.4932,-1.57594 -0.93788,-2.72422 -1.33432,-3.44626 -0.38914,-0.73003 -0.85002,-1.26532 -1.38234,-1.60616 -0.83275,-0.53317 -1.53077,-0.38873 -2.09505,0.43118 -0.56335,0.81857 -0.85676,2.10833 -0.88093,3.87063 -0.0247,1.80217 0.21857,3.45533 0.73062,4.96131 0.51956,1.51318 1.19536,2.54562 2.02841,3.09628 0.5325,0.35201 1.00647,0.43138 1.42159,0.23746 0.40225,-0.18962 0.88018,-0.75857 1.43407,-1.70778 0,0 -0.0791,5.92076 -0.0791,5.92076 -0.91969,0.33906 -1.83038,0.20652 -2.73204,-0.39554 -1.48155,-0.98925 -2.70176,-2.77454 -3.66356,-5.35266 -0.95823,-2.58212 -1.41626,-5.2986 -1.37666,-8.15555 0.0396,-2.85588 0.57045,-4.94627 1.59495,-6.27502 1.02766,-1.33283 2.27368,-1.53769 3.74101,-0.6081 0.94706,0.60002 1.83813,1.62535 2.67279,3.07776"
+         id="path3686"
+         d="m 346.63844,509.95707 c 0,0 0.0968,-47.55946 0.0968,-47.55946 0,0 -4.43131,-2.6751 -4.43131,-2.6751 0,0 0.0162,-7.15908 0.0162,-7.15908 0,0 4.42975,2.633 4.42975,2.633 0,0 0.0118,-5.80848 0.0118,-5.80848 0.007,-3.66486 0.19039,-6.28429 0.54899,-7.86025 0.49107,-2.11858 1.34725,-3.56796 2.57091,-4.34826 1.24623,-0.8062 2.9874,-0.57829 5.2303,0.69102 1.45137,0.82149 3.06136,2.04536 4.83196,3.67489 0,0 -0.79224,7.74699 -0.79224,7.74699 -1.07705,-0.96968 -2.09389,-1.73012 -3.05099,-2.28234 -1.56464,-0.90254 -2.66858,-0.93449 -3.31577,-0.0995 -0.64623,0.83385 -0.9719,2.90502 -0.97777,6.21534 0,0 -0.009,5.07119 -0.009,5.07119 0,0 5.92043,3.51903 5.92043,3.51903 0,0 -0.0107,7.30549 -0.0107,7.30549 0,0 -5.92257,-3.57534 -5.92257,-3.57534 0,0 -0.0849,47.87735 -0.0849,47.87735 0,0 -5.0619,-3.36649 -5.0619,-3.36649"
         inkscape:connector-curvature="0" />
      <path
-         id="path6654"
-         d="m 187.61772,396.3816 c 0,0 -7.11602,-4.64219 -7.11602,-4.64219 0.0394,1.64462 0.28493,3.09461 0.73698,4.35101 0.45296,1.24506 1.04456,2.10953 1.77557,2.59274 0.56977,0.37665 1.04545,0.41952 1.42659,0.12784 0.37474,-0.29649 0.80696,-1.01821 1.29684,-2.16605 0,0 1.92697,3.43249 1.92697,3.43249 -0.31539,0.81607 -0.64586,1.478 -0.99136,1.98594 -0.345,0.49391 -0.71206,0.8498 -1.10107,1.06783 -0.3884,0.20427 -0.80545,0.26645 -1.25104,0.18682 -0.44497,-0.0796 -0.92525,-0.29156 -1.44065,-0.63571 -1.47362,-0.98396 -2.63915,-2.70738 -3.4999,-5.16799 -0.85787,-2.46621 -1.26523,-5.24883 -1.22412,-8.35273 0.0407,-3.07604 0.50052,-5.29153 1.38106,-6.65002 0.88916,-1.331 2.0511,-1.54389 3.48916,-0.63284 1.45844,0.92399 2.60479,2.58022 3.43569,4.97077 0.82671,2.37989 1.2204,5.2023 1.17921,8.46292 0,0 -0.0239,1.06918 -0.0239,1.06918 m -2.32579,-5.26802 c -0.28944,-2.63077 -1.04632,-4.33686 -2.26728,-5.11863 -0.27743,-0.17761 -0.53885,-0.25719 -0.78429,-0.2387 -0.24508,0.005 -0.47079,0.10338 -0.67718,0.29491 -0.19935,0.18239 -0.37254,0.44885 -0.5196,0.79935 -0.147,0.35039 -0.26113,0.78249 -0.3424,1.29632 0,0 4.59075,2.96675 4.59075,2.96675"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6656"
-         d="m 188.72253,393.84291 c 0,0 4.65008,3.00896 4.65008,3.00896 0,0 -0.0493,4.01621 -0.0493,4.01621 0,0 -4.6509,-3.03976 -4.6509,-3.03976 0,0 0.0501,-3.98541 0.0501,-3.98541"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6658"
-         d="m 194.98418,390.01241 c 0,0 2.38336,1.51208 2.38336,1.51208 0,0 -0.0223,1.85395 -0.0223,1.85395 0.47236,-0.64064 0.8742,-1.02249 1.2053,-1.14499 0.35295,-0.12281 0.7934,-0.0171 1.32172,0.31756 1.18119,0.74835 2.10689,2.33833 2.77498,4.77229 0.78844,-1.50888 1.83882,-1.84926 3.15393,-1.01611 2.40446,1.52334 3.58213,5.08307 3.51928,10.67354 0,0 -0.14246,12.66683 -0.14246,12.66683 0,0 -2.48907,-1.65968 -2.48907,-1.65968 0,0 0.12932,-11.33864 0.12932,-11.33864 0.0223,-1.95448 -0.0862,-3.41623 -0.32523,-4.38496 -0.246,-0.9865 -0.6604,-1.66604 -1.24258,-2.03881 -0.67547,-0.43246 -1.17407,-0.26194 -1.49669,0.51038 -0.31512,0.77633 -0.48509,2.23983 -0.51011,4.39145 0,0 -0.12195,10.48199 -0.12195,10.48199 0,0 -2.44211,-1.62837 -2.44211,-1.62837 0,0 0.13166,-11.16634 0.13166,-11.16634 0.0424,-3.5942 -0.47379,-5.73473 -1.54571,-6.42107 -0.67701,-0.43345 -1.17721,-0.26165 -1.50149,0.51417 -0.31691,0.77974 -0.48789,2.22076 -0.51317,4.32394 0,0 -0.12475,10.3799 -0.12475,10.3799 0,0 -2.38549,-1.59061 -2.38549,-1.59061 0,0 0.24354,-20.0085 0.24354,-20.0085"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6660"
-         d="m 220.67229,406.30977 c 0,0 2.59577,1.64683 2.59577,1.64683 0,0 -0.21691,20.94138 -0.21691,20.94138 0,0 -2.59835,-1.73254 -2.59835,-1.73254 0,0 0.023,-2.18666 0.023,-2.18666 -1.07897,1.15205 -2.22441,1.32134 -3.43538,0.51275 -1.52241,-1.01654 -2.76433,-2.88658 -3.72874,-5.60691 -0.95328,-2.74866 -1.41135,-5.69028 -1.37656,-8.83075 0.0342,-3.0829 0.54943,-5.33872 1.54792,-6.77135 1.00129,-1.43658 2.2419,-1.68824 3.72487,-0.74874 1.28522,0.81425 2.43195,2.54372 3.43893,5.19275 0,0 0.0254,-2.41676 0.0254,-2.41676 m -6.1639,6.41799 c -0.0216,1.97958 0.24179,3.77627 0.79099,5.39197 0.56513,1.64292 1.28898,2.75701 2.17252,3.34105 0.94662,0.62576 1.72003,0.51965 2.31889,-0.32072 0.60028,-0.88461 0.91108,-2.30584 0.93166,-4.26226 0.0205,-1.95592 -0.25622,-3.75833 -0.82947,-5.40517 -0.57245,-1.61511 -1.32472,-2.72014 -2.25566,-3.31621 -0.87552,-0.56055 -1.61254,-0.40511 -2.21212,0.4641 -0.59125,0.88677 -0.89661,2.25538 -0.91681,4.10724"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6662"
-         d="m 228.86507,444.37687 c 0,0 -2.63556,-1.80388 -2.63556,-1.80388 0,0 0.33007,-32.52814 0.33007,-32.52814 0,0 2.63142,1.66946 2.63142,1.66946 0,0 -0.0229,2.29758 -0.0229,2.29758 1.06199,-1.25971 2.25943,-1.46948 3.59403,-0.62398 1.59434,1.01009 2.90256,2.9254 3.92147,5.74946 1.03857,2.82939 1.54379,5.89034 1.51292,9.17657 -0.0302,3.2142 -0.58245,5.54602 -1.65437,6.99134 -1.06073,1.43185 -2.37899,1.61939 -3.95143,0.56944 -1.35204,-0.90278 -2.54722,-2.70142 -3.58698,-5.39167 0,0 -0.13864,13.89382 -0.13864,13.89382 m 6.56523,-17.84101 c 0.0196,-2.04531 -0.2677,-3.90598 -0.8609,-5.57996 -0.59985,-1.69028 -1.36333,-2.83125 -2.28942,-3.42422 -0.97867,-0.62659 -1.7759,-0.51019 -2.39309,0.34676 -0.61616,0.85555 -0.93396,2.29493 -0.95418,4.31963 -0.0198,1.98183 0.26545,3.81475 0.85665,5.5009 0.58481,1.65531 1.36619,2.80715 2.34548,3.45448 0.92667,0.61257 1.69914,0.47721 2.31624,-0.40847 0.63406,-0.87667 0.96074,-2.28021 0.97922,-4.20912"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6664"
-         d="m 243.63173,454.48373 c 0,0 -2.75506,-1.88567 -2.75506,-1.88567 0,0 0.30651,-33.27553 0.30651,-33.27553 0,0 2.75048,1.74499 2.75048,1.74499 0,0 -0.0212,2.35049 -0.0212,2.35049 1.10789,-1.27393 2.35861,-1.47168 3.754,-0.58767 1.66712,1.0562 3.03647,3.03485 4.10469,5.9396 1.08886,2.91072 1.62057,6.05093 1.59225,9.41402 -0.0277,3.2894 -0.60216,5.66794 -1.72078,7.13133 -1.10687,1.44968 -2.48425,1.62195 -4.12858,0.524 -1.41371,-0.94396 -2.66448,-2.80222 -3.75384,-5.57034 0,0 -0.12843,14.21478 -0.12843,14.21478 m 6.83844,-18.16 c 0.018,-2.09293 -0.28441,-4.00134 -0.9063,-5.72303 -0.62883,-1.73839 -1.42796,-2.91691 -2.39627,-3.53692 -1.02324,-0.65513 -1.85599,-0.54738 -2.49973,0.32068 -0.64265,0.86659 -0.97292,2.3348 -0.99166,4.40616 -0.0183,2.02752 0.2818,3.90713 0.90139,5.64107 0.61288,1.70236 1.43058,2.8925 2.45451,3.56933 0.96896,0.64053 1.77597,0.51339 2.41977,-0.38392 0.66156,-0.88788 1.00127,-2.31952 1.01829,-4.29337"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6666"
-         d="m 267.86842,448.73368 c 0,0 -9.07705,-5.92148 -9.07705,-5.92148 0.0623,1.86772 0.38465,3.53392 0.96775,4.99999 0.58427,1.4536 1.34189,2.48884 2.2739,3.10492 0.72662,0.48033 1.33136,0.57335 1.81353,0.278 0.47409,-0.30144 1.01811,-1.08009 1.63231,-2.33704 0,0 2.47728,4.07571 2.47728,4.07571 -0.39517,0.89689 -0.81065,1.61729 -1.24637,2.16138 -0.43514,0.52815 -0.89931,0.8975 -1.39237,1.10826 -0.49231,0.19518 -1.02204,0.22639 -1.58901,0.094 -0.56607,-0.1322 -1.1779,-0.41799 -1.83522,-0.85689 -1.87857,-1.25436 -3.37064,-3.31736 -4.48087,-6.18645 -1.10618,-2.87402 -1.64327,-6.06407 -1.61422,-9.57662 0.0288,-3.48083 0.59516,-5.94921 1.70151,-7.40949 1.11787,-1.42883 2.59067,-1.56666 4.42321,-0.4057 1.85965,1.17819 3.32917,3.15974 4.40377,5.94699 1.06959,2.77587 1.5919,6.01542 1.56418,9.71291 0,0 -0.0223,1.21152 -0.0223,1.21152 m -2.99801,-6.19288 c -0.38774,-3.01092 -1.36248,-5.01476 -2.91938,-6.01162 -0.35366,-0.22642 -0.6862,-0.34022 -0.99769,-0.34139 -0.31111,-0.0164 -0.59689,0.0748 -0.85742,0.27326 -0.25166,0.18876 -0.46946,0.47515 -0.65346,0.85915 -0.18394,0.38385 -0.32552,0.86327 -0.42481,1.43829 0,0 5.85276,3.78231 5.85276,3.78231"
-         inkscape:connector-curvature="0" />
-      <path
-         id="path6668"
-         d="m 271.20265,438.36784 c 0,0 3.00317,1.9053 3.00317,1.9053 0,0 -0.0143,2.01845 -0.0143,2.01845 0.56166,-0.65278 1.05916,-1.02883 1.49228,-1.12749 0.4427,-0.10856 0.96556,0.028 1.56898,0.41025 0.80312,0.50884 1.64124,1.4933 2.51467,2.95537 0,0 -1.43281,3.92017 -1.43281,3.92017 -0.57353,-1.08711 -1.135,-1.80624 -1.68452,-2.15838 -1.65244,-1.0588 -2.49103,0.56479 -2.52159,4.86884 0,0 -0.0834,11.73974 -0.0834,11.73974 0,0 -3.00677,-2.00488 -3.00677,-2.00488 0,0 0.1643,-22.52737 0.1643,-22.52737"
+         id="path3688"
+         d="m 359.60073,501.90418 c 0,0 5.20059,1.86777 5.20059,1.86777 0.29001,3.96114 1.10193,7.38322 2.43911,10.27061 1.36176,2.91073 3.2661,5.17238 5.72054,6.78444 2.48967,1.63519 4.34728,1.95881 5.56379,0.96109 1.21993,-1.0365 1.83154,-2.77869 1.83229,-5.22389 6.2e-4,-2.19296 -0.5384,-4.26389 -1.61481,-6.20909 -0.7497,-1.33918 -2.60804,-3.61528 -5.55946,-6.8122 -3.94075,-4.27425 -6.65395,-7.50944 -8.16465,-9.73106 -1.48522,-2.23573 -2.61386,-4.7171 -3.38893,-7.44614 -0.75395,-2.74593 -1.12852,-5.48045 -1.12491,-8.2074 0.003,-2.48146 0.31617,-4.58205 0.93929,-6.30404 0.64345,-1.7475 1.51123,-2.99566 2.60481,-3.74404 0.82208,-0.59757 1.93976,-0.84564 3.35554,-0.74295 1.44048,0.0796 2.98492,0.60687 4.63457,1.58472 2.49729,1.48044 4.69744,3.42626 6.59564,5.83924 1.92772,2.43694 3.35406,5.04673 4.27363,7.82559 0.92183,2.74989 1.55812,6.08842 1.90744,10.01415 0,0 -5.39591,-2.01583 -5.39591,-2.01583 -0.24253,-3.08522 -0.95109,-5.80694 -2.12313,-8.16184 -1.14834,-2.33544 -2.7751,-4.13563 -4.87465,-5.40091 -2.46541,-1.48565 -4.2164,-1.81727 -5.26239,-1.00324 -1.04343,0.8121 -1.56519,2.18465 -1.56724,4.11944 -10e-4,1.23148 0.21335,2.47259 0.64434,3.72428 0.43146,1.28852 1.10985,2.55443 2.03645,3.7988 0.53331,0.68393 2.10812,2.47474 4.73703,5.38635 3.83534,4.20888 6.52812,7.39657 8.05468,9.53851 1.55295,2.12718 2.77297,4.59004 3.65706,7.38727 0.88613,2.80397 1.33003,5.87348 1.33006,9.20426 -3e-5,3.25947 -0.54743,5.98195 -1.64026,8.16269 -1.06972,2.15296 -2.61798,3.35081 -4.63932,3.59644 -2.01164,0.20856 -4.27524,-0.52848 -6.78627,-2.2025 -4.12399,-2.74933 -7.24172,-6.34882 -9.37583,-10.80056 -2.10254,-4.4137 -3.43626,-9.76409 -4.0091,-16.05996"
         inkscape:connector-curvature="0" />
    </g>
-    <path
-       inkscape:connector-curvature="0"
-       id="path5888"
-       d="m 132.72634,378.57586 1.06487,-65.93906 252.2603,142.74417 0,96.58028 z"
-       style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
  </g>
  <g
-     style="display:none"
+     style="display:inline"
     inkscape:groupmode="layer"
     id="layer9"
     inkscape:label="GenericRootfs">
@@ -656,7 +553,7 @@
    </g>
  </g>
  <g
-     style="display:inline"
+     style="display:none"
     inkscape:label="Debian"
     id="layer5"
     inkscape:groupmode="layer">
@@ -962,37 +859,6 @@
            id="path6350" />
            id="path6350" />
       </g>
       </g>
     </g>
     </g>
-    <g
-       inkscape:groupmode="layer"
-       id="layer13"
-       inkscape:label="referenceparent"
-       style="display:none">
-      <g
-         style="display:inline"
-         id="g8920">
-        <path
-           sodipodi:nodetypes="cc"
-           inkscape:connector-curvature="0"
-           id="path7326"
-           d="m 534.63171,136.882 c 67.93802,-0.71514 15.01787,41.47795 15.01787,41.47795"
-           style="fill:none;stroke:#000000;stroke-width:7;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;marker-start:none;marker-end:url(#Arrow1Send)" />
-        <text
-           sodipodi:linespacing="75%"
-           id="text8726"
-           y="147.31824"
-           x="575.6853"
-           style="font-size:28px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:75%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Arial;-inkscape-font-specification:Arial"
-           xml:space="preserve"><tspan
-             y="147.31824"
-             x="575.6853"
-             id="tspan8728"
-             sodipodi:role="line">references</tspan><tspan
-             id="tspan8730"
-             y="168.31824"
-             x="575.6853"
-             sodipodi:role="line">parent</tspan></text>
-      </g>
-    </g>
    <text
       xml:space="preserve"
       style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans;-inkscape-font-specification:Sans"
@@ -1387,7 +1253,7 @@
    </g>
  </g>
  <g
-     style="display:inline"
+     style="display:none"
     inkscape:label="busybox"
     id="layer4"
     inkscape:groupmode="layer">

+ 8 - 2
docs/sources/use/builder.rst

@@ -54,8 +54,14 @@ Docker evaluates the instructions in a Dockerfile in order. **The
 first instruction must be `FROM`** in order to specify the
 :ref:`base_image_def` from which you are building.
 
 
-Docker will ignore **comment lines** *beginning* with ``#``. A comment
-marker anywhere in the rest of the line will be treated as an argument.
+Docker will treat lines that *begin* with ``#`` as a comment. A ``#``
+marker anywhere else in the line will be treated as an argument. This
+allows statements like:
+
+::
+
+    # Comment
+    RUN echo 'we are running some # of cool things'
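
A tiny end-to-end illustration of the comment rule above (the image tag and base image are only examples):

.. code-block:: bash

   # Build a throwaway image whose Dockerfile mixes a comment line and a '#' used as an argument
   cat > Dockerfile <<'EOF'
   # This whole line is a comment
   FROM ubuntu
   RUN echo 'we are running some # of cool things'
   EOF
   docker build -t comment-demo .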
 
 
 3. Instructions
 ===============

+ 17 - 5
hack/RELEASE-CHECKLIST.md

@@ -57,7 +57,13 @@ EXAMPLES:
 
 
 FIXME
 
 
-### 5. Commit and create a pull request to the "release" branch
+### 5. Test the docs
+
+Make sure that your tree includes documentation for any modified or
+new features, syntax changes, or semantic changes. Instructions for building
+the docs are in ``docs/README.md``.
+
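A minimal sketch of what "test the docs" can look like locally, assuming the Sphinx sources under ``docs/sources`` (the authoritative steps are in ``docs/README.md``):

```bash
# Build the HTML docs into a scratch directory; -W turns warnings into errors
cd docs
sphinx-build -W -b html sources/ /tmp/docker-docs-html
```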
+### 6. Commit and create a pull request to the "release" branch
 
 
 ```bash
 git add CHANGELOG.md
@@ -65,9 +71,9 @@ git commit -m "Bump version to $VERSION"
 git push origin bump_$VERSION
 ```
 
 
-### 6. Get 2 other maintainers to validate the pull request
+### 7. Get 2 other maintainers to validate the pull request
 
 
-### 7. Merge the pull request and apply tags
+### 8. Merge the pull request and apply tags
 
 
 ```bash
 git checkout release
@@ -78,7 +84,13 @@ git push
 git push --tags
 ```
 
 
-### 8. Publish binaries
+Merging the pull request to the release branch will automatically
+update the documentation on the "latest" revision of the docs. You
+should see the updated docs 5-10 minutes after the merge. The docs
+will appear on http://docs.docker.io/. For more information about
+documentation releases, see ``docs/README.md``.
+
+### 9. Publish binaries
 
 
 To run this you will need access to the release credentials.
 Get them from [the infrastructure maintainers](
@@ -100,6 +112,6 @@ use get-nightly.docker.io for general testing, and once everything is fine,
 switch to get.docker.io).
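A hedged smoke test once the packages are published, assuming the test bucket serves the usual install script at its root:

```bash
# On a throwaway machine: install from the nightly bucket, then sanity-check the binary
curl -s http://get-nightly.docker.io/ | sudo sh
docker version
```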
 
 
 
 
-### 9. Rejoice!
+### 10. Rejoice!
 
 
 Congratulations! You're done.

+ 80 - 2
hack/infrastructure/README.md

@@ -1,5 +1,83 @@
 # Docker project infrastructure
 
 
-This directory holds all information about the technical infrastructure of the docker project; servers, dns, email, and all the corresponding tools and configuration.
+This is an overview of the Docker infrastructure.
 
 
-Obviously credentials should not be stored in this repo, but how to obtain and use them should be documented here.
+**Note: obviously, credentials should not be stored in this repository.**
+However, when credentials are required, we should document how to obtain
+them (e.g. who has them).
+
+
+## Providers
+
+This should be the list of all the entities providing some kind of
+infrastructure service to the Docker project (either for free,
+or paid by dotCloud).
+
+
+Provider      | Service
+--------------|-------------------------------------------------
+AWS           | packages (S3 bucket), dotCloud PAAS, dev-env, ci
+CloudFlare    | cdn
+Digital Ocean | ci
+dotCloud PAAS | website, index, registry, ssl, blog
+DynECT        | dns (docker.io)            
+GitHub        | repository
+Linode        | stackbrew
+Mailgun       | outgoing e-mail            
+ReadTheDocs   | docs
+
+*Ordered-by: lexicographic*
+
+
+## URLs
+
+This should be the list of all the infrastructure-related URLs
+and which service is handling them.
+
+URL                                          | Service
+---------------------------------------------|---------------------------------
+ http://blog.docker.io/                      | blog
+*http://cdn-registry-1.docker.io/            | registry (pull)
+ http://debug.docker.io/                     | debug tool
+ http://docs.docker.io/                      | docsproxy (proxy to readthedocs)
+ http://docker-ci.dotcloud.com/              | ci
+ http://docker.io/                           | redirect to www.docker.io (dynect)
+ http://docker.readthedocs.org/              | docs
+*http://get.docker.io/                       | packages
+ https://github.com/dotcloud/docker          | repository
+*https://index.docker.io/                    | index
+ http://registry-1.docker.io/                | registry (push)
+ http://staging-docker-ci.dotcloud.com/      | ci
+*http://test.docker.io/                      | packages
+*http://www.docker.io/                       | website
+ http://? (internal URL, not for public use) | stackbrew
+
+*Ordered-by: lexicographic*
+
+**Note:** an asterisk in front of the URL means that it is cached by CloudFlare.
+
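One quick, hedged way to check whether a URL is really fronted by CloudFlare is to look for CloudFlare's response headers:

```bash
# CloudFlare-proxied hosts typically answer with CF-RAY and "Server: cloudflare" headers
curl -sI http://get.docker.io/ | grep -i -E '^(cf-ray|server)'
```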
+
+## Services
+
+This should be the list of all services referenced above.
+
+Service             | Maintainer(s)      | How to update    | Source
+--------------------|--------------------|------------------|-------
+blog                | @jbarbier          | dotcloud push    | https://github.com/dotcloud/blog.docker.io
+cdn                 | @jpetazzo @samalba | cloudflare panel | N/A
+ci                  | @mzdaniel          | See [docker-ci]  | See [docker-ci]
+docs                | @metalivedev       | github webhook   | docker repo
+docsproxy           | @dhrp              | dotcloud push    | https://github.com/dotcloud/docker-docs-dotcloud-proxy
+index               | @kencochrane       | dotcloud push    | private
+packages            | @jpetazzo          | hack/release     | docker repo
+registry            | @samalba           | dotcloud push    | https://github.com/dotcloud/docker-registry
+repository (github) | N/A                | N/A              | N/A
+ssl (dotcloud)      | @jpetazzo          | dotcloud ops     | N/A
+ssl (cloudflare)    | @jpetazzo          | cloudflare panel | N/A
+stackbrew           | @shin-             | manual           | https://github.com/dotcloud/stackbrew/stackbrew
+website             | @dhrp              | dotcloud push    | https://github.com/dotcloud/www.docker.io
+
+*Ordered-by: lexicographic*
+
+
+[docker-ci]: docker-ci.rst
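
A quick way to verify the asterisk convention above (URLs cached by CloudFlare) is to look at the response headers. This is only an illustrative check, not part of the infrastructure tooling, and the header names are simply what CloudFlare commonly returns:

    # CloudFlare-fronted hosts typically answer with "Server: cloudflare-nginx"
    # and a "CF-RAY" header; hosts served directly should not.
    curl -sI http://get.docker.io/  | grep -iE '^(server|cf-ray):'
    curl -sI http://docs.docker.io/ | grep -iE '^(server|cf-ray):'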

+ 43 - 2
hack/infrastructure/docker-ci.rst

@@ -1,5 +1,38 @@
-docker-ci github pull request
-=============================
+docker-ci
+=========
+
+docker-ci is our buildbot continuous integration server. It builds and
+tests docker, is hosted on EC2, and is reachable at
+http://docker-ci.dotcloud.com
+
+
+Deployment
+==========
+
+# Load AWS credentials
+export AWS_ACCESS_KEY_ID=''
+export AWS_SECRET_ACCESS_KEY=''
+export AWS_KEYPAIR_NAME=''
+export AWS_SSH_PRIVKEY=''
+
+# Load buildbot credentials and config
+export BUILDBOT_PWD=''
+export IRC_PWD=''
+export IRC_CHANNEL='docker-dev'
+export SMTP_USER=''
+export SMTP_PWD=''
+export EMAIL_RCP=''
+
+# Load registry test credentials
+export REGISTRY_USER=''
+export REGISTRY_PWD=''
+
+cd docker/testing
+vagrant up --provider=aws
+
+
+github pull request
+===================
 
 
 The entire docker pull request test workflow is event driven by github. Its
 The entire docker pull request test workflow is event driven by github. Its
 usage is fully automatic and the results are logged in docker-ci.dotcloud.com
 usage is fully automatic and the results are logged in docker-ci.dotcloud.com
@@ -13,3 +46,11 @@ buildbot (0.8.7p1) was patched using ./testing/buildbot/github.py, so it
 can understand the PR data github sends to it. Originally PR #1603 (ee64e099e0)
 can understand the PR data github sends to it. Originally PR #1603 (ee64e099e0)
 implemented this capability. We also added a new scheduler that exclusively filters
 implemented this capability. We also added a new scheduler that exclusively filters
 PRs, and a 'pullrequest' builder that rebases the PR on top of master and tests it.
 PRs, and a 'pullrequest' builder that rebases the PR on top of master and tests it.
+
+
+nightly release
+================
+
+The nightly release process is done by buildbot, running a DinD container that downloads
+the docker repository and builds the release container. The resulting docker
+binary is then tested, and if everything is fine, the release is done.
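
For reference, the nightly release described above boils down to building and running the dockerbuilder image added later in this diff (see the nightlyrelease builder in master.cfg and the TO_BUILD/TO_RELEASE comments in hack/infrastructure/docker-ci/nightlyrelease/Dockerfile); roughly:

    # Build the release image (from hack/infrastructure/docker-ci/nightlyrelease)
    docker build -t dockerbuilder .

    # Run it privileged so Docker-in-Docker works, pushing to the test bucket
    docker run -i -t -privileged -lxc-conf=lxc.aa_profile=unconfined \
        -e AWS_S3_BUCKET=test.docker.io dockerbuilder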

+ 43 - 0
hack/infrastructure/docker-ci/Dockerfile

@@ -0,0 +1,43 @@
+# VERSION:        0.22
+# DOCKER-VERSION  0.6.3
+# AUTHOR:         Daniel Mizyrycki <daniel@dotcloud.com>
+# DESCRIPTION:    Deploy docker-ci on Amazon EC2
+# COMMENTS:
+#     CONFIG_JSON is an environment variable json string loaded as:
+#
+# export CONFIG_JSON='
+#     { "AWS_TAG":             "EC2_instance_name",
+#       "AWS_ACCESS_KEY":      "EC2_access_key",
+#       "AWS_SECRET_KEY":      "EC2_secret_key",
+#       "DOCKER_CI_PUB":       "$(cat docker-ci_ssh_public_key.pub)",
+#       "DOCKER_CI_KEY":       "$(cat docker-ci_ssh_private_key.key)",
+#       "BUILDBOT_PWD":        "Buildbot_server_password",
+#       "IRC_PWD":             "Buildbot_IRC_password",
+#       "SMTP_USER":           "SMTP_server_user",
+#       "SMTP_PWD":            "SMTP_server_password",
+#       "PKG_ACCESS_KEY":      "Docker_release_S3_bucket_access_key",
+#       "PKG_SECRET_KEY":      "Docker_release_S3_bucket_secret_key",
+#       "PKG_GPG_PASSPHRASE":  "Docker_release_gpg_passphrase",
+#       "INDEX_AUTH":          "Index_encripted_user_password",
+#       "REGISTRY_USER":       "Registry_test_user",
+#       "REGISTRY_PWD":        "Registry_test_password",
+#       "REGISTRY_BUCKET":     "Registry_S3_bucket_name",
+#       "REGISTRY_ACCESS_KEY": "Registry_S3_bucket_access_key",
+#       "REGISTRY_SECRET_KEY": "Registry_S3_bucket_secret_key",
+#       "IRC_CHANNEL":         "Buildbot_IRC_channel",
+#       "EMAIL_RCP":           "Buildbot_mailing_receipient" }'
+#
+#
+# TO_BUILD:   docker build -t docker-ci .
+# TO_DEPLOY:  docker run -e CONFIG_JSON="${CONFIG_JSON}" docker-ci
+
+from ubuntu:12.04
+
+run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
+run apt-get update; apt-get install -y python2.7 python-dev python-pip ssh rsync less vim
+run pip install boto fabric
+
+# Add deployment code and set default container command
+add . /docker-ci
+cmd "/docker-ci/deployment.py"
+
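
Putting the TO_BUILD/TO_DEPLOY comments together, a deployment session looks roughly like this. Only a subset of the CONFIG_JSON keys is shown (the full list is in the Dockerfile header above), and all values here are placeholders:

    # Build the deployment image from this directory
    docker build -t docker-ci .

    # Export the configuration (abbreviated; see the key list above)
    export CONFIG_JSON='
        { "AWS_TAG":        "docker-ci",
          "AWS_ACCESS_KEY": "...",
          "AWS_SECRET_KEY": "...",
          "DOCKER_CI_PUB":  "$(cat docker-ci_ssh_public_key.pub)",
          "DOCKER_CI_KEY":  "$(cat docker-ci_ssh_private_key.key)" }'

    # Run the deployment container, which provisions the EC2 instance
    docker run -e CONFIG_JSON="${CONFIG_JSON}" docker-ci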

+ 0 - 0
testing/MAINTAINERS → hack/infrastructure/docker-ci/MAINTAINERS


+ 26 - 0
hack/infrastructure/docker-ci/README.rst

@@ -0,0 +1,26 @@
+=======
+testing
+=======
+
+This directory contains docker-ci testing related files.
+
+
+Buildbot
+========
+
+Buildbot is a continuous integration system designed to automate the
+build/test cycle. By automatically rebuilding and testing the tree each time
+something has changed, build problems are pinpointed quickly, before other
+developers are inconvenienced by the failure.
+
+We run buildbot on Amazon EC2 to verify that docker passes all
+tests when commits are pushed to the master branch, and to build
+nightly releases using the excellent Docker-in-Docker implementation
+by Jerome Petazzoni.
+
+https://github.com/jpetazzo/dind
+
+Docker's buildbot instance is at http://docker-ci.dotcloud.com/waterfall
+
+For deployment instructions, please take a look at
+hack/infrastructure/docker-ci/Dockerfile

+ 1 - 0
hack/infrastructure/docker-ci/buildbot/README.rst

@@ -0,0 +1 @@
+Buildbot configuration and setup files

+ 0 - 0
testing/buildbot/buildbot.conf → hack/infrastructure/docker-ci/buildbot/buildbot.conf


+ 19 - 13
testing/buildbot/github.py → hack/infrastructure/docker-ci/buildbot/github.py

@@ -86,12 +86,16 @@ def getChanges(request, options = None):
                 the http request object
                 the http request object
         """
         """
         payload = json.loads(request.args['payload'][0])
         payload = json.loads(request.args['payload'][0])
-	if 'pull_request' in payload:
-	    user = payload['repository']['owner']['login']
-	    repo = payload['repository']['name']
-            repo_url = payload['repository']['html_url']
-	else:
-	    user = payload['repository']['owner']['name']
+        import urllib,datetime
+        fname = str(datetime.datetime.now()).replace(' ','_').replace(':','-')[:19]
+        open('github_{0}.json'.format(fname),'w').write(json.dumps(json.loads(urllib.unquote(request.args['payload'][0])), sort_keys = True, indent = 2))
+
+        if 'pull_request' in payload:
+            user = payload['pull_request']['user']['login']
+            repo = payload['pull_request']['head']['repo']['name']
+            repo_url = payload['pull_request']['head']['repo']['html_url']
+        else:
+            user = payload['repository']['owner']['name']
             repo = payload['repository']['name']
             repo = payload['repository']['name']
             repo_url = payload['repository']['url']
             repo_url = payload['repository']['url']
         project = request.args.get('project', None)
         project = request.args.get('project', None)
@@ -115,7 +119,7 @@ def process_change(payload, user, repo, repo_url, project):
                 Hook.
                 Hook.
         """
         """
         changes = []
         changes = []
-	
+
         newrev = payload['after'] if 'after' in payload else payload['pull_request']['head']['sha']
         newrev = payload['after'] if 'after' in payload else payload['pull_request']['head']['sha']
         refname = payload['ref'] if 'ref' in payload else payload['pull_request']['head']['ref']
         refname = payload['ref'] if 'ref' in payload else payload['pull_request']['head']['ref']
 
 
@@ -130,10 +134,13 @@ def process_change(payload, user, repo, repo_url, project):
             log.msg("Branch `%s' deleted, ignoring" % branch)
             log.msg("Branch `%s' deleted, ignoring" % branch)
             return []
             return []
         else: 
         else: 
-	    if 'pull_request' in payload:
-		changes = [{
-		    'category'   : 'github_pullrequest',
-                    'who'        : user,
+            if 'pull_request' in payload:
+                if payload['action'] == 'closed':
+                    log.msg("PR#{} closed, ignoring".format(payload['number']))
+                    return []
+                changes = [{
+                    'category'   : 'github_pullrequest',
+                    'who'        : '{0} - PR#{1}'.format(user,payload['number']),
                     'files'      : [],
                     'files'      : [],
                     'comments'   : payload['pull_request']['title'], 
                     'comments'   : payload['pull_request']['title'], 
                     'revision'   : newrev,
                     'revision'   : newrev,
@@ -142,7 +149,7 @@ def process_change(payload, user, repo, repo_url, project):
                     'revlink'    : '{0}/commit/{1}'.format(repo_url,newrev),
                     'revlink'    : '{0}/commit/{1}'.format(repo_url,newrev),
                     'repository' : repo_url,
                     'repository' : repo_url,
                     'project'  : project  }] 
                     'project'  : project  }] 
-		return changes
+                return changes
             for commit in payload['commits']:
             for commit in payload['commits']:
                 files = []
                 files = []
                 if 'added' in commit:
                 if 'added' in commit:
@@ -166,4 +173,3 @@ def process_change(payload, user, repo, repo_url, project):
                     project  = project)
                     project  = project)
                 changes.append(chdict) 
                 changes.append(chdict) 
             return changes
             return changes
-        
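
To make the patched hook above easier to follow, here is a minimal sketch of the two payload shapes it distinguishes (a pull_request event versus a push event), reduced to just the keys that getChanges()/process_change() actually read. Field values are invented for illustration; real GitHub payloads carry many more fields:

    # Hypothetical, trimmed-down payloads for illustration only.
    pull_request_payload = {
        'action': 'opened',        # PRs with action == 'closed' are now ignored
        'number': 1234,
        'pull_request': {
            'title': 'Example change',
            'user': {'login': 'contributor'},
            'head': {
                'sha': 'abc123',
                'ref': 'my-branch',
                'repo': {
                    'name': 'docker',
                    'html_url': 'https://github.com/contributor/docker',
                },
            },
        },
    }

    push_payload = {
        'ref': 'refs/heads/master',
        'after': 'def456',
        'repository': {
            'name': 'docker',
            'owner': {'name': 'dotcloud'},
            'url': 'https://github.com/dotcloud/docker',
        },
        'commits': [],             # per-commit fields ('added', etc.) omitted here
    }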

+ 26 - 25
testing/buildbot/master.cfg → hack/infrastructure/docker-ci/buildbot/master.cfg

@@ -20,7 +20,8 @@ TEST_PWD = 'docker'     # Credential to authenticate build triggers
 BUILDER_NAME = 'docker'
 BUILDER_NAME = 'docker'
 GITHUB_DOCKER = 'github.com/dotcloud/docker'
 GITHUB_DOCKER = 'github.com/dotcloud/docker'
 BUILDBOT_PATH = '/data/buildbot'
 BUILDBOT_PATH = '/data/buildbot'
-DOCKER_PATH = '/data/docker'
+DOCKER_PATH = '/go/src/github.com/dotcloud/docker'
+DOCKER_CI_PATH = '/docker-ci'
 BUILDER_PATH = '/data/buildbot/slave/{0}/build'.format(BUILDER_NAME)
 BUILDER_PATH = '/data/buildbot/slave/{0}/build'.format(BUILDER_NAME)
 PULL_REQUEST_PATH = '/data/buildbot/slave/pullrequest/build'
 PULL_REQUEST_PATH = '/data/buildbot/slave/pullrequest/build'
 
 
@@ -45,49 +46,41 @@ c['slavePortnum'] = PORT_MASTER
 
 
 # Schedulers
 # Schedulers
 c['schedulers'] = [ForceScheduler(name='trigger', builderNames=[BUILDER_NAME,
 c['schedulers'] = [ForceScheduler(name='trigger', builderNames=[BUILDER_NAME,
-    'index','registry','coverage'])]
+    'index','registry','coverage','nightlyrelease'])]
 c['schedulers'] += [SingleBranchScheduler(name="all",
 c['schedulers'] += [SingleBranchScheduler(name="all",
     change_filter=filter.ChangeFilter(branch='master'), treeStableTimer=None,
     change_filter=filter.ChangeFilter(branch='master'), treeStableTimer=None,
     builderNames=[BUILDER_NAME])]
     builderNames=[BUILDER_NAME])]
 c['schedulers'] += [SingleBranchScheduler(name='pullrequest',
 c['schedulers'] += [SingleBranchScheduler(name='pullrequest',
     change_filter=filter.ChangeFilter(category='github_pullrequest'), treeStableTimer=None,
     change_filter=filter.ChangeFilter(category='github_pullrequest'), treeStableTimer=None,
     builderNames=['pullrequest'])]
     builderNames=['pullrequest'])]
-c['schedulers'] += [Nightly(name='daily', branch=None, builderNames=['coverage'],
-    hour=0, minute=30)]
+c['schedulers'] += [Nightly(name='daily', branch=None, builderNames=['nightlyrelease'],
+    hour=7, minute=00)]
 c['schedulers'] += [Nightly(name='every4hrs', branch=None, builderNames=['registry','index'],
 c['schedulers'] += [Nightly(name='every4hrs', branch=None, builderNames=['registry','index'],
     hour=range(0,24,4), minute=15)]
     hour=range(0,24,4), minute=15)]
 
 
 # Builders
 # Builders
 # Docker commit test
 # Docker commit test
 factory = BuildFactory()
 factory = BuildFactory()
-factory.addStep(ShellCommand(description='Docker',logEnviron=False,usePTY=True,
-    command=["sh", "-c", Interpolate("cd ..; rm -rf build; mkdir build; "
-    "cp -r {2}-dependencies/src {0}; export GOPATH={0}; go get {3}; cd {1}; "
-    "git reset --hard %(src::revision)s; go test -v".format(
-    BUILDER_PATH, BUILDER_PATH+'/src/'+GITHUB_DOCKER, DOCKER_PATH, GITHUB_DOCKER))]))
-c['builders'] = [BuilderConfig(name=BUILDER_NAME,slavenames=['buildworker'],
+factory.addStep(ShellCommand(description='Docker', logEnviron=False,
+    usePTY=True, command=['sh', '-c', Interpolate(
+    '{0}/docker-test/test_docker.sh %(src::revision)s'.format(DOCKER_CI_PATH))]))
+c['builders'] = [BuilderConfig(name='docker',slavenames=['buildworker'],
     factory=factory)]
     factory=factory)]
 
 
 # Docker pull request test
 # Docker pull request test
 factory = BuildFactory()
 factory = BuildFactory()
-factory.addStep(ShellCommand(description='pull_request',logEnviron=False,usePTY=True,
-    command=["sh", "-c", Interpolate("cd ..; rm -rf build; mkdir build; "
-    "cp -r {2}-dependencies/src {0}; export GOPATH={0}; go get {3}; cd {1}; "
-    "git fetch %(src::repository)s %(src::branch)s:PR-%(src::branch)s; "
-    "git checkout %(src::revision)s; git rebase master; go test -v".format(
-    PULL_REQUEST_PATH, PULL_REQUEST_PATH+'/src/'+GITHUB_DOCKER, DOCKER_PATH, GITHUB_DOCKER))]))
+factory.addStep(ShellCommand(description='pull_request', logEnviron=False,
+    usePTY=True, command=['sh', '-c', Interpolate(
+    '{0}/docker-test/test_docker.sh %(src::revision)s %(src::repository)s'
+    ' %(src::branch)s'.format(DOCKER_CI_PATH))]))
 c['builders'] += [BuilderConfig(name='pullrequest',slavenames=['buildworker'],
 c['builders'] += [BuilderConfig(name='pullrequest',slavenames=['buildworker'],
     factory=factory)]
     factory=factory)]
 
 
 # Docker coverage test
 # Docker coverage test
-coverage_cmd = ('GOPATH=`pwd` go get -d github.com/dotcloud/docker\n'
-    'GOPATH=`pwd` go get github.com/axw/gocov/gocov\n'
-    'sudo -E GOPATH=`pwd` ./bin/gocov test -deps -exclude-goroot -v'
-    ' -exclude github.com/gorilla/context,github.com/gorilla/mux,github.com/kr/pty,'
-    'code.google.com/p/go.net/websocket github.com/dotcloud/docker | ./bin/gocov report')
 factory = BuildFactory()
 factory = BuildFactory()
-factory.addStep(ShellCommand(description='Coverage',logEnviron=False,usePTY=True,
-    command=coverage_cmd))
+factory.addStep(ShellCommand(description='Coverage', logEnviron=False,
+    usePTY=True, command='{0}/docker-coverage/coverage-docker.sh'.format(
+    DOCKER_CI_PATH)))
 c['builders'] += [BuilderConfig(name='coverage',slavenames=['buildworker'],
 c['builders'] += [BuilderConfig(name='coverage',slavenames=['buildworker'],
     factory=factory)]
     factory=factory)]
 
 
@@ -95,8 +88,8 @@ c['builders'] += [BuilderConfig(name='coverage',slavenames=['buildworker'],
 factory = BuildFactory()
 factory = BuildFactory()
 factory.addStep(ShellCommand(description='registry', logEnviron=False,
 factory.addStep(ShellCommand(description='registry', logEnviron=False,
     command='. {0}/master/credentials.cfg; '
     command='. {0}/master/credentials.cfg; '
-    '{1}/testing/functionaltests/test_registry.sh'.format(BUILDBOT_PATH,
-    DOCKER_PATH), usePTY=True))
+    '/docker-ci/functionaltests/test_registry.sh'.format(BUILDBOT_PATH),
+    usePTY=True))
 c['builders'] += [BuilderConfig(name='registry',slavenames=['buildworker'],
 c['builders'] += [BuilderConfig(name='registry',slavenames=['buildworker'],
     factory=factory)]
     factory=factory)]
 
 
@@ -109,6 +102,14 @@ factory.addStep(ShellCommand(description='index', logEnviron=False,
 c['builders'] += [BuilderConfig(name='index',slavenames=['buildworker'],
 c['builders'] += [BuilderConfig(name='index',slavenames=['buildworker'],
     factory=factory)]
     factory=factory)]
 
 
+# Docker nightly release
+nightlyrelease_cmd = ('docker run -i -t -privileged -lxc-conf=lxc.aa_profile=unconfined'
+    ' -e AWS_S3_BUCKET=test.docker.io dockerbuilder')
+factory = BuildFactory()
+factory.addStep(ShellCommand(description='NightlyRelease',logEnviron=False,usePTY=True,
+    command=nightlyrelease_cmd))
+c['builders'] += [BuilderConfig(name='nightlyrelease',slavenames=['buildworker'],
+    factory=factory)]
 
 
 # Status
 # Status
 authz_cfg = authz.Authz(auth=auth.BasicAuth([(TEST_USER, TEST_PWD)]),
 authz_cfg = authz.Authz(auth=auth.BasicAuth([(TEST_USER, TEST_PWD)]),

+ 0 - 0
testing/buildbot/requirements.txt → hack/infrastructure/docker-ci/buildbot/requirements.txt


+ 17 - 7
testing/buildbot/setup.sh → hack/infrastructure/docker-ci/buildbot/setup.sh

@@ -6,16 +6,22 @@
 
 
 USER=$1
 USER=$1
 CFG_PATH=$2
 CFG_PATH=$2
-BUILDBOT_PWD=$3
-IRC_PWD=$4
-IRC_CHANNEL=$5
-SMTP_USER=$6
-SMTP_PWD=$7
-EMAIL_RCP=$8
+DOCKER_PATH=$3
+BUILDBOT_PWD=$4
+IRC_PWD=$5
+IRC_CHANNEL=$6
+SMTP_USER=$7
+SMTP_PWD=$8
+EMAIL_RCP=$9
+REGISTRY_USER=${10}
+REGISTRY_PWD=${11}
+REGISTRY_BUCKET=${12}
+REGISTRY_ACCESS_KEY=${13}
+REGISTRY_SECRET_KEY=${14}
 BUILDBOT_PATH="/data/buildbot"
 BUILDBOT_PATH="/data/buildbot"
-DOCKER_PATH="/data/docker"
 SLAVE_NAME="buildworker"
 SLAVE_NAME="buildworker"
 SLAVE_SOCKET="localhost:9989"
 SLAVE_SOCKET="localhost:9989"
+
 export PATH="/bin:sbin:/usr/bin:/usr/sbin:/usr/local/bin"
 export PATH="/bin:sbin:/usr/bin:/usr/sbin:/usr/local/bin"
 
 
 function run { su $USER -c "$1"; }
 function run { su $USER -c "$1"; }
@@ -35,6 +41,10 @@ run "sed -i -E 's#(SMTP_USER = ).+#\1\"$SMTP_USER\"#' master/master.cfg"
 run "sed -i -E 's#(SMTP_PWD = ).+#\1\"$SMTP_PWD\"#' master/master.cfg"
 run "sed -i -E 's#(SMTP_PWD = ).+#\1\"$SMTP_PWD\"#' master/master.cfg"
 run "sed -i -E 's#(EMAIL_RCP = ).+#\1\"$EMAIL_RCP\"#' master/master.cfg"
 run "sed -i -E 's#(EMAIL_RCP = ).+#\1\"$EMAIL_RCP\"#' master/master.cfg"
 run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD"
 run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD"
+run "echo 'export DOCKER_CREDS=\"$REGISTRY_USER:$REGISTRY_PWD\"' > $BUILDBOT_PATH/master/credentials.cfg"
+run "echo 'export S3_BUCKET=\"$REGISTRY_BUCKET\"' >> $BUILDBOT_PATH/master/credentials.cfg"
+run "echo 'export S3_ACCESS_KEY=\"$REGISTRY_ACCESS_KEY\"' >> $BUILDBOT_PATH/master/credentials.cfg"
+run "echo 'export S3_SECRET_KEY=\"$REGISTRY_SECRET_KEY\"' >> $BUILDBOT_PATH/master/credentials.cfg"
 
 
 # Patch github webstatus to capture pull requests
 # Patch github webstatus to capture pull requests
 cp $CFG_PATH/github.py /usr/local/lib/python2.7/dist-packages/buildbot/status/web/hooks
 cp $CFG_PATH/github.py /usr/local/lib/python2.7/dist-packages/buildbot/status/web/hooks

+ 155 - 0
hack/infrastructure/docker-ci/deployment.py

@@ -0,0 +1,155 @@
+#!/usr/bin/env python
+
+import os, sys, re, json, base64
+from boto.ec2.connection import EC2Connection
+from subprocess import call
+from fabric import api
+from fabric.api import cd, run, put, sudo
+from os import environ as env
+from time import sleep
+
+# Remove SSH private key as it needs more processing
+CONFIG = json.loads(re.sub(r'("DOCKER_CI_KEY".+?"(.+?)",)','',
+    env['CONFIG_JSON'], flags=re.DOTALL))
+
+# Populate environment variables
+for key in CONFIG:
+    env[key] = CONFIG[key]
+
+# Load SSH private key
+env['DOCKER_CI_KEY'] = re.sub('^.+"DOCKER_CI_KEY".+?"(.+?)".+','\\1',
+    env['CONFIG_JSON'],flags=re.DOTALL)
+
+
+AWS_TAG = env.get('AWS_TAG','docker-ci')
+AWS_KEY_NAME = 'dotcloud-dev'       # Same as CONFIG_JSON['DOCKER_CI_PUB']
+AWS_AMI = 'ami-d582d6bc'            # Ubuntu 13.04
+AWS_REGION = 'us-east-1'
+AWS_TYPE = 'm1.small'
+AWS_SEC_GROUPS = 'gateway'
+AWS_IMAGE_USER = 'ubuntu'
+DOCKER_PATH = '/go/src/github.com/dotcloud/docker'
+DOCKER_CI_PATH = '/docker-ci'
+CFG_PATH = '{}/buildbot'.format(DOCKER_CI_PATH)
+
+
+class AWS_EC2:
+    '''Amazon EC2'''
+    def __init__(self, access_key, secret_key):
+        '''Set default API parameters'''
+        self.handler = EC2Connection(access_key, secret_key)
+    def create_instance(self, tag, instance_type):
+        reservation = self.handler.run_instances(**instance_type)
+        instance = reservation.instances[0]
+        sleep(10)
+        while instance.state != 'running':
+            sleep(5)
+            instance.update()
+            print "Instance state: %s" % (instance.state)
+        instance.add_tag("Name",tag)
+        print "instance %s done!" % (instance.id)
+        return instance.ip_address
+    def get_instances(self):
+        return self.handler.get_all_instances()
+    def get_tags(self):
+        return dict([(i.instances[0].id, i.instances[0].tags['Name'])
+            for i in self.handler.get_all_instances() if i.instances[0].tags])
+    def del_instance(self, instance_id):
+        self.handler.terminate_instances(instance_ids=[instance_id])
+
+
+def json_fmt(data):
+    '''Format json output'''
+    return json.dumps(data, sort_keys = True, indent = 2)
+
+
+# Create EC2 API handler
+ec2 = AWS_EC2(env['AWS_ACCESS_KEY'], env['AWS_SECRET_KEY'])
+
+# Stop processing if AWS_TAG exists on EC2
+if AWS_TAG in ec2.get_tags().values():
+    print ('Instance: {} already deployed. No further processing.'
+        .format(AWS_TAG))
+    exit(1)
+
+ip = ec2.create_instance(AWS_TAG, {'image_id':AWS_AMI, 'instance_type':AWS_TYPE,
+    'security_groups':[AWS_SEC_GROUPS], 'key_name':AWS_KEY_NAME})
+
+# Wait 30 seconds for the machine to boot
+sleep(30)
+
+# Create docker-ci ssh private key so docker-ci docker container can communicate
+# with its EC2 instance
+os.makedirs('/root/.ssh')
+open('/root/.ssh/id_rsa','w').write(env['DOCKER_CI_KEY'])
+os.chmod('/root/.ssh/id_rsa',0600)
+open('/root/.ssh/config','w').write('StrictHostKeyChecking no\n')
+
+api.env.host_string = ip
+api.env.user = AWS_IMAGE_USER
+api.env.key_filename = '/root/.ssh/id_rsa'
+
+# Correct timezone
+sudo('echo "America/Los_Angeles" >/etc/timezone')
+sudo('dpkg-reconfigure --frontend noninteractive tzdata')
+
+# Load public docker-ci key
+sudo("echo '{}' >> /root/.ssh/authorized_keys".format(env['DOCKER_CI_PUB']))
+
+# Create docker nightly release credentials file
+credentials = {
+    'AWS_ACCESS_KEY': env['PKG_ACCESS_KEY'],
+    'AWS_SECRET_KEY': env['PKG_SECRET_KEY'],
+    'GPG_PASSPHRASE': env['PKG_GPG_PASSPHRASE'],
+    'INDEX_AUTH': env['INDEX_AUTH']}
+open(DOCKER_CI_PATH + '/nightlyrelease/release_credentials.json', 'w').write(
+    base64.b64encode(json.dumps(credentials)))
+
+# Transfer docker
+sudo('mkdir -p ' + DOCKER_CI_PATH)
+sudo('chown {}.{} {}'.format(AWS_IMAGE_USER, AWS_IMAGE_USER, DOCKER_CI_PATH))
+call('/usr/bin/rsync -aH {} {}@{}:{}'.format(DOCKER_CI_PATH, AWS_IMAGE_USER, ip,
+    os.path.dirname(DOCKER_CI_PATH)), shell=True)
+
+# Install Docker and Buildbot dependencies
+sudo('addgroup docker')
+sudo('usermod -a -G docker ubuntu')
+sudo('mkdir /mnt/docker; ln -s /mnt/docker /var/lib/docker')
+sudo('wget -q -O - https://get.docker.io/gpg | apt-key add -')
+sudo('echo deb https://get.docker.io/ubuntu docker main >'
+    ' /etc/apt/sources.list.d/docker.list')
+sudo('echo -e "deb http://archive.ubuntu.com/ubuntu raring main universe\n'
+    'deb http://us.archive.ubuntu.com/ubuntu/ raring-security main universe\n"'
+    ' > /etc/apt/sources.list; apt-get update')
+sudo('DEBIAN_FRONTEND=noninteractive apt-get install -q -y wget python-dev'
+    ' python-pip supervisor git mercurial linux-image-extra-$(uname -r)'
+    ' aufs-tools make libfontconfig libevent-dev')
+sudo('wget -O - https://go.googlecode.com/files/go1.1.2.linux-amd64.tar.gz | '
+    'tar -v -C /usr/local -xz; ln -s /usr/local/go/bin/go /usr/bin/go')
+sudo('GOPATH=/go go get -d github.com/dotcloud/docker')
+sudo('pip install -r {}/requirements.txt'.format(CFG_PATH))
+
+# Install docker and testing dependencies
+sudo('apt-get install -y -q lxc-docker')
+sudo('curl -s https://phantomjs.googlecode.com/files/'
+    'phantomjs-1.9.1-linux-x86_64.tar.bz2 | tar jx -C /usr/bin'
+    ' --strip-components=2 phantomjs-1.9.1-linux-x86_64/bin/phantomjs')
+
+#### FIXME. Temporarily install docker with proper apparmor handling
+sudo('stop docker')
+sudo('wget -q -O /usr/bin/docker http://test.docker.io/test/docker')
+sudo('start docker')
+
+# Build docker-ci containers
+sudo('cd {}; docker build -t docker .'.format(DOCKER_PATH))
+sudo('cd {}/nightlyrelease; docker build -t dockerbuilder .'.format(
+    DOCKER_CI_PATH))
+
+# Setup buildbot
+sudo('mkdir /data')
+sudo('{0}/setup.sh root {0} {1} {2} {3} {4} {5} {6} {7} {8} {9} {10}'
+    ' {11} {12}'.format(CFG_PATH, DOCKER_PATH, env['BUILDBOT_PWD'],
+    env['IRC_PWD'], env['IRC_CHANNEL'], env['SMTP_USER'],
+    env['SMTP_PWD'], env['EMAIL_RCP'], env['REGISTRY_USER'],
+    env['REGISTRY_PWD'], env['REGISTRY_BUCKET'], env['REGISTRY_ACCESS_KEY'],
+    env['REGISTRY_SECRET_KEY']))

+ 32 - 0
hack/infrastructure/docker-ci/docker-coverage/coverage-docker.sh

@@ -0,0 +1,32 @@
+#!/bin/bash
+
+set -x
+# Generate a random string of $1 characters
+function random {
+    cat /dev/urandom | tr -cd 'a-f0-9' | head -c $1
+}
+
+# Compute test paths
+BASE_PATH=`pwd`/test_docker_$(random 12)
+DOCKER_PATH=$BASE_PATH/go/src/github.com/dotcloud/docker
+export GOPATH=$BASE_PATH/go:$DOCKER_PATH/vendor
+
+# Fetch latest master
+mkdir -p $DOCKER_PATH
+cd $DOCKER_PATH
+git init .
+git fetch -q http://github.com/dotcloud/docker master
+git reset --hard FETCH_HEAD
+
+# Fetch go coverage
+cd $BASE_PATH/go
+GOPATH=$BASE_PATH/go go get github.com/axw/gocov/gocov
+sudo -E GOPATH=$GOPATH ./bin/gocov test -deps -exclude-goroot -v\
+ -exclude github.com/gorilla/context,github.com/gorilla/mux,github.com/kr/pty,\
+code.google.com/p/go.net/websocket,github.com/dotcloud/tar\
+ github.com/dotcloud/docker | ./bin/gocov report; exit_status=$?
+
+# Cleanup testing directory
+rm -rf $BASE_PATH
+
+exit $exit_status

+ 35 - 0
hack/infrastructure/docker-ci/docker-test/test_docker.sh

@@ -0,0 +1,35 @@
+#!/bin/bash
+
+set -x
+COMMIT=${1-HEAD}
+REPO=${2-http://github.com/dotcloud/docker}
+BRANCH=${3-master}
+
+# Generate a random string of $1 characters
+function random {
+    cat /dev/urandom | tr -cd 'a-f0-9' | head -c $1
+}
+
+# Compute test paths
+BASE_PATH=`pwd`/test_docker_$(random 12)
+DOCKER_PATH=$BASE_PATH/go/src/github.com/dotcloud/docker
+export GOPATH=$BASE_PATH/go:$DOCKER_PATH/vendor
+
+# Fetch latest master
+mkdir -p $DOCKER_PATH
+cd $DOCKER_PATH
+git init .
+git fetch -q http://github.com/dotcloud/docker master
+git reset --hard FETCH_HEAD
+
+# Merge commit
+git fetch -q "$REPO" "$BRANCH"
+git merge --no-edit $COMMIT || exit 1
+
+# Test commit
+go test -v; exit_status=$?
+
+# Cleanup testing directory
+rm -rf $BASE_PATH
+
+exit $exit_status
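
Usage sketch: the script takes up to three optional arguments (commit, repository, branch) and defaults to HEAD of the official master branch, which mirrors how master.cfg invokes it for the 'docker' and 'pullrequest' builders. The fork URL and branch below are placeholders:

    # Test the tip of the official master branch
    ./test_docker.sh

    # Test a pull request commit from a fork, merged onto the latest master
    ./test_docker.sh 0123abcd http://github.com/someuser/docker my-feature-branch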

+ 0 - 0
testing/functionaltests/test_index.py → hack/infrastructure/docker-ci/functionaltests/test_index.py


+ 26 - 0
hack/infrastructure/docker-ci/functionaltests/test_registry.sh

@@ -0,0 +1,26 @@
+#!/bin/sh
+
+set -x
+
+# Cleanup
+rm -rf docker-registry
+
+# Setup the environment
+export SETTINGS_FLAVOR=test
+export DOCKER_REGISTRY_CONFIG=config_test.yml
+
+# Get latest docker registry
+git clone -q https://github.com/dotcloud/docker-registry.git
+cd docker-registry
+
+# Get dependencies
+pip install -q -r requirements.txt
+pip install -q -r test-requirements.txt
+pip install -q tox
+
+# Run registry tests
+tox || exit 1
+export PYTHONPATH=$(pwd)/docker-registry
+python -m unittest discover -p s3.py -s test || exit 1
+python -m unittest discover -p workflow.py -s test
+

+ 37 - 0
hack/infrastructure/docker-ci/nightlyrelease/Dockerfile

@@ -0,0 +1,37 @@
+# VERSION:        1.2
+# DOCKER-VERSION  0.6.3
+# AUTHOR:         Daniel Mizyrycki <daniel@dotcloud.com>
+# DESCRIPTION:    Build docker nightly release using Docker in Docker.
+# REFERENCES:     This code reuses the excellent implementation of docker in docker
+#                 made by Jerome Petazzoni.  https://github.com/jpetazzo/dind
+# COMMENTS:
+#   release_credentials.json is a base64 json encoded file containing:
+#       { "AWS_ACCESS_KEY": "Test_docker_AWS_S3_bucket_id",
+#         "AWS_SECRET_KEY='Test_docker_AWS_S3_bucket_key'
+#         "GPG_PASSPHRASE='Test_docker_GPG_passphrase_signature'
+#         "INDEX_AUTH='Encripted_index_authentication' }
+# TO_BUILD:       docker build -t dockerbuilder .
+# TO_RELEASE:     docker run -i -t -privileged -lxc-conf="lxc.aa_profile = unconfined" -e AWS_S3_BUCKET="test.docker.io" dockerbuilder
+
+from docker
+maintainer Daniel Mizyrycki <daniel@dotcloud.com>
+
+# Add docker dependencies and downloading packages
+run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
+run apt-get update; apt-get install -y -q wget python2.7
+
+# Add production docker binary
+run wget -q -O /usr/bin/docker http://get.docker.io/builds/Linux/x86_64/docker-latest; chmod +x /usr/bin/docker
+
+#### FIXME. Temporarily install docker with proper apparmor handling
+run wget -q -O /usr/bin/docker http://test.docker.io/test/docker; chmod +x /usr/bin/docker
+
+# Add proto docker builder
+add ./dockerbuild /usr/bin/dockerbuild
+run chmod +x /usr/bin/dockerbuild
+
+# Add release credentials
+add ./release_credentials.json /root/release_credentials.json
+
+# Launch build process in a container
+cmd dockerbuild

+ 45 - 0
hack/infrastructure/docker-ci/nightlyrelease/dockerbuild

@@ -0,0 +1,45 @@
+#!/bin/bash
+
+# Variables AWS_ACCESS_KEY, AWS_SECRET_KEY, GPG_PASSPHRASE and INDEX_AUTH
+# are decoded from /root/release_credentials.json
+# Variable AWS_S3_BUCKET is passed to the environment from docker run -e
+
+# Enable debugging
+set -x
+
+# Fetch docker master branch
+rm -rf  /go/src/github.com/dotcloud/docker
+cd /
+git clone -q http://github.com/dotcloud/docker /go/src/github.com/dotcloud/docker
+cd /go/src/github.com/dotcloud/docker
+
+echo FIXME. Temporarily add Jerome changeset with proper apparmor handling
+git fetch  http://github.com/jpetazzo/docker escape-apparmor-confinement:escape-apparmor-confinement
+git rebase --onto master master escape-apparmor-confinement
+
+# Launch docker daemon using dind inside the container
+./hack/dind /usr/bin/docker -d &
+sleep 5
+
+# Add an uncommitted change to generate a timestamped release
+date > timestamp
+
+# Build the docker package using /Dockerfile
+docker build -t docker .
+
+# Run the Docker unit tests and build the binary and Ubuntu package
+docker run -privileged -lxc-conf=lxc.aa_profile=unconfined docker hack/make.sh || exit 1
+
+# Turn debug off to load credentials from the environment
+set +x
+eval $(cat /root/release_credentials.json  | python -c '
+import sys,json,base64;
+d=json.loads(base64.b64decode(sys.stdin.read()));
+exec("""for k in d: print "export {0}=\\"{1}\\"".format(k,d[k])""")')
+echo '{"https://index.docker.io/v1/":{"auth":"'$INDEX_AUTH'","email":"engineering@dotcloud.com"}}' > /.dockercfg
+set -x
+
+# Push docker nightly
+echo docker run -i -t -privileged -e AWS_S3_BUCKET=$AWS_S3_BUCKET -e AWS_ACCESS_KEY=XXXXX -e AWS_SECRET_KEY=XXXXX -e GPG_PASSPHRASE=XXXXX release  hack/release.sh
+set +x
+docker run -i -t -privileged -e AWS_S3_BUCKET=$AWS_S3_BUCKET -e AWS_ACCESS_KEY=$AWS_ACCESS_KEY -e AWS_SECRET_KEY=$AWS_SECRET_KEY -e GPG_PASSPHRASE=$GPG_PASSPHRASE  release  hack/release.sh

+ 1 - 0
hack/infrastructure/docker-ci/nightlyrelease/release_credentials.json

@@ -0,0 +1 @@
+eyAiQVdTX0FDQ0VTU19LRVkiOiAiIiwKICAiQVdTX1NFQ1JFVF9LRVkiOiAiIiwKICAiR1BHX1BBU1NQSFJBU0UiOiAiIiwKICAiSU5ERVhfQVVUSCI6ICIiIH0=

+ 28 - 0
hack/infrastructure/docker-ci/report/Dockerfile

@@ -0,0 +1,28 @@
+# VERSION:        0.22
+# DOCKER-VERSION  0.6.3
+# AUTHOR:         Daniel Mizyrycki <daniel@dotcloud.com>
+# DESCRIPTION:    Generate docker-ci daily report
+# COMMENTS:       The build process is initiated by deployment.py
+#                 Report configuration is passed through ./credentials.json at
+#                 deployment time.
+# TO_BUILD:       docker build -t report .
+# TO_DEPLOY:      docker run report
+
+from ubuntu:12.04
+maintainer Daniel Mizyrycki <daniel@dotcloud.com>
+
+env PYTHONPATH /report
+
+
+# Add report dependencies
+run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > \
+    /etc/apt/sources.list
+run apt-get update; apt-get install -y python2.7 python-pip ssh rsync
+
+# Set San Francisco timezone
+run echo "America/Los_Angeles" >/etc/timezone
+run dpkg-reconfigure --frontend noninteractive tzdata
+
+# Add report code and set default container command
+add . /report
+cmd "/report/report.py"

+ 130 - 0
hack/infrastructure/docker-ci/report/deployment.py

@@ -0,0 +1,130 @@
+#!/usr/bin/env python
+
+'''Deploy docker-ci report container on Digital Ocean.
+Usage:
+    export CONFIG_JSON='
+        { "DROPLET_NAME":        "Digital_Ocean_dropplet_name",
+          "DO_CLIENT_ID":        "Digital_Ocean_client_id",
+          "DO_API_KEY":          "Digital_Ocean_api_key",
+          "DOCKER_KEY_ID":       "Digital_Ocean_ssh_key_id",
+          "DOCKER_CI_KEY_PATH":  "docker-ci_private_key_path",
+          "DOCKER_CI_PUB":       "$(cat docker-ci_ssh_public_key.pub)",
+          "DOCKER_CI_ADDRESS"    "user@docker-ci_fqdn_server",
+          "SMTP_USER":           "SMTP_server_user",
+          "SMTP_PWD":            "SMTP_server_password",
+          "EMAIL_SENDER":        "Buildbot_mailing_sender",
+          "EMAIL_RCP":           "Buildbot_mailing_receipient" }'
+    python deployment.py
+'''
+
+import re, json, requests, base64
+from fabric import api
+from fabric.api import cd, run, put, sudo
+from os import environ as env
+from time import sleep
+from datetime import datetime
+
+# Populate environment variables
+CONFIG = json.loads(env['CONFIG_JSON'])
+for key in CONFIG:
+    env[key] = CONFIG[key]
+
+# Load DOCKER_CI_KEY
+env['DOCKER_CI_KEY'] = open(env['DOCKER_CI_KEY_PATH']).read()
+
+DROPLET_NAME = env.get('DROPLET_NAME','report')
+TIMEOUT = 120            # Seconds before timeout droplet creation
+IMAGE_ID = 894856        # Docker on Ubuntu 13.04
+REGION_ID = 4            # New York 2
+SIZE_ID = 66             # memory 512MB
+DO_IMAGE_USER = 'root'   # Image user on Digital Ocean
+API_URL = 'https://api.digitalocean.com/'
+
+
+class digital_ocean():
+
+    def __init__(self, key, client):
+        '''Set default API parameters'''
+        self.key = key
+        self.client = client
+        self.api_url = API_URL
+
+    def api(self, cmd_path, api_arg={}):
+        '''Make api call'''
+        api_arg.update({'api_key':self.key, 'client_id':self.client})
+        resp = requests.get(self.api_url + cmd_path, params=api_arg).text
+        resp = json.loads(resp)
+        if resp['status'] != 'OK':
+            raise Exception(resp['error_message'])
+        return resp
+
+    def droplet_data(self, name):
+        '''Get droplet data'''
+        data = self.api('droplets')
+        data = [droplet for droplet in data['droplets']
+            if droplet['name'] == name]
+        return data[0] if data else {}
+
+def json_fmt(data):
+    '''Format json output'''
+    return json.dumps(data, sort_keys = True, indent = 2)
+
+
+do = digital_ocean(env['DO_API_KEY'], env['DO_CLIENT_ID'])
+
+# Get DROPLET_NAME data
+data = do.droplet_data(DROPLET_NAME)
+
+# Stop processing if DROPLET_NAME exists on Digital Ocean
+if data:
+    print ('Droplet: {} already deployed. No further processing.'
+        .format(DROPLET_NAME))
+    exit(1)
+
+# Create droplet
+do.api('droplets/new', {'name':DROPLET_NAME, 'region_id':REGION_ID,
+    'image_id':IMAGE_ID, 'size_id':SIZE_ID,
+    'ssh_key_ids':[env['DOCKER_KEY_ID']]})
+
+# Wait for droplet to be created.
+start_time = datetime.now()
+while (data.get('status','') != 'active' and (
+ datetime.now()-start_time).seconds < TIMEOUT):
+    data = do.droplet_data(DROPLET_NAME)
+    print data['status']
+    sleep(3)
+
+# Wait for the machine to boot
+sleep(15)
+
+# Get droplet IP
+ip = str(data['ip_address'])
+print 'droplet: {}    ip: {}'.format(DROPLET_NAME, ip)
+
+api.env.host_string = ip
+api.env.user = DO_IMAGE_USER
+api.env.key_filename = env['DOCKER_CI_KEY_PATH']
+
+# Correct timezone
+sudo('echo "America/Los_Angeles" >/etc/timezone')
+sudo('dpkg-reconfigure --frontend noninteractive tzdata')
+
+# Load JSON_CONFIG environment for Dockerfile
+CONFIG_JSON= base64.b64encode(
+    '{{"DOCKER_CI_PUB":     "{DOCKER_CI_PUB}",'
+    '  "DOCKER_CI_KEY":     "{DOCKER_CI_KEY}",'
+    '  "DOCKER_CI_ADDRESS": "{DOCKER_CI_ADDRESS}",'
+    '  "SMTP_USER":         "{SMTP_USER}",'
+    '  "SMTP_PWD":          "{SMTP_PWD}",'
+    '  "EMAIL_SENDER":      "{EMAIL_SENDER}",'
+    '  "EMAIL_RCP":         "{EMAIL_RCP}"}}'.format(**env))
+
+run('mkdir -p /data/report')
+put('./', '/data/report')
+with cd('/data/report'):
+    run('chmod 700 report.py')
+    run('echo "{}" > credentials.json'.format(CONFIG_JSON))
+    run('docker build -t report .')
+    run('rm credentials.json')
+    run("echo -e '30 09 * * * /usr/bin/docker run report\n' |"
+        " /usr/bin/crontab -")

+ 145 - 0
hack/infrastructure/docker-ci/report/report.py

@@ -0,0 +1,145 @@
+#!/usr/bin/python
+
+'''CONFIG_JSON is a json encoded string base64 environment variable. It is used
+to clone docker-ci database, generate docker-ci report and submit it by email.
+CONFIG_JSON data comes from the file /report/credentials.json inserted in this
+container by deployment.py:
+
+{ "DOCKER_CI_PUB":       "$(cat docker-ci_ssh_public_key.pub)",
+  "DOCKER_CI_KEY":       "$(cat docker-ci_ssh_private_key.key)",
+  "DOCKER_CI_ADDRESS":   "user@docker-ci_fqdn_server",
+  "SMTP_USER":           "SMTP_server_user",
+  "SMTP_PWD":            "SMTP_server_password",
+  "EMAIL_SENDER":        "Buildbot_mailing_sender",
+  "EMAIL_RCP":           "Buildbot_mailing_receipient" }  '''
+
+import os, re, json, sqlite3, datetime, base64
+import smtplib
+from datetime import timedelta
+from subprocess import call
+from os import environ as env
+
+TODAY = datetime.date.today()
+
+# Load credentials to the environment
+env['CONFIG_JSON'] = base64.b64decode(open('/report/credentials.json').read())
+
+# Remove SSH private key as it needs more processing
+CONFIG = json.loads(re.sub(r'("DOCKER_CI_KEY".+?"(.+?)",)','',
+    env['CONFIG_JSON'], flags=re.DOTALL))
+
+# Populate environment variables
+for key in CONFIG:
+    env[key] = CONFIG[key]
+
+# Load SSH private key
+env['DOCKER_CI_KEY'] = re.sub('^.+"DOCKER_CI_KEY".+?"(.+?)".+','\\1',
+    env['CONFIG_JSON'],flags=re.DOTALL)
+
+# Disable strict host key checking so rsync to docker-ci works on first connection
+os.makedirs('/root/.ssh')
+open('/root/.ssh/id_rsa','w').write(env['DOCKER_CI_KEY'])
+os.chmod('/root/.ssh/id_rsa',0600)
+open('/root/.ssh/config','w').write('StrictHostKeyChecking no\n')
+
+
+# Sync buildbot database from docker-ci
+call('rsync {}:/data/buildbot/master/state.sqlite .'.format(
+    env['DOCKER_CI_ADDRESS']), shell=True)
+
+class SQL:
+    def __init__(self, database_name):
+        sql = sqlite3.connect(database_name)
+        # Use column names as keys for fetchall rows
+        sql.row_factory = sqlite3.Row
+        sql = sql.cursor()
+        self.sql = sql
+
+    def query(self,query_statement):
+        return self.sql.execute(query_statement).fetchall()
+
+sql = SQL("state.sqlite")
+
+
+class Report():
+
+    def __init__(self,period='',date=''):
+        self.data = []
+        self.period = 'date' if not period else period
+        self.date = str(TODAY) if not date else date
+        self.compute()
+
+    def compute(self):
+        '''Compute report'''
+        if self.period == 'week':
+            self.week_report(self.date)
+        else:
+            self.date_report(self.date)
+
+
+    def date_report(self,date):
+        '''Create a date test report'''
+        builds = []
+        # Get a queryset with all builds from date
+        rows = sql.query('SELECT * FROM builds JOIN buildrequests'
+            ' WHERE builds.brid=buildrequests.id and'
+            ' date(start_time, "unixepoch", "localtime") = "{0}"'
+            ' GROUP BY number'.format(date))
+        build_names = sorted(set([row['buildername'] for row in rows]))
+        # Create a report build line for a given build
+        for build_name in build_names:
+            tried = len([row['buildername']
+                for row in rows if row['buildername'] == build_name])
+            fail_tests = [row['buildername'] for row in rows if (
+                row['buildername'] == build_name and row['results'] != 0)]
+            fail = len(fail_tests)
+            fail_details = ''
+            fail_pct = int(100.0*fail/tried) if  tried != 0 else 100
+            builds.append({'name': build_name, 'tried': tried, 'fail': fail,
+                'fail_pct': fail_pct, 'fail_details':fail_details})
+        if builds:
+            self.data.append({'date': date, 'builds': builds})
+
+
+    def week_report(self,date):
+        '''Add the week's date test reports to report.data'''
+        date = datetime.datetime.strptime(date,'%Y-%m-%d').date()
+        last_monday = date - datetime.timedelta(days=date.weekday())
+        week_dates = [last_monday + timedelta(days=x) for x in range(7,-1,-1)]
+        for date in week_dates:
+            self.date_report(str(date))
+
+    def render_text(self):
+        '''Return rendered report in text format'''
+        retval = ''
+        fail_tests = {}
+        for builds in self.data:
+            retval += 'Test date: {0}\n'.format(builds['date'],retval)
+            table = ''
+            for build in builds['builds']:
+                table += ('Build {name:15}   Tried: {tried:4}   '
+                    ' Failures: {fail:4} ({fail_pct}%)\n'.format(**build))
+                if build['name'] in fail_tests:
+                    fail_tests[build['name']] += build['fail_details']
+                else:
+                    fail_tests[build['name']] = build['fail_details']
+            retval += '{0}\n'.format(table)
+            retval += '\n    Builds failing'
+            for fail_name in fail_tests:
+                retval += '\n' + fail_name + '\n'
+                for (fail_id,fail_url,rn_tests,nr_errors,log_errors,
+                 tracelog_errors) in fail_tests[fail_name]:
+                    retval += fail_url + '\n'
+            retval += '\n\n'
+        return retval
+
+
+# Send email
+smtp_from = env['EMAIL_SENDER']
+subject = '[docker-ci] Daily report for {}'.format(str(TODAY))
+msg = "From: {}\r\nTo: {}\r\nSubject: {}\r\n\r\n".format(
+    smtp_from, env['EMAIL_RCP'], subject)
+msg = msg + Report('week').render_text()
+server = smtplib.SMTP_SSL('smtp.mailgun.org')
+server.login(env['SMTP_USER'], env['SMTP_PWD'])
+server.sendmail(smtp_from, env['EMAIL_RCP'], msg)

+ 10 - 2
network_proxy.go

@@ -103,7 +103,11 @@ func (proxy *TCPProxy) Run() {
 	for {
 	for {
 		client, err := proxy.listener.Accept()
 		client, err := proxy.listener.Accept()
 		if err != nil {
 		if err != nil {
-			utils.Errorf("Stopping proxy on tcp/%v for tcp/%v (%v)", proxy.frontendAddr, proxy.backendAddr, err.Error())
+			if utils.IsClosedError(err) {
+				utils.Debugf("Stopping proxy on tcp/%v for tcp/%v (socket was closed)", proxy.frontendAddr, proxy.backendAddr)
+			} else {
+				utils.Errorf("Stopping proxy on tcp/%v for tcp/%v (%v)", proxy.frontendAddr, proxy.backendAddr, err.Error())
+			}
 			return
 			return
 		}
 		}
 		go proxy.clientLoop(client.(*net.TCPConn), quit)
 		go proxy.clientLoop(client.(*net.TCPConn), quit)
@@ -205,7 +209,11 @@ func (proxy *UDPProxy) Run() {
 			// NOTE: Apparently ReadFrom doesn't return
 			// NOTE: Apparently ReadFrom doesn't return
 			// ECONNREFUSED like Read do (see comment in
 			// ECONNREFUSED like Read do (see comment in
 			// UDPProxy.replyLoop)
 			// UDPProxy.replyLoop)
-			utils.Errorf("Stopping proxy on udp/%v for udp/%v (%v)", proxy.frontendAddr, proxy.backendAddr, err.Error())
+			if utils.IsClosedError(err) {
+				utils.Debugf("Stopping proxy on udp/%v for udp/%v (socket was closed)", proxy.frontendAddr, proxy.backendAddr)
+			} else {
+				utils.Errorf("Stopping proxy on udp/%v for udp/%v (%v)", proxy.frontendAddr, proxy.backendAddr, err.Error())
+			}
 			break
 			break
 		}
 		}
 
 

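The proxy change above distinguishes a deliberately closed listener from a real failure via utils.IsClosedError. As a rough sketch of what such a helper looks like (the actual implementation lives in utils/utils.go; the exact error-string match is an assumption here, since the net package does not export its internal errClosing value):

    package utils

    import "strings"

    // IsClosedError reports whether err is the "use of closed network
    // connection" error returned when a socket is closed while another
    // goroutine is still blocked on Accept/Read. The net package does not
    // export this error, so matching the message text is the usual workaround.
    func IsClosedError(err error) bool {
        return err != nil && strings.HasSuffix(err.Error(), "use of closed network connection")
    }
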
+ 3 - 1
runtime.go

@@ -421,7 +421,9 @@ func (runtime *Runtime) Create(config *Config) (*Container, error) {
 	}
 	}
 
 
 	if img.Config != nil {
 	if img.Config != nil {
-		MergeConfig(config, img.Config)
+		if err := MergeConfig(config, img.Config); err != nil {
+			return nil, err
+		}
 	}
 	}
 
 
 	if len(config.Entrypoint) != 0 && config.Cmd == nil {
 	if len(config.Entrypoint) != 0 && config.Cmd == nil {

+ 23 - 0
runtime_test.go

@@ -313,6 +313,29 @@ func TestRuntimeCreate(t *testing.T) {
 	if err == nil {
 	if err == nil {
 		t.Fatal("Builder.Create should throw an error when Cmd is empty")
 		t.Fatal("Builder.Create should throw an error when Cmd is empty")
 	}
 	}
+
+	config := &Config{
+		Image:     GetTestImage(runtime).ID,
+		Cmd:       []string{"/bin/ls"},
+		PortSpecs: []string{"80"},
+	}
+	container, err = runtime.Create(config)
+
+	image, err := runtime.Commit(container, "testrepo", "testtag", "", "", config)
+	if err != nil {
+		t.Error(err)
+	}
+
+	_, err = runtime.Create(
+		&Config{
+			Image:     image.ID,
+			PortSpecs: []string{"80000:80"},
+		},
+	)
+	if err == nil {
+		t.Fatal("Builder.Create should throw an error when PortSpecs is invalid")
+	}
+
 }
 }
 
 
 func TestDestroy(t *testing.T) {
 func TestDestroy(t *testing.T) {

+ 8 - 0
term/term.go

@@ -21,11 +21,19 @@ type Winsize struct {
 func GetWinsize(fd uintptr) (*Winsize, error) {
 func GetWinsize(fd uintptr) (*Winsize, error) {
 	ws := &Winsize{}
 	ws := &Winsize{}
 	_, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, uintptr(syscall.TIOCGWINSZ), uintptr(unsafe.Pointer(ws)))
 	_, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, uintptr(syscall.TIOCGWINSZ), uintptr(unsafe.Pointer(ws)))
+	// Skip errno = 0 (the ioctl succeeded)
+	if err == 0 {
+		return ws, nil
+	}
 	return ws, err
 	return ws, err
 }
 }
 
 
 func SetWinsize(fd uintptr, ws *Winsize) error {
 func SetWinsize(fd uintptr, ws *Winsize) error {
 	_, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, uintptr(syscall.TIOCSWINSZ), uintptr(unsafe.Pointer(ws)))
 	_, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, uintptr(syscall.TIOCSWINSZ), uintptr(unsafe.Pointer(ws)))
+	// Skip errno = 0 (the ioctl succeeded)
+	if err == 0 {
+		return nil
+	}
 	return err
 	return err
 }
 }
 
 

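For context, GetWinsize/SetWinsize are called with a terminal file descriptor. A minimal standalone usage sketch (not code from this commit, and it assumes the Winsize fields are named Height and Width as in the package at this point):

    package main

    import (
        "fmt"
        "os"

        "github.com/dotcloud/docker/term"
    )

    func main() {
        // With the fix above, a zero errno no longer surfaces as a non-nil error.
        ws, err := term.GetWinsize(os.Stdin.Fd())
        if err != nil {
            fmt.Fprintln(os.Stderr, "not a terminal?", err)
            return
        }
        fmt.Printf("terminal is %d rows x %d columns\n", ws.Height, ws.Width)
    }
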
+ 0 - 58
testing/README.rst

@@ -1,58 +0,0 @@
-=======
-testing
-=======
-
-This directory contains testing related files.
-
-
-Buildbot
-========
-
-Buildbot is a continuous integration system designed to automate the
-build/test cycle. By automatically rebuilding and testing the tree each time
-something has changed, build problems are pinpointed quickly, before other
-developers are inconvenienced by the failure.
-
-We are running buildbot in an AWS instance to verify docker passes all tests
-when commits get pushed to the master branch.
-
-You can check docker's buildbot instance at http://docker-ci.dotcloud.com/waterfall
-
-
-Deployment
-~~~~~~~~~~
-
-::
-
-  # Define AWS credential environment variables
-  export AWS_ACCESS_KEY_ID=xxxxxxxxxxxx
-  export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxx
-  export AWS_KEYPAIR_NAME=xxxxxxxxxxxx
-  export AWS_SSH_PRIVKEY=xxxxxxxxxxxx
-
-  # Define email recipient and IRC channel
-  export EMAIL_RCP=xxxxxx@domain.com
-  export IRC_CHANNEL=docker
-
-  # Define buildbot credentials
-  export BUILDBOT_PWD=xxxxxxxxxxxx
-  export IRC_PWD=xxxxxxxxxxxx
-  export SMTP_USER=xxxxxxxxxxxx
-  export SMTP_PWD=xxxxxxxxxxxx
-
-  # Define docker registry functional test credentials
-  export REGISTRY_USER=xxxxxxxxxxxx
-  export REGISTRY_PWD=xxxxxxxxxxxx
-
-  # Checkout docker
-  git clone git://github.com/dotcloud/docker.git
-
-  # Deploy docker on AWS
-  cd docker/testing
-  vagrant up --provider=aws
-
-
-Buildbot AWS dependencies
--------------------------
-
-vagrant, virtualbox packages and vagrant aws plugin

+ 0 - 74
testing/Vagrantfile

@@ -1,74 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-BOX_NAME = "docker-ci"
-BOX_URI = "http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-vagrant-disk1.box"
-AWS_AMI = "ami-10314d79"
-DOCKER_PATH = "/data/docker"
-CFG_PATH = "#{DOCKER_PATH}/testing/buildbot"
-on_vbox = File.file?("#{File.dirname(__FILE__)}/.vagrant/machines/default/virtualbox/id") | \
-  Dir.glob("#{File.dirname(__FILE__)}/.vagrant/machines/default/*/id").empty? & \
-  (on_vbox=true; ARGV.each do |arg| on_vbox &&= !arg.downcase.start_with?("--provider") end; on_vbox)
-USER = on_vbox ? "vagrant": "ubuntu"
-
-Vagrant::Config.run do |config|
-  # Setup virtual machine box. This VM configuration code is always executed.
-  config.vm.box = BOX_NAME
-  config.vm.box_url = BOX_URI
-  config.vm.forward_port 8010, 8010
-  config.vm.share_folder "v-data", DOCKER_PATH, "#{File.dirname(__FILE__)}/.."
-
-
-  # Deploy buildbot and its dependencies if it was not done
-  if Dir.glob("#{File.dirname(__FILE__)}/.vagrant/machines/default/*/id").empty?
-    # Add memory limitation capabilities
-    pkg_cmd = 'sed -Ei \'s/^(GRUB_CMDLINE_LINUX_DEFAULT)=.+/\\1="cgroup_enable=memory swapaccount=1 quiet"/\' /etc/default/grub; '
-    # Adjust kernel
-    pkg_cmd << "apt-get update -qq; "
-    if on_vbox
-      pkg_cmd << "apt-get install -q -y linux-image-extra-`uname -r`; "
-    else
-      pkg_cmd << "apt-get install -q -y linux-image-generic; "
-    end
-
-    # Deploy buildbot CI
-    pkg_cmd << "apt-get install -q -y python-dev python-pip supervisor; " \
-      "pip install -r #{CFG_PATH}/requirements.txt; " \
-      "chown #{USER}.#{USER} /data; cd /data; " \
-      "#{CFG_PATH}/setup.sh #{USER} #{CFG_PATH} #{ENV['BUILDBOT_PWD']} " \
-        "#{ENV['IRC_PWD']} #{ENV['IRC_CHANNEL']} #{ENV['SMTP_USER']} " \
-        "#{ENV['SMTP_PWD']} #{ENV['EMAIL_RCP']}; " \
-      "#{CFG_PATH}/setup_credentials.sh #{USER} " \
-        "#{ENV['REGISTRY_USER']} #{ENV['REGISTRY_PWD']}; "
-    # Install docker and testing dependencies
-    pkg_cmd << "curl -s https://go.googlecode.com/files/go1.1.2.linux-amd64.tar.gz | " \
-      "  tar -v -C /usr/local -xz; ln -s /usr/local/go/bin/go /usr/bin/go; " \
-      "curl -s https://phantomjs.googlecode.com/files/phantomjs-1.9.1-linux-x86_64.tar.bz2 | " \
-      "  tar jx -C /usr/bin --strip-components=2 phantomjs-1.9.1-linux-x86_64/bin/phantomjs; " \
-      "DEBIAN_FRONTEND=noninteractive apt-get install -q -y lxc git mercurial aufs-tools make libfontconfig; " \
-      "export GOPATH=/data/docker-dependencies; go get -d github.com/dotcloud/docker; " \
-      "rm -rf ${GOPATH}/src/github.com/dotcloud/docker; "
-    # Activate new kernel options
-    pkg_cmd << "shutdown -r +1; "
-    config.vm.provision :shell, :inline => pkg_cmd
-  end
-end
-
-# Providers were added on Vagrant >= 1.1.0
-Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
-  config.vm.provider :aws do |aws, override|
-    aws.tags = { 'Name' => 'docker-ci' }
-    aws.access_key_id = ENV["AWS_ACCESS_KEY_ID"]
-    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
-    aws.keypair_name = ENV["AWS_KEYPAIR_NAME"]
-    override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"]
-    override.ssh.username = USER
-    aws.ami = AWS_AMI
-    aws.region = "us-east-1"
-    aws.instance_type = "m1.small"
-    aws.security_groups = "gateway"
-  end
-
-  config.vm.provider :virtualbox do |vb|
-  end
-end

+ 0 - 1
testing/buildbot/README.rst

@@ -1 +0,0 @@
-Buildbot configuration and setup files (except Vagrantfile located on ..)

Some files were not shown because too many files changed in this diff