diff --git a/.mailmap b/.mailmap index 00b698bba0..8a4500c76d 100644 --- a/.mailmap +++ b/.mailmap @@ -1,4 +1,4 @@ -# Generate AUTHORS: project/generate-authors.sh +# Generate AUTHORS: hack/generate-authors.sh # Tip for finding duplicates (besides scanning the output of AUTHORS for name # duplicates that aren't also email duplicates): scan the output of: diff --git a/AUTHORS b/AUTHORS index e6ec5d00f7..88fff3aa0a 100644 --- a/AUTHORS +++ b/AUTHORS @@ -1,5 +1,5 @@ # This file lists all individuals having contributed content to the repository. -# For how it is generated, see `project/generate-authors.sh`. +# For how it is generated, see `hack/generate-authors.sh`. Aanand Prasad Aaron Feng diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index dfa6dee076..e6bf6ad5f3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,70 +1,60 @@ # Contributing to Docker -Want to hack on Docker? Awesome! Here are instructions to get you -started. They are probably not perfect; please let us know if anything -feels wrong or incomplete. +Want to hack on Docker? Awesome! We have a contributor's guide that explains +[setting up a Docker development environment and the contribution +process](https://docs.docker.com/project/who-written-for/). + +![Contributors guide](docs/sources/static_files/contributors.png) + +This page contains information about reporting issues as well as some tips and +guidelines useful to experienced open source contributors. Finally, make sure +you read our [community guidelines](#docker-community-guidelines) before you +start participating. 
## Topics

* [Reporting Security Issues](#reporting-security-issues)
* [Design and Cleanup Proposals](#design-and-cleanup-proposals)
-* [Reporting Issues](#reporting-issues)
-* [Build Environment](#build-environment)
-* [Contribution Guidelines](#contribution-guidelines)
+* [Reporting Issues](#reporting-other-issues)
+* [Quick Contribution Tips and Guidelines](#quick-contribution-tips-and-guidelines)
* [Community Guidelines](#docker-community-guidelines)

-## Reporting Security Issues
+## Reporting security issues

-The Docker maintainers take security very seriously. If you discover a security issue,
-please bring it to their attention right away!
+The Docker maintainers take security seriously. If you discover a security
+issue, please bring it to their attention right away!

-Please send your report privately to [security@docker.com](mailto:security@docker.com),
-please **DO NOT** file a public issue.
+Please **DO NOT** file a public issue; instead, send your report privately to
+[security@docker.com](mailto:security@docker.com).

-Security reports are greatly appreciated and we will publicly thank you for it. We also
-like to send gifts - if you're into Docker shwag make sure to let us know :)
-We currently do not offer a paid security bounty program, but are not ruling it out in
-the future.
+Security reports are greatly appreciated and we will publicly thank you for it.
+We also like to send gifts; if you're into Docker schwag, make sure to let
+us know. We currently do not offer a paid security bounty program, but are not
+ruling it out in the future.

-## Design and Cleanup Proposals
-When considering a design proposal, we are looking for:
-
-* A description of the problem this design proposal solves
-* A pull request, not an issue, that modifies the documentation describing
-  the feature you are proposing, adding new documentation if necessary.
- * Please prefix your issue with `Proposal:` in the title -* Please review [the existing Proposals](https://github.com/docker/docker/pulls?q=is%3Aopen+is%3Apr+label%3AProposal) - before reporting a new one. You can always pair with someone if you both - have the same idea. - -When considering a cleanup task, we are looking for: - -* A description of the refactors made - * Please note any logic changes if necessary -* A pull request with the code - * Please prefix your PR's title with `Cleanup:` so we can quickly address it. - * Your pull request must remain up to date with master, so rebase as necessary. - -## Reporting Issues +## Reporting other issues A great way to contribute to the project is to send a detailed report when you encounter an issue. We always appreciate a well-written, thorough bug report, and will thank you for it! -When reporting [issues](https://github.com/docker/docker/issues) on -GitHub please include your host OS (Ubuntu 12.04, Fedora 19, etc). -Please include: +Check that [our issue database](https://github.com/docker/docker/issues) +doesn't already include that problem or suggestion before submitting an issue. +If you find a match, add a quick "+1" or "I have this problem too." Doing this +helps prioritize the most common problems and requests. + +When reporting issues, please include your host OS (Ubuntu 12.04, Fedora 19, +etc). Please include: * The output of `uname -a`. * The output of `docker version`. * The output of `docker -D info`. -Please also include the steps required to reproduce the problem if -possible and applicable. This information will help us review and fix -your issue faster. +Please also include the steps required to reproduce the problem if possible and +applicable. This information will help us review and fix your issue faster. 
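The three commands above can be bundled into one file to paste into the report. A minimal sketch; it assumes the `docker` CLI is on your `PATH`, the output filename is arbitrary, and the `|| true` guards only keep the script going if the daemon is unreachable:

```shell
# Gather the requested diagnostics into a single file for the issue report.
# "docker-issue-report.txt" is a made-up name; pick whatever you like.
{
  echo '== uname -a =='
  uname -a
  echo '== docker version =='
  docker version 2>&1 || true
  echo '== docker -D info =='
  docker -D info 2>&1 || true
} > docker-issue-report.txt
```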
-### Template
+**Issue Report Template**:

```
Description of problem:
@@ -103,123 +93,165 @@ Additional info:
```

-## Build Environment
-For instructions on setting up your development environment, please
-see our dedicated [dev environment setup
-docs](http://docs.docker.com/contributing/devenvironment/).
+## Quick contribution tips and guidelines

-## Contribution guidelines
+This section gives the experienced contributor some tips and guidelines.

-### Pull requests are always welcome
+### Pull requests are always welcome

-We are always thrilled to receive pull requests, and do our best to
-process them as quickly as possible. Not sure if that typo is worth a pull
-request? Do it! We will appreciate it.
+Not sure if that typo is worth a pull request? Found a bug and know how to fix
+it? Do it! We will appreciate it. Any significant improvement should be
+documented as [a GitHub issue](https://github.com/docker/docker/issues) before
+anybody starts working on it.

-If your pull request is not accepted on the first try, don't be
-discouraged! If there's a problem with the implementation, hopefully you
-received feedback on what to improve.
+We are always thrilled to receive pull requests. We do our best to process them
+quickly. If your pull request is not accepted on the first try,
+don't get discouraged! Our contributor's guide explains [the review process we
+use for simple changes](https://docs.docker.com/project/make-a-contribution/).

-We're trying very hard to keep Docker lean and focused. We don't want it
-to do everything for everybody. This means that we might decide against
-incorporating a new feature. However, there might be a way to implement
-that feature *on top of* Docker.
+### Design and cleanup proposals

-### Discuss your design on the mailing list
+You can propose new designs for existing Docker features. You can also design
+entirely new features. We really appreciate contributors who want to refactor or
+otherwise clean up our project.
For information on making these types of +contributions, see [the advanced contribution +section](https://docs.docker.com/project/advanced-contributing/) in the +contributors guide. -We recommend discussing your plans [on the mailing -list](https://groups.google.com/forum/?fromgroups#!forum/docker-dev) -before starting to code - especially for more ambitious contributions. -This gives other contributors a chance to point you in the right -direction, give feedback on your design, and maybe point out if someone -else is working on the same thing. +We try hard to keep Docker lean and focused. Docker can't do everything for +everybody. This means that we might decide against incorporating a new feature. +However, there might be a way to implement that feature *on top of* Docker. -### Create issues... +### Talking to other Docker users and contributors -Any significant improvement should be documented as [a GitHub -issue](https://github.com/docker/docker/issues) before anybody -starts working on it. + + + + + + + + + + + + + + + + + + +
Internet Relay Chat (IRC) + +

+ IRC is a direct line to our most knowledgeable Docker users; we have
+ both the #docker and #docker-dev groups on
+ irc.freenode.net.
+ IRC is a rich chat protocol but it can overwhelm new users. You can search
+ our chat archives.
+

+ Read our IRC quickstart guide for an easy way to get started. +
Google Groups
+ There are two groups.
+ Docker-user
+ is for people using Docker containers.
+ The docker-dev
+ group is for contributors and others working on the Docker
+ project.
+
Twitter + You can follow Docker's Twitter feed + to get updates on our products. You can also tweet us questions or just + share blogs or stories. +
Stack Overflow
+ Stack Overflow has over 7000 Docker questions listed. We regularly
+ monitor Docker questions
+ and so do many other knowledgeable Docker users.
+
-### ...but check for existing issues first! - -Please take a moment to check that an issue doesn't already exist -documenting your bug report or improvement proposal. If it does, it -never hurts to add a quick "+1" or "I have this problem too". This will -help prioritize the most common problems and requests. ### Conventions Fork the repository and make changes on your fork in a feature branch: -- If it's a bug fix branch, name it XXXX-something where XXXX is the number of the - issue. -- If it's a feature branch, create an enhancement issue to announce your - intentions, and name it XXXX-something where XXXX is the number of the issue. +- If it's a bug fix branch, name it XXXX-something where XXXX is the number of + the issue. +- If it's a feature branch, create an enhancement issue to announce + your intentions, and name it XXXX-something where XXXX is the number of the + issue. -Submit unit tests for your changes. Go has a great test framework built in; use -it! Take a look at existing tests for inspiration. Run the full test suite on -your branch before submitting a pull request. +Submit unit tests for your changes. Go has a great test framework built in; use +it! Take a look at existing tests for inspiration. [Run the full test +suite](https://docs.docker.com/project/test-and-docs/) on your branch before +submitting a pull request. -Update the documentation when creating or modifying features. Test -your documentation changes for clarity, concision, and correctness, as -well as a clean documentation build. See `docs/README.md` for more -information on building the docs and how they get released. +Update the documentation when creating or modifying features. Test your +documentation changes for clarity, concision, and correctness, as well as a +clean documentation build. 
See our contributors guide for [our style +guide](https://docs.docker.com/project/doc-style) and instructions on [building +the documentation](https://docs.docker.com/project/test-and-docs/#build-and-test-the-documentation). Write clean code. Universally formatted code promotes ease of writing, reading, and maintenance. Always run `gofmt -s -w file.go` on each changed file before committing your changes. Most editors have plug-ins that do this automatically. -Pull requests descriptions should be as clear as possible and include a -reference to all the issues that they address. +Pull request descriptions should be as clear as possible and include a reference +to all the issues that they address. -Commit messages must start with a capitalized and short summary (max. 50 -chars) written in the imperative, followed by an optional, more detailed -explanatory text which is separated from the summary by an empty line. +Commit messages must start with a capitalized and short summary (max. 50 chars) +written in the imperative, followed by an optional, more detailed explanatory +text which is separated from the summary by an empty line. Code review comments may be added to your pull request. Discuss, then make the -suggested modifications and push additional commits to your feature branch. Be -sure to post a comment after pushing. The new commits will show up in the pull -request automatically, but the reviewers will not be notified unless you -comment. +suggested modifications and push additional commits to your feature branch. Post +a comment after pushing. New commits show up in the pull request automatically, +but the reviewers are notified only when you comment. -Pull requests must be cleanly rebased ontop of master without multiple branches +Pull requests must be cleanly rebased on top of master without multiple branches mixed into the PR. 
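The branch-naming and rebase conventions above can be walked through in a scratch repository. A sketch only: `1234-fix-typo` is a made-up branch name standing in for issue number 1234, and the name, email, and file are placeholders:

```shell
# Throwaway demo: branch named after the issue, kept current via rebase,
# one logical commit ready for the pull request.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master         # name the base branch master regardless of defaults
git config user.name "Demo Contributor"
git config user.email "demo@example.com"
git commit -q --allow-empty -m "Initial commit"
git checkout -q -b 1234-fix-typo                # bug-fix branch named after issue 1234
echo "fixed" > typo.txt
git add typo.txt
git commit -q -m "Fix typo in CONTRIBUTING.md"
git rebase -q master                            # update against master (never merge master)
# To squash several commits into one logical unit you would then run:
#   git rebase -i master && git push -f
git log --oneline master..                      # the single commit going into the PR
```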
**Git tip**: If your PR no longer merges cleanly, use `rebase master` in your
feature branch to update your pull request rather than `merge master`.

-Before the pull request is merged, make sure that you squash your commits into
-logical units of work using `git rebase -i` and `git push -f`. After every
-commit the test suite should be passing. Include documentation changes in the
-same commit so that a revert would remove all traces of the feature or fix.
+Before you make a pull request, squash your commits into logical units of work
+using `git rebase -i` and `git push -f`. A logical unit of work is a consistent
+set of patches that should be reviewed together: for example, upgrading the
+version of a vendored dependency and taking advantage of its now available new
+feature constitute two separate units of work. Implementing a new function and
+calling it in another file constitute a single logical unit of work. The vast
+majority of submissions should have a single commit, so if in doubt: squash
+down to one.

-Commits that fix or close an issue should include a reference like
-`Closes #XXXX` or `Fixes #XXXX`, which will automatically close the
-issue when merged.
+After every commit, [make sure the test suite
+passes](https://docs.docker.com/project/test-and-docs/). Include documentation
+changes in the same pull request so that a revert would remove all traces of
+the feature or fix.

-Please do not add yourself to the `AUTHORS` file, as it is regenerated
-regularly from the Git history.
+Include an issue reference like `Closes #XXXX` or `Fixes #XXXX` in commits that
+close an issue. Including references automatically closes the issue on a merge.
+
+Please do not add yourself to the `AUTHORS` file, as it is regenerated regularly
+from the Git history.

### Merge approval

-Docker maintainers use LGTM (Looks Good To Me) in comments on the code review
-to indicate acceptance.
+Docker maintainers use LGTM (Looks Good To Me) in comments on the code review to +indicate acceptance. A change requires LGTMs from an absolute majority of the maintainers of each component affected. For example, if a change affects `docs/` and `registry/`, it needs an absolute majority from the maintainers of `docs/` AND, separately, an absolute majority of the maintainers of `registry/`. -For more details see [MAINTAINERS](MAINTAINERS) +For more details, see the [MAINTAINERS](MAINTAINERS) page. ### Sign your work -The sign-off is a simple line at the end of the explanation for the -patch, which certifies that you wrote it or otherwise have the right to -pass it on as an open-source patch. The rules are pretty simple: if you -can certify the below (from -[developercertificate.org](http://developercertificate.org/)): +The sign-off is a simple line at the end of the explanation for the patch. Your +signature certifies that you wrote the patch or otherwise have the right to pass +it on as an open-source patch. The rules are pretty simple: if you can certify +the below (from [developercertificate.org](http://developercertificate.org/)): ``` Developer Certificate of Origin @@ -263,7 +295,7 @@ Then you just add a line to every git commit message: Signed-off-by: Joe Smith -Using your real name (sorry, no pseudonyms or anonymous contributions.) +Use your real name (sorry, no pseudonyms or anonymous contributions.) If you set your `user.name` and `user.email` git configs, you can sign your commit automatically with `git commit -s`. @@ -280,45 +312,45 @@ format right away, but please do adjust your processes for future contributions. * Step 4: Propose yourself at a scheduled docker meeting in #docker-dev Don't forget: being a maintainer is a time investment. Make sure you -will have time to make yourself available. You don't have to be a +will have time to make yourself available. You don't have to be a maintainer to make a difference on the project! 
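The sign-off described above can be tried out in a throwaway repository. A minimal sketch; the name and email are placeholders for your own:

```shell
# `git commit -s` appends the Signed-off-by trailer from your git config.
set -e
cd "$(mktemp -d)"
git init -q
git config user.name "Joe Smith"
git config user.email "joe.smith@example.com"
git commit -q --allow-empty -s -m "Demonstrate the DCO sign-off"
git log -1 --format=%B    # last line: Signed-off-by: Joe Smith <joe.smith@example.com>
```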
-### IRC Meetings
+### IRC meetings

-There are two monthly meetings taking place on #docker-dev IRC to accomodate all timezones.
-Anybody can ask for a topic to be discussed prior to the meeting.
+There are two monthly meetings taking place on #docker-dev IRC to accommodate
+all timezones. Anybody can propose a topic for discussion prior to the meeting.

If you feel the conversation is going off-topic, feel free to point it out.

-For the exact dates and times, have a look at [the irc-minutes repo](https://github.com/docker/irc-minutes).
-They also contain all the notes from previous meetings.
+For the exact dates and times, have a look at [the irc-minutes
+repo](https://github.com/docker/irc-minutes). The minutes also contain all the
+notes from previous meetings.

-## Docker Community Guidelines
+## Docker community guidelines

-We want to keep the Docker community awesome, growing and collaborative. We
-need your help to keep it that way. To help with this we've come up with some
-general guidelines for the community as a whole:
+We want to keep the Docker community awesome, growing and collaborative. We need
+your help to keep it that way. To help with this we've come up with some general
+guidelines for the community as a whole:

-* Be nice: Be courteous, respectful and polite to fellow community members: no
-  regional, racial, gender, or other abuse will be tolerated. We like nice people
-  way better than mean ones!
+* Be nice: Be courteous, respectful and polite to fellow community members:
+  no regional, racial, gender, or other abuse will be tolerated. We like
+  nice people way better than mean ones!
-* Encourage diversity and participation: Make everyone in our community - feel welcome, regardless of their background and the extent of their +* Encourage diversity and participation: Make everyone in our community feel + welcome, regardless of their background and the extent of their contributions, and do everything possible to encourage participation in our community. * Keep it legal: Basically, don't get us in trouble. Share only content that - you own, do not share private or sensitive information, and don't break the - law. + you own, do not share private or sensitive information, and don't break + the law. -* Stay on topic: Make sure that you are posting to the correct channel - and avoid off-topic discussions. Remember when you update an issue or - respond to an email you are potentially sending to a large number of - people. Please consider this before you update. Also remember that - nobody likes spam. +* Stay on topic: Make sure that you are posting to the correct channel and + avoid off-topic discussions. Remember when you update an issue or respond + to an email you are potentially sending to a large number of people. Please + consider this before you update. Also remember that nobody likes spam. -### Guideline Violations — 3 Strikes Method +### Guideline violations — 3 strikes method The point of this section is not to find opportunities to punish people, but we do need a fair way to deal with people who are making our community suck. @@ -337,20 +369,19 @@ do need a fair way to deal with people who are making our community suck. * Obvious spammers are banned on first occurrence. If we don't do this, we'll have spam all over the place. -* Violations are forgiven after 6 months of good behavior, and we won't - hold a grudge. +* Violations are forgiven after 6 months of good behavior, and we won't hold a + grudge. -* People who commit minor infractions will get some education, - rather than hammering them in the 3 strikes process. 
+* People who commit minor infractions will get some education, rather than + hammering them in the 3 strikes process. -* The rules apply equally to everyone in the community, no matter how - much you've contributed. +* The rules apply equally to everyone in the community, no matter how much + you've contributed. * Extreme violations of a threatening, abusive, destructive or illegal nature - will be addressed immediately and are not subject to 3 strikes or - forgiveness. + will be addressed immediately and are not subject to 3 strikes or forgiveness. * Contact abuse@docker.com to report abuse or appeal violations. In the case of - appeals, we know that mistakes happen, and we'll work with you to come up with - a fair solution if there has been a misunderstanding. + appeals, we know that mistakes happen, and we'll work with you to come up with a + fair solution if there has been a misunderstanding. diff --git a/Dockerfile b/Dockerfile index c3d1c246bf..b064076137 100644 --- a/Dockerfile +++ b/Dockerfile @@ -107,11 +107,8 @@ RUN go get golang.org/x/tools/cmd/cover # TODO replace FPM with some very minimal debhelper stuff RUN gem install --no-rdoc --no-ri fpm --version 1.3.2 -# Get the "busybox" image source so we can build locally instead of pulling -RUN git clone -b buildroot-2014.02 https://github.com/jpetazzo/docker-busybox.git /docker-busybox - # Install registry -ENV REGISTRY_COMMIT c448e0416925a9876d5576e412703c9b8b865e19 +ENV REGISTRY_COMMIT d957768537c5af40e4f4cd96871f7b2bde9e2923 RUN set -x \ && git clone https://github.com/docker/distribution.git /go/src/github.com/docker/distribution \ && (cd /go/src/github.com/docker/distribution && git checkout -q $REGISTRY_COMMIT) \ @@ -145,6 +142,13 @@ ENV DOCKER_BUILDTAGS apparmor selinux btrfs_noversion # Let us use a .bashrc file RUN ln -sfv $PWD/.bashrc ~/.bashrc +# Get useful and necessary Hub images so we can "docker load" locally instead of pulling +COPY contrib/download-frozen-image.sh 
/go/src/github.com/docker/docker/contrib/ +RUN ./contrib/download-frozen-image.sh /docker-frozen-images \ + busybox:latest@4986bf8c15363d1c5d15512d5266f8777bfba4974ac56e3270e7760f6f0a8125 \ + hello-world:frozen@e45a5af57b00862e5ef5782a9925979a02ba2b12dff832fd0991335f4a11e5c5 +# see also "hack/make/.ensure-frozen-images" (which needs to be updated any time this list is) + # Install man page generator COPY vendor /go/src/github.com/docker/docker/vendor # (copy vendor/ because go-md2man needs golang.org/x/net) diff --git a/Dockerfile.simple b/Dockerfile.simple new file mode 100644 index 0000000000..12ee7dde30 --- /dev/null +++ b/Dockerfile.simple @@ -0,0 +1,34 @@ +# docker build -t docker:simple -f Dockerfile.simple . +# docker run --rm docker:simple hack/make.sh dynbinary +# docker run --rm --privileged docker:simple hack/dind hack/make.sh test-unit +# docker run --rm --privileged -v /var/lib/docker docker:simple hack/dind hack/make.sh dynbinary test-integration-cli + +# This represents the bare minimum required to build and test Docker. + +FROM debian:jessie + +# compile and runtime deps +# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies +# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies +RUN apt-get update && apt-get install -y --no-install-recommends \ + btrfs-tools \ + curl \ + gcc \ + git \ + golang \ + libdevmapper-dev \ + libsqlite3-dev \ + \ + ca-certificates \ + e2fsprogs \ + iptables \ + procps \ + xz-utils \ + \ + aufs-tools \ + lxc \ + && rm -rf /var/lib/apt/lists/* + +ENV AUTO_GOPATH 1 +WORKDIR /usr/src/docker +COPY . /usr/src/docker diff --git a/MAINTAINERS b/MAINTAINERS index 1c2c969fef..04951bf459 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -193,13 +193,18 @@ for each. # They should ask for any editorial change that makes the documentation more # consistent and easier to understand. 
# - # Once documentation is approved, a maintainer should make sure to remove this + # Once documentation is approved (see below), a maintainer should make sure to remove this # label and add the next one. close = "" 2-code-review = "requires more code changes" 1-design-review = "raises design concerns" 4-merge = "general case" + + # Docs approval + [Rules.review.docs-approval] + # Changes and additions to docs must be reviewed and approved (LGTM'd) by a minimum of two docs sub-project maintainers. + # If the docs change originates with a docs maintainer, only one additional LGTM is required (since we assume a docs maintainer approves of their own PR). # Merge [Rules.review.states.4-merge] @@ -424,7 +429,10 @@ made through a pull request. "dmp42", "vbatts", "joffrey", - "samalba" + "samalba", + "sday", + "jlhawn", + "dmcg" ] [Org.Subsystems."build tools"] @@ -502,6 +510,16 @@ made through a pull request. Email = "dug@us.ibm.com" GitHub = "duglin" + [people.dmcg] + Name = "Derek McGowan" + Email = "derek@docker.com" + Github = "dmcgowan" + + [people.dmp42] + Name = "Olivier Gambier" + Email = "olivier@docker.com" + Github = "dmp42" + [people.ehazlett] Name = "Evan Hazlett" Email = "ejhazlett@gmail.com" @@ -522,6 +540,11 @@ made through a pull request. Email = "estesp@linux.vnet.ibm.com" GitHub = "estesp" + [people.fredlf] + Name = "Fred Lifton" + Email = "fred.lifton@docker.com" + GitHub = "fredlf" + [people.icecrime] Name = "Arnaud Porterie" Email = "arnaud@docker.com" @@ -532,6 +555,16 @@ made through a pull request. Email = "jess@docker.com" GitHub = "jfrazelle" + [people.jlhawn] + Name = "Josh Hawn" + Email = "josh.hawn@docker.com" + Github = "jlhawn" + + [people.joffrey] + Name = "Joffrey Fuhrer" + Email = "joffrey@docker.com" + Github = "shin-" + [people.lk4d4] Name = "Alexander Morozov" Email = "lk4d4@docker.com" @@ -542,6 +575,11 @@ made through a pull request. 
Email = "mary.anthony@docker.com" GitHub = "moxiegirl" + [people.sday] + Name = "Stephen Day" + Email = "stephen.day@docker.com" + Github = "stevvooe" + [people.shykes] Name = "Solomon Hykes" Email = "solomon@docker.com" diff --git a/Makefile b/Makefile index 1c71e00fad..9bf1b16c94 100644 --- a/Makefile +++ b/Makefile @@ -86,11 +86,11 @@ build: bundles docker build -t "$(DOCKER_IMAGE)" . docs-build: - git fetch https://github.com/docker/docker.git docs && git diff --name-status FETCH_HEAD...HEAD -- docs > docs/changed-files cp ./VERSION docs/VERSION echo "$(GIT_BRANCH)" > docs/GIT_BRANCH # echo "$(AWS_S3_BUCKET)" > docs/AWS_S3_BUCKET echo "$(GITCOMMIT)" > docs/GITCOMMIT + docker pull docs/base docker build -t "$(DOCKER_DOCS_IMAGE)" docs bundles: diff --git a/README.md b/README.md index 2404704ce9..079713ced9 100644 --- a/README.md +++ b/README.md @@ -183,12 +183,14 @@ Contributing to Docker [![Jenkins Build Status](https://jenkins.dockerproject.com/job/Docker%20Master/badge/icon)](https://jenkins.dockerproject.com/job/Docker%20Master/) Want to hack on Docker? Awesome! We have [instructions to help you get -started](CONTRIBUTING.md). If you'd like to contribute to the -documentation, please take a look at this [README.md](https://github.com/docker/docker/blob/master/docs/README.md). +started contributing code or documentation.](https://docs.docker.com/project/who-written-for/). These instructions are probably not perfect, please let us know if anything feels wrong or incomplete. Better yet, submit a PR and improve them yourself. +Getting the development builds +============================== + Want to run Docker from a master build? You can download master builds at [master.dockerproject.com](https://master.dockerproject.com). They are updated with each commit merged into the master branch. @@ -233,8 +235,8 @@ Docker platform to broaden its application and utility. 
If you know of another project underway that should be listed here, please help us keep this list up-to-date by submitting a PR. -* [Docker Registry](https://github.com/docker/docker-registry): Registry -server for Docker (hosting/delivering of repositories and images) +* [Docker Registry](https://github.com/docker/distribution): Registry +server for Docker (hosting/delivery of repositories and images) * [Docker Machine](https://github.com/docker/machine): Machine management for a container-centric world * [Docker Swarm](https://github.com/docker/swarm): A Docker-native clustering diff --git a/api/MAINTAINERS b/api/MAINTAINERS deleted file mode 100644 index 96abeae570..0000000000 --- a/api/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Victor Vieux (@vieux) -Jessie Frazelle (@jfrazelle) diff --git a/api/client/cli.go b/api/client/cli.go index 4c5eb2d0d6..fcf6c033fb 100644 --- a/api/client/cli.go +++ b/api/client/cli.go @@ -93,10 +93,13 @@ func (cli *DockerCli) Subcmd(name, signature, description string, exitOnError bo flags := flag.NewFlagSet(name, errorHandling) flags.Usage = func() { options := "" - if flags.FlagCountUndeprecated() > 0 { - options = "[OPTIONS] " + if signature != "" { + signature = " " + signature } - fmt.Fprintf(cli.out, "\nUsage: docker %s %s%s\n\n%s\n\n", name, options, signature, description) + if flags.FlagCountUndeprecated() > 0 { + options = " [OPTIONS]" + } + fmt.Fprintf(cli.out, "\nUsage: docker %s%s%s\n\n%s\n\n", name, options, signature, description) flags.SetOutput(cli.out) flags.PrintDefaults() os.Exit(0) diff --git a/api/client/commands.go b/api/client/commands.go index a4835bbeb3..839676d276 100644 --- a/api/client/commands.go +++ b/api/client/commands.go @@ -37,8 +37,10 @@ import ( "github.com/docker/docker/pkg/fileutils" "github.com/docker/docker/pkg/homedir" flag "github.com/docker/docker/pkg/mflag" + "github.com/docker/docker/pkg/networkfs/resolvconf" "github.com/docker/docker/pkg/parsers" 
"github.com/docker/docker/pkg/parsers/filters" + "github.com/docker/docker/pkg/progressreader" "github.com/docker/docker/pkg/promise" "github.com/docker/docker/pkg/signal" "github.com/docker/docker/pkg/symlink" @@ -87,7 +89,11 @@ func (cli *DockerCli) CmdBuild(args ...string) error { rm := cmd.Bool([]string{"#rm", "-rm"}, true, "Remove intermediate containers after a successful build") forceRm := cmd.Bool([]string{"-force-rm"}, false, "Always remove intermediate containers") pull := cmd.Bool([]string{"-pull"}, false, "Always attempt to pull a newer version of the image") - dockerfileName := cmd.String([]string{"f", "-file"}, "", "Name of the Dockerfile(Default is 'Dockerfile')") + dockerfileName := cmd.String([]string{"f", "-file"}, "", "Name of the Dockerfile (Default is 'PATH/Dockerfile')") + flMemoryString := cmd.String([]string{"m", "-memory"}, "", "Memory limit") + flMemorySwap := cmd.String([]string{"-memory-swap"}, "", "Total memory (memory + swap), '-1' to disable swap") + flCpuShares := cmd.Int64([]string{"c", "-cpu-shares"}, 0, "CPU shares (relative weight)") + flCpuSetCpus := cmd.String([]string{"-cpuset-cpus"}, "", "CPUs in which to allow execution (0-3, 0,1)") cmd.Require(flag.Exact, 1) @@ -231,7 +237,36 @@ func (cli *DockerCli) CmdBuild(args ...string) error { // FIXME: ProgressReader shouldn't be this annoying to use if context != nil { sf := utils.NewStreamFormatter(false) - body = utils.ProgressReader(context, 0, cli.out, sf, true, "", "Sending build context to Docker daemon") + body = progressreader.New(progressreader.Config{ + In: context, + Out: cli.out, + Formatter: sf, + NewLines: true, + ID: "", + Action: "Sending build context to Docker daemon", + }) + } + + var memory int64 + if *flMemoryString != "" { + parsedMemory, err := units.RAMInBytes(*flMemoryString) + if err != nil { + return err + } + memory = parsedMemory + } + + var memorySwap int64 + if *flMemorySwap != "" { + if *flMemorySwap == "-1" { + memorySwap = -1 + } else { + 
parsedMemorySwap, err := units.RAMInBytes(*flMemorySwap) + if err != nil { + return err + } + memorySwap = parsedMemorySwap + } } // Send the build context v := &url.Values{} @@ -274,6 +309,11 @@ func (cli *DockerCli) CmdBuild(args ...string) error { v.Set("pull", "1") } + v.Set("cpusetcpus", *flCpuSetCpus) + v.Set("cpushares", strconv.FormatInt(*flCpuShares, 10)) + v.Set("memory", strconv.FormatInt(memory, 10)) + v.Set("memswap", strconv.FormatInt(memorySwap, 10)) + v.Set("dockerfile", *dockerfileName) cli.LoadConfigFile() @@ -344,6 +384,7 @@ func (cli *DockerCli) CmdLogin(args ...string) error { if username == "" { promptDefault("Username", authconfig.Username) username = readInput(cli.in, cli.out) + username = strings.Trim(username, " ") if username == "" { username = authconfig.Username } @@ -409,6 +450,8 @@ func (cli *DockerCli) CmdLogin(args ...string) error { return err } registry.SaveConfig(cli.configFile) + fmt.Fprintf(cli.out, "WARNING: login credentials saved in %s.\n", path.Join(homedir.Get(), registry.CONFIGFILE)) + if out2.Get("Status") != "" { fmt.Fprintf(cli.out, "%s\n", out2.Get("Status")) } @@ -577,6 +620,14 @@ func (cli *DockerCli) CmdInfo(args ...string) error { if remoteInfo.Exists("NGoroutines") { fmt.Fprintf(cli.out, "Goroutines: %d\n", remoteInfo.GetInt("NGoroutines")) } + if remoteInfo.Exists("SystemTime") { + t, err := remoteInfo.GetTime("SystemTime") + if err != nil { + log.Errorf("Error reading system time: %v", err) + } else { + fmt.Fprintf(cli.out, "System Time: %s\n", t.Format(time.UnixDate)) + } + } if remoteInfo.Exists("NEventsListener") { fmt.Fprintf(cli.out, "EventsListeners: %d\n", remoteInfo.GetInt("NEventsListener")) } @@ -590,7 +641,15 @@ func (cli *DockerCli) CmdInfo(args ...string) error { fmt.Fprintf(cli.out, "Docker Root Dir: %s\n", root) } } - + if remoteInfo.Exists("HttpProxy") { + fmt.Fprintf(cli.out, "Http Proxy: %s\n", remoteInfo.Get("HttpProxy")) + } + if remoteInfo.Exists("HttpsProxy") { + fmt.Fprintf(cli.out, 
"Https Proxy: %s\n", remoteInfo.Get("HttpsProxy")) + } + if remoteInfo.Exists("NoProxy") { + fmt.Fprintf(cli.out, "No Proxy: %s\n", remoteInfo.Get("NoProxy")) + } if len(remoteInfo.GetList("IndexServerAddress")) != 0 { cli.LoadConfigFile() u := cli.configFile.Configs[remoteInfo.Get("IndexServerAddress")].Username @@ -695,7 +754,7 @@ func (cli *DockerCli) CmdStart(args ...string) error { cErr chan error tty bool - cmd = cli.Subcmd("start", "CONTAINER [CONTAINER...]", "Restart a stopped container", true) + cmd = cli.Subcmd("start", "CONTAINER [CONTAINER...]", "Start one or more stopped containers", true) attach = cmd.Bool([]string{"a", "-attach"}, false, "Attach STDOUT/STDERR and forward signals") openStdin = cmd.Bool([]string{"i", "-interactive"}, false, "Attach container's STDIN") ) @@ -704,6 +763,16 @@ func (cli *DockerCli) CmdStart(args ...string) error { utils.ParseFlags(cmd, args, true) hijacked := make(chan io.Closer) + // Block the return until the chan gets closed + defer func() { + log.Debugf("CmdStart() returned, defer waiting for hijack to finish.") + if _, ok := <-hijacked; ok { + log.Errorf("Hijack did not finish (chan still open)") + } + if *openStdin || *attach { + cli.in.Close() + } + }() if *attach || *openStdin { if cmd.NArg() > 1 { @@ -760,25 +829,26 @@ func (cli *DockerCli) CmdStart(args ...string) error { return err } } - var encounteredError error for _, name := range cmd.Args() { _, _, err := readBody(cli.call("POST", "/containers/"+name+"/start", nil, false)) if err != nil { if !*attach && !*openStdin { + // attach and openStdin are false, so we may be starting multiple containers; + // when a container fails to start, show the error message and continue with the next fmt.Fprintf(cli.err, "%s\n", err) + encounteredError = fmt.Errorf("Error: failed to start one or more containers") + } else { + encounteredError = err } - encounteredError = fmt.Errorf("Error: failed to start one or more containers") } else { if !*attach && !*openStdin {
fmt.Fprintf(cli.out, "%s\n", name) } } } + if encounteredError != nil { - if *openStdin || *attach { - cli.in.Close() - } return encounteredError } @@ -881,7 +951,7 @@ func (cli *DockerCli) CmdInspect(args ...string) error { obj, _, err := readBody(cli.call("GET", "/containers/"+name+"/json", nil, false)) if err != nil { if strings.Contains(err.Error(), "Too many") { - fmt.Fprintf(cli.err, "Error: %s", err.Error()) + fmt.Fprintf(cli.err, "Error: %v", err) status = 1 continue } @@ -1273,7 +1343,7 @@ func (cli *DockerCli) CmdPush(args ...string) error { } func (cli *DockerCli) CmdPull(args ...string) error { - cmd := cli.Subcmd("pull", "NAME[:TAG]", "Pull an image or a repository from the registry", true) + cmd := cli.Subcmd("pull", "NAME[:TAG|@DIGEST]", "Pull an image or a repository from the registry", true) allTags := cmd.Bool([]string{"a", "-all-tags"}, false, "Download all tagged images in the repository") cmd.Require(flag.Exact, 1) @@ -1286,7 +1356,7 @@ func (cli *DockerCli) CmdPull(args ...string) error { ) taglessRemote, tag := parsers.ParseRepositoryTag(remote) if tag == "" && !*allTags { - newRemote = taglessRemote + ":" + graph.DEFAULTTAG + newRemote = utils.ImageReference(taglessRemote, graph.DEFAULTTAG) } if tag != "" && *allTags { return fmt.Errorf("tag can't be used with --all-tags/-a") @@ -1339,6 +1409,7 @@ func (cli *DockerCli) CmdImages(args ...string) error { quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only show numeric IDs") all := cmd.Bool([]string{"a", "-all"}, false, "Show all images (default hides intermediate images)") noTrunc := cmd.Bool([]string{"#notrunc", "-no-trunc"}, false, "Don't truncate output") + showDigests := cmd.Bool([]string{"-digests"}, false, "Show digests") // FIXME: --viz and --tree are deprecated. Remove them in a future version. 
flViz := cmd.Bool([]string{"#v", "#viz", "#-viz"}, false, "Output graph in graphviz format") flTree := cmd.Bool([]string{"#t", "#tree", "#-tree"}, false, "Output graph in tree format") @@ -1465,20 +1536,46 @@ func (cli *DockerCli) CmdImages(args ...string) error { w := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0) if !*quiet { - fmt.Fprintln(w, "REPOSITORY\tTAG\tIMAGE ID\tCREATED\tVIRTUAL SIZE") + if *showDigests { + fmt.Fprintln(w, "REPOSITORY\tTAG\tDIGEST\tIMAGE ID\tCREATED\tVIRTUAL SIZE") + } else { + fmt.Fprintln(w, "REPOSITORY\tTAG\tIMAGE ID\tCREATED\tVIRTUAL SIZE") + } } for _, out := range outs.Data { - for _, repotag := range out.GetList("RepoTags") { + outID := out.Get("Id") + if !*noTrunc { + outID = common.TruncateID(outID) + } - repo, tag := parsers.ParseRepositoryTag(repotag) - outID := out.Get("Id") - if !*noTrunc { - outID = common.TruncateID(outID) + repoTags := out.GetList("RepoTags") + repoDigests := out.GetList("RepoDigests") + + if len(repoTags) == 1 && repoTags[0] == "<none>:<none>" && len(repoDigests) == 1 && repoDigests[0] == "<none>@<none>" { + // dangling image - clear out either repoTags or repoDigests so we only show it once below + repoDigests = []string{} + } + + // combine the tags and digests lists + tagsAndDigests := append(repoTags, repoDigests...)
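The `CmdImages` hunk above merges `RepoTags` and `RepoDigests` into one list, then splits each entry back into a repository plus either a tag or a digest via `parsers.ParseRepositoryTag` and `utils.DigestReference`. As a rough illustration, here is a self-contained sketch of that splitting; the exact rules (an `@` separator for digests, a trailing `:` segment for tags, and skipping a `:` that belongs to a registry port) are assumptions inferred from the calls in this diff, not the helpers' verbatim code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitReference separates "repo:tag" and "repo@digest" style references.
// At most one of tag/digest is non-empty for a given reference.
func splitReference(ref string) (repo, tag, digest string) {
	if i := strings.LastIndex(ref, "@"); i >= 0 {
		return ref[:i], "", ref[i+1:]
	}
	// A ":" only marks a tag if nothing after it contains "/";
	// otherwise it is a registry port (e.g. "registry:5000/repo").
	if i := strings.LastIndex(ref, ":"); i >= 0 && !strings.Contains(ref[i+1:], "/") {
		return ref[:i], ref[i+1:], ""
	}
	return ref, "", ""
}

func main() {
	fmt.Println(splitReference("ubuntu:14.04"))
	fmt.Println(splitReference("busybox@sha256:abcd"))
	fmt.Println(splitReference("registry:5000/repo"))
}
```

Splitting on `@` first matches how digest references (`repo@sha256:...`) themselves contain a `:` that must not be mistaken for a tag separator.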
+ for _, repoAndRef := range tagsAndDigests { + repo, ref := parsers.ParseRepositoryTag(repoAndRef) + // default tag and digest to none - if there's a value, it'll be set below + tag := "" + digest := "" + if utils.DigestReference(ref) { + digest = ref + } else { + tag = ref } if !*quiet { - fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s\n", repo, tag, outID, units.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), units.HumanSize(float64(out.GetInt64("VirtualSize")))) + if *showDigests { + fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s ago\t%s\n", repo, tag, digest, outID, units.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), units.HumanSize(float64(out.GetInt64("VirtualSize")))) + } else { + fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s\n", repo, tag, outID, units.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), units.HumanSize(float64(out.GetInt64("VirtualSize")))) + } } else { fmt.Fprintln(w, outID) } @@ -1833,14 +1930,40 @@ func (cli *DockerCli) CmdEvents(args ...string) error { } func (cli *DockerCli) CmdExport(args ...string) error { - cmd := cli.Subcmd("export", "CONTAINER", "Export the contents of a filesystem as a tar archive to STDOUT", true) + cmd := cli.Subcmd("export", "CONTAINER", "Export a filesystem as a tar archive (streamed to STDOUT by default)", true) + outfile := cmd.String([]string{"o", "-output"}, "", "Write to a file, instead of STDOUT") cmd.Require(flag.Exact, 1) utils.ParseFlags(cmd, args, true) - if err := cli.stream("GET", "/containers/"+cmd.Arg(0)+"/export", nil, cli.out, nil); err != nil { - return err + var ( + output io.Writer = cli.out + err error + ) + if *outfile != "" { + output, err = os.Create(*outfile) + if err != nil { + return err + } + } else if cli.isTerminalOut { + return errors.New("Cowardly refusing to save to a terminal. 
Use the -o flag or redirect.") } + + if len(cmd.Args()) == 1 { + image := cmd.Arg(0) + if err := cli.stream("GET", "/containers/"+image+"/export", nil, output, nil); err != nil { + return err + } + } else { + v := url.Values{} + for _, arg := range cmd.Args() { + v.Add("names", arg) + } + if err := cli.stream("GET", "/containers/get?"+v.Encode(), nil, output, nil); err != nil { + return err + } + } + return nil } @@ -1898,6 +2021,10 @@ func (cli *DockerCli) CmdLogs(args ...string) error { return err } + if env.GetSubEnv("HostConfig").GetSubEnv("LogConfig").Get("Type") != "json-file" { + return fmt.Errorf("\"logs\" command is supported only for \"json-file\" logging driver") + } + v := url.Values{} v.Set("stdout", "1") v.Set("stderr", "1") @@ -2169,7 +2296,7 @@ func (cli *DockerCli) createContainer(config *runconfig.Config, hostConfig *runc if tag == "" { tag = graph.DEFAULTTAG } - fmt.Fprintf(cli.err, "Unable to find image '%s:%s' locally\n", repo, tag) + fmt.Fprintf(cli.err, "Unable to find image '%s' locally\n", utils.ImageReference(repo, tag)) // we don't want to write to stdout anything apart from container.ID if err = cli.pullImageCustomOut(config.Image, cli.err); err != nil { @@ -2244,6 +2371,18 @@ func (cli *DockerCli) CmdRun(args ...string) error { if err != nil { utils.ReportError(cmd, err.Error(), true) } + + if len(hostConfig.Dns) > 0 { + // check the DNS settings passed via --dns against + // localhost regexp to warn if they are trying to + // set a DNS to a localhost address + for _, dnsIP := range hostConfig.Dns { + if resolvconf.IsLocalhost(dnsIP) { + fmt.Fprintf(cli.err, "WARNING: Localhost DNS setting (--dns=%s) may fail in containers.\n", dnsIP) + break + } + } + } if config.Image == "" { cmd.Usage() return nil @@ -2415,7 +2554,7 @@ func (cli *DockerCli) CmdRun(args ...string) error { } func (cli *DockerCli) CmdCp(args ...string) error { - cmd := cli.Subcmd("cp", "CONTAINER:PATH HOSTPATH", "Copy files/folders from the PATH to the HOSTPATH", true) 
+ cmd := cli.Subcmd("cp", "CONTAINER:PATH HOSTDIR|-", "Copy files/folders from a PATH on the container to a HOSTDIR on the host\nrunning the command. Use '-' to write the data\nas a tar file to STDOUT.", true) cmd.Require(flag.Exact, 2) utils.ParseFlags(cmd, args, true) @@ -2442,7 +2581,14 @@ func (cli *DockerCli) CmdCp(args ...string) error { } if statusCode == 200 { - if err := archive.Untar(stream, copyData.Get("HostPath"), &archive.TarOptions{NoLchown: true}); err != nil { + dest := copyData.Get("HostPath") + + if dest == "-" { + _, err = io.Copy(cli.out, stream) + } else { + err = archive.Untar(stream, dest, &archive.TarOptions{NoLchown: true}) + } + if err != nil { return err } } @@ -2737,7 +2883,7 @@ func (cli *DockerCli) CmdStats(args ...string) error { for _, c := range cStats { c.mu.Lock() if c.err != nil { - errs = append(errs, fmt.Sprintf("%s: %s", c.Name, c.err.Error())) + errs = append(errs, fmt.Sprintf("%s: %v", c.Name, c.err)) } c.mu.Unlock() } diff --git a/api/common.go b/api/common.go index 9e85c5ec3b..f6a0bc4883 100644 --- a/api/common.go +++ b/api/common.go @@ -104,7 +104,7 @@ func FormGroup(key string, start, last int) string { func MatchesContentType(contentType, expectedType string) bool { mimetype, _, err := mime.ParseMediaType(contentType) if err != nil { - log.Errorf("Error parsing media type: %s error: %s", contentType, err.Error()) + log.Errorf("Error parsing media type: %s error: %v", contentType, err) } return err == nil && mimetype == expectedType } diff --git a/api/server/MAINTAINERS b/api/server/MAINTAINERS deleted file mode 100644 index dee1eec042..0000000000 --- a/api/server/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Victor Vieux (@vieux) -# Johan Euphrosine (@proppy) diff --git a/api/server/server.go b/api/server/server.go index 65353c52f8..d244d2a0ce 100644 --- a/api/server/server.go +++ b/api/server/server.go @@ -32,7 +32,6 @@ import ( "github.com/docker/docker/pkg/listenbuffer" "github.com/docker/docker/pkg/parsers" 
"github.com/docker/docker/pkg/stdcopy" - "github.com/docker/docker/pkg/systemd" "github.com/docker/docker/pkg/version" "github.com/docker/docker/registry" "github.com/docker/docker/utils" @@ -135,7 +134,7 @@ func httpError(w http.ResponseWriter, err error) { } if err != nil { - log.Errorf("HTTP Error: statusCode=%d %s", statusCode, err.Error()) + log.Errorf("HTTP Error: statusCode=%d %v", statusCode, err) http.Error(w, err.Error(), statusCode) } } @@ -1083,6 +1082,10 @@ func postBuild(eng *engine.Engine, version version.Version, w http.ResponseWrite job.Setenv("forcerm", r.FormValue("forcerm")) job.SetenvJson("authConfig", authConfig) job.SetenvJson("configFile", configFile) + job.Setenv("memswap", r.FormValue("memswap")) + job.Setenv("memory", r.FormValue("memory")) + job.Setenv("cpusetcpus", r.FormValue("cpusetcpus")) + job.Setenv("cpushares", r.FormValue("cpushares")) if err := job.Run(); err != nil { if !job.Stdout.Used() { @@ -1123,7 +1126,7 @@ func postContainersCopy(eng *engine.Engine, version version.Version, w http.Resp job.Stdout.Add(w) w.Header().Set("Content-Type", "application/x-tar") if err := job.Run(); err != nil { - log.Errorf("%s", err.Error()) + log.Errorf("%v", err) if strings.Contains(strings.ToLower(err.Error()), "no such id") { w.WriteHeader(http.StatusNotFound) } else if strings.Contains(err.Error(), "no such file or directory") { @@ -1406,43 +1409,6 @@ func ServeRequest(eng *engine.Engine, apiversion version.Version, w http.Respons router.ServeHTTP(w, req) } -// serveFd creates an http.Server and sets it up to serve given a socket activated -// argument. 
-func serveFd(addr string, job *engine.Job) error { - r := createRouter(job.Eng, job.GetenvBool("Logging"), job.GetenvBool("EnableCors"), job.Getenv("CorsHeaders"), job.Getenv("Version")) - - ls, e := systemd.ListenFD(addr) - if e != nil { - return e - } - - chErrors := make(chan error, len(ls)) - - // We don't want to start serving on these sockets until the - // daemon is initialized and installed. Otherwise required handlers - // won't be ready. - <-activationLock - - // Since ListenFD will return one or more sockets we have - // to create a go func to spawn off multiple serves - for i := range ls { - listener := ls[i] - go func() { - httpSrv := http.Server{Handler: r} - chErrors <- httpSrv.Serve(listener) - }() - } - - for i := 0; i < len(ls); i++ { - err := <-chErrors - if err != nil { - return err - } - } - - return nil -} - func lookupGidByName(nameOrGid string) (int, error) { groupFile, err := user.GetGroupPath() if err != nil { @@ -1457,13 +1423,21 @@ func lookupGidByName(nameOrGid string) (int, error) { if groups != nil && len(groups) > 0 { return groups[0].Gid, nil } + gid, err := strconv.Atoi(nameOrGid) + if err == nil { + log.Warnf("Could not find GID %d", gid) + return gid, nil + } return -1, fmt.Errorf("Group %s not found", nameOrGid) } func setupTls(cert, key, ca string, l net.Listener) (net.Listener, error) { tlsCert, err := tls.LoadX509KeyPair(cert, key) if err != nil { - return nil, fmt.Errorf("Couldn't load X509 key pair (%s, %s): %s. Key encrypted?", + if os.IsNotExist(err) { + return nil, fmt.Errorf("Could not load X509 key pair (%s, %s): %v", cert, key, err) + } + return nil, fmt.Errorf("Error reading X509 key pair (%s, %s): %q. 
Make sure the key is not encrypted.", cert, key, err) } tlsConfig := &tls.Config{ @@ -1477,7 +1451,7 @@ func setupTls(cert, key, ca string, l net.Listener) (net.Listener, error) { certPool := x509.NewCertPool() file, err := ioutil.ReadFile(ca) if err != nil { - return nil, fmt.Errorf("Couldn't read CA certificate: %s", err) + return nil, fmt.Errorf("Could not read CA certificate: %v", err) } certPool.AppendCertsFromPEM(file) tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert @@ -1617,15 +1591,3 @@ func ServeApi(job *engine.Job) engine.Status { return engine.StatusOK } - -func AcceptConnections(job *engine.Job) engine.Status { - // Tell the init daemon we are accepting requests - go systemd.SdNotify("READY=1") - - // close the lock so the listeners start accepting connections - if activationLock != nil { - close(activationLock) - } - - return engine.StatusOK -} diff --git a/api/server/server_linux.go b/api/server/server_linux.go index 6cf2c3f185..fff803ddaf 100644 --- a/api/server/server_linux.go +++ b/api/server/server_linux.go @@ -9,6 +9,7 @@ import ( "syscall" "github.com/docker/docker/engine" + "github.com/docker/docker/pkg/systemd" ) // NewServer sets up the required Server and does protocol specific checking. @@ -50,3 +51,53 @@ func setupUnixHttp(addr string, job *engine.Job) (*HttpServer, error) { return &HttpServer{&http.Server{Addr: addr, Handler: r}, l}, nil } + +// serveFd creates an http.Server and sets it up to serve given a socket activated +// argument. +func serveFd(addr string, job *engine.Job) error { + r := createRouter(job.Eng, job.GetenvBool("Logging"), job.GetenvBool("EnableCors"), job.Getenv("CorsHeaders"), job.Getenv("Version")) + + ls, e := systemd.ListenFD(addr) + if e != nil { + return e + } + + chErrors := make(chan error, len(ls)) + + // We don't want to start serving on these sockets until the + // daemon is initialized and installed. Otherwise required handlers + // won't be ready.
+ <-activationLock + + // Since ListenFD will return one or more sockets we have + // to create a go func to spawn off multiple serves + for i := range ls { + listener := ls[i] + go func() { + httpSrv := http.Server{Handler: r} + chErrors <- httpSrv.Serve(listener) + }() + } + + for i := 0; i < len(ls); i++ { + err := <-chErrors + if err != nil { + return err + } + } + + return nil +} + +// Called through eng.Job("acceptconnections") +func AcceptConnections(job *engine.Job) engine.Status { + // Tell the init daemon we are accepting requests + go systemd.SdNotify("READY=1") + + // close the lock so the listeners start accepting connections + if activationLock != nil { + close(activationLock) + } + + return engine.StatusOK +} diff --git a/api/server/server_windows.go b/api/server/server_windows.go index fba1c6a60a..c5d2c2ca56 100644 --- a/api/server/server_windows.go +++ b/api/server/server_windows.go @@ -18,3 +18,14 @@ func NewServer(proto, addr string, job *engine.Job) (Server, error) { return nil, errors.New("Invalid protocol format. 
Windows only supports tcp.") } } + +// Called through eng.Job("acceptconnections") +func AcceptConnections(job *engine.Job) engine.Status { + + // close the lock so the listeners start accepting connections + if activationLock != nil { + close(activationLock) + } + + return engine.StatusOK +} diff --git a/builder/MAINTAINERS b/builder/MAINTAINERS deleted file mode 100644 index e170c235a3..0000000000 --- a/builder/MAINTAINERS +++ /dev/null @@ -1,3 +0,0 @@ -Tibor Vass (@tiborvass) -Erik Hollensbe (@erikh) -Doug Davis (@duglin) diff --git a/builder/command/command.go b/builder/command/command.go index f99fa2d906..16544f0267 100644 --- a/builder/command/command.go +++ b/builder/command/command.go @@ -3,6 +3,7 @@ package command const ( Env = "env" + Label = "label" Maintainer = "maintainer" Add = "add" Copy = "copy" @@ -21,6 +22,7 @@ const ( // Commands is list of all Dockerfile commands var Commands = map[string]struct{}{ Env: {}, + Label: {}, Maintainer: {}, Add: {}, Copy: {}, diff --git a/builder/dispatchers.go b/builder/dispatchers.go index b00268ed67..3cb3a9fb3a 100644 --- a/builder/dispatchers.go +++ b/builder/dispatchers.go @@ -85,6 +85,37 @@ func maintainer(b *Builder, args []string, attributes map[string]bool, original return b.commit("", b.Config.Cmd, fmt.Sprintf("MAINTAINER %s", b.maintainer)) } +// LABEL some json data describing the image +// +// Sets the Label variable foo to bar, +// +func label(b *Builder, args []string, attributes map[string]bool, original string) error { + if len(args) == 0 { + return fmt.Errorf("LABEL requires at least one argument") + } + if len(args)%2 != 0 { + // should never get here, but just in case + return fmt.Errorf("Bad input to LABEL, too many args") + } + + commitStr := "LABEL" + + if b.Config.Labels == nil { + b.Config.Labels = map[string]string{} + } + + for j := 0; j < len(args); j++ { + // name ==> args[j] + // value ==> args[j+1] + newVar := args[j] + "=" + args[j+1] + "" + commitStr += " " + newVar + + 
b.Config.Labels[args[j]] = args[j+1] + j++ + } + return b.commit("", b.Config.Cmd, commitStr) +} + // ADD foo /path // // Add the file 'foo' to '/path'. Tarball and Remote URL (git, http) handling @@ -213,8 +244,8 @@ func run(b *Builder, args []string, attributes map[string]bool, original string) args = handleJsonArgs(args, attributes) - if len(args) == 1 { - args = append([]string{"/bin/sh", "-c"}, args[0]) + if !attributes["json"] { + args = append([]string{"/bin/sh", "-c"}, args...) } runCmd := flag.NewFlagSet("run", flag.ContinueOnError) @@ -339,11 +370,19 @@ func expose(b *Builder, args []string, attributes map[string]bool, original stri b.Config.ExposedPorts = make(nat.PortSet) } - ports, _, err := nat.ParsePortSpecs(append(portsTab, b.Config.PortSpecs...)) + ports, bindingMap, err := nat.ParsePortSpecs(append(portsTab, b.Config.PortSpecs...)) if err != nil { return err } + for _, bindings := range bindingMap { + if bindings[0].HostIp != "" || bindings[0].HostPort != "" { + fmt.Fprintf(b.ErrStream, " ---> Using Dockerfile's EXPOSE instruction"+ + " to map host ports to container ports (ip:hostPort:containerPort) is deprecated.\n"+ + " Please use -p to publish the ports.\n") + } + } + // instead of using ports directly, we build a list of ports and sort it so // the order is consistent. This prevents cache burst where map ordering // changes between builds diff --git a/builder/evaluator.go b/builder/evaluator.go index eadef4a1e0..985656f16a 100644 --- a/builder/evaluator.go +++ b/builder/evaluator.go @@ -49,6 +49,7 @@ var ( // Environment variable interpolation will happen on these statements only. 
var replaceEnvAllowed = map[string]struct{}{ command.Env: {}, + command.Label: {}, command.Add: {}, command.Copy: {}, command.Workdir: {}, @@ -62,6 +63,7 @@ var evaluateTable map[string]func(*Builder, []string, map[string]bool, string) e func init() { evaluateTable = map[string]func(*Builder, []string, map[string]bool, string) error{ command.Env: env, + command.Label: label, command.Maintainer: maintainer, command.Add: add, command.Copy: dispatchCopy, // copy() is a go builtin @@ -123,6 +125,12 @@ type Builder struct { context tarsum.TarSum // the context is a tarball that is uploaded by the client contextPath string // the path of the temporary directory the local context is unpacked to (server side) noBaseImage bool // indicates that this build does not start from any base image, but is being built from an empty file system. + + // Set resource restrictions for build containers + cpuSetCpus string + cpuShares int64 + memory int64 + memorySwap int64 } // Run the builder with the context. This is the lynchpin of this package. This @@ -154,6 +162,7 @@ func (b *Builder) Run(context io.Reader) (string, error) { // some initializations that would not have been supplied by the caller. 
b.Config = &runconfig.Config{} + b.TmpContainers = map[string]struct{}{} for i, n := range b.dockerfile.Children { @@ -309,7 +318,5 @@ func (b *Builder) dispatch(stepN int, ast *parser.Node) error { return f(b, strList, attrs, original) } - fmt.Fprintf(b.ErrStream, "# Skipping unknown instruction %s\n", strings.ToUpper(cmd)) - - return nil + return fmt.Errorf("Unknown instruction: %s", strings.ToUpper(cmd)) } diff --git a/builder/internals.go b/builder/internals.go index f6b929f2b6..67650f75bc 100644 --- a/builder/internals.go +++ b/builder/internals.go @@ -28,11 +28,13 @@ import ( "github.com/docker/docker/pkg/common" "github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/parsers" + "github.com/docker/docker/pkg/progressreader" "github.com/docker/docker/pkg/symlink" "github.com/docker/docker/pkg/system" "github.com/docker/docker/pkg/tarsum" "github.com/docker/docker/pkg/urlutil" "github.com/docker/docker/registry" + "github.com/docker/docker/runconfig" "github.com/docker/docker/utils" ) @@ -268,7 +270,15 @@ func calcCopyInfo(b *Builder, cmdName string, cInfos *[]*copyInfo, origPath stri } // Download and dump result to tmp file - if _, err := io.Copy(tmpFile, utils.ProgressReader(resp.Body, int(resp.ContentLength), b.OutOld, b.StreamFormatter, true, "", "Downloading")); err != nil { + if _, err := io.Copy(tmpFile, progressreader.New(progressreader.Config{ + In: resp.Body, + Out: b.OutOld, + Formatter: b.StreamFormatter, + Size: int(resp.ContentLength), + NewLines: true, + ID: "", + Action: "Downloading", + })); err != nil { tmpFile.Close() return err } @@ -528,10 +538,17 @@ func (b *Builder) create() (*daemon.Container, error) { } b.Config.Image = b.image + hostConfig := &runconfig.HostConfig{ + CpuShares: b.cpuShares, + CpusetCpus: b.cpuSetCpus, + Memory: b.memory, + MemorySwap: b.memorySwap, + } + config := *b.Config // Create the container - c, warnings, err := b.Daemon.Create(b.Config, nil, "") + c, warnings, err := b.Daemon.Create(b.Config, 
hostConfig, "") if err != nil { return nil, err } @@ -725,7 +742,7 @@ func (b *Builder) clearTmp() { } if err := b.Daemon.Rm(tmp); err != nil { - fmt.Fprintf(b.OutStream, "Error removing intermediate container %s: %s\n", common.TruncateID(c), err.Error()) + fmt.Fprintf(b.OutStream, "Error removing intermediate container %s: %v\n", common.TruncateID(c), err) return } b.Daemon.DeleteVolumes(tmp.VolumePaths()) diff --git a/builder/job.go b/builder/job.go index fb629e1c20..27591129cd 100644 --- a/builder/job.go +++ b/builder/job.go @@ -57,6 +57,10 @@ func (b *BuilderJob) CmdBuild(job *engine.Job) engine.Status { rm = job.GetenvBool("rm") forceRm = job.GetenvBool("forcerm") pull = job.GetenvBool("pull") + memory = job.GetenvInt64("memory") + memorySwap = job.GetenvInt64("memswap") + cpuShares = job.GetenvInt64("cpushares") + cpuSetCpus = job.Getenv("cpusetcpus") authConfig = ®istry.AuthConfig{} configFile = ®istry.ConfigFile{} tag string @@ -145,6 +149,10 @@ func (b *BuilderJob) CmdBuild(job *engine.Job) engine.Status { AuthConfig: authConfig, AuthConfigFile: configFile, dockerfileName: dockerfileName, + cpuShares: cpuShares, + cpuSetCpus: cpuSetCpus, + memory: memory, + memorySwap: memorySwap, } id, err := builder.Run(context) diff --git a/builder/parser/line_parsers.go b/builder/parser/line_parsers.go index c7fed13dbe..45c929ee69 100644 --- a/builder/parser/line_parsers.go +++ b/builder/parser/line_parsers.go @@ -44,10 +44,10 @@ func parseSubCommand(rest string) (*Node, map[string]bool, error) { // parse environment like statements. Note that this does *not* handle // variable interpolation, which will be handled in the evaluator. -func parseEnv(rest string) (*Node, map[string]bool, error) { +func parseNameVal(rest string, key string) (*Node, map[string]bool, error) { // This is kind of tricky because we need to support the old - // variant: ENV name value - // as well as the new one: ENV name=value ... 
+ // variant: KEY name value + // as well as the new one: KEY name=value ... // The trigger to know which one is being used will be whether we hit // a space or = first. space ==> old, "=" ==> new @@ -137,10 +137,10 @@ func parseEnv(rest string) (*Node, map[string]bool, error) { } if len(words) == 0 { - return nil, nil, fmt.Errorf("ENV requires at least one argument") + return nil, nil, nil } - // Old format (ENV name value) + // Old format (KEY name value) var rootnode *Node if !strings.Contains(words[0], "=") { @@ -149,7 +149,7 @@ func parseEnv(rest string) (*Node, map[string]bool, error) { strs := TOKEN_WHITESPACE.Split(rest, 2) if len(strs) < 2 { - return nil, nil, fmt.Errorf("ENV must have two arguments") + return nil, nil, fmt.Errorf(key + " must have two arguments") } node.Value = strs[0] @@ -182,6 +182,14 @@ func parseEnv(rest string) (*Node, map[string]bool, error) { return rootnode, nil, nil } +func parseEnv(rest string) (*Node, map[string]bool, error) { + return parseNameVal(rest, "ENV") +} + +func parseLabel(rest string) (*Node, map[string]bool, error) { + return parseNameVal(rest, "LABEL") +} + // parses a whitespace-delimited set of arguments. The result is effectively a // linked list of string arguments. 
func parseStringsWhitespaceDelimited(rest string) (*Node, map[string]bool, error) { diff --git a/builder/parser/parser.go b/builder/parser/parser.go index 69bbfd0dc1..1ab151b30d 100644 --- a/builder/parser/parser.go +++ b/builder/parser/parser.go @@ -50,6 +50,7 @@ func init() { command.Onbuild: parseSubCommand, command.Workdir: parseString, command.Env: parseEnv, + command.Label: parseLabel, command.Maintainer: parseString, command.From: parseString, command.Add: parseMaybeJSONToList, diff --git a/builder/parser/parser_test.go b/builder/parser/parser_test.go index daceb9839c..6b55a611ec 100644 --- a/builder/parser/parser_test.go +++ b/builder/parser/parser_test.go @@ -11,7 +11,7 @@ import ( const testDir = "testfiles" const negativeTestDir = "testfiles-negative" -func getDirs(t *testing.T, dir string) []os.FileInfo { +func getDirs(t *testing.T, dir string) []string { f, err := os.Open(dir) if err != nil { t.Fatal(err) @@ -19,7 +19,7 @@ func getDirs(t *testing.T, dir string) []os.FileInfo { defer f.Close() - dirs, err := f.Readdir(0) + dirs, err := f.Readdirnames(0) if err != nil { t.Fatal(err) } @@ -29,16 +29,16 @@ func getDirs(t *testing.T, dir string) []os.FileInfo { func TestTestNegative(t *testing.T) { for _, dir := range getDirs(t, negativeTestDir) { - dockerfile := filepath.Join(negativeTestDir, dir.Name(), "Dockerfile") + dockerfile := filepath.Join(negativeTestDir, dir, "Dockerfile") df, err := os.Open(dockerfile) if err != nil { - t.Fatalf("Dockerfile missing for %s: %s", dir.Name(), err.Error()) + t.Fatalf("Dockerfile missing for %s: %v", dir, err) } _, err = Parse(df) if err == nil { - t.Fatalf("No error parsing broken dockerfile for %s", dir.Name()) + t.Fatalf("No error parsing broken dockerfile for %s", dir) } df.Close() @@ -47,29 +47,29 @@ func TestTestNegative(t *testing.T) { func TestTestData(t *testing.T) { for _, dir := range getDirs(t, testDir) { - dockerfile := filepath.Join(testDir, dir.Name(), "Dockerfile") - resultfile := 
filepath.Join(testDir, dir.Name(), "result") + dockerfile := filepath.Join(testDir, dir, "Dockerfile") + resultfile := filepath.Join(testDir, dir, "result") df, err := os.Open(dockerfile) if err != nil { - t.Fatalf("Dockerfile missing for %s: %s", dir.Name(), err.Error()) + t.Fatalf("Dockerfile missing for %s: %v", dir, err) } defer df.Close() ast, err := Parse(df) if err != nil { - t.Fatalf("Error parsing %s's dockerfile: %s", dir.Name(), err.Error()) + t.Fatalf("Error parsing %s's dockerfile: %v", dir, err) } content, err := ioutil.ReadFile(resultfile) if err != nil { - t.Fatalf("Error reading %s's result file: %s", dir.Name(), err.Error()) + t.Fatalf("Error reading %s's result file: %v", dir, err) } if ast.Dump()+"\n" != string(content) { fmt.Fprintln(os.Stderr, "Result:\n"+ast.Dump()) fmt.Fprintln(os.Stderr, "Expected:\n"+string(content)) - t.Fatalf("%s: AST dump of dockerfile does not match result", dir.Name()) + t.Fatalf("%s: AST dump of dockerfile does not match result", dir) } } } diff --git a/contrib/MAINTAINERS b/contrib/REVIEWERS similarity index 100% rename from contrib/MAINTAINERS rename to contrib/REVIEWERS diff --git a/contrib/check-config.sh b/contrib/check-config.sh index 0d5d70c9cd..ac5df62c26 100755 --- a/contrib/check-config.sh +++ b/contrib/check-config.sh @@ -138,7 +138,7 @@ fi flags=( NAMESPACES {NET,PID,IPC,UTS}_NS DEVPTS_MULTIPLE_INSTANCES - CGROUPS CGROUP_CPUACCT CGROUP_DEVICE CGROUP_FREEZER CGROUP_SCHED + CGROUPS CGROUP_CPUACCT CGROUP_DEVICE CGROUP_FREEZER CGROUP_SCHED CPUSETS MACVLAN VETH BRIDGE NF_NAT_IPV4 IP_NF_FILTER IP_NF_TARGET_MASQUERADE NETFILTER_XT_MATCH_{ADDRTYPE,CONNTRACK} diff --git a/contrib/completion/MAINTAINERS b/contrib/completion/REVIEWERS similarity index 100% rename from contrib/completion/MAINTAINERS rename to contrib/completion/REVIEWERS diff --git a/contrib/completion/bash/docker b/contrib/completion/bash/docker index cd09e4ba62..115cc15b39 100755 --- a/contrib/completion/bash/docker +++ 
b/contrib/completion/bash/docker @@ -131,6 +131,7 @@ __docker_capabilities() { ALL AUDIT_CONTROL AUDIT_WRITE + AUDIT_READ BLOCK_SUSPEND CHOWN DAC_OVERRIDE @@ -188,7 +189,6 @@ __docker_signals() { _docker_docker() { local boolean_options=" - --api-enable-cors --daemon -d --debug -D --help -h @@ -238,7 +238,7 @@ _docker_docker() { _docker_attach() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--no-stdin --sig-proxy" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--help --no-stdin --sig-proxy" -- "$cur" ) ) ;; *) local counter="$(__docker_pos_first_nonflag)" @@ -255,11 +255,15 @@ _docker_build() { __docker_image_repos_and_tags return ;; + --file|-f) + _filedir + return + ;; esac case "$cur" in -*) - COMPREPLY=( $( compgen -W "--force-rm --no-cache --quiet -q --rm --tag -t" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--file -f --force-rm --help --no-cache --pull --quiet -q --rm --tag -t" -- "$cur" ) ) ;; *) local counter="$(__docker_pos_first_nonflag '--tag|-t')" @@ -272,17 +276,17 @@ _docker_build() { _docker_commit() { case "$prev" in - --author|-a|--message|-m|--run) + --author|-a|--change|-c|--message|-m) return ;; esac case "$cur" in -*) - COMPREPLY=( $( compgen -W "--author -a --message -m --run" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--author -a --change -c --help --message -m --pause -p" -- "$cur" ) ) ;; *) - local counter=$(__docker_pos_first_nonflag '--author|-a|--message|-m|--run') + local counter=$(__docker_pos_first_nonflag '--author|-a|--change|-c|--message|-m') if [ $cword -eq $counter ]; then __docker_containers_all @@ -299,26 +303,33 @@ _docker_commit() { } _docker_cp() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - case "$cur" in - *:) - return - ;; - *) - __docker_containers_all - COMPREPLY=( $( compgen -W "${COMPREPLY[*]}" -S ':' ) ) - compopt -o nospace - return - ;; - esac - fi - (( counter++ )) + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + local 
counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + case "$cur" in + *:) + return + ;; + *) + __docker_containers_all + COMPREPLY=( $( compgen -W "${COMPREPLY[*]}" -S ':' ) ) + compopt -o nospace + return + ;; + esac + fi + (( counter++ )) - if [ $cword -eq $counter ]; then - _filedir - return - fi + if [ $cword -eq $counter ]; then + _filedir + return + fi + ;; + esac } _docker_create() { @@ -326,22 +337,53 @@ _docker_create() { } _docker_diff() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - __docker_containers_all - fi + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + __docker_containers_all + fi + ;; + esac } _docker_events() { case "$prev" in - --since) + --filter|-f) + COMPREPLY=( $( compgen -S = -W "container event image" -- "$cur" ) ) + compopt -o nospace + return + ;; + --since|--until) + return + ;; + esac + + # "=" gets parsed to a word and assigned to either $cur or $prev depending on whether + # it is the last character or not. So we search for "xxx=" in the last two words.
+ case "${words[$cword-2]}$prev=" in + *container=*) + cur="${cur#=}" + __docker_containers_all + return + ;; + *event=*) + COMPREPLY=( $( compgen -W "create destroy die export kill pause restart start stop unpause" -- "${cur#=}" ) ) + return + ;; + *image=*) + cur="${cur#=}" + __docker_image_repos_and_tags_and_ids return ;; esac case "$cur" in -*) - COMPREPLY=( $( compgen -W "--since" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--filter -f --help --since --until" -- "$cur" ) ) ;; esac } @@ -349,7 +391,7 @@ _docker_events() { _docker_exec() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--detach -d --interactive -i -t --tty" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--detach -d --help --interactive -i -t --tty" -- "$cur" ) ) ;; *) __docker_containers_running @@ -358,10 +400,17 @@ _docker_exec() { } _docker_export() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - __docker_containers_all - fi + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + __docker_containers_all + fi + ;; + esac } _docker_help() { @@ -374,7 +423,7 @@ _docker_help() { _docker_history() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--no-trunc --quiet -q" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--help --no-trunc --quiet -q" -- "$cur" ) ) ;; *) local counter=$(__docker_pos_first_nonflag) @@ -386,9 +435,23 @@ _docker_history() { } _docker_images() { + case "$prev" in + --filter|-f) + COMPREPLY=( $( compgen -W "dangling=true" -- "$cur" ) ) + return + ;; + esac + + case "${words[$cword-2]}$prev=" in + *dangling=*) + COMPREPLY=( $( compgen -W "true false" -- "${cur#=}" ) ) + return + ;; + esac + case "$cur" in -*) - COMPREPLY=( $( compgen -W "--all -a --no-trunc --quiet -q" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--all -a --filter -f --help --no-trunc --quiet -q" -- "$cur" ) ) ;; *) local counter=$(__docker_pos_first_nonflag) @@ -400,20 
+463,31 @@ _docker_images() { } _docker_import() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - return - fi - (( counter++ )) + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + return + fi + (( counter++ )) - if [ $cword -eq $counter ]; then - __docker_image_repos_and_tags - return - fi + if [ $cword -eq $counter ]; then + __docker_image_repos_and_tags + return + fi + ;; + esac } _docker_info() { - return + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + esac } _docker_inspect() { @@ -425,7 +499,7 @@ _docker_inspect() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--format -f" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--format -f --help" -- "$cur" ) ) ;; *) __docker_containers_and_images @@ -443,7 +517,7 @@ _docker_kill() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--signal -s" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--help --signal -s" -- "$cur" ) ) ;; *) __docker_containers_running @@ -461,7 +535,7 @@ _docker_load() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--input -i" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--help --input -i" -- "$cur" ) ) ;; esac } @@ -475,15 +549,57 @@ _docker_login() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--email -e --password -p --username -u" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--email -e --help --password -p --username -u" -- "$cur" ) ) + ;; + esac +} + +_docker_logout() { + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) ;; esac } _docker_logs() { + case "$prev" in + --tail) + return + ;; + esac + case "$cur" in -*) - COMPREPLY=( $( compgen -W "--follow -f" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--follow -f --help --tail --timestamps -t" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag '--tail') + if [ $cword -eq $counter ]; then + __docker_containers_all + fi + ;; + 
esac +} + +_docker_pause() { + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + __docker_containers_pauseable + fi + ;; + esac +} + +_docker_port() { + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) ;; *) local counter=$(__docker_pos_first_nonflag) @@ -494,50 +610,42 @@ _docker_logs() { esac } -_docker_pause() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - __docker_containers_pauseable - fi -} - -_docker_port() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - __docker_containers_all - fi -} - _docker_ps() { case "$prev" in --before|--since) __docker_containers_all ;; + --filter|-f) + COMPREPLY=( $( compgen -S = -W "exited status" -- "$cur" ) ) + compopt -o nospace + return + ;; -n) return ;; esac - case "$cur" in - -*) - COMPREPLY=( $( compgen -W "--all -a --before --latest -l --no-trunc -n --quiet -q --size -s --since" -- "$cur" ) ) - ;; - esac -} - -_docker_pull() { - case "$prev" in - --tag|-t) + case "${words[$cword-2]}$prev=" in + *status=*) + COMPREPLY=( $( compgen -W "exited paused restarting running" -- "${cur#=}" ) ) return ;; esac case "$cur" in -*) - COMPREPLY=( $( compgen -W "--tag -t" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--all -a --before --filter -f --help --latest -l -n --no-trunc --quiet -q --size -s --since" -- "$cur" ) ) + ;; + esac +} + +_docker_pull() { + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--all-tags -a --help" -- "$cur" ) ) ;; *) - local counter=$(__docker_pos_first_nonflag '--tag|-t') + local counter=$(__docker_pos_first_nonflag) if [ $cword -eq $counter ]; then __docker_image_repos_and_tags fi @@ -546,17 +654,31 @@ _docker_pull() { } _docker_push() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - __docker_image_repos_and_tags - fi + case "$cur" in + -*) + COMPREPLY=( $( 
compgen -W "--help" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + __docker_image_repos_and_tags + fi + ;; + esac } _docker_rename() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - __docker_containers_all - fi + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + __docker_containers_all + fi + ;; + esac } _docker_restart() { @@ -568,7 +690,7 @@ _docker_restart() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--time -t" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--help --time -t" -- "$cur" ) ) ;; *) __docker_containers_all @@ -579,8 +701,7 @@ _docker_restart() { _docker_rm() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--force -f --link -l --volumes -v" -- "$cur" ) ) - return + COMPREPLY=( $( compgen -W "--force -f --help --link -l --volumes -v" -- "$cur" ) ) ;; *) for arg in "${COMP_WORDS[@]}"; do @@ -592,13 +713,19 @@ _docker_rm() { esac done __docker_containers_stopped - return ;; esac } _docker_rmi() { - __docker_image_repos_and_tags_and_ids + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--force -f --help --no-prune" -- "$cur" ) ) + ;; + *) + __docker_image_repos_and_tags_and_ids + ;; + esac } _docker_run() { @@ -623,21 +750,26 @@ _docker_run() { --lxc-conf --mac-address --memory -m + --memory-swap --name --net + --pid --publish -p --restart --security-opt --user -u + --ulimit --volumes-from --volume -v --workdir -w " local all_options="$options_with_args + --help --interactive -i --privileged --publish-all -P + --read-only --tty -t " @@ -794,7 +926,7 @@ _docker_save() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "-o --output" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--help --output -o" -- "$cur" ) ) ;; *) __docker_image_repos_and_tags_and_ids @@ -811,7 +943,7 @@ _docker_search() { case "$cur" in -*) - COMPREPLY=( $( compgen -W 
"--automated --no-trunc --stars -s" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--automated --help --no-trunc --stars -s" -- "$cur" ) ) ;; esac } @@ -819,7 +951,7 @@ _docker_search() { _docker_start() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--attach -a --interactive -i" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--attach -a --help --interactive -i" -- "$cur" ) ) ;; *) __docker_containers_stopped @@ -828,7 +960,14 @@ _docker_start() { } _docker_stats() { - __docker_containers_running + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + __docker_containers_running + ;; + esac } _docker_stop() { @@ -840,7 +979,7 @@ _docker_stop() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--time -t" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--help --time -t" -- "$cur" ) ) ;; *) __docker_containers_running @@ -851,7 +990,7 @@ _docker_stop() { _docker_tag() { case "$cur" in -*) - COMPREPLY=( $( compgen -W "--force -f" -- "$cur" ) ) + COMPREPLY=( $( compgen -W "--force -f --help" -- "$cur" ) ) ;; *) local counter=$(__docker_pos_first_nonflag) @@ -871,25 +1010,50 @@ _docker_tag() { } _docker_unpause() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - __docker_containers_unpauseable - fi + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + __docker_containers_unpauseable + fi + ;; + esac } _docker_top() { - local counter=$(__docker_pos_first_nonflag) - if [ $cword -eq $counter ]; then - __docker_containers_running - fi + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + local counter=$(__docker_pos_first_nonflag) + if [ $cword -eq $counter ]; then + __docker_containers_running + fi + ;; + esac } _docker_version() { - return + case "$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + esac } _docker_wait() { - __docker_containers_all + case 
"$cur" in + -*) + COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) + ;; + *) + __docker_containers_all + ;; + esac } _docker() { @@ -910,11 +1074,11 @@ _docker() { images import info - insert inspect kill load login + logout logs pause port @@ -939,8 +1103,10 @@ _docker() { ) local main_options_with_args=" + --api-cors-header --bip --bridge -b + --default-ulimit --dns --dns-search --exec-driver -e diff --git a/contrib/completion/fish/docker.fish b/contrib/completion/fish/docker.fish index 88caf15ea8..d3237588ef 100644 --- a/contrib/completion/fish/docker.fish +++ b/contrib/completion/fish/docker.fish @@ -16,7 +16,7 @@ function __fish_docker_no_subcommand --description 'Test if docker has yet to be given the subcommand' for i in (commandline -opc) - if contains -- $i attach build commit cp create diff events exec export history images import info insert inspect kill load login logout logs pause port ps pull push restart rm rmi run save search start stop tag top unpause version wait + if contains -- $i attach build commit cp create diff events exec export history images import info inspect kill load login logout logs pause port ps pull push rename restart rm rmi run save search start stop tag top unpause version wait return 1 end end @@ -43,7 +43,7 @@ function __fish_print_docker_repositories --description 'Print a list of docker end # common options -complete -c docker -f -n '__fish_docker_no_subcommand' -l api-enable-cors -d 'Enable CORS headers in the remote API' +complete -c docker -f -n '__fish_docker_no_subcommand' -l api-cors-header -d "Set CORS headers in the remote API. 
Default is cors disabled" complete -c docker -f -n '__fish_docker_no_subcommand' -s b -l bridge -d 'Attach containers to a pre-existing network bridge' complete -c docker -f -n '__fish_docker_no_subcommand' -l bip -d "Use this CIDR notation address for the network bridge's IP, not compatible with -b" complete -c docker -f -n '__fish_docker_no_subcommand' -s D -l debug -d 'Enable debug mode' diff --git a/contrib/completion/zsh/MAINTAINERS b/contrib/completion/zsh/REVIEWERS similarity index 100% rename from contrib/completion/zsh/MAINTAINERS rename to contrib/completion/zsh/REVIEWERS diff --git a/contrib/completion/zsh/_docker b/contrib/completion/zsh/_docker index 3215814313..28398f7524 100644 --- a/contrib/completion/zsh/_docker +++ b/contrib/completion/zsh/_docker @@ -270,11 +270,6 @@ __docker_subcommand () { {-q,--quiet}'[Only show numeric IDs]' \ ':repository:__docker_repositories' ;; - (inspect) - _arguments \ - {-f,--format=-}'[Format the output using the given go template]:template: ' \ - '*:containers:__docker_containers' - ;; (import) _arguments \ ':URL:(- http:// file://)' \ @@ -282,15 +277,10 @@ __docker_subcommand () { ;; (info) ;; - (import) + (inspect) _arguments \ - ':URL:(- http:// file://)' \ - ':repository:__docker_repositories_with_tags' - ;; - (insert) - _arguments '1:containers:__docker_containers' \ - '2:URL:(http:// file://)' \ - '3:file:_files' + {-f,--format=-}'[Format the output using the given go template]:template: ' \ + '*:containers:__docker_containers' ;; (kill) _arguments \ diff --git a/contrib/download-frozen-image.sh b/contrib/download-frozen-image.sh new file mode 100755 index 0000000000..b45cba9813 --- /dev/null +++ b/contrib/download-frozen-image.sh @@ -0,0 +1,104 @@ +#!/bin/bash +set -e + +# hello-world latest ef872312fe1b 3 months ago 910 B +# hello-world latest ef872312fe1bbc5e05aae626791a47ee9b032efa8f3bda39cc0be7b56bfe59b9 3 months ago 910 B + +# debian latest f6fab3b798be 10 weeks ago 85.1 MB +# debian latest 
f6fab3b798be3174f45aa1eb731f8182705555f89c9026d8c1ef230cbf8301dd 10 weeks ago 85.1 MB + +if ! command -v curl &> /dev/null; then + echo >&2 'error: "curl" not found!' + exit 1 +fi + +usage() { + echo "usage: $0 dir image[:tag][@image-id] ..." + echo " ie: $0 /tmp/hello-world hello-world" + echo " $0 /tmp/debian-jessie debian:jessie" + echo " $0 /tmp/old-hello-world hello-world@ef872312fe1bbc5e05aae626791a47ee9b032efa8f3bda39cc0be7b56bfe59b9" + echo " $0 /tmp/old-debian debian:latest@f6fab3b798be3174f45aa1eb731f8182705555f89c9026d8c1ef230cbf8301dd" + [ -z "$1" ] || exit "$1" +} + +dir="$1" # dir for building tar in +shift || usage 1 >&2 + +[ $# -gt 0 -a "$dir" ] || usage 2 >&2 +mkdir -p "$dir" + +# hacky workarounds for Bash 3 support (no associative arrays) +images=() +rm -f "$dir"/tags-*.tmp +# repositories[busybox]='"latest": "...", "ubuntu-14.04": "..."' + +while [ $# -gt 0 ]; do + imageTag="$1" + shift + image="${imageTag%%[:@]*}" + tag="${imageTag#*:}" + imageId="${tag##*@}" + [ "$imageId" != "$tag" ] || imageId= + [ "$tag" != "$imageTag" ] || tag='latest' + tag="${tag%@*}" + + token="$(curl -sSL -o /dev/null -D- -H 'X-Docker-Token: true' "https://index.docker.io/v1/repositories/$image/images" | tr -d '\r' | awk -F ': *' '$1 == "X-Docker-Token" { print $2 }')" + + if [ -z "$imageId" ]; then + imageId="$(curl -sSL -H "Authorization: Token $token" "https://registry-1.docker.io/v1/repositories/$image/tags/$tag")" + imageId="${imageId//\"/}" + fi + + ancestryJson="$(curl -sSL -H "Authorization: Token $token" "https://registry-1.docker.io/v1/images/$imageId/ancestry")" + if [ "${ancestryJson:0:1}" != '[' ]; then + echo >&2 "error: /v1/images/$imageId/ancestry returned something unexpected:" + echo >&2 " $ancestryJson" + exit 1 + fi + + IFS=',' + ancestry=( ${ancestryJson//[\[\] \"]/} ) + unset IFS + + if [ -s "$dir/tags-$image.tmp" ]; then + echo -n ', ' >> "$dir/tags-$image.tmp" + else + images=( "${images[@]}" "$image" ) + fi + echo -n '"'"$tag"'": 
"'"$imageId"'"' >> "$dir/tags-$image.tmp" + + echo "Downloading '$imageTag' (${#ancestry[@]} layers)..." + for imageId in "${ancestry[@]}"; do + mkdir -p "$dir/$imageId" + echo '1.0' > "$dir/$imageId/VERSION" + + curl -sSL -H "Authorization: Token $token" "https://registry-1.docker.io/v1/images/$imageId/json" -o "$dir/$imageId/json" + + # TODO figure out why "-C -" doesn't work here + # "curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume." + # "HTTP/1.1 416 Requested Range Not Satisfiable" + if [ -f "$dir/$imageId/layer.tar" ]; then + # TODO hackpatch for no -C support :'( + echo "skipping existing ${imageId:0:12}" + continue + fi + curl -SL --progress -H "Authorization: Token $token" "https://registry-1.docker.io/v1/images/$imageId/layer" -o "$dir/$imageId/layer.tar" # -C - + done + echo +done + +echo -n '{' > "$dir/repositories" +firstImage=1 +for image in "${images[@]}"; do + [ "$firstImage" ] || echo -n ',' >> "$dir/repositories" + firstImage= + echo -n $'\n\t' >> "$dir/repositories" + echo -n '"'"$image"'": { '"$(cat "$dir/tags-$image.tmp")"' }' >> "$dir/repositories" +done +echo -n $'\n}\n' >> "$dir/repositories" + +rm -f "$dir"/tags-*.tmp + +echo "Download of images into '$dir' complete." +echo "Use something like the following to load the result into a Docker daemon:" +echo " tar -cC '$dir' . | docker load" diff --git a/contrib/httpserver/Dockerfile b/contrib/httpserver/Dockerfile new file mode 100644 index 0000000000..747dc91bcf --- /dev/null +++ b/contrib/httpserver/Dockerfile @@ -0,0 +1,4 @@ +FROM busybox +EXPOSE 80/tcp +COPY httpserver . 
+CMD ["./httpserver"] diff --git a/contrib/httpserver/server.go b/contrib/httpserver/server.go new file mode 100644 index 0000000000..a75d5abb3d --- /dev/null +++ b/contrib/httpserver/server.go @@ -0,0 +1,12 @@ +package main + +import ( + "log" + "net/http" +) + +func main() { + fs := http.FileServer(http.Dir("/static")) + http.Handle("/", fs) + log.Panic(http.ListenAndServe(":80", nil)) +} diff --git a/contrib/init/systemd/MAINTAINERS b/contrib/init/systemd/REVIEWERS similarity index 100% rename from contrib/init/systemd/MAINTAINERS rename to contrib/init/systemd/REVIEWERS diff --git a/contrib/init/upstart/MAINTAINERS b/contrib/init/upstart/REVIEWERS similarity index 100% rename from contrib/init/upstart/MAINTAINERS rename to contrib/init/upstart/REVIEWERS diff --git a/contrib/mkimage/debootstrap b/contrib/mkimage/debootstrap index db35d3177a..72983d249b 100755 --- a/contrib/mkimage/debootstrap +++ b/contrib/mkimage/debootstrap @@ -38,13 +38,13 @@ rootfs_chroot() { # prevent init scripts from running during install/update echo >&2 "+ echo exit 101 > '$rootfsDir/usr/sbin/policy-rc.d'" cat > "$rootfsDir/usr/sbin/policy-rc.d" <<'EOF' -#!/bin/sh + #!/bin/sh -# For most Docker users, "apt-get install" only happens during "docker build", -# where starting services doesn't work and often fails in humorous ways. This -# prevents those failures by stopping the services from attempting to start. + # For most Docker users, "apt-get install" only happens during "docker build", + # where starting services doesn't work and often fails in humorous ways. This + # prevents those failures by stopping the services from attempting to start. 
-exit 101 + exit 101 EOF chmod +x "$rootfsDir/usr/sbin/policy-rc.d" @@ -69,12 +69,12 @@ if strings "$rootfsDir/usr/bin/dpkg" | grep -q unsafe-io; then # force dpkg not to call sync() after package extraction (speeding up installs) echo >&2 "+ echo force-unsafe-io > '$rootfsDir/etc/dpkg/dpkg.cfg.d/docker-apt-speedup'" cat > "$rootfsDir/etc/dpkg/dpkg.cfg.d/docker-apt-speedup" <<-'EOF' - # For most Docker users, package installs happen during "docker build", which - # doesn't survive power loss and gets restarted clean afterwards anyhow, so - # this minor tweak gives us a nice speedup (much nicer on spinning disks, - # obviously). + # For most Docker users, package installs happen during "docker build", which + # doesn't survive power loss and gets restarted clean afterwards anyhow, so + # this minor tweak gives us a nice speedup (much nicer on spinning disks, + # obviously). - force-unsafe-io + force-unsafe-io EOF fi @@ -107,26 +107,47 @@ if [ -d "$rootfsDir/etc/apt/apt.conf.d" ]; then # remove apt-cache translations for fast "apt-get update" echo >&2 "+ echo Acquire::Languages 'none' > '$rootfsDir/etc/apt/apt.conf.d/docker-no-languages'" cat > "$rootfsDir/etc/apt/apt.conf.d/docker-no-languages" <<-'EOF' - # In Docker, we don't often need the "Translations" files, so we're just wasting - # time and space by downloading them, and this inhibits that. For users that do - # need them, it's a simple matter to delete this file and "apt-get update". :) + # In Docker, we don't often need the "Translations" files, so we're just wasting + # time and space by downloading them, and this inhibits that. For users that do + # need them, it's a simple matter to delete this file and "apt-get update". 
:) - Acquire::Languages "none"; + Acquire::Languages "none"; EOF echo >&2 "+ echo Acquire::GzipIndexes 'true' > '$rootfsDir/etc/apt/apt.conf.d/docker-gzip-indexes'" cat > "$rootfsDir/etc/apt/apt.conf.d/docker-gzip-indexes" <<-'EOF' - # Since Docker users using "RUN apt-get update && apt-get install -y ..." in - # their Dockerfiles don't go delete the lists files afterwards, we want them to - # be as small as possible on-disk, so we explicitly request "gz" versions and - # tell Apt to keep them gzipped on-disk. + # Since Docker users using "RUN apt-get update && apt-get install -y ..." in + # their Dockerfiles don't go delete the lists files afterwards, we want them to + # be as small as possible on-disk, so we explicitly request "gz" versions and + # tell Apt to keep them gzipped on-disk. - # For comparison, an "apt-get update" layer without this on a pristine - # "debian:wheezy" base image was "29.88 MB", where with this it was only - # "8.273 MB". + # For comparison, an "apt-get update" layer without this on a pristine + # "debian:wheezy" base image was "29.88 MB", where with this it was only + # "8.273 MB". 
- Acquire::GzipIndexes "true"; - Acquire::CompressionTypes::Order:: "gz"; + Acquire::GzipIndexes "true"; + Acquire::CompressionTypes::Order:: "gz"; + EOF + + # update "autoremove" configuration to be aggressive about removing suggests deps that weren't manually installed + echo >&2 "+ echo Apt::AutoRemove::SuggestsImportant 'false' > '$rootfsDir/etc/apt/apt.conf.d/docker-autoremove-suggests'" + cat > "$rootfsDir/etc/apt/apt.conf.d/docker-autoremove-suggests" <<-'EOF' + # Since Docker users are looking for the smallest possible final images, the + # following emerges as a very common pattern: + + # RUN apt-get update \ + # && apt-get install -y \ + # && \ + # && apt-get purge -y --auto-remove + + # By default, APT will actually _keep_ packages installed via Recommends or + # Depends if another package Suggests them, even and including if the package + # that originally caused them to be installed is removed. Setting this to + # "false" ensures that APT is appropriately aggressive about removing the + # packages it added. 
+ + # https://aptitude.alioth.debian.org/doc/en/ch02s05s05.html#configApt-AutoRemove-SuggestsImportant + Apt::AutoRemove::SuggestsImportant "false"; EOF fi diff --git a/project/stats.sh b/contrib/project-stats.sh similarity index 100% rename from project/stats.sh rename to contrib/project-stats.sh diff --git a/project/report-issue.sh b/contrib/report-issue.sh similarity index 100% rename from project/report-issue.sh rename to contrib/report-issue.sh diff --git a/contrib/syntax/kate/Dockerfile.xml b/contrib/syntax/kate/Dockerfile.xml index e5602397ba..4fdef2393b 100644 --- a/contrib/syntax/kate/Dockerfile.xml +++ b/contrib/syntax/kate/Dockerfile.xml @@ -22,6 +22,7 @@ CMD WORKDIR USER + LABEL diff --git a/contrib/syntax/textmate/Docker.tmbundle/Syntaxes/Dockerfile.tmLanguage b/contrib/syntax/textmate/Docker.tmbundle/Syntaxes/Dockerfile.tmLanguage index 1d19a3ba2e..75efc2e811 100644 --- a/contrib/syntax/textmate/Docker.tmbundle/Syntaxes/Dockerfile.tmLanguage +++ b/contrib/syntax/textmate/Docker.tmbundle/Syntaxes/Dockerfile.tmLanguage @@ -12,7 +12,7 @@ match - ^\s*(ONBUILD\s+)?(FROM|MAINTAINER|RUN|EXPOSE|ENV|ADD|VOLUME|USER|WORKDIR|COPY)\s + ^\s*(ONBUILD\s+)?(FROM|MAINTAINER|RUN|EXPOSE|ENV|ADD|VOLUME|USER|LABEL|WORKDIR|COPY)\s captures 0 diff --git a/contrib/syntax/textmate/MAINTAINERS b/contrib/syntax/textmate/REVIEWERS similarity index 100% rename from contrib/syntax/textmate/MAINTAINERS rename to contrib/syntax/textmate/REVIEWERS diff --git a/contrib/syntax/vim/syntax/dockerfile.vim b/contrib/syntax/vim/syntax/dockerfile.vim index 2984bec5f8..36691e2504 100644 --- a/contrib/syntax/vim/syntax/dockerfile.vim +++ b/contrib/syntax/vim/syntax/dockerfile.vim @@ -11,7 +11,7 @@ let b:current_syntax = "dockerfile" syntax case ignore -syntax match dockerfileKeyword /\v^\s*(ONBUILD\s+)?(ADD|CMD|ENTRYPOINT|ENV|EXPOSE|FROM|MAINTAINER|RUN|USER|VOLUME|WORKDIR|COPY)\s/ +syntax match dockerfileKeyword 
/\v^\s*(ONBUILD\s+)?(ADD|CMD|ENTRYPOINT|ENV|EXPOSE|FROM|MAINTAINER|RUN|USER|LABEL|VOLUME|WORKDIR|COPY)\s/ highlight link dockerfileKeyword Keyword syntax region dockerfileString start=/\v"/ skip=/\v\\./ end=/\v"/ diff --git a/daemon/MAINTAINERS b/daemon/MAINTAINERS deleted file mode 100644 index 9360465f2d..0000000000 --- a/daemon/MAINTAINERS +++ /dev/null @@ -1,7 +0,0 @@ -Solomon Hykes (@shykes) -Victor Vieux (@vieux) -Michael Crosby (@crosbymichael) -Cristian Staretu (@unclejack) -Tibor Vass (@tiborvass) -Vishnu Kannan (@vishh) -volumes.go: Brian Goff (@cpuguy83) diff --git a/daemon/config.go b/daemon/config.go index 94deb3424c..4adc025eef 100644 --- a/daemon/config.go +++ b/daemon/config.go @@ -7,6 +7,7 @@ import ( "github.com/docker/docker/opts" flag "github.com/docker/docker/pkg/mflag" "github.com/docker/docker/pkg/ulimit" + "github.com/docker/docker/runconfig" ) const ( @@ -47,6 +48,7 @@ type Config struct { TrustKeyPath string Labels []string Ulimits map[string]*ulimit.Ulimit + LogConfig runconfig.LogConfig } // InstallFlags adds command-line options to the top-level flag parser for @@ -81,6 +83,7 @@ func (config *Config) InstallFlags() { opts.LabelListVar(&config.Labels, []string{"-label"}, "Set key=value labels to the daemon") config.Ulimits = make(map[string]*ulimit.Ulimit) opts.UlimitMapVar(config.Ulimits, []string{"-default-ulimit"}, "Set default ulimits for containers") + flag.StringVar(&config.LogConfig.Type, []string{"-log-driver"}, "json-file", "Containers logging driver(json-file/none)") } func getDefaultNetworkMtu() int { diff --git a/daemon/container.go b/daemon/container.go index 94bf891f5f..e9b360083c 100644 --- a/daemon/container.go +++ b/daemon/container.go @@ -14,11 +14,15 @@ import ( "syscall" "time" + "github.com/docker/libcontainer" + "github.com/docker/libcontainer/configs" "github.com/docker/libcontainer/devices" "github.com/docker/libcontainer/label" log "github.com/Sirupsen/logrus" "github.com/docker/docker/daemon/execdriver" + 
"github.com/docker/docker/daemon/logger" + "github.com/docker/docker/daemon/logger/jsonfilelog" "github.com/docker/docker/engine" "github.com/docker/docker/image" "github.com/docker/docker/links" @@ -26,6 +30,7 @@ import ( "github.com/docker/docker/pkg/archive" "github.com/docker/docker/pkg/broadcastwriter" "github.com/docker/docker/pkg/common" + "github.com/docker/docker/pkg/directory" "github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/networkfs/etchosts" "github.com/docker/docker/pkg/networkfs/resolvconf" @@ -95,9 +100,12 @@ type Container struct { VolumesRW map[string]bool hostConfig *runconfig.HostConfig - activeLinks map[string]*links.Link - monitor *containerMonitor - execCommands *execStore + activeLinks map[string]*links.Link + monitor *containerMonitor + execCommands *execStore + // logDriver for closing + logDriver logger.Logger + logCopier *logger.Copier AppliedVolumesFrom map[string]struct{} } @@ -258,18 +266,18 @@ func populateCommand(c *Container, env []string) error { pid.HostPid = c.hostConfig.PidMode.IsHost() // Build lists of devices allowed and created within the container. - userSpecifiedDevices := make([]*devices.Device, len(c.hostConfig.Devices)) + userSpecifiedDevices := make([]*configs.Device, len(c.hostConfig.Devices)) for i, deviceMapping := range c.hostConfig.Devices { - device, err := devices.GetDevice(deviceMapping.PathOnHost, deviceMapping.CgroupPermissions) + device, err := devices.DeviceFromPath(deviceMapping.PathOnHost, deviceMapping.CgroupPermissions) if err != nil { return fmt.Errorf("error gathering device information while adding custom device %q: %s", deviceMapping.PathOnHost, err) } device.Path = deviceMapping.PathInContainer userSpecifiedDevices[i] = device } - allowedDevices := append(devices.DefaultAllowedDevices, userSpecifiedDevices...) + allowedDevices := append(configs.DefaultAllowedDevices, userSpecifiedDevices...) 
- autoCreatedDevices := append(devices.DefaultAutoCreatedDevices, userSpecifiedDevices...) + autoCreatedDevices := append(configs.DefaultAutoCreatedDevices, userSpecifiedDevices...) // TODO: this can be removed after lxc-conf is fully deprecated lxcConfig, err := mergeLxcConfIntoOptions(c.hostConfig) @@ -300,10 +308,10 @@ func populateCommand(c *Container, env []string) error { } resources := &execdriver.Resources{ - Memory: c.Config.Memory, - MemorySwap: c.Config.MemorySwap, - CpuShares: c.Config.CpuShares, - Cpuset: c.Config.Cpuset, + Memory: c.hostConfig.Memory, + MemorySwap: c.hostConfig.MemorySwap, + CpuShares: c.hostConfig.CpuShares, + CpusetCpus: c.hostConfig.CpusetCpus, Rlimits: rlimits, } @@ -337,6 +345,7 @@ func populateCommand(c *Container, env []string) error { MountLabel: c.GetMountLabel(), LxcConfig: lxcConfig, AppArmorProfile: c.AppArmorProfile, + CgroupParent: c.hostConfig.CgroupParent, } return nil @@ -894,7 +903,7 @@ func (container *Container) GetSize() (int64, int64) { ) if err := container.Mount(); err != nil { - log.Errorf("Warning: failed to compute size of container rootfs %s: %s", container.ID, err) + log.Errorf("Failed to compute size of container rootfs %s: %s", container.ID, err) return sizeRw, sizeRootfs } defer container.Unmount() @@ -902,14 +911,14 @@ func (container *Container) GetSize() (int64, int64) { initID := fmt.Sprintf("%s-init", container.ID) sizeRw, err = driver.DiffSize(container.ID, initID) if err != nil { - log.Errorf("Warning: driver %s couldn't return diff size of container %s: %s", driver, container.ID, err) + log.Errorf("Driver %s couldn't return diff size of container %s: %s", driver, container.ID, err) // FIXME: GetSize should return an error. Not changing it now in case // there is a side-effect. 
sizeRw = -1 } if _, err = os.Stat(container.basefs); err != nil { - if sizeRootfs, err = utils.TreeSize(container.basefs); err != nil { + if sizeRootfs, err = directory.Size(container.basefs); err != nil { sizeRootfs = -1 } } @@ -971,7 +980,7 @@ func (container *Container) Exposes(p nat.Port) bool { return exists } -func (container *Container) GetPtyMaster() (*os.File, error) { +func (container *Container) GetPtyMaster() (libcontainer.Console, error) { ttyConsole, ok := container.command.ProcessConfig.Terminal.(execdriver.TtyTerminal) if !ok { return nil, ErrNoTTY @@ -1233,15 +1242,15 @@ func (container *Container) initializeNetworking() error { // Make sure the config is compatible with the current kernel func (container *Container) verifyDaemonSettings() { if container.Config.Memory > 0 && !container.daemon.sysInfo.MemoryLimit { - log.Infof("WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.") + log.Warnf("Your kernel does not support memory limit capabilities. Limitation discarded.") container.Config.Memory = 0 } if container.Config.Memory > 0 && !container.daemon.sysInfo.SwapLimit { - log.Infof("WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.") + log.Warnf("Your kernel does not support swap limit capabilities. Limitation discarded.") container.Config.MemorySwap = -1 } if container.daemon.sysInfo.IPv4ForwardingDisabled { - log.Infof("WARNING: IPv4 forwarding is disabled. Networking will not work") + log.Warnf("IPv4 forwarding is disabled. 
Networking will not work") } } @@ -1352,21 +1361,37 @@ func (container *Container) setupWorkingDirectory() error { return nil } -func (container *Container) startLoggingToDisk() error { - // Setup logging of stdout and stderr to disk - logPath, err := container.logPath("json") +func (container *Container) startLogging() error { + cfg := container.hostConfig.LogConfig + if cfg.Type == "" { + cfg = container.daemon.defaultLogConfig + } + var l logger.Logger + switch cfg.Type { + case "json-file": + pth, err := container.logPath("json") + if err != nil { + return err + } + + dl, err := jsonfilelog.New(pth) + if err != nil { + return err + } + l = dl + case "none": + return nil + default: + return fmt.Errorf("Unknown logging driver: %s", cfg.Type) + } + + copier, err := logger.NewCopier(container.ID, map[string]io.Reader{"stdout": container.StdoutPipe(), "stderr": container.StderrPipe()}, l) if err != nil { return err } - container.LogPath = logPath - - if err := container.daemon.LogToDisk(container.stdout, container.LogPath, "stdout"); err != nil { - return err - } - - if err := container.daemon.LogToDisk(container.stderr, container.LogPath, "stderr"); err != nil { - return err - } + container.logCopier = copier + copier.Run() + container.logDriver = l return nil } @@ -1467,3 +1492,12 @@ func (container *Container) getNetworkedContainer() (*Container, error) { func (container *Container) Stats() (*execdriver.ResourceStats, error) { return container.daemon.Stats(container) } + +func (c *Container) LogDriverType() string { + c.Lock() + defer c.Unlock() + if c.hostConfig.LogConfig.Type == "" { + return c.daemon.defaultLogConfig.Type + } + return c.hostConfig.LogConfig.Type +} diff --git a/daemon/create.go b/daemon/create.go index 5729cc1e57..e17b63636b 100644 --- a/daemon/create.go +++ b/daemon/create.go @@ -2,6 +2,7 @@ package daemon import ( "fmt" + "strings" "github.com/docker/docker/engine" "github.com/docker/docker/graph" @@ -18,33 +19,31 @@ func (daemon *Daemon) 
ContainerCreate(job *engine.Job) engine.Status { } else if len(job.Args) > 1 { return job.Errorf("Usage: %s", job.Name) } + config := runconfig.ContainerConfigFromJob(job) - if config.Memory != 0 && config.Memory < 4194304 { + hostConfig := runconfig.ContainerHostConfigFromJob(job) + + if len(hostConfig.LxcConf) > 0 && !strings.Contains(daemon.ExecutionDriver().Name(), "lxc") { + return job.Errorf("Cannot use --lxc-conf with execdriver: %s", daemon.ExecutionDriver().Name()) + } + if hostConfig.Memory != 0 && hostConfig.Memory < 4194304 { return job.Errorf("Minimum memory limit allowed is 4MB") } - if config.Memory > 0 && !daemon.SystemConfig().MemoryLimit { + if hostConfig.Memory > 0 && !daemon.SystemConfig().MemoryLimit { job.Errorf("Your kernel does not support memory limit capabilities. Limitation discarded.\n") - config.Memory = 0 + hostConfig.Memory = 0 } - if config.Memory > 0 && !daemon.SystemConfig().SwapLimit { + if hostConfig.Memory > 0 && !daemon.SystemConfig().SwapLimit { job.Errorf("Your kernel does not support swap limit capabilities. Limitation discarded.\n") - config.MemorySwap = -1 + hostConfig.MemorySwap = -1 } - if config.Memory > 0 && config.MemorySwap > 0 && config.MemorySwap < config.Memory { + if hostConfig.Memory > 0 && hostConfig.MemorySwap > 0 && hostConfig.MemorySwap < hostConfig.Memory { return job.Errorf("Minimum memoryswap limit should be larger than memory limit, see usage.\n") } - if config.Memory == 0 && config.MemorySwap > 0 { + if hostConfig.Memory == 0 && hostConfig.MemorySwap > 0 { return job.Errorf("You should always set the Memory limit when using Memoryswap limit, see usage.\n") } - var hostConfig *runconfig.HostConfig - if job.EnvExists("HostConfig") { - hostConfig = runconfig.ContainerHostConfigFromJob(job) - } else { - // Older versions of the API don't provide a HostConfig. 
- hostConfig = nil - } - container, buildWarnings, err := daemon.Create(config, hostConfig, name) if err != nil { if daemon.Graph().IsNotExist(err) { diff --git a/daemon/daemon.go b/daemon/daemon.go index 31fcded4c6..ebb43e2484 100644 --- a/daemon/daemon.go +++ b/daemon/daemon.go @@ -89,23 +89,24 @@ func (c *contStore) List() []*Container { } type Daemon struct { - ID string - repository string - sysInitPath string - containers *contStore - execCommands *execStore - graph *graph.Graph - repositories *graph.TagStore - idIndex *truncindex.TruncIndex - sysInfo *sysinfo.SysInfo - volumes *volumes.Repository - eng *engine.Engine - config *Config - containerGraph *graphdb.Database - driver graphdriver.Driver - execDriver execdriver.Driver - trustStore *trust.TrustStore - statsCollector *statsCollector + ID string + repository string + sysInitPath string + containers *contStore + execCommands *execStore + graph *graph.Graph + repositories *graph.TagStore + idIndex *truncindex.TruncIndex + sysInfo *sysinfo.SysInfo + volumes *volumes.Repository + eng *engine.Engine + config *Config + containerGraph *graphdb.Database + driver graphdriver.Driver + execDriver execdriver.Driver + trustStore *trust.TrustStore + statsCollector *statsCollector + defaultLogConfig runconfig.LogConfig } // Install installs daemon capabilities to eng. 
@@ -345,7 +346,7 @@ func (daemon *Daemon) restore() error { for _, v := range dir { id := v.Name() container, err := daemon.load(id) - if !debug { + if !debug && log.GetLevel() == log.InfoLevel { fmt.Print(".") } if err != nil { @@ -367,7 +368,7 @@ func (daemon *Daemon) restore() error { if entities := daemon.containerGraph.List("/", -1); entities != nil { for _, p := range entities.Paths() { - if !debug { + if !debug && log.GetLevel() == log.InfoLevel { fmt.Print(".") } @@ -419,7 +420,9 @@ func (daemon *Daemon) restore() error { } if !debug { - fmt.Println() + if log.GetLevel() == log.InfoLevel { + fmt.Println() + } log.Infof("Loading containers: done.") } @@ -774,6 +777,13 @@ func (daemon *Daemon) RegisterLinks(container *Container, hostConfig *runconfig. //An error from daemon.Get() means this name could not be found return fmt.Errorf("Could not get container for %s", parts["name"]) } + for child.hostConfig.NetworkMode.IsContainer() { + parts := strings.SplitN(string(child.hostConfig.NetworkMode), ":", 2) + child, err = daemon.Get(parts[1]) + if err != nil { + return fmt.Errorf("Could not get container for %s", parts[1]) + } + } if child.hostConfig.NetworkMode.IsHost() { return runconfig.ErrConflictHostNetworkAndLinks } @@ -817,6 +827,12 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error) } config.DisableNetwork = config.BridgeIface == disableNetworkBridge + // register portallocator release on shutdown + eng.OnShutdown(func() { + if err := portallocator.ReleaseAll(); err != nil { + log.Errorf("portallocator.ReleaseAll(): %s", err) + } + }) // Claim the pidfile first, to avoid any and all unexpected race conditions. // Some of the init doesn't need a pidfile lock - but let's not try to be smart. 
if config.Pidfile != "" { @@ -850,9 +866,6 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error) return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err) } os.Setenv("TMPDIR", realTmp) - if !config.EnableSelinuxSupport { - selinuxSetDisabled() - } // get the canonical path to the Docker root directory var realRoot string @@ -876,13 +889,28 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error) // Load storage driver driver, err := graphdriver.New(config.Root, config.GraphOptions) if err != nil { - return nil, err + return nil, fmt.Errorf("error initializing graphdriver: %v", err) } log.Debugf("Using graph driver %s", driver) + // register cleanup for graph driver + eng.OnShutdown(func() { + if err := driver.Cleanup(); err != nil { + log.Errorf("Error during graph storage driver.Cleanup(): %v", err) + } + }) - // As Docker on btrfs and SELinux are incompatible at present, error on both being enabled - if selinuxEnabled() && config.EnableSelinuxSupport && driver.String() == "btrfs" { - return nil, fmt.Errorf("SELinux is not supported with the BTRFS graph driver!") + if config.EnableSelinuxSupport { + if selinuxEnabled() { + // As Docker on btrfs and SELinux are incompatible at present, error on both being enabled + if driver.String() == "btrfs" { + return nil, fmt.Errorf("SELinux is not supported with the BTRFS graph driver") + } + log.Debug("SELinux enabled successfully") + } else { + log.Warn("Docker could not enable SELinux on the host system") + } + } else { + selinuxSetDisabled() } daemonRepo := path.Join(config.Root, "containers") @@ -956,6 +984,12 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error) if err != nil { return nil, err } + // register graph close on shutdown + eng.OnShutdown(func() { + if err := graph.Close(); err != nil { + log.Errorf("Error during container graph.Close(): %v", err) + } + }) localCopy := path.Join(config.Root,
"init", fmt.Sprintf("dockerinit-%s", dockerversion.VERSION)) sysInitPath := utils.DockerInitPath(localCopy) @@ -984,24 +1018,32 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error) } daemon := &Daemon{ - ID: trustKey.PublicKey().KeyID(), - repository: daemonRepo, - containers: &contStore{s: make(map[string]*Container)}, - execCommands: newExecStore(), - graph: g, - repositories: repositories, - idIndex: truncindex.NewTruncIndex([]string{}), - sysInfo: sysInfo, - volumes: volumes, - config: config, - containerGraph: graph, - driver: driver, - sysInitPath: sysInitPath, - execDriver: ed, - eng: eng, - trustStore: t, - statsCollector: newStatsCollector(1 * time.Second), + ID: trustKey.PublicKey().KeyID(), + repository: daemonRepo, + containers: &contStore{s: make(map[string]*Container)}, + execCommands: newExecStore(), + graph: g, + repositories: repositories, + idIndex: truncindex.NewTruncIndex([]string{}), + sysInfo: sysInfo, + volumes: volumes, + config: config, + containerGraph: graph, + driver: driver, + sysInitPath: sysInitPath, + execDriver: ed, + eng: eng, + trustStore: t, + statsCollector: newStatsCollector(1 * time.Second), + defaultLogConfig: config.LogConfig, } + + eng.OnShutdown(func() { + if err := daemon.shutdown(); err != nil { + log.Errorf("Error during daemon.shutdown(): %v", err) + } + }) + if err := daemon.restore(); err != nil { return nil, err } @@ -1011,25 +1053,6 @@ func NewDaemonFromDirectory(config *Config, eng *engine.Engine) (*Daemon, error) return nil, err } - // Setup shutdown handlers - // FIXME: can these shutdown handlers be registered closer to their source? 
- eng.OnShutdown(func() { - // FIXME: if these cleanup steps can be called concurrently, register - // them as separate handlers to speed up total shutdown time - if err := daemon.shutdown(); err != nil { - log.Errorf("daemon.shutdown(): %s", err) - } - if err := portallocator.ReleaseAll(); err != nil { - log.Errorf("portallocator.ReleaseAll(): %s", err) - } - if err := daemon.driver.Cleanup(); err != nil { - log.Errorf("daemon.driver.Cleanup(): %s", err.Error()) - } - if err := daemon.containerGraph.Close(); err != nil { - log.Errorf("daemon.containerGraph.Close(): %s", err.Error()) - } - }) - return daemon, nil } @@ -1230,11 +1253,11 @@ func checkKernel() error { // the circumstances of pre-3.8 crashes are clearer. // For details see http://github.com/docker/docker/issues/407 if k, err := kernel.GetKernelVersion(); err != nil { - log.Infof("WARNING: %s", err) + log.Warnf("%s", err) } else { if kernel.CompareKernelVersion(k, &kernel.KernelVersionInfo{Kernel: 3, Major: 8, Minor: 0}) < 0 { if os.Getenv("DOCKER_NOWARN_KERNEL_VERSION") == "" { - log.Infof("WARNING: You are running linux kernel version %s, which might be unstable running docker. Please upgrade your kernel to 3.8.0.", k.String()) + log.Warnf("You are running linux kernel version %s, which might be unstable running docker. 
Please upgrade your kernel to 3.8.0.", k.String()) } } } diff --git a/daemon/execdriver/MAINTAINERS b/daemon/execdriver/MAINTAINERS deleted file mode 100644 index 68a97d2fc2..0000000000 --- a/daemon/execdriver/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Michael Crosby (@crosbymichael) -Victor Vieux (@vieux) diff --git a/daemon/execdriver/driver.go b/daemon/execdriver/driver.go index 24bbc62416..e937de3beb 100644 --- a/daemon/execdriver/driver.go +++ b/daemon/execdriver/driver.go @@ -1,17 +1,22 @@ package execdriver import ( + "encoding/json" "errors" "io" + "io/ioutil" "os" "os/exec" + "path/filepath" + "strconv" "strings" "time" "github.com/docker/docker/daemon/execdriver/native/template" "github.com/docker/docker/pkg/ulimit" "github.com/docker/libcontainer" - "github.com/docker/libcontainer/devices" + "github.com/docker/libcontainer/cgroups/fs" + "github.com/docker/libcontainer/configs" ) // Context is a generic key value pair that allows @@ -42,7 +47,7 @@ type Terminal interface { } type TtyTerminal interface { - Master() *os.File + Master() libcontainer.Console } // ExitStatus provides exit reasons for a container. 
@@ -104,12 +109,12 @@ type Resources struct { Memory int64 `json:"memory"` MemorySwap int64 `json:"memory_swap"` CpuShares int64 `json:"cpu_shares"` - Cpuset string `json:"cpuset"` + CpusetCpus string `json:"cpuset_cpus"` Rlimits []*ulimit.Rlimit `json:"rlimits"` } type ResourceStats struct { - *libcontainer.ContainerStats + *libcontainer.Stats Read time.Time `json:"read"` MemoryLimit int64 `json:"memory_limit"` SystemUsage uint64 `json:"system_usage"` @@ -149,8 +154,8 @@ type Command struct { Pid *Pid `json:"pid"` Resources *Resources `json:"resources"` Mounts []Mount `json:"mounts"` - AllowedDevices []*devices.Device `json:"allowed_devices"` - AutoCreatedDevices []*devices.Device `json:"autocreated_devices"` + AllowedDevices []*configs.Device `json:"allowed_devices"` + AutoCreatedDevices []*configs.Device `json:"autocreated_devices"` CapAdd []string `json:"cap_add"` CapDrop []string `json:"cap_drop"` ContainerPid int `json:"container_pid"` // the pid for the process inside a container @@ -159,25 +164,27 @@ type Command struct { MountLabel string `json:"mount_label"` LxcConfig []string `json:"lxc_config"` AppArmorProfile string `json:"apparmor_profile"` + CgroupParent string `json:"cgroup_parent"` // The parent cgroup for this command. 
} -func InitContainer(c *Command) *libcontainer.Config { +func InitContainer(c *Command) *configs.Config { container := template.New() container.Hostname = getEnv("HOSTNAME", c.ProcessConfig.Env) - container.Tty = c.ProcessConfig.Tty - container.User = c.ProcessConfig.User - container.WorkingDir = c.WorkingDir - container.Env = c.ProcessConfig.Env container.Cgroups.Name = c.ID container.Cgroups.AllowedDevices = c.AllowedDevices - container.MountConfig.DeviceNodes = c.AutoCreatedDevices - container.RootFs = c.Rootfs - container.MountConfig.ReadonlyFs = c.ReadonlyRootfs + container.Readonlyfs = c.ReadonlyRootfs + container.Devices = c.AutoCreatedDevices + container.Rootfs = c.Rootfs + container.Readonlyfs = c.ReadonlyRootfs // check to see if we are running in ramdisk to disable pivot root - container.MountConfig.NoPivotRoot = os.Getenv("DOCKER_RAMDISK") != "" - container.RestrictSys = true + container.NoPivotRoot = os.Getenv("DOCKER_RAMDISK") != "" + + // Default parent cgroup is "docker". Override if required. 
+ if c.CgroupParent != "" { + container.Cgroups.Parent = c.CgroupParent + } return container } @@ -191,40 +198,110 @@ func getEnv(key string, env []string) string { return "" } -func SetupCgroups(container *libcontainer.Config, c *Command) error { +func SetupCgroups(container *configs.Config, c *Command) error { if c.Resources != nil { container.Cgroups.CpuShares = c.Resources.CpuShares container.Cgroups.Memory = c.Resources.Memory container.Cgroups.MemoryReservation = c.Resources.Memory container.Cgroups.MemorySwap = c.Resources.MemorySwap - container.Cgroups.CpusetCpus = c.Resources.Cpuset + container.Cgroups.CpusetCpus = c.Resources.CpusetCpus } return nil } -func Stats(stateFile string, containerMemoryLimit int64, machineMemory int64) (*ResourceStats, error) { - state, err := libcontainer.GetState(stateFile) - if err != nil { - if os.IsNotExist(err) { - return nil, ErrNotRunning +// Returns the network statistics for the network interfaces represented by the NetworkRuntimeInfo. +func getNetworkInterfaceStats(interfaceName string) (*libcontainer.NetworkInterface, error) { + out := &libcontainer.NetworkInterface{Name: interfaceName} + // This can happen if the network runtime information is missing - possible if the + // container was created by an old version of libcontainer. + if interfaceName == "" { + return out, nil + } + type netStatsPair struct { + // Where to write the output. + Out *uint64 + // The network stats file to read. + File string + } + // Ingress for host veth is from the container. Hence tx_bytes stat on the host veth is actually number of bytes received by the container. 
+ netStats := []netStatsPair{ + {Out: &out.RxBytes, File: "tx_bytes"}, + {Out: &out.RxPackets, File: "tx_packets"}, + {Out: &out.RxErrors, File: "tx_errors"}, + {Out: &out.RxDropped, File: "tx_dropped"}, + + {Out: &out.TxBytes, File: "rx_bytes"}, + {Out: &out.TxPackets, File: "rx_packets"}, + {Out: &out.TxErrors, File: "rx_errors"}, + {Out: &out.TxDropped, File: "rx_dropped"}, + } + for _, netStat := range netStats { + data, err := readSysfsNetworkStats(interfaceName, netStat.File) + if err != nil { + return nil, err } + *(netStat.Out) = data + } + return out, nil +} + +// Reads the specified statistics available under /sys/class/net/<ethInterface>/statistics +func readSysfsNetworkStats(ethInterface, statsFile string) (uint64, error) { + data, err := ioutil.ReadFile(filepath.Join("/sys/class/net", ethInterface, "statistics", statsFile)) + if err != nil { + return 0, err + } + return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64) +} + +func Stats(containerDir string, containerMemoryLimit int64, machineMemory int64) (*ResourceStats, error) { + f, err := os.Open(filepath.Join(containerDir, "state.json")) + if err != nil { + return nil, err + } + defer f.Close() + + type network struct { + Type string + HostInterfaceName string + } + + state := struct { + CgroupPaths map[string]string `json:"cgroup_paths"` + Networks []network + }{} + + if err := json.NewDecoder(f).Decode(&state); err != nil { return nil, err } now := time.Now() - stats, err := libcontainer.GetStats(nil, state) + + mgr := fs.Manager{Paths: state.CgroupPaths} + cstats, err := mgr.GetStats() if err != nil { return nil, err } + stats := &libcontainer.Stats{CgroupStats: cstats} // if the container does not have any memory limit specified set the // limit to the machines memory memoryLimit := containerMemoryLimit if memoryLimit == 0 { memoryLimit = machineMemory } + for _, iface := range state.Networks { + switch iface.Type { + case "veth": + istats, err := getNetworkInterfaceStats(iface.HostInterfaceName) + 
if err != nil { + return nil, err + } + stats.Interfaces = append(stats.Interfaces, istats) + } + } return &ResourceStats{ - Read: now, - ContainerStats: stats, - MemoryLimit: memoryLimit, + Stats: stats, + Read: now, + MemoryLimit: memoryLimit, }, nil } diff --git a/daemon/execdriver/lxc/MAINTAINERS b/daemon/execdriver/lxc/MAINTAINERS deleted file mode 100644 index ac8ff535ff..0000000000 --- a/daemon/execdriver/lxc/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -# the LXC exec driver needs more maintainers and contributions -Dinesh Subhraveti (@dineshs-altiscale) diff --git a/daemon/execdriver/lxc/driver.go b/daemon/execdriver/lxc/driver.go index f467b696c1..f45c21445f 100644 --- a/daemon/execdriver/lxc/driver.go +++ b/daemon/execdriver/lxc/driver.go @@ -23,7 +23,9 @@ import ( "github.com/docker/docker/utils" "github.com/docker/libcontainer" "github.com/docker/libcontainer/cgroups" - "github.com/docker/libcontainer/mount/nodes" + "github.com/docker/libcontainer/configs" + "github.com/docker/libcontainer/system" + "github.com/docker/libcontainer/user" "github.com/kr/pty" ) @@ -42,7 +44,7 @@ type driver struct { } type activeContainer struct { - container *libcontainer.Config + container *configs.Config cmd *exec.Cmd } @@ -190,7 +192,7 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba c.ProcessConfig.Path = aname c.ProcessConfig.Args = append([]string{name}, arg...) 
- if err := nodes.CreateDeviceNodes(c.Rootfs, c.AutoCreatedDevices); err != nil { + if err := createDeviceNodes(c.Rootfs, c.AutoCreatedDevices); err != nil { return execdriver.ExitStatus{ExitCode: -1}, err } @@ -231,11 +233,17 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba } state := &libcontainer.State{ - InitPid: pid, - CgroupPaths: cgroupPaths, + InitProcessPid: pid, + CgroupPaths: cgroupPaths, } - if err := libcontainer.SaveState(dataPath, state); err != nil { + f, err := os.Create(filepath.Join(dataPath, "state.json")) + if err != nil { + return terminate(err) + } + defer f.Close() + + if err := json.NewEncoder(f).Encode(state); err != nil { return terminate(err) } @@ -245,18 +253,19 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba log.Debugf("Invoking startCallback") startCallback(&c.ProcessConfig, pid) } + oomKill := false - oomKillNotification, err := libcontainer.NotifyOnOOM(state) + oomKillNotification, err := notifyOnOOM(cgroupPaths) + + <-waitLock + if err == nil { _, oomKill = <-oomKillNotification log.Debugf("oomKill error %s waitErr %s", oomKill, waitErr) - } else { - log.Warnf("WARNING: Your kernel does not support OOM notifications: %s", err) + log.Warnf("Your kernel does not support OOM notifications: %s", err) } - <-waitLock - // check oom error exitCode := getExitCode(c) if oomKill { @@ -265,9 +274,57 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba return execdriver.ExitStatus{ExitCode: exitCode, OOMKilled: oomKill}, waitErr } +// copy from libcontainer +func notifyOnOOM(paths map[string]string) (<-chan struct{}, error) { + dir := paths["memory"] + if dir == "" { + return nil, fmt.Errorf("There is no path for %q in state", "memory") + } + oomControl, err := os.Open(filepath.Join(dir, "memory.oom_control")) + if err != nil { + return nil, err + } + fd, _, syserr := syscall.RawSyscall(syscall.SYS_EVENTFD2, 0, syscall.FD_CLOEXEC, 0) + if 
syserr != 0 { + oomControl.Close() + return nil, syserr + } + + eventfd := os.NewFile(fd, "eventfd") + + eventControlPath := filepath.Join(dir, "cgroup.event_control") + data := fmt.Sprintf("%d %d", eventfd.Fd(), oomControl.Fd()) + if err := ioutil.WriteFile(eventControlPath, []byte(data), 0700); err != nil { + eventfd.Close() + oomControl.Close() + return nil, err + } + ch := make(chan struct{}) + go func() { + defer func() { + close(ch) + eventfd.Close() + oomControl.Close() + }() + buf := make([]byte, 8) + for { + if _, err := eventfd.Read(buf); err != nil { + return + } + // When a cgroup is destroyed, an event is sent to eventfd. + // So if the control path is gone, return instead of notifying. + if _, err := os.Lstat(eventControlPath); os.IsNotExist(err) { + return + } + ch <- struct{}{} + } + }() + return ch, nil +} + // createContainer populates and configures the container type with the // data provided by the execdriver.Command -func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Config, error) { +func (d *driver) createContainer(c *execdriver.Command) (*configs.Config, error) { container := execdriver.InitContainer(c) if err := execdriver.SetupCgroups(container, c); err != nil { return nil, err @@ -297,6 +354,90 @@ func cgroupPaths(containerId string) (map[string]string, error) { return paths, nil } +// this is copied from the old libcontainer nodes.go +func createDeviceNodes(rootfs string, nodesToCreate []*configs.Device) error { + oldMask := syscall.Umask(0000) + defer syscall.Umask(oldMask) + + for _, node := range nodesToCreate { + if err := createDeviceNode(rootfs, node); err != nil { + return err + } + } + return nil +} + +// Creates the device node in the rootfs of the container. 
+func createDeviceNode(rootfs string, node *configs.Device) error { + var ( + dest = filepath.Join(rootfs, node.Path) + parent = filepath.Dir(dest) + ) + + if err := os.MkdirAll(parent, 0755); err != nil { + return err + } + + fileMode := node.FileMode + switch node.Type { + case 'c': + fileMode |= syscall.S_IFCHR + case 'b': + fileMode |= syscall.S_IFBLK + default: + return fmt.Errorf("%c is not a valid device type for device %s", node.Type, node.Path) + } + + if err := syscall.Mknod(dest, uint32(fileMode), node.Mkdev()); err != nil && !os.IsExist(err) { + return fmt.Errorf("mknod %s %s", node.Path, err) + } + + if err := syscall.Chown(dest, int(node.Uid), int(node.Gid)); err != nil { + return fmt.Errorf("chown %s to %d:%d", node.Path, node.Uid, node.Gid) + } + + return nil +} + +// setupUser changes the groups, gid, and uid for the user inside the container +// copied from libcontainer, because it is now private there +func setupUser(userSpec string) error { + // Set up defaults. + defaultExecUser := user.ExecUser{ + Uid: syscall.Getuid(), + Gid: syscall.Getgid(), + Home: "/", + } + passwdPath, err := user.GetPasswdPath() + if err != nil { + return err + } + groupPath, err := user.GetGroupPath() + if err != nil { + return err + } + execUser, err := user.GetExecUserPath(userSpec, &defaultExecUser, passwdPath, groupPath) + if err != nil { + return err + } + if err := syscall.Setgroups(execUser.Sgids); err != nil { + return err + } + if err := system.Setgid(execUser.Gid); err != nil { + return err + } + if err := system.Setuid(execUser.Uid); err != nil { + return err + } + // if we didn't get HOME already, set it based on the user's HOME + if envHome := os.Getenv("HOME"); envHome == "" { + if err := os.Setenv("HOME", execUser.Home); err != nil { + return err + } + } + return nil +} + /// Return the exit code of the process // if the process has not exited -1 will be returned func getExitCode(c *execdriver.Command) int { diff --git a/daemon/execdriver/lxc/lxc_init_linux.go 
b/daemon/execdriver/lxc/lxc_init_linux.go index 956a283fc2..e7bc2b5f3a 100644 --- a/daemon/execdriver/lxc/lxc_init_linux.go +++ b/daemon/execdriver/lxc/lxc_init_linux.go @@ -3,8 +3,6 @@ package lxc import ( "fmt" - "github.com/docker/libcontainer" - "github.com/docker/libcontainer/namespaces" "github.com/docker/libcontainer/utils" ) @@ -12,9 +10,7 @@ func finalizeNamespace(args *InitArgs) error { if err := utils.CloseExecFrom(3); err != nil { return err } - if err := namespaces.SetupUser(&libcontainer.Config{ - User: args.User, - }); err != nil { + if err := setupUser(args.User); err != nil { return fmt.Errorf("setup user %s", err) } if err := setupWorkingDirectory(args); err != nil { diff --git a/daemon/execdriver/lxc/lxc_template.go b/daemon/execdriver/lxc/lxc_template.go index 9de799dd52..e4a8ed6b5f 100644 --- a/daemon/execdriver/lxc/lxc_template.go +++ b/daemon/execdriver/lxc/lxc_template.go @@ -11,7 +11,6 @@ import ( nativeTemplate "github.com/docker/docker/daemon/execdriver/native/template" "github.com/docker/docker/utils" "github.com/docker/libcontainer/label" - "github.com/docker/libcontainer/security/capabilities" ) const LxcTemplate = ` @@ -52,7 +51,7 @@ lxc.cgroup.devices.allow = a lxc.cgroup.devices.deny = a #Allow the devices passed to us in the AllowedDevices list. 
{{range $allowedDevice := .AllowedDevices}} -lxc.cgroup.devices.allow = {{$allowedDevice.GetCgroupAllowString}} +lxc.cgroup.devices.allow = {{$allowedDevice.CgroupString}} {{end}} {{end}} @@ -108,8 +107,8 @@ lxc.cgroup.memory.memsw.limit_in_bytes = {{$memSwap}} {{if .Resources.CpuShares}} lxc.cgroup.cpu.shares = {{.Resources.CpuShares}} {{end}} -{{if .Resources.Cpuset}} -lxc.cgroup.cpuset.cpus = {{.Resources.Cpuset}} +{{if .Resources.CpusetCpus}} +lxc.cgroup.cpuset.cpus = {{.Resources.CpusetCpus}} {{end}} {{end}} @@ -169,7 +168,7 @@ func keepCapabilities(adds []string, drops []string) ([]string, error) { var newCaps []string for _, cap := range caps { log.Debugf("cap %s\n", cap) - realCap := capabilities.GetCapability(cap) + realCap := execdriver.GetCapability(cap) numCap := fmt.Sprintf("%d", realCap.Value) newCaps = append(newCaps, numCap) } @@ -180,13 +179,10 @@ func keepCapabilities(adds []string, drops []string) ([]string, error) { func dropList(drops []string) ([]string, error) { if utils.StringsContainsNoCase(drops, "all") { var newCaps []string - for _, cap := range capabilities.GetAllCapabilities() { - log.Debugf("drop cap %s\n", cap) - realCap := capabilities.GetCapability(cap) - if realCap == nil { - return nil, fmt.Errorf("Invalid capability '%s'", cap) - } - numCap := fmt.Sprintf("%d", realCap.Value) + for _, capName := range execdriver.GetAllCapabilities() { + cap := execdriver.GetCapability(capName) + log.Debugf("drop cap %s\n", cap.Key) + numCap := fmt.Sprintf("%d", cap.Value) newCaps = append(newCaps, numCap) } return newCaps, nil diff --git a/daemon/execdriver/lxc/lxc_template_unit_test.go b/daemon/execdriver/lxc/lxc_template_unit_test.go index bb622d4bc5..65e7b6d551 100644 --- a/daemon/execdriver/lxc/lxc_template_unit_test.go +++ b/daemon/execdriver/lxc/lxc_template_unit_test.go @@ -5,11 +5,6 @@ package lxc import ( "bufio" "fmt" - "github.com/docker/docker/daemon/execdriver" - nativeTemplate 
"github.com/docker/docker/daemon/execdriver/native/template" - "github.com/docker/libcontainer/devices" - "github.com/docker/libcontainer/security/capabilities" - "github.com/syndtr/gocapability/capability" "io/ioutil" "math/rand" "os" @@ -17,6 +12,11 @@ import ( "strings" "testing" "time" + + "github.com/docker/docker/daemon/execdriver" + nativeTemplate "github.com/docker/docker/daemon/execdriver/native/template" + "github.com/docker/libcontainer/configs" + "github.com/syndtr/gocapability/capability" ) func TestLXCConfig(t *testing.T) { @@ -53,7 +53,7 @@ func TestLXCConfig(t *testing.T) { Mtu: 1500, Interface: nil, }, - AllowedDevices: make([]*devices.Device, 0), + AllowedDevices: make([]*configs.Device, 0), ProcessConfig: execdriver.ProcessConfig{}, } p, err := driver.generateLXCConfig(command) @@ -295,7 +295,7 @@ func TestCustomLxcConfigMisc(t *testing.T) { grepFile(t, p, "lxc.cgroup.cpuset.cpus = 0,1") container := nativeTemplate.New() for _, cap := range container.Capabilities { - realCap := capabilities.GetCapability(cap) + realCap := execdriver.GetCapability(cap) numCap := fmt.Sprintf("%d", realCap.Value) if cap != "MKNOD" && cap != "KILL" { grepFile(t, p, fmt.Sprintf("lxc.cap.keep = %s", numCap)) @@ -359,7 +359,7 @@ func TestCustomLxcConfigMiscOverride(t *testing.T) { grepFile(t, p, "lxc.cgroup.cpuset.cpus = 0,1") container := nativeTemplate.New() for _, cap := range container.Capabilities { - realCap := capabilities.GetCapability(cap) + realCap := execdriver.GetCapability(cap) numCap := fmt.Sprintf("%d", realCap.Value) if cap != "MKNOD" && cap != "KILL" { grepFile(t, p, fmt.Sprintf("lxc.cap.keep = %s", numCap)) diff --git a/daemon/execdriver/native/create.go b/daemon/execdriver/native/create.go index 3442f66a00..a988fba529 100644 --- a/daemon/execdriver/native/create.go +++ b/daemon/execdriver/native/create.go @@ -3,21 +3,24 @@ package native import ( + "errors" "fmt" - "os/exec" + "net" "path/filepath" + "strings" + "syscall" 
"github.com/docker/docker/daemon/execdriver" - "github.com/docker/libcontainer" + "github.com/docker/docker/pkg/symlink" "github.com/docker/libcontainer/apparmor" + "github.com/docker/libcontainer/configs" "github.com/docker/libcontainer/devices" - "github.com/docker/libcontainer/mount" - "github.com/docker/libcontainer/security/capabilities" + "github.com/docker/libcontainer/utils" ) // createContainer populates and configures the container type with the // data provided by the execdriver.Command -func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Config, error) { +func (d *driver) createContainer(c *execdriver.Command) (*configs.Config, error) { container := execdriver.InitContainer(c) if err := d.createIpc(container, c); err != nil { @@ -33,6 +36,14 @@ func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Config, e } if c.ProcessConfig.Privileged { + // clear readonly for /sys + for i := range container.Mounts { + if container.Mounts[i].Destination == "/sys" { + container.Mounts[i].Flags &= ^syscall.MS_RDONLY + } + } + container.ReadonlyPaths = nil + container.MaskPaths = nil if err := d.setPrivileged(container); err != nil { return nil, err } @@ -57,43 +68,52 @@ func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Config, e if err := d.setupLabels(container, c); err != nil { return nil, err } - d.setupRlimits(container, c) - - cmds := make(map[string]*exec.Cmd) - d.Lock() - for k, v := range d.activeContainers { - cmds[k] = v.cmd - } - d.Unlock() - return container, nil } -func (d *driver) createNetwork(container *libcontainer.Config, c *execdriver.Command) error { +func generateIfaceName() (string, error) { + for i := 0; i < 10; i++ { + name, err := utils.GenerateRandomName("veth", 7) + if err != nil { + continue + } + if _, err := net.InterfaceByName(name); err != nil { + if strings.Contains(err.Error(), "no such") { + return name, nil + } + return "", err + } + } + return "", errors.New("Failed to find 
name for new interface") +} + +func (d *driver) createNetwork(container *configs.Config, c *execdriver.Command) error { if c.Network.HostNetworking { - container.Namespaces.Remove(libcontainer.NEWNET) + container.Namespaces.Remove(configs.NEWNET) return nil } - container.Networks = []*libcontainer.Network{ + container.Networks = []*configs.Network{ { - Mtu: c.Network.Mtu, - Address: fmt.Sprintf("%s/%d", "127.0.0.1", 0), - Gateway: "localhost", - Type: "loopback", + Type: "loopback", }, } + iName, err := generateIfaceName() + if err != nil { + return err + } if c.Network.Interface != nil { - vethNetwork := libcontainer.Network{ - Mtu: c.Network.Mtu, - Address: fmt.Sprintf("%s/%d", c.Network.Interface.IPAddress, c.Network.Interface.IPPrefixLen), - MacAddress: c.Network.Interface.MacAddress, - Gateway: c.Network.Interface.Gateway, - Type: "veth", - Bridge: c.Network.Interface.Bridge, - VethPrefix: "veth", + vethNetwork := configs.Network{ + Name: "eth0", + HostInterfaceName: iName, + Mtu: c.Network.Mtu, + Address: fmt.Sprintf("%s/%d", c.Network.Interface.IPAddress, c.Network.Interface.IPPrefixLen), + MacAddress: c.Network.Interface.MacAddress, + Gateway: c.Network.Interface.Gateway, + Type: "veth", + Bridge: c.Network.Interface.Bridge, } if c.Network.Interface.GlobalIPv6Address != "" { vethNetwork.IPv6Address = fmt.Sprintf("%s/%d", c.Network.Interface.GlobalIPv6Address, c.Network.Interface.GlobalIPv6PrefixLen) @@ -107,21 +127,24 @@ func (d *driver) createNetwork(container *libcontainer.Config, c *execdriver.Com active := d.activeContainers[c.Network.ContainerID] d.Unlock() - if active == nil || active.cmd.Process == nil { + if active == nil { return fmt.Errorf("%s is not a valid running container to join", c.Network.ContainerID) } - cmd := active.cmd - nspath := filepath.Join("/proc", fmt.Sprint(cmd.Process.Pid), "ns", "net") - container.Namespaces.Add(libcontainer.NEWNET, nspath) + state, err := active.State() + if err != nil { + return err + } + + 
container.Namespaces.Add(configs.NEWNET, state.NamespacePaths[configs.NEWNET]) } return nil } -func (d *driver) createIpc(container *libcontainer.Config, c *execdriver.Command) error { +func (d *driver) createIpc(container *configs.Config, c *execdriver.Command) error { if c.Ipc.HostIpc { - container.Namespaces.Remove(libcontainer.NEWIPC) + container.Namespaces.Remove(configs.NEWIPC) return nil } @@ -130,37 +153,38 @@ func (d *driver) createIpc(container *libcontainer.Config, c *execdriver.Command active := d.activeContainers[c.Ipc.ContainerID] d.Unlock() - if active == nil || active.cmd.Process == nil { + if active == nil { return fmt.Errorf("%s is not a valid running container to join", c.Ipc.ContainerID) } - cmd := active.cmd - container.Namespaces.Add(libcontainer.NEWIPC, filepath.Join("/proc", fmt.Sprint(cmd.Process.Pid), "ns", "ipc")) + state, err := active.State() + if err != nil { + return err + } + container.Namespaces.Add(configs.NEWIPC, state.NamespacePaths[configs.NEWIPC]) } return nil } -func (d *driver) createPid(container *libcontainer.Config, c *execdriver.Command) error { +func (d *driver) createPid(container *configs.Config, c *execdriver.Command) error { if c.Pid.HostPid { - container.Namespaces.Remove(libcontainer.NEWPID) + container.Namespaces.Remove(configs.NEWPID) return nil } return nil } -func (d *driver) setPrivileged(container *libcontainer.Config) (err error) { - container.Capabilities = capabilities.GetAllCapabilities() +func (d *driver) setPrivileged(container *configs.Config) (err error) { + container.Capabilities = execdriver.GetAllCapabilities() container.Cgroups.AllowAllDevices = true - hostDeviceNodes, err := devices.GetHostDeviceNodes() + hostDevices, err := devices.HostDevices() if err != nil { return err } - container.MountConfig.DeviceNodes = hostDeviceNodes - - container.RestrictSys = false + container.Devices = hostDevices if apparmor.IsEnabled() { container.AppArmorProfile = "unconfined" @@ -169,39 +193,66 @@ func (d 
*driver) setPrivileged(container *libcontainer.Config) (err error) { return nil } -func (d *driver) setCapabilities(container *libcontainer.Config, c *execdriver.Command) (err error) { +func (d *driver) setCapabilities(container *configs.Config, c *execdriver.Command) (err error) { container.Capabilities, err = execdriver.TweakCapabilities(container.Capabilities, c.CapAdd, c.CapDrop) return err } -func (d *driver) setupRlimits(container *libcontainer.Config, c *execdriver.Command) { +func (d *driver) setupRlimits(container *configs.Config, c *execdriver.Command) { if c.Resources == nil { return } for _, rlimit := range c.Resources.Rlimits { - container.Rlimits = append(container.Rlimits, libcontainer.Rlimit((*rlimit))) - } -} - -func (d *driver) setupMounts(container *libcontainer.Config, c *execdriver.Command) error { - for _, m := range c.Mounts { - container.MountConfig.Mounts = append(container.MountConfig.Mounts, &mount.Mount{ - Type: "bind", - Source: m.Source, - Destination: m.Destination, - Writable: m.Writable, - Private: m.Private, - Slave: m.Slave, + container.Rlimits = append(container.Rlimits, configs.Rlimit{ + Type: rlimit.Type, + Hard: rlimit.Hard, + Soft: rlimit.Soft, }) } +} +func (d *driver) setupMounts(container *configs.Config, c *execdriver.Command) error { + userMounts := make(map[string]struct{}) + for _, m := range c.Mounts { + userMounts[m.Destination] = struct{}{} + } + + // Filter out mounts that are overridden by user-supplied mounts + var defaultMounts []*configs.Mount + for _, m := range container.Mounts { + if _, ok := userMounts[m.Destination]; !ok { + defaultMounts = append(defaultMounts, m) + } + } + container.Mounts = defaultMounts + + for _, m := range c.Mounts { + dest, err := symlink.FollowSymlinkInScope(filepath.Join(c.Rootfs, m.Destination), c.Rootfs) + if err != nil { + return err + } + flags := syscall.MS_BIND | syscall.MS_REC + if !m.Writable { + flags |= syscall.MS_RDONLY + } + if m.Slave { + flags |= syscall.MS_SLAVE + }
+ + container.Mounts = append(container.Mounts, &configs.Mount{ + Source: m.Source, + Destination: dest, + Device: "bind", + Flags: flags, + }) + } return nil } -func (d *driver) setupLabels(container *libcontainer.Config, c *execdriver.Command) error { +func (d *driver) setupLabels(container *configs.Config, c *execdriver.Command) error { container.ProcessLabel = c.ProcessLabel - container.MountConfig.MountLabel = c.MountLabel + container.MountLabel = c.MountLabel return nil } diff --git a/daemon/execdriver/native/driver.go b/daemon/execdriver/native/driver.go index f5abcf02e9..99019d0f8e 100644 --- a/daemon/execdriver/native/driver.go +++ b/daemon/execdriver/native/driver.go @@ -4,28 +4,28 @@ package native import ( "encoding/json" - "errors" "fmt" "io" "io/ioutil" "os" "os/exec" "path/filepath" + "strings" "sync" "syscall" + "time" log "github.com/Sirupsen/logrus" "github.com/docker/docker/daemon/execdriver" + "github.com/docker/docker/pkg/reexec" sysinfo "github.com/docker/docker/pkg/system" "github.com/docker/docker/pkg/term" "github.com/docker/libcontainer" "github.com/docker/libcontainer/apparmor" - "github.com/docker/libcontainer/cgroups/fs" "github.com/docker/libcontainer/cgroups/systemd" - consolepkg "github.com/docker/libcontainer/console" - "github.com/docker/libcontainer/namespaces" - _ "github.com/docker/libcontainer/namespaces/nsenter" + "github.com/docker/libcontainer/configs" "github.com/docker/libcontainer/system" + "github.com/docker/libcontainer/utils" ) const ( @@ -33,16 +33,12 @@ const ( Version = "0.2" ) -type activeContainer struct { - container *libcontainer.Config - cmd *exec.Cmd -} - type driver struct { root string initPath string - activeContainers map[string]*activeContainer + activeContainers map[string]libcontainer.Container machineMemory int64 + factory libcontainer.Factory sync.Mutex } @@ -59,11 +55,27 @@ func NewDriver(root, initPath string) (*driver, error) { if err := apparmor.InstallDefaultProfile(); err != nil { return nil, 
err } + cgm := libcontainer.Cgroupfs + if systemd.UseSystemd() { + cgm = libcontainer.SystemdCgroups + } + + f, err := libcontainer.New( + root, + cgm, + libcontainer.InitPath(reexec.Self(), DriverName), + libcontainer.TmpfsRoot, + ) + if err != nil { + return nil, err + } + return &driver{ root: root, initPath: initPath, - activeContainers: make(map[string]*activeContainer), + activeContainers: make(map[string]libcontainer.Container), machineMemory: meminfo.MemTotal, + factory: f, }, nil } @@ -81,101 +93,141 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba var term execdriver.Terminal + p := &libcontainer.Process{ + Args: append([]string{c.ProcessConfig.Entrypoint}, c.ProcessConfig.Arguments...), + Env: c.ProcessConfig.Env, + Cwd: c.WorkingDir, + User: c.ProcessConfig.User, + } + if c.ProcessConfig.Tty { - term, err = NewTtyConsole(&c.ProcessConfig, pipes) + rootuid, err := container.HostUID() + if err != nil { + return execdriver.ExitStatus{ExitCode: -1}, err + } + cons, err := p.NewConsole(rootuid) + if err != nil { + return execdriver.ExitStatus{ExitCode: -1}, err + } + term, err = NewTtyConsole(cons, pipes, rootuid) } else { - term, err = execdriver.NewStdConsole(&c.ProcessConfig, pipes) + p.Stdout = pipes.Stdout + p.Stderr = pipes.Stderr + r, w, err := os.Pipe() + if err != nil { + return execdriver.ExitStatus{ExitCode: -1}, err + } + if pipes.Stdin != nil { + go func() { + io.Copy(w, pipes.Stdin) + w.Close() + }() + p.Stdin = r + } + term = &execdriver.StdConsole{} } if err != nil { return execdriver.ExitStatus{ExitCode: -1}, err } c.ProcessConfig.Terminal = term + cont, err := d.factory.Create(c.ID, container) + if err != nil { + return execdriver.ExitStatus{ExitCode: -1}, err + } d.Lock() - d.activeContainers[c.ID] = &activeContainer{ - container: container, - cmd: &c.ProcessConfig.Cmd, - } + d.activeContainers[c.ID] = cont d.Unlock() - - var ( - dataPath = filepath.Join(d.root, c.ID) - args = 
append([]string{c.ProcessConfig.Entrypoint}, c.ProcessConfig.Arguments...) - ) - - if err := d.createContainerRoot(c.ID); err != nil { - return execdriver.ExitStatus{ExitCode: -1}, err - } - defer d.cleanContainer(c.ID) - - if err := d.writeContainerFile(container, c.ID); err != nil { - return execdriver.ExitStatus{ExitCode: -1}, err - } - - execOutputChan := make(chan execOutput, 1) - waitForStart := make(chan struct{}) - - go func() { - exitCode, err := namespaces.Exec(container, c.ProcessConfig.Stdin, c.ProcessConfig.Stdout, c.ProcessConfig.Stderr, c.ProcessConfig.Console, dataPath, args, func(container *libcontainer.Config, console, dataPath, init string, child *os.File, args []string) *exec.Cmd { - c.ProcessConfig.Path = d.initPath - c.ProcessConfig.Args = append([]string{ - DriverName, - "-console", console, - "-pipe", "3", - "-root", filepath.Join(d.root, c.ID), - "--", - }, args...) - - // set this to nil so that when we set the clone flags anything else is reset - c.ProcessConfig.SysProcAttr = &syscall.SysProcAttr{ - Cloneflags: uintptr(namespaces.GetNamespaceFlags(container.Namespaces)), - } - c.ProcessConfig.ExtraFiles = []*os.File{child} - - c.ProcessConfig.Env = container.Env - c.ProcessConfig.Dir = container.RootFs - - return &c.ProcessConfig.Cmd - }, func() { - close(waitForStart) - if startCallback != nil { - c.ContainerPid = c.ProcessConfig.Process.Pid - startCallback(&c.ProcessConfig, c.ContainerPid) - } - }) - execOutputChan <- execOutput{exitCode, err} + defer func() { + cont.Destroy() + d.cleanContainer(c.ID) }() - select { - case execOutput := <-execOutputChan: - return execdriver.ExitStatus{ExitCode: execOutput.exitCode}, execOutput.err - case <-waitForStart: - break + if err := cont.Start(p); err != nil { + return execdriver.ExitStatus{ExitCode: -1}, err } - oomKill := false - state, err := libcontainer.GetState(filepath.Join(d.root, c.ID)) - if err == nil { - oomKillNotification, err := libcontainer.NotifyOnOOM(state) - if err == nil { - _, 
oomKill = <-oomKillNotification } else { - log.Warnf("WARNING: Your kernel does not support OOM notifications: %s", err) + if startCallback != nil { + pid, err := p.Pid() + if err != nil { + p.Signal(os.Kill) + p.Wait() + return execdriver.ExitStatus{ExitCode: -1}, err } - } else { - log.Warnf("Failed to get container state, oom notify will not work: %s", err) + startCallback(&c.ProcessConfig, pid) } - // wait for the container to exit. - execOutput := <-execOutputChan - return execdriver.ExitStatus{ExitCode: execOutput.exitCode, OOMKilled: oomKill}, execOutput.err + oomKillNotification, err := cont.NotifyOOM() + if err != nil { + oomKillNotification = nil + log.Warnf("Your kernel does not support OOM notifications: %s", err) + } + waitF := p.Wait + if nss := cont.Config().Namespaces; nss.Contains(configs.NEWPID) { + // we need this hack to track processes with inherited fds, + // because cmd.Wait() waits for all streams to be copied + waitF = waitInPIDHost(p, cont) + } + ps, err := waitF() + if err != nil { + if err, ok := err.(*exec.ExitError); !ok { + return execdriver.ExitStatus{ExitCode: -1}, err + } else { + ps = err.ProcessState + } + } + cont.Destroy() + + _, oomKill := <-oomKillNotification + + return execdriver.ExitStatus{ExitCode: utils.ExitStatus(ps.Sys().(syscall.WaitStatus)), OOMKilled: oomKill}, nil } -func (d *driver) Kill(p *execdriver.Command, sig int) error { - if p.ProcessConfig.Process == nil { - return errors.New("exec: not started") +func waitInPIDHost(p *libcontainer.Process, c libcontainer.Container) func() (*os.ProcessState, error) { + return func() (*os.ProcessState, error) { + pid, err := p.Pid() + if err != nil { + return nil, err + } + + process, err := os.FindProcess(pid) + s, err := process.Wait() + if err != nil { + if err, ok := err.(*exec.ExitError); !ok { + return s, err + } else { + s = err.ProcessState + } + } + processes, err := c.Processes() + if err != nil { + return s, err + } + + for _, pid := range processes {
process, err := os.FindProcess(pid) + if err != nil { + log.Errorf("Failed to kill process: %d", pid) + continue + } + process.Kill() + } + + p.Wait() + return s, err } - return syscall.Kill(p.ProcessConfig.Process.Pid, syscall.Signal(sig)) +} + +func (d *driver) Kill(c *execdriver.Command, sig int) error { + active := d.activeContainers[c.ID] + if active == nil { + return fmt.Errorf("active container for %s does not exist", c.ID) + } + state, err := active.State() + if err != nil { + return err + } + return syscall.Kill(state.InitProcessPid, syscall.Signal(sig)) } func (d *driver) Pause(c *execdriver.Command) error { @@ -183,11 +235,7 @@ func (d *driver) Pause(c *execdriver.Command) error { if active == nil { return fmt.Errorf("active container for %s does not exist", c.ID) } - active.container.Cgroups.Freezer = "FROZEN" - if systemd.UseSystemd() { - return systemd.Freeze(active.container.Cgroups, active.container.Cgroups.Freezer) - } - return fs.Freeze(active.container.Cgroups, active.container.Cgroups.Freezer) + return active.Pause() } func (d *driver) Unpause(c *execdriver.Command) error { @@ -195,44 +243,31 @@ func (d *driver) Unpause(c *execdriver.Command) error { if active == nil { return fmt.Errorf("active container for %s does not exist", c.ID) } - active.container.Cgroups.Freezer = "THAWED" - if systemd.UseSystemd() { - return systemd.Freeze(active.container.Cgroups, active.container.Cgroups.Freezer) - } - return fs.Freeze(active.container.Cgroups, active.container.Cgroups.Freezer) + return active.Resume() } -func (d *driver) Terminate(p *execdriver.Command) error { +func (d *driver) Terminate(c *execdriver.Command) error { + defer d.cleanContainer(c.ID) // lets check the start time for the process - state, err := libcontainer.GetState(filepath.Join(d.root, p.ID)) - if err != nil { - if !os.IsNotExist(err) { - return err - } - // TODO: Remove this part for version 1.2.0 - // This is added only to ensure smooth upgrades from pre 1.1.0 to 1.1.0 - data, err 
:= ioutil.ReadFile(filepath.Join(d.root, p.ID, "start")) - if err != nil { - // if we don't have the data on disk then we can assume the process is gone - // because this is only removed after we know the process has stopped - if os.IsNotExist(err) { - return nil - } - return err - } - state = &libcontainer.State{InitStartTime: string(data)} + active := d.activeContainers[c.ID] + if active == nil { + return fmt.Errorf("active container for %s does not exist", c.ID) } + state, err := active.State() + if err != nil { + return err + } + pid := state.InitProcessPid - currentStartTime, err := system.GetProcessStartTime(p.ProcessConfig.Process.Pid) + currentStartTime, err := system.GetProcessStartTime(pid) if err != nil { return err } - if state.InitStartTime == currentStartTime { - err = syscall.Kill(p.ProcessConfig.Process.Pid, 9) - syscall.Wait4(p.ProcessConfig.Process.Pid, nil, 0, nil) + if state.InitProcessStartTime == currentStartTime { + err = syscall.Kill(pid, 9) + syscall.Wait4(pid, nil, 0, nil) } - d.cleanContainer(p.ID) return err @@ -257,15 +292,10 @@ func (d *driver) GetPidsForContainer(id string) ([]int, error) { if active == nil { return nil, fmt.Errorf("active container for %s does not exist", id) } - c := active.container.Cgroups - - if systemd.UseSystemd() { - return systemd.GetPids(c) - } - return fs.GetPids(c) + return active.Processes() } -func (d *driver) writeContainerFile(container *libcontainer.Config, id string) error { +func (d *driver) writeContainerFile(container *configs.Config, id string) error { data, err := json.Marshal(container) if err != nil { return err @@ -277,7 +307,7 @@ func (d *driver) cleanContainer(id string) error { d.Lock() delete(d.activeContainers, id) d.Unlock() - return os.RemoveAll(filepath.Join(d.root, id, "container.json")) + return os.RemoveAll(filepath.Join(d.root, id)) } func (d *driver) createContainerRoot(id string) error { @@ -289,42 +319,64 @@ func (d *driver) Clean(id string) error { } func (d *driver) Stats(id 
string) (*execdriver.ResourceStats, error) { - return execdriver.Stats(filepath.Join(d.root, id), d.activeContainers[id].container.Cgroups.Memory, d.machineMemory) -} - -type TtyConsole struct { - MasterPty *os.File -} - -func NewTtyConsole(processConfig *execdriver.ProcessConfig, pipes *execdriver.Pipes) (*TtyConsole, error) { - ptyMaster, console, err := consolepkg.CreateMasterAndConsole() + c := d.activeContainers[id] + if c == nil { + return nil, execdriver.ErrNotRunning + } + now := time.Now() + stats, err := c.Stats() if err != nil { return nil, err } + memoryLimit := c.Config().Cgroups.Memory + // if the container does not have any memory limit specified, set the + // limit to the machine's memory + if memoryLimit == 0 { + memoryLimit = d.machineMemory + } + return &execdriver.ResourceStats{ + Stats: stats, + Read: now, + MemoryLimit: memoryLimit, + }, nil +} +func getEnv(key string, env []string) string { + for _, pair := range env { + parts := strings.Split(pair, "=") + if parts[0] == key { + return parts[1] + } + } + return "" +} + +type TtyConsole struct { + console libcontainer.Console +} + +func NewTtyConsole(console libcontainer.Console, pipes *execdriver.Pipes, rootuid int) (*TtyConsole, error) { tty := &TtyConsole{ - MasterPty: ptyMaster, + console: console, } - if err := tty.AttachPipes(&processConfig.Cmd, pipes); err != nil { + if err := tty.AttachPipes(pipes); err != nil { tty.Close() return nil, err } - processConfig.Console = console - return tty, nil } -func (t *TtyConsole) Master() *os.File { - return t.MasterPty +func (t *TtyConsole) Master() libcontainer.Console { + return t.console } func (t *TtyConsole) Resize(h, w int) error { - return term.SetWinsize(t.MasterPty.Fd(), &term.Winsize{Height: uint16(h), Width: uint16(w)}) + return term.SetWinsize(t.console.Fd(), &term.Winsize{Height: uint16(h), Width: uint16(w)}) } -func (t *TtyConsole) AttachPipes(command *exec.Cmd, pipes *execdriver.Pipes) error { +func (t *TtyConsole) AttachPipes(pipes
*execdriver.Pipes) error { go func() { if wb, ok := pipes.Stdout.(interface { CloseWriters() error @@ -332,12 +384,12 @@ func (t *TtyConsole) AttachPipes(command *exec.Cmd, pipes *execdriver.Pipes) err defer wb.CloseWriters() } - io.Copy(pipes.Stdout, t.MasterPty) + io.Copy(pipes.Stdout, t.console) }() if pipes.Stdin != nil { go func() { - io.Copy(t.MasterPty, pipes.Stdin) + io.Copy(t.console, pipes.Stdin) pipes.Stdin.Close() }() @@ -347,5 +399,5 @@ func (t *TtyConsole) AttachPipes(command *exec.Cmd, pipes *execdriver.Pipes) err } func (t *TtyConsole) Close() error { - return t.MasterPty.Close() + return t.console.Close() } diff --git a/daemon/execdriver/native/exec.go b/daemon/execdriver/native/exec.go index 84ad096725..af6dcd2adb 100644 --- a/daemon/execdriver/native/exec.go +++ b/daemon/execdriver/native/exec.go @@ -4,67 +4,77 @@ package native import ( "fmt" - "log" "os" "os/exec" - "path/filepath" - "runtime" + "syscall" "github.com/docker/docker/daemon/execdriver" - "github.com/docker/docker/pkg/reexec" "github.com/docker/libcontainer" - "github.com/docker/libcontainer/namespaces" + _ "github.com/docker/libcontainer/nsenter" + "github.com/docker/libcontainer/utils" ) -const execCommandName = "nsenter-exec" - -func init() { - reexec.Register(execCommandName, nsenterExec) -} - -func nsenterExec() { - runtime.LockOSThread() - - // User args are passed after '--' in the command line. - userArgs := findUserArgs() - - config, err := loadConfigFromFd() - if err != nil { - log.Fatalf("docker-exec: unable to receive config from sync pipe: %s", err) - } - - if err := namespaces.FinalizeSetns(config, userArgs); err != nil { - log.Fatalf("docker-exec: failed to exec: %s", err) - } -} - // TODO(vishh): Add support for running in privileged mode and running as a different user.
func (d *driver) Exec(c *execdriver.Command, processConfig *execdriver.ProcessConfig, pipes *execdriver.Pipes, startCallback execdriver.StartCallback) (int, error) { active := d.activeContainers[c.ID] if active == nil { return -1, fmt.Errorf("No active container exists with ID %s", c.ID) } - state, err := libcontainer.GetState(filepath.Join(d.root, c.ID)) - if err != nil { - return -1, fmt.Errorf("State unavailable for container with ID %s. The container may have been cleaned up already. Error: %s", c.ID, err) - } var term execdriver.Terminal + var err error + + p := &libcontainer.Process{ + Args: append([]string{processConfig.Entrypoint}, processConfig.Arguments...), + Env: c.ProcessConfig.Env, + Cwd: c.WorkingDir, + User: c.ProcessConfig.User, + } if processConfig.Tty { - term, err = NewTtyConsole(processConfig, pipes) + config := active.Config() + rootuid, err := config.HostUID() + if err != nil { + return -1, err + } + cons, err := p.NewConsole(rootuid) + if err != nil { + return -1, err + } + term, err = NewTtyConsole(cons, pipes, rootuid) } else { - term, err = execdriver.NewStdConsole(processConfig, pipes) + p.Stdout = pipes.Stdout + p.Stderr = pipes.Stderr + p.Stdin = pipes.Stdin + term = &execdriver.StdConsole{} + } + if err != nil { + return -1, err } processConfig.Terminal = term - args := append([]string{processConfig.Entrypoint}, processConfig.Arguments...) 
+ if err := active.Start(p); err != nil { + return -1, err + } - return namespaces.ExecIn(active.container, state, args, os.Args[0], "exec", processConfig.Stdin, processConfig.Stdout, processConfig.Stderr, processConfig.Console, - func(cmd *exec.Cmd) { - if startCallback != nil { - startCallback(&c.ProcessConfig, cmd.Process.Pid) - } - }) + if startCallback != nil { + pid, err := p.Pid() + if err != nil { + p.Signal(os.Kill) + p.Wait() + return -1, err + } + startCallback(&c.ProcessConfig, pid) + } + + ps, err := p.Wait() + if err != nil { + exitErr, ok := err.(*exec.ExitError) + if !ok { + return -1, err + } + ps = exitErr.ProcessState + } + return utils.ExitStatus(ps.Sys().(syscall.WaitStatus)), nil } diff --git a/daemon/execdriver/native/info.go b/daemon/execdriver/native/info.go index 601b97e810..9d7342da86 100644 --- a/daemon/execdriver/native/info.go +++ b/daemon/execdriver/native/info.go @@ -2,13 +2,6 @@ package native -import ( - "os" - "path/filepath" - - "github.com/docker/libcontainer" -) - type info struct { ID string driver *driver @@ -18,13 +11,6 @@ type info struct { // pid file for a container. 
If the file exists then the // container is currently running func (i *info) IsRunning() bool { - if _, err := libcontainer.GetState(filepath.Join(i.driver.root, i.ID)); err == nil { - return true - } - // TODO: Remove this part for version 1.2.0 - // This is added only to ensure smooth upgrades from pre 1.1.0 to 1.1.0 - if _, err := os.Stat(filepath.Join(i.driver.root, i.ID, "pid")); err == nil { - return true - } - return false + _, ok := i.driver.activeContainers[i.ID] + return ok } diff --git a/daemon/execdriver/native/init.go b/daemon/execdriver/native/init.go index 754d842c3b..f57d6cddec 100644 --- a/daemon/execdriver/native/init.go +++ b/daemon/execdriver/native/init.go @@ -3,55 +3,40 @@ package native import ( - "encoding/json" - "flag" "fmt" "os" - "path/filepath" "runtime" "github.com/docker/docker/pkg/reexec" "github.com/docker/libcontainer" - "github.com/docker/libcontainer/namespaces" ) func init() { reexec.Register(DriverName, initializer) } +func fatal(err error) { + if lerr, ok := err.(libcontainer.Error); ok { + lerr.Detail(os.Stderr) + os.Exit(1) + } + + fmt.Fprintln(os.Stderr, err) + os.Exit(1) +} + func initializer() { + runtime.GOMAXPROCS(1) runtime.LockOSThread() - - var ( - pipe = flag.Int("pipe", 0, "sync pipe fd") - console = flag.String("console", "", "console (pty slave) path") - root = flag.String("root", ".", "root path for configuration files") - ) - - flag.Parse() - - var container *libcontainer.Config - f, err := os.Open(filepath.Join(*root, "container.json")) + factory, err := libcontainer.New("") if err != nil { - writeError(err) + fatal(err) + } + if err := factory.StartInitialization(3); err != nil { + fatal(err) } - if err := json.NewDecoder(f).Decode(&container); err != nil { - f.Close() - writeError(err) - } - f.Close() - - rootfs, err := os.Getwd() - if err != nil { - writeError(err) - } - - if err := namespaces.Init(container, rootfs, *console, os.NewFile(uintptr(*pipe), "child"), flag.Args()); err != nil { - writeError(err) 
- } - - panic("Unreachable") + panic("unreachable") } func writeError(err error) { diff --git a/daemon/execdriver/native/template/default_template.go b/daemon/execdriver/native/template/default_template.go index f7d6be746d..76e3cea787 100644 --- a/daemon/execdriver/native/template/default_template.go +++ b/daemon/execdriver/native/template/default_template.go @@ -1,14 +1,17 @@ package template import ( - "github.com/docker/libcontainer" + "syscall" + "github.com/docker/libcontainer/apparmor" - "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" ) +const defaultMountFlags = syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV + // New returns the docker default configuration for libcontainer -func New() *libcontainer.Config { - container := &libcontainer.Config{ +func New() *configs.Config { + container := &configs.Config{ Capabilities: []string{ "CHOWN", "DAC_OVERRIDE", @@ -25,18 +28,64 @@ func New() *libcontainer.Config { "KILL", "AUDIT_WRITE", }, - Namespaces: libcontainer.Namespaces([]libcontainer.Namespace{ + Namespaces: configs.Namespaces([]configs.Namespace{ {Type: "NEWNS"}, {Type: "NEWUTS"}, {Type: "NEWIPC"}, {Type: "NEWPID"}, {Type: "NEWNET"}, }), - Cgroups: &cgroups.Cgroup{ + Cgroups: &configs.Cgroup{ Parent: "docker", AllowAllDevices: false, }, - MountConfig: &libcontainer.MountConfig{}, + Mounts: []*configs.Mount{ + { + Source: "proc", + Destination: "/proc", + Device: "proc", + Flags: defaultMountFlags, + }, + { + Source: "tmpfs", + Destination: "/dev", + Device: "tmpfs", + Flags: syscall.MS_NOSUID | syscall.MS_STRICTATIME, + Data: "mode=755", + }, + { + Source: "devpts", + Destination: "/dev/pts", + Device: "devpts", + Flags: syscall.MS_NOSUID | syscall.MS_NOEXEC, + Data: "newinstance,ptmxmode=0666,mode=0620,gid=5", + }, + { + Device: "tmpfs", + Source: "shm", + Destination: "/dev/shm", + Data: "mode=1777,size=65536k", + Flags: defaultMountFlags, + }, + { + Source: "mqueue", + Destination: "/dev/mqueue", + Device: 
"mqueue", + Flags: defaultMountFlags, + }, + { + Source: "sysfs", + Destination: "/sys", + Device: "sysfs", + Flags: defaultMountFlags | syscall.MS_RDONLY, + }, + }, + MaskPaths: []string{ + "/proc/kcore", + }, + ReadonlyPaths: []string{ + "/proc/sys", "/proc/sysrq-trigger", "/proc/irq", "/proc/bus", + }, } if apparmor.IsEnabled() { diff --git a/daemon/execdriver/native/utils.go b/daemon/execdriver/native/utils.go index 88aefaf382..a703926453 100644 --- a/daemon/execdriver/native/utils.go +++ b/daemon/execdriver/native/utils.go @@ -2,28 +2,21 @@ package native -import ( - "encoding/json" - "os" +//func findUserArgs() []string { +//for i, a := range os.Args { +//if a == "--" { +//return os.Args[i+1:] +//} +//} +//return []string{} +//} - "github.com/docker/libcontainer" -) - -func findUserArgs() []string { - for i, a := range os.Args { - if a == "--" { - return os.Args[i+1:] - } - } - return []string{} -} - -// loadConfigFromFd loads a container's config from the sync pipe that is provided by -// fd 3 when running a process -func loadConfigFromFd() (*libcontainer.Config, error) { - var config *libcontainer.Config - if err := json.NewDecoder(os.NewFile(3, "child")).Decode(&config); err != nil { - return nil, err - } - return config, nil -} +//// loadConfigFromFd loads a container's config from the sync pipe that is provided by +//// fd 3 when running a process +//func loadConfigFromFd() (*configs.Config, error) { +//var config *libcontainer.Config +//if err := json.NewDecoder(os.NewFile(3, "child")).Decode(&config); err != nil { +//return nil, err +//} +//return config, nil +//} diff --git a/daemon/execdriver/utils.go b/daemon/execdriver/utils.go index 37042ef83a..e1fc9b9014 100644 --- a/daemon/execdriver/utils.go +++ b/daemon/execdriver/utils.go @@ -5,13 +5,83 @@ import ( "strings" "github.com/docker/docker/utils" - "github.com/docker/libcontainer/security/capabilities" + "github.com/syndtr/gocapability/capability" ) +var capabilityList = Capabilities{ + {Key: 
"SETPCAP", Value: capability.CAP_SETPCAP}, + {Key: "SYS_MODULE", Value: capability.CAP_SYS_MODULE}, + {Key: "SYS_RAWIO", Value: capability.CAP_SYS_RAWIO}, + {Key: "SYS_PACCT", Value: capability.CAP_SYS_PACCT}, + {Key: "SYS_ADMIN", Value: capability.CAP_SYS_ADMIN}, + {Key: "SYS_NICE", Value: capability.CAP_SYS_NICE}, + {Key: "SYS_RESOURCE", Value: capability.CAP_SYS_RESOURCE}, + {Key: "SYS_TIME", Value: capability.CAP_SYS_TIME}, + {Key: "SYS_TTY_CONFIG", Value: capability.CAP_SYS_TTY_CONFIG}, + {Key: "MKNOD", Value: capability.CAP_MKNOD}, + {Key: "AUDIT_WRITE", Value: capability.CAP_AUDIT_WRITE}, + {Key: "AUDIT_CONTROL", Value: capability.CAP_AUDIT_CONTROL}, + {Key: "MAC_OVERRIDE", Value: capability.CAP_MAC_OVERRIDE}, + {Key: "MAC_ADMIN", Value: capability.CAP_MAC_ADMIN}, + {Key: "NET_ADMIN", Value: capability.CAP_NET_ADMIN}, + {Key: "SYSLOG", Value: capability.CAP_SYSLOG}, + {Key: "CHOWN", Value: capability.CAP_CHOWN}, + {Key: "NET_RAW", Value: capability.CAP_NET_RAW}, + {Key: "DAC_OVERRIDE", Value: capability.CAP_DAC_OVERRIDE}, + {Key: "FOWNER", Value: capability.CAP_FOWNER}, + {Key: "DAC_READ_SEARCH", Value: capability.CAP_DAC_READ_SEARCH}, + {Key: "FSETID", Value: capability.CAP_FSETID}, + {Key: "KILL", Value: capability.CAP_KILL}, + {Key: "SETGID", Value: capability.CAP_SETGID}, + {Key: "SETUID", Value: capability.CAP_SETUID}, + {Key: "LINUX_IMMUTABLE", Value: capability.CAP_LINUX_IMMUTABLE}, + {Key: "NET_BIND_SERVICE", Value: capability.CAP_NET_BIND_SERVICE}, + {Key: "NET_BROADCAST", Value: capability.CAP_NET_BROADCAST}, + {Key: "IPC_LOCK", Value: capability.CAP_IPC_LOCK}, + {Key: "IPC_OWNER", Value: capability.CAP_IPC_OWNER}, + {Key: "SYS_CHROOT", Value: capability.CAP_SYS_CHROOT}, + {Key: "SYS_PTRACE", Value: capability.CAP_SYS_PTRACE}, + {Key: "SYS_BOOT", Value: capability.CAP_SYS_BOOT}, + {Key: "LEASE", Value: capability.CAP_LEASE}, + {Key: "SETFCAP", Value: capability.CAP_SETFCAP}, + {Key: "WAKE_ALARM", Value: capability.CAP_WAKE_ALARM}, + {Key: 
"BLOCK_SUSPEND", Value: capability.CAP_BLOCK_SUSPEND}, +} + +type ( + CapabilityMapping struct { + Key string `json:"key,omitempty"` + Value capability.Cap `json:"value,omitempty"` + } + Capabilities []*CapabilityMapping +) + +func (c *CapabilityMapping) String() string { + return c.Key +} + +func GetCapability(key string) *CapabilityMapping { + for _, capp := range capabilityList { + if capp.Key == key { + cpy := *capp + return &cpy + } + } + return nil +} + +func GetAllCapabilities() []string { + output := make([]string, len(capabilityList)) + for i, capability := range capabilityList { + output[i] = capability.String() + } + return output +} + func TweakCapabilities(basics, adds, drops []string) ([]string, error) { var ( newCaps []string - allCaps = capabilities.GetAllCapabilities() + allCaps = GetAllCapabilities() ) // look for invalid cap in the drop list @@ -26,7 +96,7 @@ func TweakCapabilities(basics, adds, drops []string) ([]string, error) { // handle --cap-add=all if utils.StringsContainsNoCase(adds, "all") { - basics = capabilities.GetAllCapabilities() + basics = allCaps } if !utils.StringsContainsNoCase(drops, "all") { diff --git a/daemon/graphdriver/aufs/aufs.go b/daemon/graphdriver/aufs/aufs.go index 103a568e21..4d0c71c7ec 100644 --- a/daemon/graphdriver/aufs/aufs.go +++ b/daemon/graphdriver/aufs/aufs.go @@ -35,8 +35,8 @@ import ( "github.com/docker/docker/pkg/archive" "github.com/docker/docker/pkg/chrootarchive" "github.com/docker/docker/pkg/common" + "github.com/docker/docker/pkg/directory" mountpk "github.com/docker/docker/pkg/mount" - "github.com/docker/docker/utils" "github.com/docker/libcontainer/label" ) @@ -216,7 +216,7 @@ func (a *Driver) Remove(id string) error { defer a.Unlock() if a.active[id] != 0 { - log.Errorf("Warning: removing active id %s", id) + log.Errorf("Removing active id %s", id) } // Make sure the dir is umounted first @@ -320,7 +320,7 @@ func (a *Driver) applyDiff(id string, diff archive.ArchiveReader) error { // relative to 
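The `TweakCapabilities` change above swaps libcontainer's capability helpers for a local name-to-`capability.Cap` table, but the merge logic it applies is the same: start from the basic set (or every known capability when `--cap-add=all` is given), remove the drops, then append the explicit adds. A simplified, strings-only sketch of that merge — helper names here are hypothetical and validation of unknown capability names is omitted:

```go
package main

import (
	"fmt"
	"strings"
)

// containsNoCase reports whether list contains s, ignoring case
// (mirrors the role of utils.StringsContainsNoCase in the diff).
func containsNoCase(list []string, s string) bool {
	for _, e := range list {
		if strings.EqualFold(e, s) {
			return true
		}
	}
	return false
}

// tweakCapabilities merges basic caps with --cap-add/--cap-drop lists:
// "all" in adds expands to every known capability, "all" in drops
// empties the basic set, and explicit adds are appended last.
func tweakCapabilities(basics, adds, drops, allCaps []string) []string {
	if containsNoCase(adds, "all") {
		basics = allCaps
	}
	var newCaps []string
	if !containsNoCase(drops, "all") {
		for _, cap := range basics {
			if containsNoCase(drops, cap) {
				continue // dropped
			}
			newCaps = append(newCaps, cap)
		}
	}
	for _, cap := range adds {
		if strings.EqualFold(cap, "all") {
			continue
		}
		if !containsNoCase(newCaps, cap) {
			newCaps = append(newCaps, cap)
		}
	}
	return newCaps
}

func main() {
	all := []string{"CHOWN", "KILL", "NET_RAW", "SYS_ADMIN"}
	basics := []string{"CHOWN", "KILL", "NET_RAW"}
	// drop is matched case-insensitively, add lands at the end
	fmt.Println(tweakCapabilities(basics, []string{"SYS_ADMIN"}, []string{"net_raw"}, all)) // prints [CHOWN KILL SYS_ADMIN]
}
```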
its base filesystem directory. func (a *Driver) DiffSize(id, parent string) (size int64, err error) { // AUFS doesn't need the parent layer to calculate the diff size. - return utils.TreeSize(path.Join(a.rootPath(), "diff", id)) + return directory.Size(path.Join(a.rootPath(), "diff", id)) } // ApplyDiff extracts the changeset from the given diff into the @@ -378,7 +378,7 @@ func (a *Driver) mount(id, mountLabel string) error { } if err := a.aufsMount(layers, rw, target, mountLabel); err != nil { - return err + return fmt.Errorf("error creating aufs mount to %s: %v", target, err) } return nil } diff --git a/daemon/graphdriver/aufs/mount.go b/daemon/graphdriver/aufs/mount.go index bb935f6919..a3a5a86595 100644 --- a/daemon/graphdriver/aufs/mount.go +++ b/daemon/graphdriver/aufs/mount.go @@ -9,7 +9,7 @@ import ( func Unmount(target string) error { if err := exec.Command("auplink", target, "flush").Run(); err != nil { - log.Errorf("[warning]: couldn't run auplink before unmount: %s", err) + log.Errorf("Couldn't run auplink before unmount: %s", err) } if err := syscall.Unmount(target, 0); err != nil { return err diff --git a/daemon/graphdriver/btrfs/MAINTAINERS b/daemon/graphdriver/btrfs/MAINTAINERS deleted file mode 100644 index 9e629d5fcc..0000000000 --- a/daemon/graphdriver/btrfs/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Alexander Larsson (@alexlarsson) diff --git a/daemon/graphdriver/devmapper/MAINTAINERS b/daemon/graphdriver/devmapper/MAINTAINERS deleted file mode 100644 index 9382fc3a42..0000000000 --- a/daemon/graphdriver/devmapper/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Alexander Larsson (@alexlarsson) -Vincent Batts (@vbatts) diff --git a/daemon/graphdriver/devmapper/README.md b/daemon/graphdriver/devmapper/README.md index 30e63f7ab2..1dc918016d 100644 --- a/daemon/graphdriver/devmapper/README.md +++ b/daemon/graphdriver/devmapper/README.md @@ -150,7 +150,7 @@ Here is the list of supported options: If using a block device for device mapper storage, ideally 
lvm2 would be used to create/manage the thin-pool volume that is then handed to docker to exclusively create/manage the thin and thin - snapshot volumes needed for it's containers. Managing the thin-pool + snapshot volumes needed for its containers. Managing the thin-pool outside of docker makes for the most feature-rich method of having docker utilize device mapper thin provisioning as the backing storage for docker's containers. lvm2-based thin-pool management diff --git a/daemon/graphdriver/devmapper/deviceset.go b/daemon/graphdriver/devmapper/deviceset.go index 9d30aee671..686d72b951 100644 --- a/daemon/graphdriver/devmapper/deviceset.go +++ b/daemon/graphdriver/devmapper/deviceset.go @@ -347,7 +347,7 @@ func (devices *DeviceSet) deviceFileWalkFunction(path string, finfo os.FileInfo) } if dinfo.DeviceId > MaxDeviceId { - log.Errorf("Warning: Ignoring Invalid DeviceId=%d", dinfo.DeviceId) + log.Errorf("Ignoring Invalid DeviceId=%d", dinfo.DeviceId) return nil } @@ -554,7 +554,7 @@ func (devices *DeviceSet) createRegisterDevice(hash string) (*DevInfo, error) { // happen. Now we have a mechanism to find // a free device Id. So something is not right. // Give a warning and continue. - log.Errorf("Warning: Device Id %d exists in pool but it is supposed to be unused", deviceId) + log.Errorf("Device Id %d exists in pool but it is supposed to be unused", deviceId) deviceId, err = devices.getNextFreeDeviceId() if err != nil { return nil, err @@ -606,7 +606,7 @@ func (devices *DeviceSet) createRegisterSnapDevice(hash string, baseInfo *DevInf // happen. Now we have a mechanism to find // a free device Id. So something is not right. // Give a warning and continue.
- log.Errorf("Warning: Device Id %d exists in pool but it is supposed to be unused", deviceId) + log.Errorf("Device Id %d exists in pool but it is supposed to be unused", deviceId) deviceId, err = devices.getNextFreeDeviceId() if err != nil { return err @@ -852,18 +852,18 @@ func (devices *DeviceSet) rollbackTransaction() error { // closed. In that case this call will fail. Just leave a message // in case of failure. if err := devicemapper.DeleteDevice(devices.getPoolDevName(), devices.DeviceId); err != nil { - log.Errorf("Warning: Unable to delete device: %s", err) + log.Errorf("Unable to delete device: %s", err) } dinfo := &DevInfo{Hash: devices.DeviceIdHash} if err := devices.removeMetadata(dinfo); err != nil { - log.Errorf("Warning: Unable to remove metadata: %s", err) + log.Errorf("Unable to remove metadata: %s", err) } else { devices.markDeviceIdFree(devices.DeviceId) } if err := devices.removeTransactionMetaData(); err != nil { - log.Errorf("Warning: Unable to remove transaction meta file %s: %s", devices.transactionMetaFile(), err) + log.Errorf("Unable to remove transaction meta file %s: %s", devices.transactionMetaFile(), err) } return nil @@ -883,7 +883,7 @@ func (devices *DeviceSet) processPendingTransaction() error { // If open transaction Id is less than pool transaction Id, something // is wrong. Bail out. if devices.OpenTransactionId < devices.TransactionId { - log.Errorf("Warning: Open Transaction id %d is less than pool transaction id %d", devices.OpenTransactionId, devices.TransactionId) + log.Errorf("Open Transaction id %d is less than pool transaction id %d", devices.OpenTransactionId, devices.TransactionId) return nil } @@ -963,7 +963,7 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error { // https://github.com/docker/docker/issues/4036 if supported := devicemapper.UdevSetSyncSupport(true); !supported { - log.Warnf("WARNING: Udev sync is not supported. 
This will lead to unexpected behavior, data loss and errors") + log.Warnf("Udev sync is not supported. This will lead to unexpected behavior, data loss and errors") } log.Debugf("devicemapper: udev sync support: %v", devicemapper.UdevSyncSupported()) @@ -1221,7 +1221,7 @@ func (devices *DeviceSet) deactivateDevice(info *DevInfo) error { // Wait for the unmount to be effective, // by watching the value of Info.OpenCount for the device if err := devices.waitClose(info); err != nil { - log.Errorf("Warning: error waiting for device %s to close: %s", info.Hash, err) + log.Errorf("Error waiting for device %s to close: %s", info.Hash, err) } devinfo, err := devicemapper.GetInfo(info.Name()) @@ -1584,7 +1584,7 @@ func (devices *DeviceSet) getUnderlyingAvailableSpace(loopFile string) (uint64, buf := new(syscall.Statfs_t) err := syscall.Statfs(loopFile, buf) if err != nil { - log.Warnf("Warning: Couldn't stat loopfile filesystem %v: %v", loopFile, err) + log.Warnf("Couldn't stat loopfile filesystem %v: %v", loopFile, err) return 0, err } return buf.Bfree * uint64(buf.Bsize), nil @@ -1594,7 +1594,7 @@ func (devices *DeviceSet) isRealFile(loopFile string) (bool, error) { if loopFile != "" { fi, err := os.Stat(loopFile) if err != nil { - log.Warnf("Warning: Couldn't stat loopfile %v: %v", loopFile, err) + log.Warnf("Couldn't stat loopfile %v: %v", loopFile, err) return false, err } return fi.Mode().IsRegular(), nil diff --git a/daemon/graphdriver/devmapper/driver.go b/daemon/graphdriver/devmapper/driver.go index 1d3d803e2c..6dd05ca375 100644 --- a/daemon/graphdriver/devmapper/driver.go +++ b/daemon/graphdriver/devmapper/driver.go @@ -164,7 +164,7 @@ func (d *Driver) Get(id, mountLabel string) (string, error) { func (d *Driver) Put(id string) error { err := d.DeviceSet.UnmountDevice(id) if err != nil { - log.Errorf("Warning: error unmounting device %s: %s", id, err) + log.Errorf("Error unmounting device %s: %s", id, err) } return err } diff --git a/daemon/graphdriver/driver.go 
b/daemon/graphdriver/driver.go index fa2ed2c924..9e7f92a0ed 100644 --- a/daemon/graphdriver/driver.go +++ b/daemon/graphdriver/driver.go @@ -184,6 +184,6 @@ func checkPriorDriver(name, root string) { } } if len(priorDrivers) > 0 { - log.Warnf("graphdriver %s selected. Warning: your graphdriver directory %s already contains data managed by other graphdrivers: %s", name, root, strings.Join(priorDrivers, ",")) + log.Warnf("Graphdriver %s selected. Your graphdriver directory %s already contains data managed by other graphdrivers: %s", name, root, strings.Join(priorDrivers, ",")) } } diff --git a/daemon/graphdriver/graphtest/graphtest.go b/daemon/graphdriver/graphtest/graphtest.go index 4f14e7aed4..2bd30f6aeb 100644 --- a/daemon/graphdriver/graphtest/graphtest.go +++ b/daemon/graphdriver/graphtest/graphtest.go @@ -73,7 +73,7 @@ func newDriver(t *testing.T, name string) *Driver { d, err := graphdriver.GetDriver(name, root, nil) if err != nil { - t.Logf("graphdriver: %s\n", err.Error()) + t.Logf("graphdriver: %v\n", err) if err == graphdriver.ErrNotSupported || err == graphdriver.ErrPrerequisites || err == graphdriver.ErrIncompatibleFS { t.Skipf("Driver %s not supported", name) } diff --git a/daemon/graphdriver/overlay/overlay.go b/daemon/graphdriver/overlay/overlay.go index 37162b5caf..afe12c5091 100644 --- a/daemon/graphdriver/overlay/overlay.go +++ b/daemon/graphdriver/overlay/overlay.go @@ -301,7 +301,7 @@ func (d *Driver) Get(id string, mountLabel string) (string, error) { opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", lowerDir, upperDir, workDir) if err := syscall.Mount("overlay", mergedDir, "overlay", 0, label.FormatMountLabel(opts, mountLabel)); err != nil { - return "", err + return "", fmt.Errorf("error creating overlay mount to %s: %v", mergedDir, err) } mount.path = mergedDir mount.mounted = true diff --git a/daemon/image_delete.go b/daemon/image_delete.go index c193164765..0c0a534cfd 100644 --- a/daemon/image_delete.go +++ b/daemon/image_delete.go 
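The aufs and overlay hunks above replace bare `return err` with errors that carry the mount target, so a failure in the daemon log names the directory involved instead of just "invalid argument". A toy illustration of the wrapping (the low-level error and helper name are stand-ins, not the real syscall path):

```go
package main

import (
	"errors"
	"fmt"
)

// doMount pretends the underlying mount syscall failed and wraps the
// error with the target path, matching the diff's
// "error creating aufs mount to %s: %v" style.
func doMount(target string) error {
	lowLevel := errors.New("invalid argument") // stand-in for a syscall.Mount failure
	return fmt.Errorf("error creating aufs mount to %s: %v", target, lowLevel)
}

func main() {
	fmt.Println(doMount("/var/lib/docker/aufs/mnt/abc123"))
}
```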
@@ -9,6 +9,7 @@ import ( "github.com/docker/docker/image" "github.com/docker/docker/pkg/common" "github.com/docker/docker/pkg/parsers" + "github.com/docker/docker/utils" ) func (daemon *Daemon) ImageDelete(job *engine.Job) engine.Status { @@ -48,7 +49,7 @@ func (daemon *Daemon) DeleteImage(eng *engine.Engine, name string, imgs *engine. img, err := daemon.Repositories().LookupImage(name) if err != nil { if r, _ := daemon.Repositories().Get(repoName); r != nil { - return fmt.Errorf("No such image: %s:%s", repoName, tag) + return fmt.Errorf("No such image: %s", utils.ImageReference(repoName, tag)) } return fmt.Errorf("No such image: %s", name) } @@ -102,7 +103,7 @@ func (daemon *Daemon) DeleteImage(eng *engine.Engine, name string, imgs *engine. } if tagDeleted { out := &engine.Env{} - out.Set("Untagged", repoName+":"+tag) + out.Set("Untagged", utils.ImageReference(repoName, tag)) imgs.Add(out) eng.Job("log", "untag", img.ID, "").Run() } diff --git a/daemon/info.go b/daemon/info.go index f0fc1241b5..965c370328 100644 --- a/daemon/info.go +++ b/daemon/info.go @@ -3,6 +3,7 @@ package daemon import ( "os" "runtime" + "time" log "github.com/Sirupsen/logrus" "github.com/docker/docker/autogen/dockerversion" @@ -76,6 +77,7 @@ func (daemon *Daemon) CmdInfo(job *engine.Job) engine.Status { v.SetBool("Debug", os.Getenv("DEBUG") != "") v.SetInt("NFd", utils.GetTotalUsedFds()) v.SetInt("NGoroutines", runtime.NumGoroutine()) + v.Set("SystemTime", time.Now().Format(time.RFC3339Nano)) v.Set("ExecutionDriver", daemon.ExecutionDriver().Name()) v.SetInt("NEventsListener", env.GetInt("count")) v.Set("KernelVersion", kernelVersion) @@ -87,6 +89,16 @@ func (daemon *Daemon) CmdInfo(job *engine.Job) engine.Status { v.SetInt("NCPU", runtime.NumCPU()) v.SetInt64("MemTotal", meminfo.MemTotal) v.Set("DockerRootDir", daemon.Config().Root) + if http_proxy := os.Getenv("http_proxy"); http_proxy != "" { + v.Set("HttpProxy", http_proxy) + } + if https_proxy := os.Getenv("https_proxy"); https_proxy != 
"" { + v.Set("HttpsProxy", https_proxy) + } + if no_proxy := os.Getenv("no_proxy"); no_proxy != "" { + v.Set("NoProxy", no_proxy) + } + if hostname, err := os.Hostname(); err == nil { v.SetJson("Name", hostname) } diff --git a/daemon/inspect.go b/daemon/inspect.go index df68881431..08265795ec 100644 --- a/daemon/inspect.go +++ b/daemon/inspect.go @@ -62,6 +62,14 @@ func (daemon *Daemon) ContainerInspect(job *engine.Job) engine.Status { container.hostConfig.Links = append(container.hostConfig.Links, fmt.Sprintf("%s:%s", child.Name, linkAlias)) } } + // we need this trick to preserve empty log driver, so + // container will use daemon defaults even if daemon change them + if container.hostConfig.LogConfig.Type == "" { + container.hostConfig.LogConfig = daemon.defaultLogConfig + defer func() { + container.hostConfig.LogConfig = runconfig.LogConfig{} + }() + } out.SetJson("HostConfig", container.hostConfig) diff --git a/daemon/list.go b/daemon/list.go index 5885ebd499..130ac05376 100644 --- a/daemon/list.go +++ b/daemon/list.go @@ -8,6 +8,7 @@ import ( "github.com/docker/docker/graph" "github.com/docker/docker/pkg/graphdb" + "github.com/docker/docker/utils" "github.com/docker/docker/engine" "github.com/docker/docker/pkg/parsers" @@ -90,6 +91,10 @@ func (daemon *Daemon) Containers(job *engine.Job) engine.Status { return nil } + if !psFilters.MatchKVList("label", container.Config.Labels) { + return nil + } + if before != "" && !foundBefore { if container.ID == beforeCont.ID { foundBefore = true @@ -127,7 +132,7 @@ func (daemon *Daemon) Containers(job *engine.Job) engine.Status { img := container.Config.Image _, tag := parsers.ParseRepositoryTag(container.Config.Image) if tag == "" { - img = img + ":" + graph.DEFAULTTAG + img = utils.ImageReference(img, graph.DEFAULTTAG) } out.SetJson("Image", img) if len(container.Args) > 0 { @@ -157,6 +162,7 @@ func (daemon *Daemon) Containers(job *engine.Job) engine.Status { out.SetInt64("SizeRw", sizeRw) out.SetInt64("SizeRootFs", 
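The `image_delete.go` and `list.go` hunks route repo/tag pairs through `utils.ImageReference` instead of blindly concatenating `repo + ":" + tag`. The point, presumably, is that a reference can also be a content digest, which is joined with "@" rather than ":". A sketch under that assumption — the digest check below is deliberately simplistic and hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// imageReference joins a repository and a reference: ordinary tags use
// "repo:tag", while digest-style references ("sha256:...") use "repo@digest".
// The digest test here is a simplified assumption for illustration.
func imageReference(repo, ref string) string {
	if strings.Contains(ref, ":") { // e.g. "sha256:abc..." looks like a digest
		return repo + "@" + ref
	}
	return repo + ":" + ref
}

func main() {
	fmt.Println(imageReference("busybox", "latest"))     // tag form
	fmt.Println(imageReference("busybox", "sha256:abc")) // digest form
}
```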
sizeRootFs) } + out.SetJson("Labels", container.Config.Labels) outs.Add(out) return nil } diff --git a/daemon/logger/copier.go b/daemon/logger/copier.go new file mode 100644 index 0000000000..462e42346d --- /dev/null +++ b/daemon/logger/copier.go @@ -0,0 +1,57 @@ +package logger + +import ( + "bufio" + "io" + "sync" + "time" + + "github.com/Sirupsen/logrus" +) + +// Copier can copy logs from specified sources to Logger and attach +// ContainerID and Timestamp. +// Writes are concurrent, so you need to implement synchronization in your logger +type Copier struct { + // cid is the container id for which we are copying logs + cid string + // srcs is a map of name -> reader pairs, for example "stdout", "stderr" + srcs map[string]io.Reader + dst Logger + copyJobs sync.WaitGroup +} + +// NewCopier creates a new Copier +func NewCopier(cid string, srcs map[string]io.Reader, dst Logger) (*Copier, error) { + return &Copier{ + cid: cid, + srcs: srcs, + dst: dst, + }, nil +} + +// Run starts copying logs +func (c *Copier) Run() { + for src, w := range c.srcs { + c.copyJobs.Add(1) + go c.copySrc(src, w) + } +} + +func (c *Copier) copySrc(name string, src io.Reader) { + defer c.copyJobs.Done() + scanner := bufio.NewScanner(src) + for scanner.Scan() { + if err := c.dst.Log(&Message{ContainerID: c.cid, Line: scanner.Bytes(), Source: name, Timestamp: time.Now().UTC()}); err != nil { + logrus.Errorf("Failed to log msg %q for logger %s: %s", scanner.Bytes(), c.dst.Name(), err) + } + } + if err := scanner.Err(); err != nil { + logrus.Errorf("Error scanning log stream: %s", err) + } +} + +// Wait waits until all copying is done +func (c *Copier) Wait() { + c.copyJobs.Wait() +} diff --git a/daemon/logger/copier_test.go b/daemon/logger/copier_test.go new file mode 100644 index 0000000000..45f76ac8e8 --- /dev/null +++ b/daemon/logger/copier_test.go @@ -0,0 +1,109 @@ +package logger + +import ( + "bytes" + "encoding/json" + "io" + "testing" + "time" +) + +type TestLoggerJSON struct { + *json.Encoder +} + +func
(l *TestLoggerJSON) Log(m *Message) error { + return l.Encode(m) +} + +func (l *TestLoggerJSON) Close() error { + return nil +} + +func (l *TestLoggerJSON) Name() string { + return "json" +} + +type TestLoggerText struct { + *bytes.Buffer +} + +func (l *TestLoggerText) Log(m *Message) error { + _, err := l.WriteString(m.ContainerID + " " + m.Source + " " + string(m.Line) + "\n") + return err +} + +func (l *TestLoggerText) Close() error { + return nil +} + +func (l *TestLoggerText) Name() string { + return "text" +} + +func TestCopier(t *testing.T) { + stdoutLine := "Line that thinks that it is log line from docker stdout" + stderrLine := "Line that thinks that it is log line from docker stderr" + var stdout bytes.Buffer + var stderr bytes.Buffer + for i := 0; i < 30; i++ { + if _, err := stdout.WriteString(stdoutLine + "\n"); err != nil { + t.Fatal(err) + } + if _, err := stderr.WriteString(stderrLine + "\n"); err != nil { + t.Fatal(err) + } + } + + var jsonBuf bytes.Buffer + + jsonLog := &TestLoggerJSON{Encoder: json.NewEncoder(&jsonBuf)} + + cid := "a7317399f3f857173c6179d44823594f8294678dea9999662e5c625b5a1c7657" + c, err := NewCopier(cid, + map[string]io.Reader{ + "stdout": &stdout, + "stderr": &stderr, + }, + jsonLog) + if err != nil { + t.Fatal(err) + } + c.Run() + wait := make(chan struct{}) + go func() { + c.Wait() + close(wait) + }() + select { + case <-time.After(1 * time.Second): + t.Fatal("Copier failed to do its work in 1 second") + case <-wait: + } + dec := json.NewDecoder(&jsonBuf) + for { + var msg Message + if err := dec.Decode(&msg); err != nil { + if err == io.EOF { + break + } + t.Fatal(err) + } + if msg.Source != "stdout" && msg.Source != "stderr" { + t.Fatalf("Wrong Source: %q, should be %q or %q", msg.Source, "stdout", "stderr") + } + if msg.ContainerID != cid { + t.Fatalf("Wrong ContainerID: %q, expected %q", msg.ContainerID, cid) + } + if msg.Source == "stdout" { + if string(msg.Line) != stdoutLine { + t.Fatalf("Wrong Line: %q, expected 
%q", msg.Line, stdoutLine) + } + } + if msg.Source == "stderr" { + if string(msg.Line) != stderrLine { + t.Fatalf("Wrong Line: %q, expected %q", msg.Line, stderrLine) + } + } + } +} diff --git a/daemon/logger/jsonfilelog/jsonfilelog.go b/daemon/logger/jsonfilelog/jsonfilelog.go new file mode 100644 index 0000000000..faa6bf92e2 --- /dev/null +++ b/daemon/logger/jsonfilelog/jsonfilelog.go @@ -0,0 +1,58 @@ +package jsonfilelog + +import ( + "bytes" + "os" + "sync" + + "github.com/docker/docker/daemon/logger" + "github.com/docker/docker/pkg/jsonlog" +) + +// JSONFileLogger is Logger implementation for default docker logging: +// JSON objects to file +type JSONFileLogger struct { + buf *bytes.Buffer + f *os.File // store for closing + mu sync.Mutex // protects buffer +} + +// New creates new JSONFileLogger which writes to filename +func New(filename string) (logger.Logger, error) { + log, err := os.OpenFile(filename, os.O_RDWR|os.O_APPEND|os.O_CREATE, 0600) + if err != nil { + return nil, err + } + return &JSONFileLogger{ + f: log, + buf: bytes.NewBuffer(nil), + }, nil +} + +// Log converts logger.Message to jsonlog.JSONLog and serializes it to file +func (l *JSONFileLogger) Log(msg *logger.Message) error { + l.mu.Lock() + defer l.mu.Unlock() + err := (&jsonlog.JSONLog{Log: string(msg.Line) + "\n", Stream: msg.Source, Created: msg.Timestamp}).MarshalJSONBuf(l.buf) + if err != nil { + return err + } + l.buf.WriteByte('\n') + _, err = l.buf.WriteTo(l.f) + if err != nil { + // this buffer is screwed, replace it with another to avoid races + l.buf = bytes.NewBuffer(nil) + return err + } + return nil +} + +// Close closes underlying file +func (l *JSONFileLogger) Close() error { + return l.f.Close() +} + +// Name returns name of this logger +func (l *JSONFileLogger) Name() string { + return "JSONFile" +} diff --git a/daemon/logger/jsonfilelog/jsonfilelog_test.go b/daemon/logger/jsonfilelog/jsonfilelog_test.go new file mode 100644 index 0000000000..e951c1b869 --- /dev/null 
+++ b/daemon/logger/jsonfilelog/jsonfilelog_test.go @@ -0,0 +1,78 @@ +package jsonfilelog + +import ( + "io/ioutil" + "os" + "path/filepath" + "testing" + "time" + + "github.com/docker/docker/daemon/logger" + "github.com/docker/docker/pkg/jsonlog" +) + +func TestJSONFileLogger(t *testing.T) { + tmp, err := ioutil.TempDir("", "docker-logger-") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmp) + filename := filepath.Join(tmp, "container.log") + l, err := New(filename) + if err != nil { + t.Fatal(err) + } + defer l.Close() + cid := "a7317399f3f857173c6179d44823594f8294678dea9999662e5c625b5a1c7657" + if err := l.Log(&logger.Message{ContainerID: cid, Line: []byte("line1"), Source: "src1"}); err != nil { + t.Fatal(err) + } + if err := l.Log(&logger.Message{ContainerID: cid, Line: []byte("line2"), Source: "src2"}); err != nil { + t.Fatal(err) + } + if err := l.Log(&logger.Message{ContainerID: cid, Line: []byte("line3"), Source: "src3"}); err != nil { + t.Fatal(err) + } + res, err := ioutil.ReadFile(filename) + if err != nil { + t.Fatal(err) + } + expected := `{"log":"line1\n","stream":"src1","time":"0001-01-01T00:00:00Z"} +{"log":"line2\n","stream":"src2","time":"0001-01-01T00:00:00Z"} +{"log":"line3\n","stream":"src3","time":"0001-01-01T00:00:00Z"} +` + + if string(res) != expected { + t.Fatalf("Wrong log content: %q, expected %q", res, expected) + } +} + +func BenchmarkJSONFileLogger(b *testing.B) { + tmp, err := ioutil.TempDir("", "docker-logger-") + if err != nil { + b.Fatal(err) + } + defer os.RemoveAll(tmp) + filename := filepath.Join(tmp, "container.log") + l, err := New(filename) + if err != nil { + b.Fatal(err) + } + defer l.Close() + cid := "a7317399f3f857173c6179d44823594f8294678dea9999662e5c625b5a1c7657" + testLine := "Line that thinks that it is log line from docker\n" + msg := &logger.Message{ContainerID: cid, Line: []byte(testLine), Source: "stderr", Timestamp: time.Now().UTC()} + jsonlog, err := (&jsonlog.JSONLog{Log: string(msg.Line) + "\n", 
Stream: msg.Source, Created: msg.Timestamp}).MarshalJSON() + if err != nil { + b.Fatal(err) + } + b.SetBytes(int64(len(jsonlog)+1) * 30) + b.ResetTimer() + for i := 0; i < b.N; i++ { + for j := 0; j < 30; j++ { + if err := l.Log(msg); err != nil { + b.Fatal(err) + } + } + } +} diff --git a/daemon/logger/logger.go b/daemon/logger/logger.go new file mode 100644 index 0000000000..078e67d8e9 --- /dev/null +++ b/daemon/logger/logger.go @@ -0,0 +1,18 @@ +package logger + +import "time" + +// Message is a data structure that represents a log record from a container +type Message struct { + ContainerID string + Line []byte + Source string + Timestamp time.Time +} + +// Logger is the interface for docker logging drivers +type Logger interface { + Log(*Message) error + Name() string + Close() error +} diff --git a/daemon/logs.go b/daemon/logs.go index db977ddac1..356d08c5c8 100644 --- a/daemon/logs.go +++ b/daemon/logs.go @@ -44,6 +44,9 @@ func (daemon *Daemon) ContainerLogs(job *engine.Job) engine.Status { if err != nil { return job.Error(err) } + if container.LogDriverType() != "json-file" { + return job.Errorf("\"logs\" endpoint is supported only for \"json-file\" logging driver") + } cLog, err := container.ReadLog("json") if err != nil && os.IsNotExist(err) { // Legacy logs diff --git a/daemon/monitor.go b/daemon/monitor.go index 7d862afb42..7c18b7a38c 100644 --- a/daemon/monitor.go +++ b/daemon/monitor.go @@ -123,7 +123,7 @@ func (m *containerMonitor) Start() error { for { m.container.RestartCount++ - if err := m.container.startLoggingToDisk(); err != nil { + if err := m.container.startLogging(); err != nil { m.resetContainer(false) return err @@ -182,7 +182,7 @@ func (m *containerMonitor) Start() error { } // resetMonitor resets the stateful fields on the containerMonitor based on the -// previous runs success or failure. Reguardless of success, if the container had +// previous runs success or failure.
Regardless of success, if the container had // an execution time of more than 10s then reset the timer back to the default func (m *containerMonitor) resetMonitor(successful bool) { executionTime := time.Now().Sub(m.lastStartTime).Seconds() @@ -302,6 +302,24 @@ func (m *containerMonitor) resetContainer(lock bool) { container.stdin, container.stdinPipe = io.Pipe() } + if container.logDriver != nil { + if container.logCopier != nil { + exit := make(chan struct{}) + go func() { + container.logCopier.Wait() + close(exit) + }() + select { + case <-time.After(1 * time.Second): + log.Warnf("Logger didn't exit in time: logs may be truncated") + case <-exit: + } + } + container.logDriver.Close() + container.logCopier = nil + container.logDriver = nil + } + c := container.command.ProcessConfig.Cmd container.command.ProcessConfig.Cmd = exec.Cmd{ diff --git a/daemon/networkdriver/bridge/driver.go b/daemon/networkdriver/bridge/driver.go index 329052be41..aa139b9a39 100644 --- a/daemon/networkdriver/bridge/driver.go +++ b/daemon/networkdriver/bridge/driver.go @@ -284,10 +284,11 @@ func setupIPTables(addr net.Addr, icc, ipmasq bool) error { // Enable NAT if ipmasq { - natArgs := []string{"POSTROUTING", "-t", "nat", "-s", addr.String(), "!", "-o", bridgeIface, "-j", "MASQUERADE"} + natArgs := []string{"-s", addr.String(), "!", "-o", bridgeIface, "-j", "MASQUERADE"} - if !iptables.Exists(natArgs...) { - if output, err := iptables.Raw(append([]string{"-I"}, natArgs...)...); err != nil { + if !iptables.Exists(iptables.Nat, "POSTROUTING", natArgs...) 
{ + if output, err := iptables.Raw(append([]string{ + "-t", string(iptables.Nat), "-I", "POSTROUTING"}, natArgs...)...); err != nil { return fmt.Errorf("Unable to enable network bridge NAT: %s", err) } else if len(output) != 0 { return &iptables.ChainError{Chain: "POSTROUTING", Output: output} @@ -296,28 +297,28 @@ func setupIPTables(addr net.Addr, icc, ipmasq bool) error { } var ( - args = []string{"FORWARD", "-i", bridgeIface, "-o", bridgeIface, "-j"} + args = []string{"-i", bridgeIface, "-o", bridgeIface, "-j"} acceptArgs = append(args, "ACCEPT") dropArgs = append(args, "DROP") ) if !icc { - iptables.Raw(append([]string{"-D"}, acceptArgs...)...) + iptables.Raw(append([]string{"-D", "FORWARD"}, acceptArgs...)...) - if !iptables.Exists(dropArgs...) { + if !iptables.Exists(iptables.Filter, "FORWARD", dropArgs...) { log.Debugf("Disable inter-container communication") - if output, err := iptables.Raw(append([]string{"-I"}, dropArgs...)...); err != nil { + if output, err := iptables.Raw(append([]string{"-I", "FORWARD"}, dropArgs...)...); err != nil { return fmt.Errorf("Unable to prevent intercontainer communication: %s", err) } else if len(output) != 0 { return fmt.Errorf("Error disabling intercontainer communication: %s", output) } } } else { - iptables.Raw(append([]string{"-D"}, dropArgs...)...) + iptables.Raw(append([]string{"-D", "FORWARD"}, dropArgs...)...) - if !iptables.Exists(acceptArgs...) { + if !iptables.Exists(iptables.Filter, "FORWARD", acceptArgs...) 
{ log.Debugf("Enable inter-container communication") - if output, err := iptables.Raw(append([]string{"-I"}, acceptArgs...)...); err != nil { + if output, err := iptables.Raw(append([]string{"-I", "FORWARD"}, acceptArgs...)...); err != nil { return fmt.Errorf("Unable to allow intercontainer communication: %s", err) } else if len(output) != 0 { return fmt.Errorf("Error enabling intercontainer communication: %s", output) @@ -326,9 +327,9 @@ func setupIPTables(addr net.Addr, icc, ipmasq bool) error { } // Accept all non-intercontainer outgoing packets - outgoingArgs := []string{"FORWARD", "-i", bridgeIface, "!", "-o", bridgeIface, "-j", "ACCEPT"} - if !iptables.Exists(outgoingArgs...) { - if output, err := iptables.Raw(append([]string{"-I"}, outgoingArgs...)...); err != nil { + outgoingArgs := []string{"-i", bridgeIface, "!", "-o", bridgeIface, "-j", "ACCEPT"} + if !iptables.Exists(iptables.Filter, "FORWARD", outgoingArgs...) { + if output, err := iptables.Raw(append([]string{"-I", "FORWARD"}, outgoingArgs...)...); err != nil { return fmt.Errorf("Unable to allow outgoing packets: %s", err) } else if len(output) != 0 { return &iptables.ChainError{Chain: "FORWARD outgoing", Output: output} @@ -336,10 +337,10 @@ func setupIPTables(addr net.Addr, icc, ipmasq bool) error { } // Accept incoming packets for existing connections - existingArgs := []string{"FORWARD", "-o", bridgeIface, "-m", "conntrack", "--ctstate", "RELATED,ESTABLISHED", "-j", "ACCEPT"} + existingArgs := []string{"-o", bridgeIface, "-m", "conntrack", "--ctstate", "RELATED,ESTABLISHED", "-j", "ACCEPT"} - if !iptables.Exists(existingArgs...) { - if output, err := iptables.Raw(append([]string{"-I"}, existingArgs...)...); err != nil { + if !iptables.Exists(iptables.Filter, "FORWARD", existingArgs...) 
{ + if output, err := iptables.Raw(append([]string{"-I", "FORWARD"}, existingArgs...)...); err != nil { return fmt.Errorf("Unable to allow incoming packets: %s", err) } else if len(output) != 0 { return &iptables.ChainError{Chain: "FORWARD incoming", Output: output} @@ -522,7 +523,8 @@ func Allocate(job *engine.Job) engine.Status { // If globalIPv6Network Size is at least a /80 subnet generate IPv6 address from MAC address netmask_ones, _ := globalIPv6Network.Mask.Size() if requestedIPv6 == nil && netmask_ones <= 80 { - requestedIPv6 = globalIPv6Network.IP + requestedIPv6 = make(net.IP, len(globalIPv6Network.IP)) + copy(requestedIPv6, globalIPv6Network.IP) for i, h := range mac { requestedIPv6[i+10] = h } @@ -530,7 +532,7 @@ func Allocate(job *engine.Job) engine.Status { globalIPv6, err = ipallocator.RequestIP(globalIPv6Network, requestedIPv6) if err != nil { - log.Errorf("Allocator: RequestIP v6: %s", err.Error()) + log.Errorf("Allocator: RequestIP v6: %v", err) return job.Error(err) } log.Infof("Allocated IPv6 %s", globalIPv6) diff --git a/daemon/networkdriver/bridge/driver_test.go b/daemon/networkdriver/bridge/driver_test.go index 02bea9ce13..8c20dffb85 100644 --- a/daemon/networkdriver/bridge/driver_test.go +++ b/daemon/networkdriver/bridge/driver_test.go @@ -1,6 +1,7 @@ package bridge import ( + "fmt" "net" "strconv" "testing" @@ -104,6 +105,123 @@ func TestHostnameFormatChecking(t *testing.T) { } } +func newInterfaceAllocation(t *testing.T, input engine.Env) (output engine.Env) { + eng := engine.New() + eng.Logging = false + + done := make(chan bool) + + // set IPv6 global if given + if input.Exists("globalIPv6Network") { + _, globalIPv6Network, _ = net.ParseCIDR(input.Get("globalIPv6Network")) + } + + job := eng.Job("allocate_interface", "container_id") + job.Env().Init(&input) + reader, _ := job.Stdout.AddPipe() + go func() { + output.Decode(reader) + done <- true + }() + + res := Allocate(job) + job.Stdout.Close() + <-done + + if input.Exists("expectFail") 
&& input.GetBool("expectFail") { + if res == engine.StatusOK { + t.Fatal("Allocation should have failed but succeeded") + } + } else { + if res != engine.StatusOK { + t.Fatal("Failed to allocate network interface") + } + } + + if input.Exists("globalIPv6Network") { + // check for bug #11427 + _, subnet, _ := net.ParseCIDR(input.Get("globalIPv6Network")) + if globalIPv6Network.IP.String() != subnet.IP.String() { + t.Fatal("globalIPv6Network was modified during allocation") + } + // clean up IPv6 global + globalIPv6Network = nil + } + + return +} + +func TestIPv6InterfaceAllocationAutoNetmaskGt80(t *testing.T) { + + input := engine.Env{} + + _, subnet, _ := net.ParseCIDR("2001:db8:1234:1234:1234::/81") + + // set global ipv6 + input.Set("globalIPv6Network", subnet.String()) + + output := newInterfaceAllocation(t, input) + + // ensure low automatically assigned global ip + ip := net.ParseIP(output.Get("GlobalIPv6")) + _, subnet, _ = net.ParseCIDR(fmt.Sprintf("%s/%d", subnet.IP.String(), 120)) + if !subnet.Contains(ip) { + t.Fatalf("Error ip %s not in subnet %s", ip.String(), subnet.String()) + } +} + +func TestIPv6InterfaceAllocationAutoNetmaskLe80(t *testing.T) { + + input := engine.Env{} + + _, subnet, _ := net.ParseCIDR("2001:db8:1234:1234:1234::/80") + + // set global ipv6 + input.Set("globalIPv6Network", subnet.String()) + input.Set("RequestedMac", "ab:cd:ab:cd:ab:cd") + + output := newInterfaceAllocation(t, input) + + // ensure global ip with mac + ip := net.ParseIP(output.Get("GlobalIPv6")) + expected_ip := net.ParseIP("2001:db8:1234:1234:1234:abcd:abcd:abcd") + if ip.String() != expected_ip.String() { + t.Fatalf("Error ip %s should be %s", ip.String(), expected_ip.String()) + } + + // ensure link local format + ip = net.ParseIP(output.Get("LinkLocalIPv6")) + expected_ip = net.ParseIP("fe80::a9cd:abff:fecd:abcd") + if ip.String() != expected_ip.String() { + t.Fatalf("Error ip %s should be %s", ip.String(), expected_ip.String()) + } + +} + +func 
TestIPv6InterfaceAllocationRequest(t *testing.T) { + + input := engine.Env{} + + _, subnet, _ := net.ParseCIDR("2001:db8:1234:1234:1234::/80") + expected_ip := net.ParseIP("2001:db8:1234:1234:1234::1328") + + // set global ipv6 + input.Set("globalIPv6Network", subnet.String()) + input.Set("RequestedIPv6", expected_ip.String()) + + output := newInterfaceAllocation(t, input) + + // ensure global ip with mac + ip := net.ParseIP(output.Get("GlobalIPv6")) + if ip.String() != expected_ip.String() { + t.Fatalf("Error ip %s should be %s", ip.String(), expected_ip.String()) + } + + // retry -> fails for duplicated address + input.SetBool("expectFail", true) + output = newInterfaceAllocation(t, input) +} + func TestMacAddrGeneration(t *testing.T) { ip := net.ParseIP("192.168.0.1") mac := generateMacAddr(ip).String() diff --git a/daemon/networkdriver/portallocator/portallocator.go b/daemon/networkdriver/portallocator/portallocator.go index 3414d11e7a..da9f987397 100644 --- a/daemon/networkdriver/portallocator/portallocator.go +++ b/daemon/networkdriver/portallocator/portallocator.go @@ -1,10 +1,24 @@ package portallocator import ( + "bufio" "errors" "fmt" "net" + "os" "sync" + + log "github.com/Sirupsen/logrus" +) + +const ( + DefaultPortRangeStart = 49153 + DefaultPortRangeEnd = 65535 +) + +var ( + beginPortRange = DefaultPortRangeStart + endPortRange = DefaultPortRangeEnd ) type portMap struct { @@ -15,7 +29,7 @@ type portMap struct { func newPortMap() *portMap { return &portMap{ p: map[int]struct{}{}, - last: EndPortRange, + last: endPortRange, } } @@ -30,11 +44,6 @@ func newProtoMap() protoMap { type ipMapping map[string]protoMap -const ( - BeginPortRange = 49153 - EndPortRange = 65535 -) - var ( ErrAllPortsAllocated = errors.New("all ports are allocated") ErrUnknownProtocol = errors.New("unknown protocol") @@ -59,6 +68,31 @@ func NewErrPortAlreadyAllocated(ip string, port int) ErrPortAlreadyAllocated { } } +func init() { + const portRangeKernelParam = 
"/proc/sys/net/ipv4/ip_local_port_range" + + file, err := os.Open(portRangeKernelParam) + if err != nil { + log.Warnf("Failed to read %s kernel parameter: %v", portRangeKernelParam, err) + return + } + var start, end int + n, err := fmt.Fscanf(bufio.NewReader(file), "%d\t%d", &start, &end) + if n != 2 || err != nil { + if err == nil { + err = fmt.Errorf("unexpected count of parsed numbers (%d)", n) + } + log.Errorf("Failed to parse port range from %s: %v", portRangeKernelParam, err) + return + } + beginPortRange = start + endPortRange = end +} + +func PortRange() (int, int) { + return beginPortRange, endPortRange +} + func (e ErrPortAlreadyAllocated) IP() string { return e.ip } @@ -137,10 +171,10 @@ func ReleaseAll() error { func (pm *portMap) findPort() (int, error) { port := pm.last - for i := 0; i <= EndPortRange-BeginPortRange; i++ { + for i := 0; i <= endPortRange-beginPortRange; i++ { port++ - if port > EndPortRange { - port = BeginPortRange + if port > endPortRange { + port = beginPortRange } if _, ok := pm.p[port]; !ok { diff --git a/daemon/networkdriver/portallocator/portallocator_test.go b/daemon/networkdriver/portallocator/portallocator_test.go index 72581f1040..bac558fa41 100644 --- a/daemon/networkdriver/portallocator/portallocator_test.go +++ b/daemon/networkdriver/portallocator/portallocator_test.go @@ -5,6 +5,11 @@ import ( "testing" ) +func init() { + beginPortRange = DefaultPortRangeStart + endPortRange = DefaultPortRangeEnd +} + func reset() { ReleaseAll() } @@ -17,7 +22,7 @@ func TestRequestNewPort(t *testing.T) { t.Fatal(err) } - if expected := BeginPortRange; port != expected { + if expected := beginPortRange; port != expected { t.Fatalf("Expected port %d got %d", expected, port) } } @@ -102,13 +107,13 @@ func TestUnknowProtocol(t *testing.T) { func TestAllocateAllPorts(t *testing.T) { defer reset() - for i := 0; i <= EndPortRange-BeginPortRange; i++ { + for i := 0; i <= endPortRange-beginPortRange; i++ { port, err := RequestPort(defaultIP, 
"tcp", 0) if err != nil { t.Fatal(err) } - if expected := BeginPortRange + i; port != expected { + if expected := beginPortRange + i; port != expected { t.Fatalf("Expected port %d got %d", expected, port) } } @@ -123,7 +128,7 @@ func TestAllocateAllPorts(t *testing.T) { } // release a port in the middle and ensure we get another tcp port - port := BeginPortRange + 5 + port := beginPortRange + 5 if err := ReleasePort(defaultIP, "tcp", port); err != nil { t.Fatal(err) } @@ -153,13 +158,13 @@ func BenchmarkAllocatePorts(b *testing.B) { defer reset() for i := 0; i < b.N; i++ { - for i := 0; i <= EndPortRange-BeginPortRange; i++ { + for i := 0; i <= endPortRange-beginPortRange; i++ { port, err := RequestPort(defaultIP, "tcp", 0) if err != nil { b.Fatal(err) } - if expected := BeginPortRange + i; port != expected { + if expected := beginPortRange + i; port != expected { b.Fatalf("Expected port %d got %d", expected, port) } } @@ -231,15 +236,15 @@ func TestPortAllocation(t *testing.T) { func TestNoDuplicateBPR(t *testing.T) { defer reset() - if port, err := RequestPort(defaultIP, "tcp", BeginPortRange); err != nil { + if port, err := RequestPort(defaultIP, "tcp", beginPortRange); err != nil { t.Fatal(err) - } else if port != BeginPortRange { - t.Fatalf("Expected port %d got %d", BeginPortRange, port) + } else if port != beginPortRange { + t.Fatalf("Expected port %d got %d", beginPortRange, port) } if port, err := RequestPort(defaultIP, "tcp", 0); err != nil { t.Fatal(err) - } else if port == BeginPortRange { + } else if port == beginPortRange { t.Fatalf("Acquire(0) allocated the same port twice: %d", port) } } diff --git a/daemon/networkdriver/portmapper/mapper_test.go b/daemon/networkdriver/portmapper/mapper_test.go index 42e44a11df..fa7bdecdbf 100644 --- a/daemon/networkdriver/portmapper/mapper_test.go +++ b/daemon/networkdriver/portmapper/mapper_test.go @@ -129,7 +129,8 @@ func TestMapAllPortsSingleInterface(t *testing.T) { }() for i := 0; i < 10; i++ { - for i := 
portallocator.BeginPortRange; i < portallocator.EndPortRange; i++ { + start, end := portallocator.PortRange() + for i := start; i < end; i++ { if host, err = Map(srcAddr1, dstIp1, 0); err != nil { t.Fatal(err) } @@ -137,8 +138,8 @@ func TestMapAllPortsSingleInterface(t *testing.T) { hosts = append(hosts, host) } - if _, err := Map(srcAddr1, dstIp1, portallocator.BeginPortRange); err == nil { - t.Fatalf("Port %d should be bound but is not", portallocator.BeginPortRange) + if _, err := Map(srcAddr1, dstIp1, start); err == nil { + t.Fatalf("Port %d should be bound but is not", start) } for _, val := range hosts { diff --git a/daemon/rename.go b/daemon/rename.go index 1e3cd5370c..6d8293f127 100644 --- a/daemon/rename.go +++ b/daemon/rename.go @@ -1,8 +1,6 @@ package daemon -import ( - "github.com/docker/docker/engine" -) +import "github.com/docker/docker/engine" func (daemon *Daemon) ContainerRename(job *engine.Job) engine.Status { if len(job.Args) != 2 { @@ -26,9 +24,21 @@ func (daemon *Daemon) ContainerRename(job *engine.Job) engine.Status { container.Name = newName + undo := func() { + container.Name = oldName + daemon.reserveName(container.ID, oldName) + daemon.containerGraph.Delete(newName) + } + if err := daemon.containerGraph.Delete(oldName); err != nil { + undo() return job.Errorf("Failed to delete container %q: %v", oldName, err) } + if err := container.toDisk(); err != nil { + undo() + return job.Error(err) + } + return engine.StatusOK } diff --git a/daemon/start.go b/daemon/start.go index e5076d9128..e51ada22a2 100644 --- a/daemon/start.go +++ b/daemon/start.go @@ -66,7 +66,7 @@ func (daemon *Daemon) setHostConfig(container *Container, hostConfig *runconfig. 
if err != nil && os.IsNotExist(err) { err = os.MkdirAll(source, 0755) if err != nil { - return fmt.Errorf("Could not create local directory '%s' for bind mount: %s!", source, err.Error()) + return fmt.Errorf("Could not create local directory '%s' for bind mount: %v!", source, err) } } } diff --git a/daemon/stats.go b/daemon/stats.go index b36b28ae72..85d4a08550 100644 --- a/daemon/stats.go +++ b/daemon/stats.go @@ -18,7 +18,7 @@ func (daemon *Daemon) ContainerStats(job *engine.Job) engine.Status { enc := json.NewEncoder(job.Stdout) for v := range updates { update := v.(*execdriver.ResourceStats) - ss := convertToAPITypes(update.ContainerStats) + ss := convertToAPITypes(update.Stats) ss.MemoryStats.Limit = uint64(update.MemoryLimit) ss.Read = update.Read ss.CpuStats.SystemUsage = update.SystemUsage @@ -31,20 +31,21 @@ func (daemon *Daemon) ContainerStats(job *engine.Job) engine.Status { return engine.StatusOK } -// convertToAPITypes converts the libcontainer.ContainerStats to the api specific +// convertToAPITypes converts the libcontainer.Stats to the api specific // structs. This is done to preserve API compatibility and versioning. 
-func convertToAPITypes(ls *libcontainer.ContainerStats) *types.Stats { +func convertToAPITypes(ls *libcontainer.Stats) *types.Stats { s := &types.Stats{} - if ls.NetworkStats != nil { - s.Network = types.Network{ - RxBytes: ls.NetworkStats.RxBytes, - RxPackets: ls.NetworkStats.RxPackets, - RxErrors: ls.NetworkStats.RxErrors, - RxDropped: ls.NetworkStats.RxDropped, - TxBytes: ls.NetworkStats.TxBytes, - TxPackets: ls.NetworkStats.TxPackets, - TxErrors: ls.NetworkStats.TxErrors, - TxDropped: ls.NetworkStats.TxDropped, + if ls.Interfaces != nil { + s.Network = types.Network{} + for _, iface := range ls.Interfaces { + s.Network.RxBytes += iface.RxBytes + s.Network.RxPackets += iface.RxPackets + s.Network.RxErrors += iface.RxErrors + s.Network.RxDropped += iface.RxDropped + s.Network.TxBytes += iface.TxBytes + s.Network.TxPackets += iface.TxPackets + s.Network.TxErrors += iface.TxErrors + s.Network.TxDropped += iface.TxDropped } } cs := ls.CgroupStats diff --git a/daemon/volumes.go b/daemon/volumes.go index 16d00dd945..b389ad0b86 100644 --- a/daemon/volumes.go +++ b/daemon/volumes.go @@ -8,12 +8,12 @@ import ( "path/filepath" "sort" "strings" - "syscall" log "github.com/Sirupsen/logrus" "github.com/docker/docker/daemon/execdriver" "github.com/docker/docker/pkg/chrootarchive" "github.com/docker/docker/pkg/symlink" + "github.com/docker/docker/pkg/system" "github.com/docker/docker/volumes" ) @@ -385,15 +385,14 @@ func copyExistingContents(source, destination string) error { // copyOwnership copies the permissions and uid:gid of the source file // into the destination file func copyOwnership(source, destination string) error { - var stat syscall.Stat_t - - if err := syscall.Stat(source, &stat); err != nil { + stat, err := system.Stat(source) + if err != nil { return err } - if err := os.Chown(destination, int(stat.Uid), int(stat.Gid)); err != nil { + if err := os.Chown(destination, int(stat.Uid()), int(stat.Gid())); err != nil { return err } - return os.Chmod(destination, 
os.FileMode(stat.Mode)) + return os.Chmod(destination, os.FileMode(stat.Mode())) } diff --git a/daemon/wait.go b/daemon/wait.go index e2747a3e42..7579467a00 100644 --- a/daemon/wait.go +++ b/daemon/wait.go @@ -13,7 +13,7 @@ func (daemon *Daemon) ContainerWait(job *engine.Job) engine.Status { name := job.Args[0] container, err := daemon.Get(name) if err != nil { - return job.Errorf("%s: %s", job.Name, err.Error()) + return job.Errorf("%s: %v", job.Name, err) } status, _ := container.WaitStop(-1 * time.Second) job.Printf("%d\n", status) diff --git a/docker/daemon.go b/docker/daemon.go index 5f799a46a5..e3bd06d901 100644 --- a/docker/daemon.go +++ b/docker/daemon.go @@ -7,6 +7,7 @@ import ( "io" "os" "path/filepath" + "strings" log "github.com/Sirupsen/logrus" "github.com/docker/docker/autogen/dockerversion" @@ -101,11 +102,14 @@ func mainDaemon() { // load the daemon in the background so we can immediately start // the http api so that connections don't fail while the daemon // is booting + daemonInitWait := make(chan error) go func() { d, err := daemon.NewDaemon(daemonCfg, eng) if err != nil { - log.Fatal(err) + daemonInitWait <- err + return } + log.Infof("docker daemon: %s %s; execdriver: %s; graphdriver: %s", dockerversion.VERSION, dockerversion.GITCOMMIT, @@ -114,7 +118,8 @@ func mainDaemon() { ) if err := d.Install(eng); err != nil { - log.Fatal(err) + daemonInitWait <- err + return } b := &builder.BuilderJob{eng, d} @@ -123,8 +128,10 @@ func mainDaemon() { // after the daemon is done setting up we can tell the api to start // accepting connections if err := eng.Job("acceptconnections").Run(); err != nil { - log.Fatal(err) + daemonInitWait <- err + return } + daemonInitWait <- nil }() // Serve api @@ -141,7 +148,46 @@ func mainDaemon() { job.Setenv("TlsCert", *flCert) job.Setenv("TlsKey", *flKey) job.SetenvBool("BufferRequests", true) - if err := job.Run(); err != nil { - log.Fatal(err) + + // The serve API job never exits unless an error occurs + // We need to 
start it as a goroutine and wait on it so + the daemon doesn't exit + serveAPIWait := make(chan error) + go func() { + if err := job.Run(); err != nil { + log.Errorf("ServeAPI error: %v", err) + serveAPIWait <- err + return + } + serveAPIWait <- nil + }() + + // Wait for the daemon startup goroutine to finish + // This makes sure we can actually cleanly shut down the daemon + log.Debug("waiting for daemon to initialize") + errDaemon := <-daemonInitWait + if errDaemon != nil { + eng.Shutdown() + outStr := fmt.Sprintf("Shutting down daemon due to errors: %v", errDaemon) + if strings.Contains(errDaemon.Error(), "engine is shutdown") { + // if the error is "engine is shutdown", we've already reported (or + // will report below in API server errors) the error + outStr = "Shutting down daemon due to reported errors" + } + // we must "fatal" exit here as the API server may be happy to + // continue listening forever if the error has no impact on the API + log.Fatal(outStr) + } else { + log.Info("Daemon has completed initialization") } + + // Daemon is fully initialized and handling API traffic + // Wait for serve API job to complete + errAPI := <-serveAPIWait + // If we have an error here it is unique to the API (as errDaemon would have + // exited the daemon process above) + if errAPI != nil { + log.Errorf("Shutting down due to ServeAPI error: %v", errAPI) + } + eng.Shutdown() } diff --git a/docs/Dockerfile index 969c0e814f..dd61fa8df0 100644 --- a/docs/Dockerfile +++ b/docs/Dockerfile @@ -21,6 +21,7 @@ COPY ./VERSION VERSION # TODO: don't do this - look at merging the yml file in build.sh COPY ./mkdocs.yml mkdocs.yml COPY ./s3_website.json s3_website.json +COPY ./release.sh release.sh # Docker Swarm #ADD https://raw.githubusercontent.com/docker/swarm/master/docs/mkdocs.yml /docs/mkdocs-swarm.yml diff --git a/docs/MAINTAINERS deleted file mode 100644 index ecf56752c2..0000000000 --- a/docs/MAINTAINERS +++ /dev/null @@ -1,3 +0,0 @@ -Fred Lifton 
(@fredlf) -James Turnbull (@jamtur01) -Sven Dowideit (@SvenDowideit) diff --git a/docs/README.md index 8e49af7aa2..72172112ce 100755 --- a/docs/README.md +++ b/docs/README.md @@ -106,7 +106,7 @@ also update the root docs pages by running > if you are using Boot2Docker on OSX and the above command returns an error, > `Post http:///var/run/docker.sock/build?rm=1&t=docker-docs%3Apost-1.2.0-docs_update-2: > dial unix /var/run/docker.sock: no such file or directory', you need to set the Docker -> host. Run `$(boot2docker shellinit)` to see the correct variable to set. The command +> host. Run `eval "$(boot2docker shellinit)"` to set the correct variables. The command > will return the full `export` command, so you can just cut and paste. ## Cherry-picking documentation changes to update an existing release. @@ -152,3 +152,32 @@ _if_ the `DISTRIBUTION_ID` is set to the Cloudfront distribution ID (ask the met team) - this will take at least 15 minutes to run and you can check its progress with the CDN Cloudfront Chrome addin. +## Removing files from the docs.docker.com site + +Sometimes it becomes necessary to remove files from the historical published documentation. 
+The most reliable way to do this is to do it directly using `aws s3` commands running in a +docs container: + +Start the docs container like `make docs-shell`, but bind mount in your `awsconfig`: + +``` +docker run --rm -it -v $(pwd)/docs/awsconfig:/docs/awsconfig docker-docs:master bash +``` + +and then the following example shows deleting two documents from S3, and then asking +CloudFront to invalidate its cached copies: + + +``` +export BUCKET=docs.docker.com +export AWS_CONFIG_FILE=$(pwd)/awsconfig +aws s3 --profile $BUCKET ls s3://$BUCKET +aws s3 --profile $BUCKET rm s3://$BUCKET/v1.0/reference/api/docker_io_oauth_api/index.html +aws s3 --profile $BUCKET rm s3://$BUCKET/v1.1/reference/api/docker_io_oauth_api/index.html + +aws configure set preview.cloudfront true +export DISTRIBUTION_ID=YUTIYUTIUTIUYTIUT +aws cloudfront create-invalidation --profile docs.docker.com --distribution-id $DISTRIBUTION_ID --invalidation-batch '{"Paths":{"Quantity":1, "Items":["/v1.0/reference/api/docker_io_oauth_api/"]},"CallerReference":"6Mar2015sventest1"}' +aws cloudfront create-invalidation --profile docs.docker.com --distribution-id $DISTRIBUTION_ID --invalidation-batch '{"Paths":{"Quantity":1, "Items":["/v1.1/reference/api/docker_io_oauth_api/"]},"CallerReference":"6Mar2015sventest1"}' +``` + diff --git a/docs/man/Dockerfile.5.md index d66bcad067..7f884888e2 100644 --- a/docs/man/Dockerfile.5.md +++ b/docs/man/Dockerfile.5.md @@ -97,6 +97,9 @@ A Dockerfile is similar to a Makefile. exec form makes it possible to avoid shell string munging. The exec form makes it possible to **RUN** commands using a base image that does not contain `/bin/sh`. + Note that the exec form is parsed as a JSON array, which means that you must + use double-quotes (") around words, not single-quotes ('). + **CMD** -- **CMD** has three forms: @@ -120,6 +123,9 @@ A Dockerfile is similar to a Makefile. be executed when running the image. 
If you use the shell form of the **CMD**, the `` executes in `/bin/sh -c`: + Note that the exec form is parsed as a JSON array, which means that you must + use double-quotes (") around words, not single-quotes ('). + ``` FROM ubuntu CMD echo "This is a test." | wc - @@ -143,6 +149,25 @@ A Dockerfile is similar to a Makefile. **CMD** executes nothing at build time, but specifies the intended command for the image. +**LABEL** + -- `LABEL [=] [[=] ...]` + The **LABEL** instruction adds metadata to an image. A **LABEL** is a + key-value pair. To include spaces within a **LABEL** value, use quotes and + backslashes as you would in command-line parsing. + + ``` + LABEL "com.example.vendor"="ACME Incorporated" + ``` + + An image can have more than one label. To specify multiple labels, separate + each key-value pair with a space. + + Labels are additive, including any `LABEL`s in `FROM` images. As the system + encounters and then applies a new label, new `key`s override any previous + labels with identical keys. + + To display an image's labels, use the `docker inspect` command. + **EXPOSE** -- `EXPOSE [...]` The **EXPOSE** instruction informs Docker that the container listens on the @@ -269,20 +294,22 @@ A Dockerfile is similar to a Makefile. **ONBUILD** -- `ONBUILD [INSTRUCTION]` - The **ONBUILD** instruction adds a trigger instruction to the image, which is - executed at a later time, when the image is used as the base for another - build. The trigger is executed in the context of the downstream build, as - if it had been inserted immediately after the **FROM** instruction in the - downstream Dockerfile. Any build instruction can be registered as a - trigger. This is useful if you are building an image to be - used as a base for building other images, for example an application build - environment or a daemon to be customized with a user-specific - configuration. 
For example, if your image is a reusable python - application builder, it requires application source code to be - added in a particular directory, and might require a build script - to be called after that. You can't just call **ADD** and **RUN** now, because - you don't yet have access to the application source code, and it - is different for each application build. + The **ONBUILD** instruction adds a trigger instruction to an image. The + trigger is executed at a later time, when the image is used as the base for + another build. Docker executes the trigger in the context of the downstream + build, as if the trigger existed immediately after the **FROM** instruction in + the downstream Dockerfile. + + You can register any build instruction as a trigger. A trigger is useful if + you are defining an image to use as a base for building other images, for + example an application build environment or a daemon that is customized with a + user-specific configuration. + + Consider an image intended as a reusable python application builder. It must + add application source code to a particular directory, and might need a build + script called after that. You can't just call **ADD** and **RUN** now, because + you don't yet have access to the application source code, and it is different + for each application build. 
-- Providing application developers with a boilerplate Dockerfile to copy-paste into their application is inefficient, error-prone, and diff --git a/docs/man/docker-build.1.md b/docs/man/docker-build.1.md index f6a89b54ed..fe6250fc19 100644 --- a/docs/man/docker-build.1.md +++ b/docs/man/docker-build.1.md @@ -7,13 +7,18 @@ docker-build - Build a new image from the source code at PATH # SYNOPSIS **docker build** [**--help**] -[**-f**|**--file**[=*Dockerfile*]] +[**-f**|**--file**[=*PATH/Dockerfile*]] [**--force-rm**[=*false*]] [**--no-cache**[=*false*]] [**--pull**[=*false*]] [**-q**|**--quiet**[=*false*]] [**--rm**[=*true*]] [**-t**|**--tag**[=*TAG*]] +[**-m**|**--memory**[=*MEMORY*]] +[**--memory-swap**[=*MEMORY-SWAP*]] +[**-c**|**--cpu-shares**[=*0*]] +[**--cpuset-cpus**[=*CPUSET-CPUS*]] + PATH | URL | - # DESCRIPTION @@ -33,7 +38,7 @@ When a Git repository is set as the **URL**, the repository is used as context. # OPTIONS -**-f**, **--file**=*Dockerfile* +**-f**, **--file**=*PATH/Dockerfile* Path to the Dockerfile to use. If the path is a relative path then it must be relative to the current directory. The file must be within the build context. The default is *Dockerfile*. **--force-rm**=*true*|*false* diff --git a/docs/man/docker-commit.1.md b/docs/man/docker-commit.1.md index 663dfdc68f..e3459197a7 100644 --- a/docs/man/docker-commit.1.md +++ b/docs/man/docker-commit.1.md @@ -22,7 +22,7 @@ Using an existing container's name or ID you can create a new image. 
**-c**, **--change**=[] Apply specified Dockerfile instructions while committing the image - Supported Dockerfile instructions: CMD, ENTRYPOINT, ENV, EXPOSE, ONBUILD, USER, VOLUME, WORKDIR + Supported Dockerfile instructions: ADD|CMD|ENTRYPOINT|ENV|EXPOSE|FROM|MAINTAINER|RUN|USER|LABEL|VOLUME|WORKDIR|COPY **--help** Print usage statement diff --git a/docs/man/docker-cp.1.md index ac49a47a54..3cd203a83d 100644 --- a/docs/man/docker-cp.1.md +++ b/docs/man/docker-cp.1.md @@ -2,17 +2,57 @@ % Docker Community % JUNE 2014 # NAME -docker-cp - Copy files/folders from the PATH to the HOSTPATH +docker-cp - Copy files or folders from a container's PATH to a HOSTDIR +or to STDOUT. # SYNOPSIS **docker cp** [**--help**] -CONTAINER:PATH HOSTPATH +CONTAINER:PATH HOSTDIR|- # DESCRIPTION -Copy files/folders from a container's filesystem to the host -path. Paths are relative to the root of the filesystem. Files -can be copied from a running or stopped container. + +Copy files or folders from a `CONTAINER:PATH` to the `HOSTDIR` or to `STDOUT`. +The `CONTAINER:PATH` is relative to the root of the container's filesystem. You +can copy from either a running or stopped container. + +The `PATH` can be a file or directory. The `docker cp` command assumes all +`PATH` values start at the `/` (root) directory. This means supplying the +initial forward slash is optional; the command sees +`compassionate_darwin:/tmp/foo/myfile.txt` and +`compassionate_darwin:tmp/foo/myfile.txt` as identical. + +The `HOSTDIR` refers to a directory on the host. If you do not specify an +absolute path for your `HOSTDIR` value, Docker creates the directory relative to +where you run the `docker cp` command. For example, suppose you want to copy the +`/tmp/foo` directory from a container to the `/tmp` directory on your host. 
If +you run `docker cp` in your `~` (home) directory on the host: + + $ docker cp compassionate_darwin:tmp/foo /tmp + +Docker creates a `/tmp/foo` directory on your host. Alternatively, you can omit +the leading slash in the command. If you execute this command from your home directory: + + $ docker cp compassionate_darwin:tmp/foo tmp + +Docker creates a `~/tmp/foo` subdirectory. + +When copying files to an existing `HOSTDIR`, the `cp` command adds the new files to +the directory. For example, this command: + + $ docker cp sharp_ptolemy:/tmp/foo/myfile.txt /tmp + +Creates a `/tmp/foo` directory on the host containing the `myfile.txt` file. If +you repeat the command but change the filename: + + $ docker cp sharp_ptolemy:/tmp/foo/secondfile.txt /tmp + +Your host's `/tmp/foo` directory will contain both files: + + $ ls /tmp/foo + myfile.txt secondfile.txt + +Finally, use '-' to write the data as a `tar` file to STDOUT. # OPTIONS **--help** diff --git a/docs/man/docker-create.1.md b/docs/man/docker-create.1.md index e5c4fa600f..62a4c60bb1 100644 --- a/docs/man/docker-create.1.md +++ b/docs/man/docker-create.1.md @@ -12,7 +12,7 @@ docker-create - Create a new container [**--cap-add**[=*[]*]] [**--cap-drop**[=*[]*]] [**--cidfile**[=*CIDFILE*]] -[**--cpuset**[=*CPUSET*]] +[**--cpuset-cpus**[=*CPUSET-CPUS*]] [**--device**[=*[]*]] [**--dns-search**[=*[]*]] [**--dns**[=*[]*]] @@ -24,8 +24,11 @@ docker-create - Create a new container [**--help**] [**-i**|**--interactive**[=*false*]] [**--ipc**[=*IPC*]] +[**-l**|**--label**[=*[]*]] +[**--label-file**[=*[]*]] [**--link**[=*[]*]] [**--lxc-conf**[=*[]*]] +[**--log-driver**[=*[]*]] [**-m**|**--memory**[=*MEMORY*]] [**--memory-swap**[=*MEMORY-SWAP*]] [**--mac-address**[=*MAC-ADDRESS*]] @@ -43,6 +46,7 @@ docker-create - Create a new container [**-v**|**--volume**[=*[]*]] [**--volumes-from**[=*[]*]] [**-w**|**--workdir**[=*WORKDIR*]] +[**--cgroup-parent**[=*CGROUP-PATH*]] IMAGE [COMMAND] [ARG...] 
# OPTIONS @@ -64,7 +68,10 @@ IMAGE [COMMAND] [ARG...] **--cidfile**="" Write the container ID to the file -**--cpuset**="" +**--cgroup-parent**="" + Path to cgroups under which the cgroup for the container will be created. If the path is not absolute, the path is considered to be relative to the cgroups path of the init process. Cgroups will be created if they do not already exist. + +**--cpuset-cpus**="" CPUs in which to allow execution (0-3, 0,1) **--device**=[] @@ -102,12 +109,22 @@ IMAGE [COMMAND] [ARG...] 'container:': reuses another container's shared memory, semaphores and message queues 'host': use the host shared memory, semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure. +**-l**, **--label**=[] + Adds metadata to a container (e.g., --label=com.example.key=value) + +**--label-file**=[] + Read labels from a file. Delimit each label with an EOL. + **--link**=[] Add link to another container in the form of :alias **--lxc-conf**=[] (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" +**--log-driver**="|*json-file*|*none*" + Logging driver for the container. The default is defined by the daemon's `--log-driver` flag. + **Warning**: the `docker logs` command works only with the `json-file` logging driver. + **-m**, **--memory**="" Memory limit (format: , where unit = b, k, m or g) @@ -157,7 +174,7 @@ This value should always be larger than **-m**, so you should always use this with * **--read-only**=*true*|*false* Mount the container's root filesystem as read only. 
-**--restart**="" +**--restart**="no" Restart policy to apply when a container exits (no, on-failure[:max-retry], always) **--security-opt**=[] diff --git a/docs/man/docker-export.1.md index 226ae5c1d5..df69bc37d8 100644 --- a/docs/man/docker-export.1.md +++ b/docs/man/docker-export.1.md @@ -14,17 +14,24 @@ Export the contents of a container's filesystem using the full or shortened container ID or container name. The output is exported to STDOUT and can be redirected to a tar file. +Stream to a file instead of STDOUT by using **-o**. + # OPTIONS **--help** Print usage statement +**-o**, **--output**="" + Write to a file instead of STDOUT # EXAMPLES Export the contents of the container called angry_bell to a tar file -called test.tar: +called angry_bell.tar: - # docker export angry_bell > test.tar - # ls *.tar - test.tar + # docker export angry_bell > angry_bell.tar + # docker export --output=angry_bell-latest.tar angry_bell + # ls -sh angry_bell.tar + 321M angry_bell.tar + # ls -sh angry_bell-latest.tar + 321M angry_bell-latest.tar # See also **docker-import(1)** to create an empty filesystem image @@ -34,3 +41,4 @@ and import the contents of the tarball into it, then optionally tag it. April 2014, Originally compiled by William Henry (whenry at redhat dot com) based on docker.com source material and internal work. June 2014, updated by Sven Dowideit +January 2015, updated by Joseph Kern (josephakern at gmail dot com) diff --git a/docs/man/docker-images.1.md index 16fad991ce..c5151f1107 100644 --- a/docs/man/docker-images.1.md +++ b/docs/man/docker-images.1.md @@ -8,6 +8,7 @@ docker-images - List images **docker images** [**--help**] [**-a**|**--all**[=*false*]] +[**--digests**[=*false*]] [**-f**|**--filter**[=*[]*]] [**--no-trunc**[=*false*]] [**-q**|**--quiet**[=*false*]] @@ -33,8 +34,11 @@ versions. **-a**, **--all**=*true*|*false* Show all images (by default filter out the intermediate image layers). 
The default is *false*. +**--digests**=*true*|*false* + Show image digests. The default is *false*. + **-f**, **--filter**=[] - Provide filter values (i.e., 'dangling=true') + Filters the output. The dangling=true filter finds unused images. The label=com.foo filter finds images with a com.foo label of any value, while label=com.foo=amd64 finds only images whose com.foo label has the value amd64. **--help** Print usage statement diff --git a/docs/man/docker-import.1.md b/docs/man/docker-import.1.md index 3f9b8bb3e4..6b3899b6a7 100644 --- a/docs/man/docker-import.1.md +++ b/docs/man/docker-import.1.md @@ -13,7 +13,7 @@ URL|- [REPOSITORY[:TAG]] # OPTIONS **-c**, **--change**=[] Apply specified Dockerfile instructions while importing the image - Supported Dockerfile instructions: CMD, ENTRYPOINT, ENV, EXPOSE, ONBUILD, USER, VOLUME, WORKDIR + Supported Dockerfile instructions: `ADD`|`CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`FROM`|`MAINTAINER`|`RUN`|`USER`|`LABEL`|`VOLUME`|`WORKDIR`|`COPY` # DESCRIPTION Create a new filesystem image from the contents of a tarball (`.tar`, diff --git a/docs/man/docker-inspect.1.md b/docs/man/docker-inspect.1.md index 23ec6bedef..85f6730004 100644 --- a/docs/man/docker-inspect.1.md +++ b/docs/man/docker-inspect.1.md @@ -83,6 +83,11 @@ To get information on a container use it's ID or instance name: "Ghost": false }, "Image": "df53773a4390e25936f9fd3739e0c0e60a62d024ea7b669282b27e65ae8458e6", + "Labels": { + "com.example.vendor": "Acme", + "com.example.license": "GPL", + "com.example.version": "1.0" + }, "NetworkSettings": { "IPAddress": "172.17.0.2", "IPPrefixLen": 16, diff --git a/docs/man/docker-logs.1.md b/docs/man/docker-logs.1.md index d55e8d8365..01a15f54dc 100644 @@ -22,6 +22,8 @@ The **docker logs --follow** command combines commands **docker logs** and **docker attach**.
It will first return all logs from the beginning and then continue streaming new output from the container’s stdout and stderr. +**Warning**: This command works only with the **json-file** logging driver. + # OPTIONS **--help** Print usage statement diff --git a/docs/man/docker-ps.1.md b/docs/man/docker-ps.1.md index bd1e04d813..783f4621f6 100644 --- a/docs/man/docker-ps.1.md +++ b/docs/man/docker-ps.1.md @@ -36,6 +36,7 @@ the running containers. **-f**, **--filter**=[] Provide filter values. Valid filters: exited= - containers with exit code of + label= or label== status=(restarting|running|paused|exited) name= - container's name id= - container's ID diff --git a/docs/man/docker-run.1.md b/docs/man/docker-run.1.md index 7dd69841f1..ef2c9061ba 100644 --- a/docs/man/docker-run.1.md +++ b/docs/man/docker-run.1.md @@ -12,7 +12,7 @@ docker-run - Run a command in a new container [**--cap-add**[=*[]*]] [**--cap-drop**[=*[]*]] [**--cidfile**[=*CIDFILE*]] -[**--cpuset**[=*CPUSET*]] +[**--cpuset-cpus**[=*CPUSET-CPUS*]] [**-d**|**--detach**[=*false*]] [**--device**[=*[]*]] [**--dns-search**[=*[]*]] @@ -25,10 +25,13 @@ docker-run - Run a command in a new container [**--help**] [**-i**|**--interactive**[=*false*]] [**--ipc**[=*IPC*]] +[**-l**|**--label**[=*[]*]] +[**--label-file**[=*[]*]] [**--link**[=*[]*]] [**--lxc-conf**[=*[]*]] +[**--log-driver**[=*[]*]] [**-m**|**--memory**[=*MEMORY*]] -[**--memory-swap**[=*MEMORY-SWAP]] +[**--memory-swap**[=*MEMORY-SWAP*]] [**--mac-address**[=*MAC-ADDRESS*]] [**--name**[=*NAME*]] [**--net**[=*"bridge"*]] @@ -46,6 +49,7 @@ docker-run - Run a command in a new container [**-v**|**--volume**[=*[]*]] [**--volumes-from**[=*[]*]] [**-w**|**--workdir**[=*WORKDIR*]] +[**--cgroup-parent**[=*CGROUP-PATH*]] IMAGE [COMMAND] [ARG...] # DESCRIPTION @@ -82,24 +86,38 @@ option can be set multiple times. **-c**, **--cpu-shares**=0 CPU shares (relative weight) - You can increase the priority of a container -with the -c option.
By default, all containers run at the same priority and get -the same proportion of CPU cycles, but you can tell the kernel to give more -shares of CPU time to one or more containers when you start them via **docker -run**. + By default, all containers get the same proportion of CPU cycles. This proportion +can be modified by changing the container's CPU share weighting relative +to the weighting of all other running containers. -The flag `-c` or `--cpu-shares` with value 0 indicates that the running -container has access to all 1024 (default) CPU shares. However, this value -can be modified to run a container with a different priority or different -proportion of CPU cycles. +To modify the proportion from the default of 1024, use the **-c** or **--cpu-shares** +flag to set the weighting to 2 or higher. -E.g., If we start three {C0, C1, C2} containers with default values -(`-c` OR `--cpu-shares` = 0) and one {C3} with (`-c` or `--cpu-shares`=512) -then C0, C1, and C2 would have access to 100% CPU shares (1024) and C3 would -only have access to 50% CPU shares (512). In the context of a time-sliced OS -with time quantum set as 100 milliseconds, containers C0, C1, and C2 will run -for full-time quantum, and container C3 will run for half-time quantum i.e 50 -milliseconds. +The proportion will only apply when CPU-intensive processes are running. +When tasks in one container are idle, other containers can use the +left-over CPU time. The actual amount of CPU time will vary depending on +the number of containers running on the system. + +For example, consider three containers: one has a cpu-share of 1024 and +two others have a cpu-share setting of 512. When processes in all three +containers attempt to use 100% of CPU, the first container would receive +50% of the total CPU time. If you add a fourth container with a cpu-share +of 1024, the first container only gets 33% of the CPU. The remaining containers +receive 16.5%, 16.5% and 33% of the CPU.
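The percentages above are just each container's share divided by the sum of all shares (integer rounding makes 512/3072 come out at 16% below); a quick sketch of the arithmetic, with the share values used in the example:

```shell
#!/bin/sh
# Three containers: C0 has 1024 CPU shares, C1 and C2 have 512 each.
total=$((1024 + 512 + 512))
echo "C0: $((1024 * 100 / total))%"     # C0: 50%

# Adding a fourth container with 1024 shares dilutes everyone's slice.
total=$((total + 1024))
echo "C0: $((1024 * 100 / total))%"     # C0: 33%
echo "C1: $((512 * 100 / total))%"      # C1: 16%
```

Remember these are relative weights, not hard caps; they only bite when containers actually contend for the CPU.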
+ +On a multi-core system, the shares of CPU time are distributed over all CPU +cores. Even if a container is limited to less than 100% of CPU time, it can +use 100% of each individual CPU core. + +For example, consider a system with more than three cores. If you start one +container **{C0}** with **-c=512** running one process, and another container +**{C1}** with **-c=1024** running two processes, this can result in the following +division of CPU shares: + + PID container CPU CPU share + 100 {C0} 0 100% of CPU0 + 101 {C1} 1 100% of CPU1 + 102 {C1} 2 100% of CPU2 **--cap-add**=[] Add Linux capabilities @@ -107,10 +125,13 @@ milliseconds. **--cap-drop**=[] Drop Linux capabilities +**--cgroup-parent**="" + Path to cgroups under which the cgroup for the container will be created. If the path is not absolute, the path is considered to be relative to the cgroups path of the init process. Cgroups will be created if they do not already exist. + **--cidfile**="" Write the container ID to the file -**--cpuset**="" +**--cpuset-cpus**="" CPUs in which to allow execution (0-3, 0,1) **-d**, **--detach**=*true*|*false* @@ -183,6 +204,12 @@ ENTRYPOINT. 'container:': reuses another container shared memory, semaphores and message queues 'host': use the host shared memory,semaphores and message queues inside the container. Note: the host mode gives the container full access to local shared memory and is therefore considered insecure. +**-l**, **--label**=[] + Set metadata on the container (e.g., --label com.example.key=value) + +**--label-file**=[] + Read in a line delimited file of labels + **--link**=[] Add link to another container in the form of :alias @@ -195,6 +222,10 @@ which interface and port to use. **--lxc-conf**=[] (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1" +**--log-driver**="|*json-file*|*none*" + Logging driver for container. Default is defined by daemon `--log-driver` flag. 
+ **Warning**: the `docker logs` command works only with the `json-file` logging driver. + **-m**, **--memory**="" Memory limit (format: , where unit = b, k, m or g) @@ -244,9 +275,10 @@ and foreground Docker containers. When set to true publish all exposed ports to the host interfaces. The default is false. If the operator uses -P (or -p) then Docker will make the exposed port accessible on the host and the ports will be available to any -client that can reach the host. When using -P, Docker will bind the exposed -ports to a random port on the host between 49153 and 65535. To find the -mapping between the host ports and the exposed ports, use **docker port**. +client that can reach the host. When using -P, Docker will bind any exposed +port to a random port on the host within an *ephemeral port range* defined by +`/proc/sys/net/ipv4/ip_local_port_range`. To find the mapping between the host +ports and the exposed ports, use `docker port`. **-p**, **--publish**=[] Publish a container's port, or range of ports, to the host. @@ -274,15 +306,15 @@ allow the container nearly all the same access to the host as processes running outside of a container on the host. **--read-only**=*true*|*false* - Mount the container's root filesystem as read only. + Mount the container's root filesystem as read only. - By default a container will have its root filesystem writable allowing processes + By default a container will have its root filesystem writable allowing processes to write files anywhere. By specifying the `--read-only` flag the container will have its root filesystem mounted as read only prohibiting any writes. -**--restart**="" +**--restart**="no" Restart policy to apply when a container exits (no, on-failure[:max-retry], always) - + **--rm**=*true*|*false* Automatically remove the container when it exits (incompatible with -d). The default is *false*. @@ -325,16 +357,20 @@ read-write. See examples.
**--volumes-from**=[] Mount volumes from the specified container(s) - Will mount volumes from the specified container identified by container-id. -Once a volume is mounted in a one container it can be shared with other -containers using the **--volumes-from** option when running those other -containers. The volumes can be shared even if the original container with the -mount is not running. + Mounts already mounted volumes from a source container onto another + container. You must supply the source's container-id. To share + a volume, use the **--volumes-from** option when running + the target container. You can share volumes even if the source container + is not running. - The container ID may be optionally suffixed with :ro or -:rw to mount the volumes in read-only or read-write mode, respectively. By -default, the volumes are mounted in the same mode (read write or read only) as -the reference container. + By default, Docker mounts the volumes in the same mode (read-write or + read-only) as they are mounted in the source container. Optionally, you + can change this by suffixing the container-id with either the `:ro` or + `:rw` keyword. + + If the location of the volume from the source container overlaps with + data residing on a target container, then the volume hides + that data on the target. **-w**, **--workdir**="" Working directory inside the container diff --git a/docs/man/docker-start.1.md b/docs/man/docker-start.1.md index c2f91d2053..523b315594 100644 --- a/docs/man/docker-start.1.md +++ b/docs/man/docker-start.1.md @@ -2,7 +2,7 @@ % Docker Community % JUNE 2014 # NAME -docker-start - Restart a stopped container +docker-start - Start one or more stopped containers # SYNOPSIS **docker start** @@ -13,7 +13,7 @@ CONTAINER [CONTAINER...] # DESCRIPTION -Start a stopped container. +Start one or more stopped containers.
# OPTIONS **-a**, **--attach**=*true*|*false* diff --git a/docs/man/docker.1.md b/docs/man/docker.1.md index 4a6d0cf152..530fa95019 100644 --- a/docs/man/docker.1.md +++ b/docs/man/docker.1.md @@ -23,37 +23,29 @@ its own man page which explain usage and arguments. To see the man page for a command run **man docker **. # OPTIONS -**-D**=*true*|*false* - Enable debug mode. Default is false. - -**--help** +**-h**, **--help** Print usage statement -**-H**, **--host**=[unix:///var/run/docker.sock]: tcp://[host:port] to bind or -unix://[/path/to/socket] to use. - The socket(s) to bind to in daemon mode specified using one or more - tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd. - -**--api-enable-cors**=*true*|*false* - Enable CORS headers in the remote API. Default is false. - **--api-cors-header**="" Set CORS headers in the remote API. Default is cors disabled. Give urls like "http://foo, http://bar, ...". Give "*" to allow all. -**-b**="" +**-b**, **--bridge**="" Attach containers to a pre\-existing network bridge; use 'none' to disable container networking **--bip**="" Use the provided CIDR notation address for the dynamically created bridge (docker0); Mutually exclusive of \-b -**-d**=*true*|*false* +**-D**, **--debug**=*true*|*false* + Enable debug mode. Default is false. + +**-d**, **--daemon**=*true*|*false* Enable daemon mode. Default is false. **--dns**="" Force Docker to use specific DNS servers -**-g**="" - Path to use as the root of the Docker runtime. Default is `/var/lib/docker`. +**-e**, **--exec-driver**="" + Force Docker to use specific exec driver. Default is `native`. **--fixed-cidr**="" IPv4 subnet for fixed IPs (e.g., 10.20.0.0/16); this subnet must be nested in the bridge subnet (which is defined by \-b or \-\-bip) @@ -61,6 +53,18 @@ unix://[/path/to/socket] to use. 
**--fixed-cidr-v6**="" IPv6 subnet for global IPv6 addresses (e.g., 2a00:1450::/64) +**-G**, **--group**="" + Group to assign the unix socket specified by -H when running in daemon mode. + Use '' (the empty string) to disable setting of a group. Default is `docker`. + +**-g**, **--graph**="" + Path to use as the root of the Docker runtime. Default is `/var/lib/docker`. + +**-H**, **--host**=[unix:///var/run/docker.sock]: tcp://[host:port] to bind or +unix://[/path/to/socket] to use. + The socket(s) to bind to in daemon mode specified using one or more + tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd. + **--icc**=*true*|*false* Allow unrestricted inter\-container and Docker daemon host communication. If disabled, containers can still be linked together using **--link** option (see **docker-run(1)**). Default is true. @@ -74,7 +78,7 @@ unix://[/path/to/socket] to use. Enable IP masquerading for bridge's IP range. Default is true. **--iptables**=*true*|*false* - Disable Docker's addition of iptables rules. Default is true. + Enable Docker's addition of iptables rules. Default is true. **--ipv6**=*true*|*false* Enable IPv6 support. Default is false. Docker will create an IPv6-enabled bridge with address fe80::1 which will allow you to create IPv6-enabled containers. Use together with `--fixed-cidr-v6` to provide globally routable IPv6 addresses. IPv6 forwarding will be enabled if not used with `--ip-forward=false`. This may collide with your host's current IPv6 settings. For more information please consult the documentation about "Advanced Networking - IPv6". @@ -85,22 +89,33 @@ unix://[/path/to/socket] to use. **--label**="[]" Set key=value labels to the daemon (displayed in `docker info`) -**--mtu**=VALUE - Set the containers network mtu. Default is `1500`. +**--log-driver**="*json-file*|*none*" + Container's logging driver. Default is `default`. + **Warning**: the `docker logs` command works only with the `json-file` logging driver.
-**-p**="" +**--mtu**=VALUE + Set the containers network mtu. Default is `0`. + +**-p**, **--pidfile**="" Path to use for daemon PID file. Default is `/var/run/docker.pid` **--registry-mirror**=:// Prepend a registry mirror to be used for image pulls. May be specified multiple times. -**-s**="" +**-s**, **--storage-driver**="" Force the Docker runtime to use a specific storage driver. **--storage-opt**=[] Set storage driver options. See STORAGE DRIVER OPTIONS. -**-v**=*true*|*false* +**--tls**=*true*|*false* + Use TLS; implied by --tlsverify. Default is false. + +**--tlsverify**=*true*|*false* + Use TLS and verify the remote (daemon: verify client, client: verify daemon). + Default is false. + +**-v**, **--version**=*true*|*false* Print version information and quit. Default is false. **--selinux-enabled**=*true*|*false* @@ -117,7 +132,7 @@ unix://[/path/to/socket] to use. Create a new image from a container's changes **docker-cp(1)** - Copy files/folders from a container's filesystem to the host at path + Copy files/folders from a container's filesystem to the host **docker-create(1)** Create a new container @@ -201,6 +216,9 @@ inside it) **docker-start(1)** Start a stopped container +**docker-stats(1)** + Display a live stream of one or more containers' resource usage statistics + **docker-stop(1)** Stop a running container diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 76711640c0..49449f7843 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -31,24 +31,24 @@ pages: # Installation: - ['installation/index.md', '**HIDDEN**'] -- ['installation/mac.md', 'Installation', 'Mac OS X'] - ['installation/ubuntulinux.md', 'Installation', 'Ubuntu'] +- ['installation/mac.md', 'Installation', 'Mac OS X'] +- ['installation/windows.md', 'Installation', 'Microsoft Windows'] +- ['installation/amazon.md', 'Installation', 'Amazon EC2'] +- ['installation/archlinux.md', 'Installation', 'Arch Linux'] +- ['installation/binaries.md', 'Installation', 'Binaries'] +- 
['installation/centos.md', 'Installation', 'CentOS'] +- ['installation/cruxlinux.md', 'Installation', 'CRUX Linux'] +- ['installation/debian.md', 'Installation', 'Debian'] +- ['installation/fedora.md', 'Installation', 'Fedora'] +- ['installation/frugalware.md', 'Installation', 'FrugalWare'] +- ['installation/google.md', 'Installation', 'Google Cloud Platform'] +- ['installation/gentoolinux.md', 'Installation', 'Gentoo'] +- ['installation/softlayer.md', 'Installation', 'IBM Softlayer'] +- ['installation/rackspace.md', 'Installation', 'Rackspace Cloud'] - ['installation/rhel.md', 'Installation', 'Red Hat Enterprise Linux'] - ['installation/oracle.md', 'Installation', 'Oracle Linux'] -- ['installation/centos.md', 'Installation', 'CentOS'] -- ['installation/debian.md', 'Installation', 'Debian'] -- ['installation/gentoolinux.md', 'Installation', 'Gentoo'] -- ['installation/google.md', 'Installation', 'Google Cloud Platform'] -- ['installation/rackspace.md', 'Installation', 'Rackspace Cloud'] -- ['installation/amazon.md', 'Installation', 'Amazon EC2'] -- ['installation/softlayer.md', 'Installation', 'IBM Softlayer'] -- ['installation/archlinux.md', 'Installation', 'Arch Linux'] -- ['installation/frugalware.md', 'Installation', 'FrugalWare'] -- ['installation/fedora.md', 'Installation', 'Fedora'] - ['installation/SUSE.md', 'Installation', 'SUSE'] -- ['installation/cruxlinux.md', 'Installation', 'CRUX Linux'] -- ['installation/windows.md', 'Installation', 'Microsoft Windows'] -- ['installation/binaries.md', 'Installation', 'Binaries'] - ['compose/install.md', 'Installation', 'Docker Compose'] # User Guide: @@ -59,6 +59,7 @@ pages: - ['userguide/dockerimages.md', 'User Guide', 'Working with Docker Images' ] - ['userguide/dockerlinks.md', 'User Guide', 'Linking containers together' ] - ['userguide/dockervolumes.md', 'User Guide', 'Managing data in containers' ] +- ['userguide/labels-custom-metadata.md', 'User Guide', 'Apply custom metadata' ] - ['userguide/dockerrepos.md', 
'User Guide', 'Working with Docker Hub' ] - ['userguide/level1.md', '**HIDDEN**' ] - ['userguide/level2.md', '**HIDDEN**' ] @@ -116,7 +117,7 @@ pages: # Reference - ['reference/index.md', '**HIDDEN**'] - ['reference/commandline/index.md', '**HIDDEN**'] -- ['reference/commandline/cli.md', 'Reference', 'Command line'] +- ['reference/commandline/cli.md', 'Reference', 'Docker command line'] - ['reference/builder.md', 'Reference', 'Dockerfile'] - ['faq.md', 'Reference', 'FAQ'] - ['reference/run.md', 'Reference', 'Run Reference'] @@ -169,8 +170,21 @@ pages: - ['terms/filesystem.md', '**HIDDEN**'] - ['terms/image.md', '**HIDDEN**'] -# Contribute: -- ['contributing/index.md', '**HIDDEN**'] -- ['contributing/contributing.md', 'Contribute', 'Contributing'] -- ['contributing/devenvironment.md', 'Contribute', 'Development environment'] -- ['contributing/docs_style-guide.md', 'Contribute', 'Documentation style guide'] + +# Project: +- ['project/index.md', '**HIDDEN**'] +- ['project/who-written-for.md', 'Contributor Guide', 'README first'] +- ['project/software-required.md', 'Contributor Guide', 'Get required software'] +- ['project/set-up-git.md', 'Contributor Guide', 'Configure Git for contributing'] +- ['project/set-up-dev-env.md', 'Contributor Guide', 'Work with a development container'] +- ['project/test-and-docs.md', 'Contributor Guide', 'Run tests and test documentation'] +- ['project/make-a-contribution.md', 'Contributor Guide', 'Understand contribution workflow'] +- ['project/find-an-issue.md', 'Contributor Guide', 'Find an issue'] +- ['project/work-issue.md', 'Contributor Guide', 'Work on an issue'] +- ['project/create-pr.md', 'Contributor Guide', 'Create a pull request'] +- ['project/review-pr.md', 'Contributor Guide', 'Participate in the PR review'] +- ['project/advanced-contributing.md', 'Contributor Guide', 'Advanced contributing'] +- ['project/get-help.md', 'Contributor Guide', 'Where to get help'] +- ['project/coding-style.md', 'Contributor Guide', 'Coding style 
guide'] +- ['project/doc-style.md', 'Contributor Guide', 'Documentation style guide'] + diff --git a/docs/release.sh b/docs/release.sh new file mode 100755 index 0000000000..7e2ed5f112 --- /dev/null +++ b/docs/release.sh @@ -0,0 +1,169 @@ +#!/bin/bash +set -e + +set -o pipefail + +usage() { + cat >&2 <<'EOF' +To publish the Docker documentation you need to set your access_key and secret_key in the docs/awsconfig file +(with the keys in a [profile $AWS_S3_BUCKET] section - so you can have more than one set of keys in your file) +and set the AWS_S3_BUCKET env var to the name of your bucket. + +If you're publishing the current release's documentation, also set `BUILD_ROOT=yes` + +make AWS_S3_BUCKET=docs-stage.docker.com docs-release + +will then push the documentation site to your s3 bucket. + + Note: you can add `OPTIONS=--dryrun` to see what will be done without sending to the server + You can also add NOCACHE=1 to publish without a cache, which is what we do for the master docs. +EOF + exit 1 +} + +create_robots_txt() { + cat > ./sources/robots.txt <<'EOF' +User-agent: * +Disallow: / +EOF +} + +setup_s3() { + # Try creating the bucket. Ignore errors (it might already exist). + echo "create $BUCKET if it does not exist" + aws s3 mb --profile $BUCKET s3://$BUCKET 2>/dev/null || true + + # Check access to the bucket. + echo "test $BUCKET exists" + aws s3 --profile $BUCKET ls s3://$BUCKET + + # Make the bucket accessible through website endpoints. + echo "make $BUCKET accessible as a website" + #aws s3 website s3://$BUCKET --index-document index.html --error-document jsearch/index.html + local s3conf=$(cat s3_website.json | envsubst) + aws s3api --profile $BUCKET put-bucket-website --bucket $BUCKET --website-configuration "$s3conf" +} + +build_current_documentation() { + mkdocs build + cd site/ + gzip -9k -f search_content.json + cd .. 
+} + +upload_current_documentation() { + src=site/ + dst=s3://$BUCKET$1 + + cache=max-age=3600 + if [ "$NOCACHE" ]; then + cache=no-cache + fi + + printf "\nUploading $src to $dst\n" + + # a really complicated way to send only the files we want + # if there are too many in any one set, aws s3 sync seems to fall over with 2 files to go + # versions.html_fragment + include="--recursive --include \"*.$i\" " + run="aws s3 cp $src $dst $OPTIONS --profile $BUCKET --cache-control $cache --acl public-read $include" + printf "\n=====\n$run\n=====\n" + $run + + # Make sure the search_content.json.gz file has the right content-encoding + aws s3 cp --profile $BUCKET --cache-control $cache --content-encoding="gzip" --acl public-read "site/search_content.json.gz" "$dst" +} + +invalidate_cache() { + if [[ -z "$DISTRIBUTION_ID" ]]; then + echo "Skipping Cloudfront cache invalidation" + return + fi + + dst=$1 + + aws configure set preview.cloudfront true + + # Get all the files + # not .md~ files + # replace spaces w %20 so urlencoded + files=( $(find site/ -not -name "*.md*" -type f | sed 's/site//g' | sed 's/ /%20/g') ) + + len=${#files[@]} + last_file=${files[$((len-1))]} + + echo "aws cloudfront create-invalidation --profile $AWS_S3_BUCKET --distribution-id $DISTRIBUTION_ID --invalidation-batch '" > batchfile + echo "{\"Paths\":{\"Quantity\":$len," >> batchfile + echo "\"Items\": [" >> batchfile + + for file in "${files[@]}" ; do + if [[ $file == $last_file ]]; then + comma="" + else + comma="," + fi + echo "\"$dst$file\"$comma" >> batchfile + done + + echo "]}, \"CallerReference\":\"$(date)\"}'" >> batchfile + + sh batchfile +} + +main() { + [ "$AWS_S3_BUCKET" ] || usage + + # Make sure there is an awsconfig file + export AWS_CONFIG_FILE=$(pwd)/awsconfig + [ -f "$AWS_CONFIG_FILE" ] || usage + + # Get the version + VERSION=$(cat VERSION) + + # Disallow pushing dev docs to master + if [ "$AWS_S3_BUCKET" == "docs.docker.com" ] && [ "${VERSION%-dev}" != "$VERSION" ]; then + echo 
"Please do not push '-dev' documentation to docs.docker.com ($VERSION)" + exit 1 + fi + + # Clean version - 1.0.2-dev -> 1.0 + export MAJOR_MINOR="v${VERSION%.*}" + + export BUCKET=$AWS_S3_BUCKET + export AWS_DEFAULT_PROFILE=$BUCKET + + # debug variables + echo "bucket: $BUCKET, full version: $VERSION, major-minor: $MAJOR_MINOR" + echo "cfg file: $AWS_CONFIG_FILE ; profile: $AWS_DEFAULT_PROFILE" + + # create the robots.txt + create_robots_txt + + if [ "$OPTIONS" != "--dryrun" ]; then + setup_s3 + fi + + # Default to only building the version specific docs + # so we don't clobber the latest by accident with old versions + if [ "$BUILD_ROOT" == "yes" ]; then + echo "Building root documentation" + build_current_documentation + + echo "Uploading root documentation" + upload_current_documentation + [ "$NOCACHE" ] || invalidate_cache + fi + + #build again with /v1.0/ prefix + sed -i "s/^site_url:.*/site_url: \/$MAJOR_MINOR\//" mkdocs.yml + echo "Building the /$MAJOR_MINOR/ documentation" + build_current_documentation + + echo "Uploading the documentation" + upload_current_documentation "/$MAJOR_MINOR/" + + # Invalidating cache + [ "$NOCACHE" ] || invalidate_cache "/$MAJOR_MINOR" +} + +main diff --git a/docs/s3_website.json b/docs/s3_website.json index 7e32d99734..490c492ea4 100644 --- a/docs/s3_website.json +++ b/docs/s3_website.json @@ -36,7 +36,10 @@ { "Condition": { "KeyPrefixEquals": "examples/using_supervisord/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "articles/using_supervisord/" } }, { "Condition": { "KeyPrefixEquals": "reference/api/registry_index_spec/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "reference/api/hub_registry_spec/" } }, { "Condition": { "KeyPrefixEquals": "use/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "examples/" } }, - { 
"Condition": { "KeyPrefixEquals": "installation/openSUSE/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "installation/SUSE/" } }, + { "Condition": { "KeyPrefixEquals": "contributing/contributing/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "project/who-written-for/" } }, + { "Condition": { "KeyPrefixEquals": "contributing/devenvironment/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "project/set-up-prereqs/" } }, + { "Condition": { "KeyPrefixEquals": "contributing/docs_style-guide/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "project/doc-style/" } } ] } diff --git a/docs/sources/articles/host_integration.md b/docs/sources/articles/host_integration.md index 89fd2a1f7a..cbcb21a357 100644 --- a/docs/sources/articles/host_integration.md +++ b/docs/sources/articles/host_integration.md @@ -59,12 +59,11 @@ a new service that will be started after the docker daemon service has started. /usr/bin/docker start -a redis_server end script - ### systemd [Unit] Description=Redis container - Author=Me + Requires=docker.service After=docker.service [Service] @@ -74,3 +73,14 @@ a new service that will be started after the docker daemon service has started. [Install] WantedBy=local.target + +If you need to pass options to the redis container (such as `--env`), +then you'll need to use `docker run` rather than `docker start`. This will +create a new container every time the service is started, which will be stopped +and removed when the service is stopped. + + [Service] + ... + ExecStart=/usr/bin/docker run --env foo=bar --name redis_server redis + ExecStop=/usr/bin/docker stop -t 2 redis_server ; /usr/bin/docker rm -f redis_server + ... 
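The version handling in `docs/release.sh` above leans on POSIX parameter expansion; a small sketch of the two expansions it uses (the sample version string is illustrative):

```shell
#!/bin/sh
VERSION=1.0.2-dev

# ${VERSION%.*} strips the last dot-separated component: 1.0.2-dev -> 1.0
MAJOR_MINOR="v${VERSION%.*}"
echo "$MAJOR_MINOR"          # prints v1.0

# ${VERSION%-dev} removes a trailing -dev suffix; if the result differs
# from VERSION, this is a dev build (the check release.sh uses to refuse
# pushing dev docs to docs.docker.com).
if [ "${VERSION%-dev}" != "$VERSION" ]; then
    echo "dev build"
fi
```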
diff --git a/docs/sources/articles/networking.md b/docs/sources/articles/networking.md index e1195e10d1..7247d298d8 100644 --- a/docs/sources/articles/networking.md +++ b/docs/sources/articles/networking.md @@ -183,10 +183,27 @@ Four different options affect container domain name services. only look up `host` but also `host.example.com`. Use `--dns-search=.` if you don't wish to set the search domain. -Note that Docker, in the absence of either of the last two options -above, will make `/etc/resolv.conf` inside of each container look like -the `/etc/resolv.conf` of the host machine where the `docker` daemon is -running. You might wonder what happens when the host machine's +Regarding DNS settings, in the absence of either the `--dns=IP_ADDRESS...` +or the `--dns-search=DOMAIN...` option, Docker makes each container's +`/etc/resolv.conf` look like the `/etc/resolv.conf` of the host machine (where +the `docker` daemon runs). When creating the container's `/etc/resolv.conf`, +the daemon filters out all localhost IP address `nameserver` entries from +the host's original file. + +Filtering is necessary because all localhost addresses on the host are +unreachable from the container's network. After this filtering, if there +are no more `nameserver` entries left in the container's `/etc/resolv.conf` +file, the daemon adds public Google DNS nameservers +(8.8.8.8 and 8.8.4.4) to the container's DNS configuration. If IPv6 is +enabled on the daemon, the public IPv6 Google DNS nameservers will also +be added (2001:4860:4860::8888 and 2001:4860:4860::8844). + +> **Note**: +> If you need access to a host's localhost resolver, you must modify your +> DNS service on the host to listen on a non-localhost address that is +> reachable from within the container. + +You might wonder what happens when the host machine's `/etc/resolv.conf` file changes. The `docker` daemon has a file change notifier active which will watch for changes to the host DNS configuration. 
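The localhost-filtering behavior described above can be sketched in a few lines of shell. This is a simplified stand-in for what the daemon does, not its actual code: only `127.x` addresses are matched here, while the real filter also handles the rest of the localhost range and `::1`:

```shell
#!/bin/sh
# A sample host resolv.conf containing a localhost resolver.
cat > host-resolv.conf <<'EOF'
nameserver 127.0.0.1
nameserver 10.0.2.3
search example.com
EOF

# Drop localhost nameserver entries, as the daemon does when building
# the container's /etc/resolv.conf.
grep -v '^nameserver 127\.' host-resolv.conf > container-resolv.conf

# If no nameserver entries survived, fall back to Google's public DNS,
# mirroring the daemon's behavior described above.
if ! grep -q '^nameserver' container-resolv.conf; then
    printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' >> container-resolv.conf
fi
cat container-resolv.conf
```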
@@ -228,13 +245,11 @@ Whether a container can talk to the world is governed by two factors. Docker will go set `ip_forward` to `1` for you when the server starts up. To check the setting or turn it on manually: - ``` - $ cat /proc/sys/net/ipv4/ip_forward - 0 - $ echo 1 > /proc/sys/net/ipv4/ip_forward - $ cat /proc/sys/net/ipv4/ip_forward - 1 - ``` + $ sysctl net.ipv4.conf.all.forwarding + net.ipv4.conf.all.forwarding = 0 + $ sysctl net.ipv4.conf.all.forwarding=1 + $ sysctl net.ipv4.conf.all.forwarding + net.ipv4.conf.all.forwarding = 1 Many using Docker will want `ip_forward` to be on, to at least make communication *possible* between containers and @@ -370,17 +385,18 @@ to provide special options when invoking `docker run`. These options are covered in more detail in the [Docker User Guide](/userguide/dockerlinks) page. There are two approaches. -First, you can supply `-P` or `--publish-all=true|false` to `docker run` -which is a blanket operation that identifies every port with an `EXPOSE` -line in the image's `Dockerfile` and maps it to a host port somewhere in -the range 49153–65535. This tends to be a bit inconvenient, since you -then have to run other `docker` sub-commands to learn which external -port a given service was mapped to. +First, you can supply `-P` or `--publish-all=true|false` to `docker run` which +is a blanket operation that identifies every port with an `EXPOSE` line in the +image's `Dockerfile` or `--expose ` commandline flag and maps it to a +host port somewhere within an *ephemeral port range*. The `docker port` command +then needs to be used to inspect the created mapping. The *ephemeral port range* +is configured by the `/proc/sys/net/ipv4/ip_local_port_range` kernel parameter, +typically ranging from 32768 to 61000.
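On a Linux host you can read the ephemeral range directly; a sketch, assuming `/proc` is mounted as usual:

```shell
#!/bin/sh
# The kernel exposes the ephemeral range as "low high" on one line.
read low high < /proc/sys/net/ipv4/ip_local_port_range
echo "ephemeral ports: $low-$high"

# A -P published port should land somewhere inside [low, high].
# Widening the range (root only) would look like:
#   sysctl -w net.ipv4.ip_local_port_range="32768 65000"
```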
-More convenient is the `-p SPEC` or `--publish=SPEC` option which lets -you be explicit about exactly which external port on the Docker server — -which can be any port at all, not just those in the 49153-65535 block — -you want mapped to which port in the container. +A mapping can be specified explicitly using the `-p SPEC` or `--publish=SPEC` +option. It lets you choose exactly which port on the Docker server (any port +at all, not just one within the *ephemeral port range*) you want mapped to +which port in the container. Either way, you should be able to peek at what Docker has accomplished in your network stack by examining your NAT tables. @@ -463,9 +479,7 @@ your host's interfaces you should set `accept_ra` to `2`. Otherwise IPv6 enabled forwarding will result in rejecting Router Advertisements. E.g., if you want to configure `eth0` via Router Advertisements you should set: - ``` $ sysctl net.ipv6.conf.eth0.accept_ra=2 - ``` ![](/article-img/ipv6_basic_host_config.svg) @@ -840,10 +854,11 @@ The steps with which Docker configures a container are: 5. Give the container's `eth0` a new IP address from within the bridge's range of network addresses, and set its default route to - the IP address that the Docker host owns on the bridge. If available - the IP address is generated from the MAC address. This prevents ARP - cache invalidation problems, when a new container comes up with an - IP used in the past by another container with another MAC. + the IP address that the Docker host owns on the bridge. The MAC + address is generated from the IP address unless otherwise specified. + This prevents ARP cache invalidation problems, when a new container + comes up with an IP used in the past by another container with another + MAC.
With these steps complete, the container now possesses an `eth0` (virtual) network card and will find itself able to communicate with diff --git a/docs/sources/articles/registry_mirror.md b/docs/sources/articles/registry_mirror.md index a7493e9aec..adc470d713 100644 --- a/docs/sources/articles/registry_mirror.md +++ b/docs/sources/articles/registry_mirror.md @@ -50,7 +50,8 @@ port `5000` and mirrors the content at `registry-1.docker.io`: sudo docker run -p 5000:5000 \ -e STANDALONE=false \ -e MIRROR_SOURCE=https://registry-1.docker.io \ - -e MIRROR_SOURCE_INDEX=https://index.docker.io registry + -e MIRROR_SOURCE_INDEX=https://index.docker.io \ + registry ## Test it out diff --git a/docs/sources/contributing.md b/docs/sources/contributing.md deleted file mode 100644 index 0a1e4fd282..0000000000 --- a/docs/sources/contributing.md +++ /dev/null @@ -1,7 +0,0 @@ -# Contributing - -## Contents: - - - [Contributing to Docker](contributing/) - - [Setting Up a Dev Environment](devenvironment/) - diff --git a/docs/sources/contributing/contributing.md b/docs/sources/contributing/contributing.md deleted file mode 100644 index 850b01ce12..0000000000 --- a/docs/sources/contributing/contributing.md +++ /dev/null @@ -1,24 +0,0 @@ -page_title: Contribution Guidelines -page_description: Contribution guidelines: create issues, conventions, pull requests -page_keywords: contributing, docker, documentation, help, guideline - -# Contributing to Docker - -Want to hack on Docker? Awesome! - -The repository includes [all the instructions you need to get started]( -https://github.com/docker/docker/blob/master/CONTRIBUTING.md). - -The [developer environment Dockerfile]( -https://github.com/docker/docker/blob/master/Dockerfile) -specifies the tools and versions used to test and build Docker. - -If you're making changes to the documentation, see the [README.md]( -https://github.com/docker/docker/blob/master/docs/README.md). 
- -The [documentation environment Dockerfile]( -https://github.com/docker/docker/blob/master/docs/Dockerfile) -specifies the tools and versions used to build the Documentation. - -Further interesting details can be found in the [Packaging hints]( -https://github.com/docker/docker/blob/master/project/PACKAGERS.md). diff --git a/docs/sources/contributing/devenvironment.md b/docs/sources/contributing/devenvironment.md deleted file mode 100644 index c4072c9aa2..0000000000 --- a/docs/sources/contributing/devenvironment.md +++ /dev/null @@ -1,169 +0,0 @@ -page_title: Setting Up a Dev Environment -page_description: Guides on how to contribute to docker -page_keywords: Docker, documentation, developers, contributing, dev environment - -# Setting Up a Dev Environment - -To make it easier to contribute to Docker, we provide a standard -development environment. It is important that the same environment be -used for all tests, builds and releases. The standard development -environment defines all build dependencies: system libraries and -binaries, go environment, go dependencies, etc. - -**Things you need:** - - * Docker - * git - * make - -## Install Docker - -Docker's build environment itself is a Docker container, so the first -step is to install Docker on your system. - -You can follow the [install instructions most relevant to your -system](https://docs.docker.com/installation/). Make sure you -have a working, up-to-date docker installation, then continue to the -next step. - -## Install tools used for this tutorial - -Install `git`; honest, it's very good. You can use -other ways to get the Docker source, but they're not anywhere near as -easy. - -Install `make`. This tutorial uses our base Makefile -to kick off the docker containers in a repeatable and consistent way. -Again, you can do it in other ways but you need to do more work. 
- -## Check out the Source - - $ git clone https://git@github.com/docker/docker - $ cd docker - -To checkout a different revision just use `git checkout` -with the name of branch or revision number. - -## Build the Environment - -This following command builds a development environment using the -`Dockerfile` in the current directory. Essentially, it installs all -the build and runtime dependencies necessary to build and test Docker. -Your first build will take some time to complete. On Linux systems and on Mac -OS X from within the `boot2docker` shell: - - $ make build - -> **Note**: -> On Mac OS X, the Docker make targets such as `build`, `binary`, and `test` -> should **not** be built by the 'root' user. Therefore, you shouldn't use `sudo` when -> running these commands on OS X. -> On Linux, we suggest you add your current user to the `docker` group via -> [these -> instructions](http://docs.docker.com/installation/ubuntulinux/#giving-non-root-access). - -If the build is successful, congratulations! You have produced a clean -build of docker, neatly encapsulated in a standard build environment. - - -## Build the Docker Binary - -To create the Docker binary, run this command: - - $ make binary - -This will create the Docker binary in `./bundles/-dev/binary/`. If you -do not see files in the `./bundles` directory in your host, your `BIND_DIR` -setting is not set quite right. You want to run the following command: - - $ make BIND_DIR=. binary - -If you are on a non-Linux platform, e.g., OSX, you'll want to run `make cross` -or `make BIND_DIR=. cross`. - -### Using your built Docker binary - -The binary is available outside the container in the directory -`./bundles/-dev/binary/`. 
You can swap your -host docker executable with this binary for live testing - for example, -on ubuntu: - - $ sudo service docker stop ; sudo cp $(which docker) $(which docker)_ ; sudo cp ./bundles/-dev/binary/docker--dev $(which docker);sudo service docker start - -> **Note**: -> Its safer to run the tests below before swapping your hosts docker binary. - -## Run the Tests - -To execute the test cases, run this command: - - $ make test - -If the test are successful then the tail of the output should look -something like this - - --- PASS: TestWriteBroadcaster (0.00 seconds) - === RUN TestRaceWriteBroadcaster - --- PASS: TestRaceWriteBroadcaster (0.00 seconds) - === RUN TestTruncIndex - --- PASS: TestTruncIndex (0.00 seconds) - === RUN TestCompareKernelVersion - --- PASS: TestCompareKernelVersion (0.00 seconds) - === RUN TestHumanSize - --- PASS: TestHumanSize (0.00 seconds) - === RUN TestParseHost - --- PASS: TestParseHost (0.00 seconds) - === RUN TestParseRepositoryTag - --- PASS: TestParseRepositoryTag (0.00 seconds) - === RUN TestGetResolvConf - --- PASS: TestGetResolvConf (0.00 seconds) - === RUN TestParseRelease - --- PASS: TestParseRelease (0.00 seconds) - === RUN TestDependencyGraphCircular - --- PASS: TestDependencyGraphCircular (0.00 seconds) - === RUN TestDependencyGraph - --- PASS: TestDependencyGraph (0.00 seconds) - PASS - ok github.com/docker/docker/utils 0.017s - -If `$TESTFLAGS` is set in the environment, it will pass extra arguments -to `go test`. You can use this to select certain tests to run, e.g., - - $ TESTFLAGS='-test.run \^TestBuild\$' make test - -Only those test cases matching the regular expression inside quotation marks will be tested. 
- -If the output indicates "FAIL" and you see errors like this: - - server.go:1302 Error: Insertion failed because database is full: database or disk is full - - utils_test.go:179: Error copy: exit status 1 (cp: writing '/tmp/docker-testd5c9-[...]': No space left on device - -Then you likely don't have enough memory available the test suite. 2GB -is recommended. - -## Use Docker - -You can run an interactive session in the newly built container: - - $ make shell - - # type 'exit' or Ctrl-D to exit - -## Build And View The Documentation - -If you want to read the documentation from a local website, or are -making changes to it, you can build the documentation and then serve it -by: - - $ make docs - - # when its done, you can point your browser to http://yourdockerhost:8000 - # type Ctrl-C to exit - -**Need More Help?** - -If you need more help then hop on to the [#docker-dev IRC -channel](irc://chat.freenode.net#docker-dev) or post a message on the -[Docker developer mailing -list](https://groups.google.com/d/forum/docker-dev). 
diff --git a/docs/sources/faq.md b/docs/sources/faq.md index 8994c377ff..0f64ff9b93 100644 --- a/docs/sources/faq.md +++ b/docs/sources/faq.md @@ -148,7 +148,7 @@ Linux: - Ubuntu 12.04, 13.04 et al - Fedora 19/20+ - RHEL 6.5+ - - Centos 6+ + - CentOS 6+ - Gentoo - ArchLinux - openSUSE 12.3+ diff --git a/docs/sources/installation/MAINTAINERS b/docs/sources/installation/MAINTAINERS deleted file mode 100644 index 6ef08309b0..0000000000 --- a/docs/sources/installation/MAINTAINERS +++ /dev/null @@ -1,3 +0,0 @@ -google.md: Johan Euphrosine (@proppy) -softlayer.md: Phil Jackson (@underscorephil) -joyent.md: Casey Bisson (@misterbisson) diff --git a/docs/sources/installation/centos.md b/docs/sources/installation/centos.md index 06dc8bfee8..862d508988 100644 --- a/docs/sources/installation/centos.md +++ b/docs/sources/installation/centos.md @@ -6,8 +6,8 @@ page_keywords: Docker, Docker documentation, requirements, linux, centos, epel, Docker is supported on the following versions of CentOS: -- [*CentOS 7 (64-bit)*](#installing-docker---centos-7) -- [*CentOS 6.5 (64-bit)*](#installing-docker---centos-6.5) or later +- [*CentOS 7 (64-bit)*](#installing-docker-centos-7) +- [*CentOS 6.5 (64-bit)*](#installing-docker-centos-6.5) or later These instructions are likely work for other binary compatible EL6/EL7 distributions such as Scientific Linux, but they haven't been tested. @@ -46,7 +46,7 @@ start or restart `firewalld` after Docker, you will have to restart the Docker d ## Installing Docker - CentOS-6.5 -For Centos-6.5, the Docker package is part of [Extra Packages +For CentOS-6.5, the Docker package is part of [Extra Packages for Enterprise Linux (EPEL)](https://fedoraproject.org/wiki/EPEL) repository, a community effort to create and maintain additional packages for the RHEL distribution. 
diff --git a/docs/sources/installation/debian.md b/docs/sources/installation/debian.md index 74acd1d42b..4644a2440d 100644 --- a/docs/sources/installation/debian.md +++ b/docs/sources/installation/debian.md @@ -31,7 +31,7 @@ To verify that everything has worked as expected: Which should download the `ubuntu` image, and then start `bash` in a container. -> **Note**: +> **Note**: > If you want to enable memory and swap accounting see > [this](/installation/ubuntulinux/#memory-and-swap-accounting). @@ -48,18 +48,20 @@ which is officially supported by Docker. ### Installation 1. Install Kernel from wheezy-backports - + Add the following line to your `/etc/apt/sources.list` `deb http://http.debian.net/debian wheezy-backports main` then install the `linux-image-amd64` package (note the use of `-t wheezy-backports`) - + $ sudo apt-get update $ sudo apt-get install -t wheezy-backports linux-image-amd64 -2. Install Docker using the get.docker.com script: +2. Restart your system. This is necessary for Debian to use your new kernel. + +3. Install Docker using the get.docker.com script: `curl -sSL https://get.docker.com/ | sh` @@ -78,7 +80,7 @@ run the `docker` client as a user in the `docker` group then you don't need to add `sudo` to all the client commands. From Docker 0.9.0 you can use the `-G` flag to specify an alternative group. -> **Warning**: +> **Warning**: > The `docker` group (or the group specified with the `-G` flag) is > `root`-equivalent; see [*Docker Daemon Attack Surface*]( > /articles/security/#docker-daemon-attack-surface) details. 
diff --git a/docs/sources/installation/images/kitematic.png b/docs/sources/installation/images/kitematic.png new file mode 100644 index 0000000000..5bb221ccf7 Binary files /dev/null and b/docs/sources/installation/images/kitematic.png differ diff --git a/docs/sources/installation/mac.md b/docs/sources/installation/mac.md index dd0ed97564..a06233e0ea 100644 --- a/docs/sources/installation/mac.md +++ b/docs/sources/installation/mac.md @@ -1,5 +1,5 @@ -page_title: Installation on Mac OS X -page_description: Instructions for installing Docker on OS X using boot2docker. +page_title: Installation on Mac OS X +page_description: Instructions for installing Docker on OS X using boot2docker. page_keywords: Docker, Docker documentation, requirements, boot2docker, VirtualBox, SSH, Linux, OSX, OS X, Mac # Install Docker on Mac OS X @@ -17,12 +17,20 @@ completely from RAM, is a small ~24MB download, and boots in approximately 5s. Your Mac must be running OS X 10.6 "Snow Leopard" or newer to run Boot2Docker. +## How do you want to work with Docker? + +You can set up Docker using the command line with Boot2Docker and the guide +below. Alternatively, you may want to try Kitematic, +an application that lets you set up Docker and run containers using a graphical +user interface (GUI). + +Download Kitematic ## Learn the key concepts before installing - + In a Docker installation on Linux, your machine is both the localhost and the Docker host. In networking, localhost means your computer. The Docker host is -the machine on which the containers run. +the machine on which the containers run. On a typical Linux installation, the Docker client, the Docker daemon, and any containers run directly on your localhost. This means you can address ports on a @@ -43,7 +51,7 @@ practice, work through the exercises on this page. ## Install Boot2Docker - + 1. Go to the [boot2docker/osx-installer ]( https://github.com/boot2docker/osx-installer/releases/latest) release page. 
@@ -65,10 +73,10 @@ To run a Docker container, you first start the `boot2docker` VM and then issue `boot2docker` from your Applications folder or from the command line. > **NOTE**: Boot2Docker is designed as a development tool. You should not use -> it in production environments. +> it in production environments. ### From the Applications folder - + When you launch the "Boot2Docker" application from your "Applications" folder, the application: @@ -85,9 +93,9 @@ your setup succeeded is to run the `hello-world` container. $ docker run hello-world Unable to find image 'hello-world:latest' locally - 511136ea3c5a: Pull complete - 31cbccb51277: Pull complete - e45a5af57b00: Pull complete + 511136ea3c5a: Pull complete + 31cbccb51277: Pull complete + e45a5af57b00: Pull complete hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security. Status: Downloaded newer image for hello-world:latest Hello from Docker. @@ -108,7 +116,7 @@ your setup succeeded is to run the `hello-world` container. For more examples and ideas, visit: http://docs.docker.com/userguide/ - + A more typical way to start and stop `boot2docker` is using the command line. ### From your command line @@ -121,7 +129,7 @@ Initialize and run `boot2docker` from the command line, do the following: This creates a new virtual machine. You only need to run this command once. -2. Start the `boot2docker` VM. +2. Start the `boot2docker` VM. $ boot2docker start @@ -134,19 +142,19 @@ Initialize and run `boot2docker` from the command line, do the following: export DOCKER_HOST=tcp://192.168.59.103:2376 export DOCKER_CERT_PATH=/Users/mary/.boot2docker/certs/boot2docker-vm export DOCKER_TLS_VERIFY=1 - + The specific paths and address on your machine will be different. 4. 
To set the environment variables in your shell do the following: - $ $(boot2docker shellinit) - + $ eval "$(boot2docker shellinit)" + You can also set them manually by using the `export` commands `boot2docker` returns. 5. Run the `hello-world` container to verify your setup. - $ docker run hello-world + $ docker run hello-world ## Basic Boot2Docker Exercises @@ -156,7 +164,7 @@ environment initialized. To verify this, run the following commands: $ boot2docker status $ docker version - + Work through this section to try some practical container tasks using `boot2docker` VM. ### Access container ports @@ -164,25 +172,25 @@ Work through this section to try some practical container tasks using `boot2dock 1. Start an NGINX container on the DOCKER_HOST. $ docker run -d -P --name web nginx - + Normally, the `docker run` commands starts a container, runs it, and then exits. The `-d` flag keeps the container running in the background after the `docker run` command completes. The `-P` flag publishes exposed ports from the container to your local host; this lets you access them from your Mac. - + 2. Display your running container with `docker ps` command CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5fb65ff765e9 nginx:latest "nginx -g 'daemon of 3 minutes ago Up 3 minutes 0.0.0.0:49156->443/tcp, 0.0.0.0:49157->80/tcp web - At this point, you can see `nginx` is running as a daemon. + At this point, you can see `nginx` is running as a daemon. 3. View just the container's ports. $ docker port web 443/tcp -> 0.0.0.0:49156 80/tcp -> 0.0.0.0:49157 - + This tells you that the `web` container's port `80` is mapped to port `49157` on your Docker host. @@ -198,7 +206,7 @@ Work through this section to try some practical container tasks using `boot2dock $ boot2docker ip 192.168.59.103 - + 6. 
Enter the `http://192.168.59.103:49157` address in your browser: ![Correct Addressing](/installation/images/good_host.png) @@ -209,7 +217,7 @@ Work through this section to try some practical container tasks using `boot2dock $ docker stop web $ docker rm web - + ### Mount a volume on the container When you start `boot2docker`, it automatically shares your `/Users` directory @@ -219,7 +227,7 @@ The next exercise demonstrates how to do this. 1. Change to your user `$HOME` directory. $ cd $HOME - + 2. Make a new `site` directory. $ mkdir site @@ -231,17 +239,17 @@ The next exercise demonstrates how to do this. 4. Create a new `index.html` file. $ echo "my new site" > index.html - + 5. Start a new `nginx` container and replace the `html` folder with your `site` directory. $ docker run -d -P -v $HOME/site:/usr/share/nginx/html --name mysite nginx - + 6. Get the `mysite` container's port. $ docker port mysite 80/tcp -> 0.0.0.0:49166 443/tcp -> 0.0.0.0:49165 - + 7. Open the site in a browser: ![My site page](/installation/images/newsite_view.png) @@ -249,7 +257,7 @@ The next exercise demonstrates how to do this. 8. Try adding a page to your `$HOME/site` in real time. $ echo "This is cool" > cool.html - + 9. Open the new page in the browser. ![Cool page](/installation/images/cool_view.png) @@ -259,7 +267,7 @@ The next exercise demonstrates how to do this. $ docker stop mysite $ docker rm mysite -## Upgrade Boot2Docker +## Upgrade Boot2Docker If you running Boot2Docker 1.4.1 or greater, you can upgrade Boot2Docker from the command line. If you are running an older version, you should use the @@ -274,7 +282,7 @@ To upgrade from 1.4.1 or greater, you can do this: 2. Stop the `boot2docker` application. $ boot2docker stop - + 3. Run the upgrade command. $ boot2docker upgrade @@ -292,13 +300,13 @@ To upgrade any version of Boot2Docker, do this: 3. Go to the [boot2docker/osx-installer ]( https://github.com/boot2docker/osx-installer/releases/latest) release page. - + 4. 
Download Boot2Docker by clicking `Boot2Docker-x.x.x.pkg` in the "Downloads" section. -2. Install Boot2Docker by double-clicking the package. +2. Install Boot2Docker by double-clicking the package. - The installer places Boot2Docker in your "Applications" folder. + The installer places Boot2Docker in your "Applications" folder. ## Learning more and Acknowledgement @@ -312,4 +320,3 @@ Thanks to Chris Jones whose [blog](http://goo.gl/Be6cCk) inspired me to redo this page. Continue with the [Docker User Guide](/userguide/). - diff --git a/docs/sources/installation/ubuntulinux.md b/docs/sources/installation/ubuntulinux.md index 9261734c26..85a37d768d 100644 --- a/docs/sources/installation/ubuntulinux.md +++ b/docs/sources/installation/ubuntulinux.md @@ -1,395 +1,305 @@ -page_title: Installation on Ubuntu -page_description: Instructions for installing Docker on Ubuntu. +page_title: Installation on Ubuntu +page_description: Instructions for installing Docker on Ubuntu. page_keywords: Docker, Docker documentation, requirements, virtualbox, installation, ubuntu -# Ubuntu +#Ubuntu -Docker is supported on the following versions of Ubuntu: +Docker is supported on these Ubuntu operating systems: - - [*Ubuntu Trusty 14.04 (LTS) (64-bit)*](#ubuntu-trusty-1404-lts-64-bit) - - [*Ubuntu Precise 12.04 (LTS) (64-bit)*](#ubuntu-precise-1204-lts-64-bit) - - [*Ubuntu Raring 13.04 and Saucy 13.10 (64 - bit)*](#ubuntu-raring-1304-and-saucy-1310-64-bit) +- Ubuntu Trusty 14.04 (LTS) +- Ubuntu Precise 12.04 (LTS) +- Ubuntu Saucy 13.10 -Please read [*Docker and UFW*](#docker-and-ufw), if you plan to use [UFW -(Uncomplicated Firewall)](https://help.ubuntu.com/community/UFW) +This page instructs you to install using Docker-managed release packages and +installation mechanisms. Using these packages ensures you get the latest release +of Docker. If you wish to install using Ubuntu-managed packages, consult your +Ubuntu documentation. 
-## Ubuntu Trusty 14.04 (LTS) (64-bit) +##Prerequisites -Ubuntu Trusty comes with a 3.13.0 Linux kernel, and a `docker.io` package which -installs Docker 1.0.1 and all its prerequisites from Ubuntu's repository. +Docker requires a 64-bit installation regardless of your Ubuntu version. +Additionally, your kernel must be 3.10 at minimum. The latest 3.10 minor version +or a newer maintained version are also acceptable. -> **Note**: -> Ubuntu (and Debian) contain a much older KDE3/GNOME2 package called ``docker``, so the -> Ubuntu-maintained package and executable are named ``docker.io``. +Kernels older than 3.10 lack some of the features required to run Docker +containers. These older versions are known to have bugs which cause data loss +and frequently panic under certain conditions. -### Ubuntu-maintained Package Installation +To check your current kernel version, open a terminal and use `uname -r` to display +your kernel version: -To install the latest Ubuntu package (this is **not** the most recent Docker release): + $ uname -r + 3.11.0-15-generic - $ sudo apt-get update - $ sudo apt-get install docker.io +>**Caution** Some Ubuntu OS versions **require a version higher than 3.10** to +>run Docker, see the prerequisites on this page that apply to your Ubuntu +>version. -Then, to enable tab-completion of Docker commands in BASH, either restart BASH or: +###For Trusty 14.04 - $ source /etc/bash_completion.d/docker* +There are no prerequisites for this version. -> **Note**: -> Since the Ubuntu package is quite dated at this point, you may want to use -> the following section to install the most recent release of Docker. -> If you install the Docker version, you do not need to install ``docker.io`` from Ubuntu. +###For Precise 12.04 (LTS) -### Docker-maintained Package Installation +For Ubuntu Precise, Docker requires the 3.13 kernel version. If your kernel +version is older than 3.13, you must upgrade it. 
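A generic way to compare the running kernel against a required minimum (3.13 here, per the Precise prerequisite) is to let `sort -V` order the version strings. This is an illustrative sketch, not an official Docker check:

```shell
# Check whether the running kernel meets a minimum version.
# sort -V orders version strings numerically; if the required version
# sorts first (or ties), the running kernel is new enough.
required=3.13
current=$(uname -r)
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "kernel $current is new enough (>= $required)"
else
    echo "kernel $current is too old; upgrade to >= $required"
fi
```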
Refer to this table to see +which packages are required for your environment: - -If you'd like to try the latest version of Docker:

| Package | Description |
|---------|-------------|
| `linux-image-generic-lts-trusty` | Generic Linux kernel image. This kernel has AUFS built in. This is required to run Docker. |
| `linux-headers-generic-lts-trusty` | Allows packages such as ZFS and VirtualBox guest additions which depend on them. If you didn't install the headers for your existing kernel, then you can skip these headers for the "trusty" kernel. If you're unsure, you should include this package for safety. |
| `xserver-xorg-lts-trusty`<br>`libgl1-mesa-glx-lts-trusty` | Optional in non-graphical environments without Unity/Xorg. Required when running Docker on a machine with a graphical environment. To learn more about the reasons for these packages, read the installation instructions for backported kernels, specifically the LTS Enablement Stack — refer to note 5 under each version. |

-First, check that your APT system can deal with `https` -URLs: the file `/usr/lib/apt/methods/https` -should exist. If it doesn't, you need to install the package -`apt-transport-https`.
  - [ -e /usr/lib/apt/methods/https ] || { - apt-get update - apt-get install apt-transport-https - } +To upgrade your kernel and install the additional packages, do the following: -Then, add the Docker repository key to your local keychain. +1. Open a terminal on your Ubuntu host. - $ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9 +2. Update your package manager. -Add the Docker repository to your apt sources list, update and install -the `lxc-docker` package. + $ sudo apt-get update -*You may receive a warning that the package isn't trusted. Answer yes to -continue installation.* +3. Install both the required and optional packages. - $ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main\ - > /etc/apt/sources.list.d/docker.list" - $ sudo apt-get update - $ sudo apt-get install lxc-docker + $ sudo apt-get install linux-image-generic-lts-trusty -> **Note**: -> -> There is also a simple `curl` script available to help with this process. -> -> $ curl -sSL https://get.docker.com/ubuntu/ | sudo sh + Depending on your environment, you may install more as described in the preceding table. -To verify that everything has worked as expected: +4. Reboot your host. - $ sudo docker run -i -t ubuntu /bin/bash + $ sudo reboot -Which should download the `ubuntu` image, and then start `bash` in a container. +5. After your system reboots, go ahead and [install Docker](#installing-docker-on-ubuntu). -Type `exit` to exit -**Done!**, continue with the [User Guide](/userguide/). +###For Saucy 13.10 (64 bit) +Docker uses AUFS as the default storage backend. If you don't have this +prerequisite installed, Docker's installation process adds it. -## Ubuntu Precise 12.04 (LTS) (64-bit) +##Installing Docker on Ubuntu -This installation path should work at all times. +Make sure you have installed the prerequisites for your Ubuntu version. Then, +install Docker using the following: -### Dependencies +1. 
Log into your Ubuntu installation as a user with `sudo` privileges. -**Linux kernel 3.13** +2. Verify that you have `wget` installed. -For Ubuntu Precise, the currently recommended kernel version is 3.13. -Ubuntu Precise installations with older kernels must be upgraded. The -kernel you'll install when following these steps has AUFS built in. -We also include the generic headers to enable packages that depend on them, -like ZFS and the VirtualBox guest additions. If you didn't install the -headers for your "precise" kernel, then you can skip these headers for the -"trusty" kernel. If you're unsure, you should include the headers for safety. + $ which wget -> **Warning**: -> Kernels 3.8 and 3.11 are no longer supported by Canonical. Systems -> running these kernels need to be updated using the instructions below. -> Running Docker on these unsupported systems isn't supported either. -> These old kernels are no longer patched for security vulnerabilities -> and severe bugs which lead to data loss. + If `wget` isn't installed, install it after updating your package manager: -Please read the installation instructions for backported kernels at -Ubuntu.org to understand why you also need to install the Xorg packages -when running Docker on a machine with a graphical environment like Unity. -[LTS Enablement Stack](https://wiki.ubuntu.com/Kernel/LTSEnablementStack) refer to note 5 under -each version. + $ sudo apt-get update + $ sudo apt-get install wget - # install the backported kernel - $ sudo apt-get update - $ sudo apt-get install linux-image-generic-lts-trusty linux-headers-generic-lts-trusty - - # install the backported kernel and xorg if using Unity/Xorg - $ sudo apt-get install --install-recommends linux-generic-lts-trusty xserver-xorg-lts-trusty libgl1-mesa-glx-lts-trusty +3. Get the latest Docker package. - # reboot - $ sudo reboot + $ wget -qO- https://get.docker.com/ | sh -### Installation + The system prompts you for your `sudo` password. 
Then, it downloads and + installs Docker and its dependencies. -> **Warning**: -> These instructions have changed for 0.6. If you are upgrading from an -> earlier version, you will need to follow them again. +4. Verify `docker` is installed correctly. -Docker is available as a Debian package, which makes installation easy. -**See the** [*Mirrors*](#mirrors) **section below if you are not -in the United States.** Other sources of the Debian packages may be -faster for you to install. + $ sudo docker run hello-world -First, check that your APT system can deal with `https` -URLs: the file `/usr/lib/apt/methods/https` -should exist. If it doesn't, you need to install the package -`apt-transport-https`. + This command downloads a test image and runs it in a container. - [ -e /usr/lib/apt/methods/https ] || { - apt-get update - apt-get install apt-transport-https - } +## Optional Configurations for Docker on Ubuntu -Then, add the Docker repository key to your local keychain. +This section contains optional procedures for configuring your Ubuntu to work +better with Docker. - $ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9 +* [Create a docker group](#create-a-docker-group) +* [Adjust memory and swap accounting](#adjust-memory-and-swap-accounting) +* [Enable UFW forwarding](#enable-ufw-forwarding) +* [Configure a DNS server for use by Docker](#configure-a-dns-server-for-docker) -Add the Docker repository to your apt sources list, update and install -the `lxc-docker` package. +### Create a docker group -*You may receive a warning that the package isn't trusted. Answer yes to -continue installation.* +The `docker` daemon binds to a Unix socket instead of a TCP port. By default +that Unix socket is owned by the user `root` and other users can access it with +`sudo`. For this reason, `docker` daemon always runs as the `root` user. 
- $ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main\ - > /etc/apt/sources.list.d/docker.list" - $ sudo apt-get update - $ sudo apt-get install lxc-docker +To avoid having to use `sudo` when you use the `docker` command, create a Unix +group called `docker` and add users to it. When the `docker` daemon starts, it +makes the ownership of the Unix socket read/writable by the `docker` group. -> **Note**: -> -> There is also a simple `curl` script available to help with this process. -> -> $ curl -sSL https://get.docker.com/ubuntu/ | sudo sh +>**Warning**: The `docker` group is equivalent to the `root` user; see +>[*Docker Daemon Attack Surface*](/articles/security/#docker-daemon-attack-surface) +>for details on how this impacts security in your system. -Now verify that the installation has worked by downloading the -`ubuntu` image and launching a container. +To create the `docker` group and add your user: - $ sudo docker run -i -t ubuntu /bin/bash +1. Log into Ubuntu as a user with `sudo` privileges. -Type `exit` to exit + This procedure assumes you log in as the `ubuntu` user. -**Done!**, continue with the [User Guide](/userguide/). +2. Create the `docker` group and add your user. -## Ubuntu Raring 13.04 and Saucy 13.10 (64 bit) + $ sudo usermod -aG docker ubuntu -These instructions cover both Ubuntu Raring 13.04 and Saucy 13.10. +3. Log out and log back in. -### Dependencies + This ensures your user is running with the correct permissions. -**Optional AUFS filesystem support** +4. Verify your work by running `docker` without `sudo`. -Ubuntu Raring already comes with the 3.8 kernel, so we don't need to -install it. However, not all systems have AUFS filesystem support -enabled. AUFS support is optional as of version 0.7, but it's still -available as a driver and we recommend using it if you can.
+ $ docker run hello-world

-To make sure AUFS is installed, run the following commands:

- $ sudo apt-get update
- $ sudo apt-get install linux-image-extra-`uname -r`
+### Adjust memory and swap accounting

-### Installation
+When users run Docker, they may see these messages when working with an image:

-Docker is available as a Debian package, which makes installation easy.
+ WARNING: Your kernel does not support cgroup swap limit.
+ WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.

-> **Warning**:
-> Please note that these instructions have changed for 0.6. If you are
-> upgrading from an earlier version, you will need to follow them again.
+To prevent these messages, enable memory and swap accounting on your system. To
+enable these on a system using GNU GRUB (GNU GRand Unified Bootloader), do the
+following.

-First add the Docker repository key to your local keychain.
+1. Log into Ubuntu as a user with `sudo` privileges.

- $ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
+2. Edit the `/etc/default/grub` file.

-Add the Docker repository to your apt sources list, update and install
-the `lxc-docker` package.
+3. Set the `GRUB_CMDLINE_LINUX` value as follows:

- $ sudo sh -c "echo deb http://get.docker.com/ubuntu docker main\
- > /etc/apt/sources.list.d/docker.list"
- $ sudo apt-get update
- $ sudo apt-get install lxc-docker
+ GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

-Now verify that the installation has worked by downloading the
-`ubuntu` image and launching a container.
+4. Save and close the file.

- $ sudo docker run -i -t ubuntu /bin/bash
+5. Update GRUB.

-Type `exit` to exit
+ $ sudo update-grub

-**Done!**, now continue with the [User Guide](/userguide/).
+6. Reboot your system.
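+The GRUB edit in steps 2-4 above can also be made non-interactively. The
+sketch below is a hedged example run against a scratch copy of the file, so it
+is safe to execute anywhere; on a real system you would target
+`/etc/default/grub` itself with `sudo`, then finish with `sudo update-grub`
+and a reboot as described above.

```shell
# Work on a scratch copy so this is safe to run anywhere; on a real
# Ubuntu system you would edit /etc/default/grub itself with sudo.
printf 'GRUB_DEFAULT=0\nGRUB_CMDLINE_LINUX=""\n' > grub.tmp

# Step 3: set the kernel parameters that enable memory and swap accounting.
sed -i 's/^GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"/' grub.tmp

# Show the result; on the real file you would now run: sudo update-grub
grep GRUB_CMDLINE_LINUX grub.tmp
```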
-### Upgrade
-To install the latest version of Docker, use the standard
-`apt-get` method:
+### Enable UFW forwarding

- # update your sources list
- $ sudo apt-get update
+If you use [UFW (Uncomplicated Firewall)](https://help.ubuntu.com/community/UFW)
+on the same host where you run Docker, you'll need to do additional configuration.
+Docker uses a bridge to manage container networking. By default, UFW drops all
+forwarding traffic. As a result, for Docker to run when UFW is
+enabled, you must set UFW's forwarding policy appropriately.

- # install the latest
- $ sudo apt-get install lxc-docker
+Also, UFW's default set of rules denies all incoming traffic. If you want to be
+able to reach your containers from another host, then you should also allow
+incoming connections on the Docker port (default `2375`).

-## Giving non-root access
+To configure UFW and allow incoming connections on the Docker port:

-The `docker` daemon always runs as the `root` user, and since Docker
-version 0.5.2, the `docker` daemon binds to a Unix socket instead of a
-TCP port. By default that Unix socket is owned by the user `root`, and
-so, by default, you can access it with `sudo`.
+1. Log into Ubuntu as a user with `sudo` privileges.

-Starting in version 0.5.3, if you (or your Docker installer) create a
-Unix group called `docker` and add users to it, then the `docker` daemon
-will make the ownership of the Unix socket read/writable by the `docker`
-group when the daemon starts. The `docker` daemon must always run as the
-`root` user, but if you run the `docker` client as a user in the
-`docker` group then you don't need to add `sudo` to all the client
-commands. From Docker 0.9.0 you can use the `-G` flag to specify an
-alternative group.
+2. Verify that UFW is installed and enabled.

-> **Warning**:
-> The `docker` group (or the group specified with the `-G` flag) is
-> `root`-equivalent; see [*Docker Daemon Attack Surface*](
-> /articles/security/#docker-daemon-attack-surface) for details.
+ $ sudo ufw status

-**Example:**
+3. Open the `/etc/default/ufw` file for editing.

- # Add the docker group if it doesn't already exist.
- $ sudo groupadd docker
+ $ sudo nano /etc/default/ufw

- # Add the connected user "${USER}" to the docker group.
- # Change the user name to match your preferred user.
- # You may have to logout and log back in again for
- # this to take effect.
- $ sudo gpasswd -a ${USER} docker
+4. Set the `DEFAULT_FORWARD_POLICY` value to:

- # Restart the Docker daemon.
- # If you are in Ubuntu 14.04, use docker.io instead of docker
- $ sudo service docker restart
+ DEFAULT_FORWARD_POLICY="ACCEPT"

-## Memory and Swap Accounting
+5. Save and close the file.

-If you want to enable memory and swap accounting, you must add the
-following command-line parameters to your kernel:
+6. Reload UFW to use the new setting.

- cgroup_enable=memory swapaccount=1
+ $ sudo ufw reload

-On systems using GRUB (which is the default for Ubuntu), you can add
-those parameters by editing `/etc/default/grub` and
-extending `GRUB_CMDLINE_LINUX`. Look for the
-following line:
+7. Allow incoming connections on the Docker port.

- GRUB_CMDLINE_LINUX=""
+ $ sudo ufw allow 2375/tcp

-And replace it by the following one:
+### Configure a DNS server for use by Docker

- GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
+Systems that run Ubuntu or an Ubuntu derivative on the desktop typically use
+`127.0.0.1` as the default `nameserver` in the `/etc/resolv.conf` file.
+NetworkManager also sets up `dnsmasq` to use the real DNS servers of the
+connection and sets up `nameserver 127.0.0.1` in `/etc/resolv.conf`.

-Then run `sudo update-grub`, and reboot.
+When starting containers on desktop machines with these configurations, Docker
+users see this warning:

-These parameters will help you get rid of the following warnings:
+ WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers
+ can't use it.
Using default external servers : [8.8.8.8 8.8.4.4]

- WARNING: Your kernel does not support cgroup swap limit.
- WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.
+The warning occurs because Docker containers can't use the local DNS nameserver.
+Instead, Docker defaults to using an external nameserver.

-## Troubleshooting
+To avoid this warning, you can specify a DNS server for use by Docker
+containers. Or, you can disable `dnsmasq` in NetworkManager. However, disabling
+`dnsmasq` might make DNS resolution slower on some networks.

-On Linux Mint, the `cgroup-lite` and `apparmor` packages are not
-installed by default. Before Docker will work correctly, you will need
-to install this via:
+To specify a DNS server for use by Docker:

- $ sudo apt-get update && sudo apt-get install cgroup-lite apparmor
+1. Log into Ubuntu as a user with `sudo` privileges.

-## Docker and UFW
+2. Open the `/etc/default/docker` file for editing.

-Docker uses a bridge to manage container networking. By default, UFW
-drops all forwarding traffic. As a result you will need to enable UFW
-forwarding:
+ $ sudo nano /etc/default/docker

- $ sudo nano /etc/default/ufw
+3. Add a setting for Docker.

- # Change:
- # DEFAULT_FORWARD_POLICY="DROP"
- # to
- DEFAULT_FORWARD_POLICY="ACCEPT"
+ DOCKER_OPTS="--dns 8.8.8.8"

-Then reload UFW:
+ Replace `8.8.8.8` with a local DNS server such as `192.168.1.1`. You can also
+ specify multiple DNS servers. Separate them with spaces, for example:

- $ sudo ufw reload
+ --dns 8.8.8.8 --dns 192.168.1.1

-UFW's default set of rules denies all incoming traffic. If you want to
-be able to reach your containers from another host then you should allow
-incoming connections on the Docker port (default 2375):
+ >**Warning**: If you're doing this on a laptop which connects to various
+ >networks, make sure to choose a public DNS server.

- $ sudo ufw allow 2375/tcp
+4. Save and close the file.

-## Docker and local DNS server warnings
+5.
Restart the Docker daemon.

-Systems which are running Ubuntu or an Ubuntu derivative on the desktop
-will use 127.0.0.1 as the default nameserver in /etc/resolv.conf.
-NetworkManager sets up dnsmasq to use the real DNS servers of the
-connection and sets up nameserver 127.0.0.1 in /etc/resolv.conf.
+ $ sudo restart docker

-When starting containers on these desktop machines, users will see a
-warning:
-
- WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers can't use it. Using default external servers : [8.8.8.8 8.8.4.4]
+ 
+ 

-This warning is shown because the containers can't use the local DNS
-nameserver and Docker will default to using an external nameserver.
+**Or, as an alternative to the previous procedure,** disable `dnsmasq` in
+NetworkManager (this might slow your network).

-This can be worked around by specifying a DNS server to be used by the
-Docker daemon for the containers:
+1. Open the `/etc/NetworkManager/NetworkManager.conf` file for editing.

- $ sudo nano /etc/default/docker
- ---
- # Add:
- DOCKER_OPTS="--dns 8.8.8.8"
- # 8.8.8.8 could be replaced with a local DNS server, such as 192.168.1.1
- # multiple DNS servers can be specified: --dns 8.8.8.8 --dns 192.168.1.1
+ $ sudo nano /etc/NetworkManager/NetworkManager.conf

-The Docker daemon has to be restarted:
+2. Comment out the `dns=dnsmasq` line:

- $ sudo restart docker
+ # dns=dnsmasq

-> **Warning**:
-> If you're doing this on a laptop which connects to various networks,
-> make sure to choose a public DNS server.
+3. Save and close the file.

-An alternative solution involves disabling dnsmasq in NetworkManager by
-following these steps:
+4. Restart both NetworkManager and Docker.
- $ sudo nano /etc/NetworkManager/NetworkManager.conf
- ----
- # Change:
- dns=dnsmasq
- # to
- #dns=dnsmasq
+ $ sudo restart network-manager
+ $ sudo restart docker

-NetworkManager and Docker need to be restarted afterwards:

- $ sudo restart network-manager
- $ sudo restart docker
+## Upgrade Docker

-> **Warning**: This might make DNS resolution slower on some networks.
+To install the latest version of Docker, run the install script with `wget`:

-## Mirrors
+ $ wget -qO- https://get.docker.com/ | sh

-You should `ping get.docker.com` and compare the
-latency to the following mirrors, and pick whichever one is best for
-you.
-
-### Yandex
-
-[Yandex](http://yandex.ru/) in Russia is mirroring the Docker Debian
-packages, updating every 6 hours.
-Substitute `http://mirror.yandex.ru/mirrors/docker/` for
-`http://get.docker.com/ubuntu` in the instructions above.
-For example:
-
- $ sudo sh -c "echo deb http://mirror.yandex.ru/mirrors/docker/ docker main\
- > /etc/apt/sources.list.d/docker.list"
- $ sudo apt-get update
- $ sudo apt-get install lxc-docker
diff --git a/docs/sources/project.md b/docs/sources/project.md
new file mode 100644
index 0000000000..fc1fe18ed3
--- /dev/null
+++ b/docs/sources/project.md
@@ -0,0 +1,19 @@
+# Project
+
+## Contents:
+
+- [README first](who-written-for.md)
+- [Get the required software](software-required.md)
+- [Configure Git for contributing](set-up-git.md)
+- [Work with a development container](set-up-dev-env.md)
+- [Run tests and test documentation](test-and-docs.md)
+- [Understand contribution workflow](make-a-contribution.md)
+- [Find an issue](find-an-issue.md)
+- [Work on an issue](work-issue.md)
+- [Create a pull request](create-pr.md)
+- [Participate in the PR Review](review-pr.md)
+- [Advanced contributing](advanced-contributing.md)
+- [Where to get help](get-help.md)
+- [Coding style guide](coding-style.md)
+- [Documentation style guide](doc-style.md)
+
diff --git a/docs/sources/project/advanced-contributing.md
b/docs/sources/project/advanced-contributing.md
new file mode 100644
index 0000000000..df5756d9d7
--- /dev/null
+++ b/docs/sources/project/advanced-contributing.md
@@ -0,0 +1,139 @@
+page_title: Advanced contributing
+page_description: Explains workflows for refactor and design proposals
+page_keywords: contribute, project, design, refactor, proposal
+
+# Advanced contributing
+
+In this section, you learn about the more advanced contributions you can make.
+They are advanced because they have a more involved workflow or require greater
+programming experience. Don't be scared off, though; if you like to stretch and
+challenge yourself, this is the place for you.
+
+This section gives generalized instructions for advanced contributions. You'll
+read about the workflow, but there are no specific descriptions of commands.
+Your goal should be to understand the processes described.
+
+At this point, you should have read and worked through the earlier parts of
+the project contributor guide. You should also have made at least one project
+contribution.
+
+## Refactor or cleanup proposal
+
+A refactor or cleanup proposal changes Docker's internal structure without
+altering the external behavior. To make this type of proposal:
+
+1. Fork `docker/docker`.
+
+2. Make your changes in a feature branch.
+
+3. Sync and rebase with `master` as you work.
+
+4. Run the full test suite.
+
+5. Submit your code through a pull request (PR).
+
+ The PR's title should have the format:
+
+ **Cleanup:** _short title_
+
+ If your changes required logic changes, note that in your request.
+
+6. Work through Docker's review process until merge.
+
+
+## Design proposal
+
+A design proposal solves a problem or adds a feature to the Docker software.
+The process for submitting design proposals requires two pull requests, one
+for the design and one for the implementation.
+
+![Simple process](/project/images/proposal.png)
+
+The important thing to notice is that both the design pull request and the
+implementation pull request go through a review. In other words, there is a
+considerable time commitment in a design proposal; so, you might want to pair
+with someone on design work.
+
+The following provides greater detail on the process:
+
+1. Come up with an idea.
+
+ Ideas usually come from limitations users feel working with a product. So,
+ take some time to really use Docker. Try it on different platforms; explore
+ how it works with different web applications. Go to some community events
+ and find out what other users want.
+
+2. Review existing issues and proposals to make sure no other user is proposing a similar idea.
+
+ The design proposals are all online in our GitHub pull requests.
+
+3. Talk to the community about your idea.
+
+ We have lots of community forums
+ where you can get feedback on your idea. Float your idea in a forum or two
+ to get some commentary going on it.
+
+4. Fork `docker/docker` and clone the repo to your local host.
+
+5. Create a new Markdown file in the area you wish to change.
+
+ For example, if you want to redesign our daemon, create a new file under the
+ `daemon/` folder.
+
+6. Name the file descriptively, for example `redesign-daemon-proposal.md`.
+
+7. Write a proposal for your change into the file.
+
+ This is a Markdown file that describes your idea. Your proposal
+ should include information like:
+
+ * Why is this change needed or what are the use cases?
+ * What are the requirements this change should meet?
+ * What are some ways to design/implement this feature?
+ * Which design/implementation do you think is best and why?
+ * What are the risks or limitations of your proposal?
+
+ This is your chance to convince people your idea is sound.
+
+8. Submit your proposal in a pull request to `docker/docker`.
+
+ The title should have the format:
+
+ **Proposal:** _short title_
+
+ The body of the pull request should include a brief summary of your change
+ and then say something like "_See the file for a complete description_".
+
+9. Refine your proposal through review.
+
+ The maintainers and the community review your proposal. You'll need to
+ answer questions and sometimes explain or defend your approach. This is a
+ chance for everyone to both teach and learn.
+
+10. Pull request accepted.
+
+ Your request may also be rejected. Not every idea is a good fit for Docker.
+ Let's assume, though, that your proposal succeeded.
+
+11. Implement your idea.
+
+ Implementation uses all the standard practices of any contribution.
+
+ * fork `docker/docker`
+ * create a feature branch
+ * sync frequently back to master
+ * test as you go and run the full test suite before a PR
+
+ If you run into issues, the community is there to help.
+
+12. When you have a complete implementation, submit a pull request back to `docker/docker`.
+
+13. Review and iterate on your code.
+
+ If you are making a large code change, you can expect greater scrutiny
+ during this phase.
+
+14. Acceptance and merge!
+
diff --git a/docs/sources/project/coding-style.md b/docs/sources/project/coding-style.md
new file mode 100644
index 0000000000..e5b6f5fe9c
--- /dev/null
+++ b/docs/sources/project/coding-style.md
@@ -0,0 +1,93 @@
+page_title: Coding Style Checklist
+page_description: List of guidelines for coding Docker contributions
+page_keywords: change, commit, squash, request, pull request, test, unit test, integration tests, Go, gofmt, LGTM
+
+# Coding Style Checklist
+
+This checklist summarizes the material you worked through in [make a
+code contribution](/project/make-a-contribution) and [advanced
+contributing](/project/advanced-contributing). The checklist applies both to
+program code and to documentation code.
+
+## Change and commit code
+
+* Fork the `docker/docker` repository.
+
+* Make changes on your fork in a feature branch. Name your branch `XXXX-something`
+ where `XXXX` is the issue number you are working on.
+
+* Run `gofmt -s -w file.go` on each changed file before
+ committing your changes. Most editors have plug-ins that do this automatically.
+
+* Update the documentation when creating or modifying features.
+
+* Commits that fix or close an issue should reference it in the commit message,
+ for example `Closes #XXXX` or `Fixes #XXXX`. These mentions automatically
+ close the issue on a merge.
+
+* After every commit, run the test suite and ensure it is passing.
+
+* Sync and rebase frequently as you code to keep up with `docker` master.
+
+* Set your `git` signature and make sure you sign each commit.
+
+* Do not add yourself to the `AUTHORS` file. This file is autogenerated from the
+ Git history.
+
+## Tests and testing
+
+* Submit unit tests for your changes.
+
+* Make use of the built-in Go test framework.
+
+* Use existing Docker test files (`name_test.go`) for inspiration.
+
+* Run the full test suite on your
+ branch before submitting a pull request.
+
+* Run `make docs` to build the documentation and then check it locally.
+
+* Use an online grammar
+ checker or similar to test your documentation changes for clarity,
+ concision, and correctness.
+
+## Pull requests
+
+* Sync and cleanly rebase on top of Docker's `master` without multiple branches
+ mixed into the PR.
+
+* Before the pull request, squash your commits into logical units of work using
+ `git rebase -i` and `git push -f`.
+
+* Include documentation changes in the same commit so that a revert would
+ remove all traces of the feature or fix.
+
+* Reference each issue in your pull request description (`#XXXX`)
+
+## Respond to pull request reviews
+
+* Docker maintainers use LGTM (**l**ooks-**g**ood-**t**o-**m**e) in PR comments
+ to indicate acceptance.
+
+* Code review comments may be added to your pull request.
Discuss, then make
+ the suggested modifications and push additional commits to your feature
+ branch.
+
+* Incorporate changes on your feature branch and push to your fork. This
+ automatically updates your open pull request.
+
+* Post a comment after pushing to alert reviewers to PR changes; pushing a
+ change does not send notifications.
+
+* A change requires LGTMs from an absolute majority of the maintainers of each
+ affected component. For example, if you change `docs/` and `registry/` code,
+ an absolute majority of the `docs/` and the `registry/` maintainers must
+ approve your PR.
+
+## Merges after pull requests
+
+* After a merge, [a master build](https://master.dockerproject.com/) is
+ available almost immediately.
+
+* If you made a documentation change, you can see it at
+ [docs.master.dockerproject.com](http://docs.master.dockerproject.com/).
diff --git a/docs/sources/project/create-pr.md b/docs/sources/project/create-pr.md
new file mode 100644
index 0000000000..84de397090
--- /dev/null
+++ b/docs/sources/project/create-pr.md
@@ -0,0 +1,127 @@
+page_title: Create a pull request (PR)
+page_description: Basic workflow for Docker contributions
+page_keywords: contribute, pull request, review, workflow, white-belt, black-belt, squash, commit
+
+# Create a pull request (PR)
+
+A pull request (PR) sends your changes to the Docker maintainers for review. You
+create a pull request on GitHub. A pull request "pulls" changes from your forked
+repository into the `docker/docker` repository.
+
+You can see the
+list of active pull requests to Docker on GitHub.
+
+## Check your work
+
+Before you create a pull request, check your work.
+
+1. In a terminal window, go to the root of your `docker-fork` repository.
+
+ $ cd ~/repos/docker-fork
+
+2. Checkout your feature branch.
+
+ $ git checkout 11038-fix-rhel-link
+ Already on '11038-fix-rhel-link'
+
+3. Run the full test suite on your branch.
+
+ $ make test
+
+ All the tests should pass.
If they don't, find out why and correct the
+ situation.
+
+4. Optionally, if you modified the documentation, build the documentation:
+
+ $ make docs
+
+5. Commit and push any changes that result from your checks.
+
+## Rebase your branch
+
+Always rebase and squash your commits before making a pull request.
+
+1. Fetch any last-minute changes from `docker/docker`.
+
+ $ git fetch upstream master
+ From github.com:docker/docker
+ * branch master -> FETCH_HEAD
+
+2. Start an interactive rebase.
+
+ $ git rebase -i upstream/master
+
+3. Rebase opens an editor with a list of commits.
+
+ pick 1a79f55 Tweak some of the other text for grammar
+ pick 53e4983 Fix a link
+ pick 3ce07bb Add a new line about RHEL
+
+ If you run into trouble, `git rebase --abort` removes any changes and gets
+ you back to where you started.
+
+4. Replace the `pick` keyword with `squash` on all but the first commit.
+
+ pick 1a79f55 Tweak some of the other text for grammar
+ squash 53e4983 Fix a link
+ squash 3ce07bb Add a new line about RHEL
+
+ After closing the file, `git` opens your editor again to edit the commit
+ message.
+
+5. Edit and save your commit message.
+
+ `git commit -s`
+
+ Make sure your message includes
+
+/* GitHub label styles */
+.gh-label {
+ display: inline-block;
+ padding: 3px 4px;
+ font-size: 11px;
+ font-weight: bold;
+ line-height: 1;
+ color: #fff;
+ border-radius: 2px;
+ box-shadow: inset 0 -1px 0 rgba(0,0,0,0.12);
+}
+
+.gh-label.black-belt { background-color: #000000; color: #ffffff; }
+.gh-label.bug { background-color: #fc2929; color: #ffffff; }
+.gh-label.improvement { background-color: #bfe5bf; color: #2a332a; }
+.gh-label.project-doc { background-color: #207de5; color: #ffffff; }
+.gh-label.white-belt { background-color: #ffffff; color: #333333; }
+
+
+
+
+# Find and claim an issue
+
+On this page, you choose what you want to work on. As a contributor you can work
+on whatever you want.
If you are new to contributing, you should start by +working with our known issues. + +## Understand the issue types + +An existing issue is something reported by a Docker user. As issues come in, +our maintainers triage them. Triage is its own topic. For now, it is important +for you to know that triage includes ranking issues according to difficulty. + +Triaged issues have either a white-belt +or black-belt label. +A white-belt issue is considered +an easier issue. Issues can have more than one label, for example, +bug, +improvement, +project/doc, and so forth. +These other labels are there for filtering purposes but you might also find +them helpful. + + +## Claim a white-belt issue + +In this section, you find and claim an open white-belt issue. + + +1. Go to the `docker/docker` repository. + +2. Click on the "Issues" link. + + A list of the open issues appears. + + ![Open issues](/project/images/issue_list.png) + +3. Look for the white-belt items on the list. + +4. Click on the "labels" dropdown and select white-belt. + + The system filters to show only open white-belt issues. + +5. Open an issue that interests you. + + The comments on the issues can tell you both the problem and the potential + solution. + +6. Make sure that no other user has chosen to work on the issue. + + We don't allow external contributors to assign issues to themselves, so you + need to read the comments to find if a user claimed an issue by saying: + + - "I'd love to give this a try~" + - "I'll work on this!" + - "I'll take this." + + The community is very good about claiming issues explicitly. + +7. When you find an open issue that both interests you and is unclaimed, claim it yourself by adding a comment. + + ![Easy issue](/project/images/easy_issue.png) + + This example uses issue 11038. Your issue # will be different depending on + what you claimed. + +8. Make a note of the issue number; you'll need it later. 
+
+## Sync your fork and create a new branch
+
+If you have followed along in this guide, you forked the `docker/docker`
+repository. Maybe that was an hour ago or a few days ago. In any case, before
+you start working on your issue, sync your repository with the upstream
+`docker/docker` master. Syncing ensures your repository has the latest
+changes.
+
+To sync your repository:
+
+1. Open a terminal on your local host.
+
+2. Change directory to the `docker-fork` root.
+
+ $ cd ~/repos/docker-fork
+
+3. Checkout the master branch.
+
+ $ git checkout master
+ Switched to branch 'master'
+ Your branch is up-to-date with 'origin/master'.
+
+ Recall that `origin/master` is a branch on your remote GitHub repository.
+
+4. Make sure you have the upstream remote `docker/docker` by listing your
+ remotes.
+
+ $ git remote -v
+ origin https://github.com/moxiegirl/docker.git (fetch)
+ origin https://github.com/moxiegirl/docker.git (push)
+ upstream https://github.com/docker/docker.git (fetch)
+ upstream https://github.com/docker/docker.git (push)
+
+ If the `upstream` is missing, add it.
+
+ $ git remote add upstream https://github.com/docker/docker.git
+
+5. Fetch all the changes from the `upstream` remote.
+
+ $ git fetch upstream
+ remote: Counting objects: 141, done.
+ remote: Compressing objects: 100% (29/29), done.
+ remote: Total 141 (delta 52), reused 46 (delta 46), pack-reused 66
+ Receiving objects: 100% (141/141), 112.43 KiB | 0 bytes/s, done.
+ Resolving deltas: 100% (79/79), done.
+ From github.com:docker/docker
+ 9ffdf1e..01d09e4 docs -> upstream/docs
+ 05ba127..ac2521b master -> upstream/master
+
+ This command retrieves all the changes from the branches belonging to the
+ `upstream` remote.
+
+6. Rebase your local master with the `upstream/master`.
+
+ $ git rebase upstream/master
+ First, rewinding head to replay your work on top of it...
+ Fast-forwarded master to upstream/master.
+
+ This command applies all the commits from the upstream branch to your local
+ branch.
+
+7. Check the status of your local branch.
+
+ $ git status
+ On branch master
+ Your branch is ahead of 'origin/master' by 38 commits.
+ (use "git push" to publish your local commits)
+ nothing to commit, working directory clean
+
+ Your local repository now has any changes from the `upstream` remote. You
+ need to push the changes to your own remote fork, which is `origin/master`.
+
+8. Push the rebased master to `origin/master`.
+
+ $ git push origin
+ Username for 'https://github.com': moxiegirl
+ Password for 'https://moxiegirl@github.com':
+ Counting objects: 223, done.
+ Compressing objects: 100% (38/38), done.
+ Writing objects: 100% (69/69), 8.76 KiB | 0 bytes/s, done.
+ Total 69 (delta 53), reused 47 (delta 31)
+ To https://github.com/moxiegirl/docker.git
+ 8e107a9..5035fa1 master -> master
+
+9. Create a new feature branch to work on your issue.
+
+ Your branch name should have the format `XXXX-descriptive` where `XXXX` is
+ the issue number you are working on. For example:
+
+ $ git checkout -b 11038-fix-rhel-link
+ Switched to a new branch '11038-fix-rhel-link'
+
+ Your branch should be up-to-date with the upstream/master. Why? Because you
+ branched off a freshly synced master. Let's check this anyway in the next
+ step.
+
+10. Rebase your branch from upstream/master.
+
+ $ git rebase upstream/master
+ Current branch 11038-fix-rhel-link is up to date.
+
+ At this point, your local branch, your remote repository, and the Docker
+ repository all have identical code. You are ready to make changes for your
+ issue.
+
+
+## Where to go next
+
+At this point, you know what you want to work on and you have a branch to do
+your work in. Go on to the next section to learn [how to work on your
+changes](/project/work-issue/).
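+The sync-and-branch steps above can be condensed into a short script. The
+sketch below is a hedged example: it builds a throwaway `upstream` repository
+and a clone standing in for your fork, so it is safe to execute anywhere. In
+real use, `upstream` is `docker/docker`, `origin` is your fork on GitHub, and
+`11038-fix-rhel-link` is just the example branch name from this guide.

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for docker/docker; one empty commit so the master branch exists.
git init -q -b master "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Stand-in for your fork: cloning sets it up as the "origin" remote.
git clone -q "$tmp/upstream" "$tmp/fork"
cd "$tmp/fork"
git remote add upstream "$tmp/upstream"

# The workflow from the steps above: fetch, rebase, then branch.
git fetch -q upstream
git rebase -q upstream/master
git checkout -q -b 11038-fix-rhel-link
git branch --show-current   # -> 11038-fix-rhel-link
```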
diff --git a/docs/sources/project/get-help.md b/docs/sources/project/get-help.md new file mode 100644 index 0000000000..9c98549c9d --- /dev/null +++ b/docs/sources/project/get-help.md @@ -0,0 +1,147 @@ +page_title: Where to chat or get help +page_description: Describes Docker's communication channels +page_keywords: IRC, Google group, Twitter, blog, Stackoverflow + + + +# Where to chat or get help + +There are several communications channels you can use to chat with Docker +community members and developers. + + + + + + + + + + + + + + + + + + + +
Internet Relay Chat (IRC) + +

+ IRC is a direct line to our most knowledgeable Docker users.
+ Find the #docker and #docker-dev groups on
+ irc.freenode.net. IRC was first created in 1988, so it
+ is a rich chat protocol, but it can overwhelm new users. You can search
+ our chat archives.

+ Read our IRC quickstart guide below for an easy way to get started. +
Google Groups + There are two groups. + Docker-user + is for people using Docker containers. + The docker-dev + group is for contributors and other people contributing to the Docker + project. +
Twitter + You can follow Docker's twitter + to get updates on our products. You can also tweet us questions or just + share blogs or stories. +
Stack Overflow
+ Stack Overflow has over 7,000 Docker questions listed. We regularly
+ monitor Docker questions
+ and so do many other knowledgeable Docker users.
+
+ + +## IRC Quickstart + +IRC can also be overwhelming for new users. This quickstart shows you +the easiest way to connect to IRC. + +1. In your browser open http://webchat.freenode.net + + ![Login screen](/project/images/irc_connect.png) + + +2. Fill out the form. + + + + + + + + + + + + + + +
NicknameThe short name you want to be known as in IRC.
Channels#docker
reCAPTCHAUse the value provided.
+ +3. Click "Connect". + + The system connects you to chat. You'll see a lot of text. At the bottom of + the display is a command line. Just above the command line the system asks + you to register. + + ![Login screen](/project/images/irc_after_login.png) + + +4. In the command line, register your nickname. + + /msg NickServ REGISTER password youremail@example.com + + ![Login screen](/project/images/register_nic.png) + + The IRC system sends an email to the address you + enter. The email contains instructions for completing your registration. + +5. Open your mail client and look for the email. + + ![Login screen](/project/images/register_email.png) + +6. Back in the browser, complete the registration according to the email. + + /msg NickServ VERIFY REGISTER moxiegirl_ acljtppywjnr + +7. Join the `#docker` group using the following command. + + /j #docker + + You can also join the `#docker-dev` group. + + /j #docker-dev + +8. To ask questions to the channel just type messages in the command line. + + ![Login screen](/project/images/irc_chat.png) + +9. To quit, close the browser window. + + +### Tips and learning more about IRC + +Next time you return to log into chat, you'll need to re-enter your password +on the command line using this command: + + /msg NickServ identify + +If you forget or lose your password see the FAQ on +freenode.net to learn how to recover it. + +This quickstart was meant to get you up and into IRC very quickly. If you find +IRC useful there is a lot more to learn. Drupal, another open source project, +actually has +written a lot of good documentation about using IRC for their project +(thanks Drupal!). 
diff --git a/docs/sources/project/glossary.md b/docs/sources/project/glossary.md new file mode 100644 index 0000000000..5324cda153 --- /dev/null +++ b/docs/sources/project/glossary.md @@ -0,0 +1,7 @@ +page_title: Glossary +page_description: tbd +page_keywords: tbd + +## Glossary + +TBD \ No newline at end of file diff --git a/docs/sources/project/images/box.png b/docs/sources/project/images/box.png new file mode 100755 index 0000000000..642385ae68 Binary files /dev/null and b/docs/sources/project/images/box.png differ diff --git a/docs/sources/project/images/branch-sig.png b/docs/sources/project/images/branch-sig.png new file mode 100644 index 0000000000..88be6b1a8b Binary files /dev/null and b/docs/sources/project/images/branch-sig.png differ diff --git a/docs/sources/project/images/checked.png b/docs/sources/project/images/checked.png new file mode 100755 index 0000000000..93ab2be9b3 Binary files /dev/null and b/docs/sources/project/images/checked.png differ diff --git a/docs/sources/project/images/commits_expected.png b/docs/sources/project/images/commits_expected.png new file mode 100644 index 0000000000..d3d8b1e3cf Binary files /dev/null and b/docs/sources/project/images/commits_expected.png differ diff --git a/docs/sources/project/images/contributor-edit.png b/docs/sources/project/images/contributor-edit.png new file mode 100644 index 0000000000..52737d7b46 Binary files /dev/null and b/docs/sources/project/images/contributor-edit.png differ diff --git a/docs/sources/project/images/copy_url.png b/docs/sources/project/images/copy_url.png new file mode 100644 index 0000000000..a715019ed8 Binary files /dev/null and b/docs/sources/project/images/copy_url.png differ diff --git a/docs/sources/project/images/easy_issue.png b/docs/sources/project/images/easy_issue.png new file mode 100644 index 0000000000..ac2ea6879c Binary files /dev/null and b/docs/sources/project/images/easy_issue.png differ diff --git a/docs/sources/project/images/existing_issue.png 
b/docs/sources/project/images/existing_issue.png new file mode 100644 index 0000000000..6757e60bb7 Binary files /dev/null and b/docs/sources/project/images/existing_issue.png differ diff --git a/docs/sources/project/images/existing_issue.snagproj b/docs/sources/project/images/existing_issue.snagproj new file mode 100644 index 0000000000..05ae2b0ccf Binary files /dev/null and b/docs/sources/project/images/existing_issue.snagproj differ diff --git a/docs/sources/project/images/fixes_num.png b/docs/sources/project/images/fixes_num.png new file mode 100644 index 0000000000..df52f27fd9 Binary files /dev/null and b/docs/sources/project/images/fixes_num.png differ diff --git a/docs/sources/project/images/fork_docker.png b/docs/sources/project/images/fork_docker.png new file mode 100644 index 0000000000..f7c557cd4f Binary files /dev/null and b/docs/sources/project/images/fork_docker.png differ diff --git a/docs/sources/project/images/fresh_container.png b/docs/sources/project/images/fresh_container.png new file mode 100644 index 0000000000..7f69f2d3a5 Binary files /dev/null and b/docs/sources/project/images/fresh_container.png differ diff --git a/docs/sources/project/images/give_try.png b/docs/sources/project/images/give_try.png new file mode 100644 index 0000000000..c049527616 Binary files /dev/null and b/docs/sources/project/images/give_try.png differ diff --git a/docs/sources/project/images/gordon.jpeg b/docs/sources/project/images/gordon.jpeg new file mode 100644 index 0000000000..8a0df7d463 Binary files /dev/null and b/docs/sources/project/images/gordon.jpeg differ diff --git a/docs/sources/project/images/in_room.png b/docs/sources/project/images/in_room.png new file mode 100644 index 0000000000..4fdec81b9c Binary files /dev/null and b/docs/sources/project/images/in_room.png differ diff --git a/docs/sources/project/images/irc_after_login.png b/docs/sources/project/images/irc_after_login.png new file mode 100644 index 0000000000..79496c806d Binary files /dev/null and 
b/docs/sources/project/images/irc_after_login.png differ diff --git a/docs/sources/project/images/irc_chat.png b/docs/sources/project/images/irc_chat.png new file mode 100644 index 0000000000..1ab9548067 Binary files /dev/null and b/docs/sources/project/images/irc_chat.png differ diff --git a/docs/sources/project/images/irc_connect.png b/docs/sources/project/images/irc_connect.png new file mode 100644 index 0000000000..f411aabcac Binary files /dev/null and b/docs/sources/project/images/irc_connect.png differ diff --git a/docs/sources/project/images/irc_login.png b/docs/sources/project/images/irc_login.png new file mode 100644 index 0000000000..a7a1dc7eb4 Binary files /dev/null and b/docs/sources/project/images/irc_login.png differ diff --git a/docs/sources/project/images/issue_list.png b/docs/sources/project/images/issue_list.png new file mode 100644 index 0000000000..c0aefdb422 Binary files /dev/null and b/docs/sources/project/images/issue_list.png differ diff --git a/docs/sources/project/images/latest_commits.png b/docs/sources/project/images/latest_commits.png new file mode 100644 index 0000000000..791683a5c5 Binary files /dev/null and b/docs/sources/project/images/latest_commits.png differ diff --git a/docs/sources/project/images/list_example.png b/docs/sources/project/images/list_example.png new file mode 100644 index 0000000000..a306e6e7dd Binary files /dev/null and b/docs/sources/project/images/list_example.png differ diff --git a/docs/sources/project/images/locate_branch.png b/docs/sources/project/images/locate_branch.png new file mode 100644 index 0000000000..8fa02ec454 Binary files /dev/null and b/docs/sources/project/images/locate_branch.png differ diff --git a/docs/sources/project/images/proposal.png b/docs/sources/project/images/proposal.png new file mode 100644 index 0000000000..250781a70d Binary files /dev/null and b/docs/sources/project/images/proposal.png differ diff --git a/docs/sources/project/images/proposal.snagproj 
b/docs/sources/project/images/proposal.snagproj new file mode 100644 index 0000000000..c9ad49d0e7 Binary files /dev/null and b/docs/sources/project/images/proposal.snagproj differ diff --git a/docs/sources/project/images/pull_request_made.png b/docs/sources/project/images/pull_request_made.png new file mode 100644 index 0000000000..a00535bed1 Binary files /dev/null and b/docs/sources/project/images/pull_request_made.png differ diff --git a/docs/sources/project/images/red_notice.png b/docs/sources/project/images/red_notice.png new file mode 100644 index 0000000000..8839723a37 Binary files /dev/null and b/docs/sources/project/images/red_notice.png differ diff --git a/docs/sources/project/images/register_email.png b/docs/sources/project/images/register_email.png new file mode 100644 index 0000000000..8873411e80 Binary files /dev/null and b/docs/sources/project/images/register_email.png differ diff --git a/docs/sources/project/images/register_nic.png b/docs/sources/project/images/register_nic.png new file mode 100644 index 0000000000..16cf05a396 Binary files /dev/null and b/docs/sources/project/images/register_nic.png differ diff --git a/docs/sources/project/images/three_running.png b/docs/sources/project/images/three_running.png new file mode 100644 index 0000000000..a85dc7471e Binary files /dev/null and b/docs/sources/project/images/three_running.png differ diff --git a/docs/sources/project/images/three_terms.png b/docs/sources/project/images/three_terms.png new file mode 100644 index 0000000000..7caa6ac6e3 Binary files /dev/null and b/docs/sources/project/images/three_terms.png differ diff --git a/docs/sources/project/images/to_from_pr.png b/docs/sources/project/images/to_from_pr.png new file mode 100644 index 0000000000..8dd6638e1b Binary files /dev/null and b/docs/sources/project/images/to_from_pr.png differ diff --git a/docs/sources/project/make-a-contribution.md b/docs/sources/project/make-a-contribution.md new file mode 100644 index 0000000000..b6fc4f34fa --- 
/dev/null +++ b/docs/sources/project/make-a-contribution.md @@ -0,0 +1,35 @@ +page_title: Understand how to contribute +page_description: Explains basic workflow for Docker contributions +page_keywords: contribute, maintainers, review, workflow, process + +# Understand how to contribute + +Contributing is a process where you work with Docker maintainers and the +community to improve Docker. The maintainers are experienced contributors +who specialize in one or more Docker components. Maintainers play a big role +in reviewing contributions. + +There is a formal process for contributing. We try to keep our contribution +process simple so you'll want to contribute frequently. + + +## The basic contribution workflow + +In this guide, you work through Docker's basic contribution workflow by fixing a +single *white-belt* issue in the `docker/docker` repository. The workflow +for fixing simple issues looks like this: + +![Simple process](/project/images/existing_issue.png) + +All Docker repositories have code and documentation. You use this same workflow +for either content type. For example, you can find and fix doc or code issues. +Also, you can propose a new Docker feature or propose a new Docker tutorial. + +Some workflow stages do have slight differences for code or documentation +contributions. When you reach that point in the flow, we make sure to tell you. + + +## Where to go next + +Now that you know a little about the contribution process, go to the next section +to [find an issue you want to work on](/project/find-an-issue/). 
diff --git a/docs/sources/project/review-pr.md b/docs/sources/project/review-pr.md new file mode 100644 index 0000000000..44ad84f2a0 --- /dev/null +++ b/docs/sources/project/review-pr.md @@ -0,0 +1,125 @@ +page_title: Participate in the PR Review +page_description: Basic workflow for Docker contributions +page_keywords: contribute, pull request, review, workflow, white-belt, black-belt, squash, commit + + +# Participate in the PR Review + +Creating a pull request is nearly the end of the contribution process. At this +point, your code is reviewed both by our continuous integration (CI) systems and +by our maintainers. + +The CI system is an automated system. The maintainers are human beings who also +work on Docker. You need to understand and work with both the "bots" and the +"beings" to get your contribution reviewed. + + +## How we process your review + +First to review your pull request is Gordon. Gordon is fast. He checks your +pull request (PR) for common problems like a missing signature. If Gordon finds a +problem, he'll send an email through your GitHub user account: + +![Gordon](/project/images/gordon.jpeg) + +Our build bot system starts building your changes while Gordon sends any emails. + +The build system double-checks your work by compiling your code with Docker's master +code. Building includes running the same tests you ran locally. If you forgot +to run tests or missed something in fixing problems, the automated build is our +safety check. + +After Gordon and the bots, the "beings" review your work. Docker maintainers look +at your pull request and comment on it. The shortest comment you might see is +`LGTM` which means **l**ooks-**g**ood-**t**o-**m**e. If you get an `LGTM`, that +is a good thing: you passed that review. + +For complex changes, maintainers may ask you questions or ask you to change +something about your submission. All maintainer comments on a PR go to the +email address associated with your GitHub account.
Any GitHub user who +"participates" in a PR receives an email too. Participating means creating or +commenting on a PR. + +Our maintainers are very experienced Docker users and open source contributors. +So, they value your time and will try to work efficiently with you by keeping +their comments specific and brief. If they ask you to make a change, you'll +need to update your pull request with additional changes. + +## Update an existing pull request + +To update your existing pull request: + +1. Change one or more files in your local `docker-fork` repository. + +2. Commit the change with the `git commit --amend` command. + + $ git commit --amend + + Git opens an editor containing your last commit message. + +3. Adjust your last commit message to reflect this new change. + + Added a new sentence per Aanand's suggestion + + Signed-off-by: Mary Anthony + + # Please enter the commit message for your changes. Lines starting + # with '#' will be ignored, and an empty message aborts the commit. + # On branch 11038-fix-rhel-link + # Your branch is up-to-date with 'origin/11038-fix-rhel-link'. + # + # Changes to be committed: + # modified: docs/sources/installation/mac.md + # modified: docs/sources/installation/rhel.md + +4. Force-push the change to your origin. + + $ git push -f origin + + Because `--amend` rewrote your last commit, a plain `git push` is + rejected; the `-f` flag tells Git to overwrite the branch on your fork. + +5. Open your browser to your pull request on GitHub. + + You should see your pull request now contains your newly pushed code. + +6. Add a comment to your pull request. + + GitHub only notifies PR participants when you comment. For example, you can + mention that you updated your PR. Your comment alerts the maintainers that + you made an update. + +A change requires LGTMs from an absolute majority of an affected component's +maintainers. For example, if you change `docs/` and `registry/` code, an +absolute majority of the `docs/` and the `registry/` maintainers must approve +your PR. Once you get approval, we merge your pull request into Docker's +`master` code branch.
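The update cycle above can be condensed into a quick sketch. One subtlety: `git commit --amend` gives the commit a new SHA, so an already-pushed branch needs a forced push; `--force-with-lease` is a safer variant of `-f` that refuses to overwrite commits someone else pushed in the meantime (the file and branch names here are illustrative):

```shell
# Stage the follow-up fix and fold it into the previous commit,
# keeping the sign-off (-s).
git add docs/sources/installation/rhel.md
git commit --amend -s

# The amended commit has a new SHA, so the remote branch must be
# overwritten; --force-with-lease aborts if the remote moved meanwhile.
git push --force-with-lease origin 11038-fix-rhel-link
```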
+ +## After the merge + +It can take time to see a merged pull request in Docker's official release. +A master build is available almost immediately though. Docker builds and +updates its development binaries after each merge to `master`. + +1. Browse to https://master.dockerproject.com/. + +2. Look for the binary appropriate to your system. + +3. Download and run the binary. + + You might want to run the binary in a container though. This + will keep your local host environment clean. + +4. View any documentation changes at docs.master.dockerproject.com. + +Once you've verified everything merged, feel free to delete your feature branch +from your fork. For information on how to do this, see the GitHub help on +deleting branches. + +## Where to go next + +At this point, you have completed all the basic tasks in our contributors guide. +If you enjoyed contributing, let us know by completing another +white-belt +issue or two. We really appreciate the help. + +If you are very experienced and want to make a major change, go on to +[learn about advanced contributing](/project/advanced-contributing). diff --git a/docs/sources/project/set-up-dev-env.md b/docs/sources/project/set-up-dev-env.md new file mode 100644 index 0000000000..637eef6f58 --- /dev/null +++ b/docs/sources/project/set-up-dev-env.md @@ -0,0 +1,411 @@ +page_title: Work with a development container +page_description: How to use Docker's development environment +page_keywords: development, inception, container, image Dockerfile, dependencies, Go, artifacts + +# Work with a development container + +In this section, you learn to develop like a member of Docker's core team. +The `docker` repository includes a `Dockerfile` at its root. This file defines +Docker's development environment. The `Dockerfile` lists the environment's +dependencies: system libraries and binaries, the Go environment, Go dependencies, +etc. + +Docker's development environment is itself, ultimately, a Docker container.
+You use the `docker` repository and its `Dockerfile` to create a Docker image, +run a Docker container, and develop code in the container. Docker itself builds, +tests, and releases new Docker versions using this container. + +If you followed the procedures that +set up the prerequisites, you should have a fork of the `docker/docker` +repository. You also created a branch called `dry-run-test`. In this section, +you continue working with your fork on this branch. + +## Clean your host of Docker artifacts + +Docker developers run the latest stable release of the Docker software, or +Boot2Docker and Docker if their machine is Mac OS X. They clean their local +hosts of unnecessary Docker artifacts such as stopped containers or unused +images. Cleaning unnecessary artifacts isn't strictly necessary, but it is +good practice, so it is included here. + +To remove unnecessary artifacts: + +1. Verify that you have no unnecessary containers running on your host. + + $ docker ps + + You should see something similar to the following: + +
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
+ + There are no running containers on this host. If you have running but unused + containers, stop and then remove them with the `docker stop` and `docker rm` + commands. + +2. Verify that your host has no dangling images. + + $ docker images + + You should see something similar to the following: + + + + + + + + + +
    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
+ + This host has no images. You may have one or more _dangling_ images. A + dangling image is not used by a running container and is not an ancestor of + another image on your system. A fast way to remove dangling images is + the following: + + $ docker rmi -f $(docker images -q -a -f dangling=true) + + This command uses `docker images` to list all images (`-a` flag) by numeric + IDs (`-q` flag) and filters them to find dangling images (`-f + dangling=true`). Then, the `docker rmi` command forcibly (`-f` flag) removes + the resulting list. To remove just one image, use the `docker rmi ID` + command. + + +## Build an image + +If you followed the last procedure, your host is clean of unnecessary images +and containers. In this section, you build an image from the Docker development +environment. + +1. Open a terminal. + + Mac users, use `boot2docker status` to make sure Boot2Docker is running. You + may need to run `eval "$(boot2docker shellinit)"` to initialize your shell + environment. + +2. Change into the root of your forked repository. + + $ cd ~/repos/docker-fork + +3. Ensure you are on your `dry-run-test` branch. + + $ git checkout dry-run-test + +4. Compile your development environment container into an image. + + $ docker build -t dry-run-test . + + The `docker build` command returns informational messages as it runs. The + first build may take a few minutes to create an image. Using the + instructions in the `Dockerfile`, the build may need to download source and + other images. A successful build returns a final status message similar to + the following: + + Successfully built 676815d59283 + +5. List your Docker images again. + + $ docker images + + You should see something similar to this: + +
    REPOSITORY          TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
    dry-run-test        latest              663fbee70028        About a minute ago
    ubuntu              trusty              2d24f826cb16        2 days ago           188.3 MB
    ubuntu              trusty-20150218.1   2d24f826cb16        2 days ago           188.3 MB
    ubuntu              14.04               2d24f826cb16        2 days ago           188.3 MB
    ubuntu              14.04.2             2d24f826cb16        2 days ago           188.3 MB
    ubuntu              latest              2d24f826cb16        2 days ago           188.3 MB
+ + Locate your new `dry-run-test` image in the list. You should also see a + number of `ubuntu` images. The build process creates these. They are the + ancestors of your new Docker development image. When you next rebuild your + image, the build process reuses these ancestor images if they exist. + + Keeping the ancestor images improves the build performance. When you rebuild + the child image, the build process uses the local ancestors rather than + retrieving them from the Hub. The build process gets new ancestors only if + Docker Hub has updated versions. + +## Start a container and run a test + +At this point, you have created a new Docker development environment image. Now, +you'll use this image to create a Docker container to develop in. Then, you'll +build and run a `docker` binary in your container. + +1. Open two additional terminals on your host. + + At this point, you'll have about three terminals open. + + ![Multiple terminals](/project/images/three_terms.png) + + Mac OS X users, make sure you run `eval "$(boot2docker shellinit)"` in any new + terminals. + +2. In a terminal, create a new container from your `dry-run-test` image. + + $ docker run --privileged --rm -ti dry-run-test /bin/bash + root@5f8630b873fe:/go/src/github.com/docker/docker# + + The command creates a container from your `dry-run-test` image. It opens an + interactive terminal (`-ti`) running a `/bin/bash` shell. The + `--privileged` flag gives the container access to kernel features and device + access. It is this flag that allows you to run a container in a container. + Finally, the `--rm` flag instructs Docker to remove the container when you + exit the `/bin/bash` shell. + + The container includes the source of your image repository in the + `/go/src/github.com/docker/docker` directory. Try listing the contents to + verify they are the same as those of your `docker-fork` repo. + + ![List example](/project/images/list_example.png) + + +3. Investigate your container a bit.
+ + If you do a `go version` you'll find the `go` language is part of the + container. + + root@31ed86e9ddcf:/go/src/github.com/docker/docker# go version + go version go1.4.2 linux/amd64 + + Similarly, if you do a `docker version` you'll find the container + has no `docker` binary. + + root@31ed86e9ddcf:/go/src/github.com/docker/docker# docker version + bash: docker: command not found + + You will create one in the next steps. + +4. From the `/go/src/github.com/docker/docker` directory, make a `docker` binary with the `make.sh` script. + + root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh binary + + You only call `hack/make.sh` to build a binary _inside_ a Docker + development container as you are now. On your host, you'll use `make` + commands (more about this later). + + As it makes the binary, the `make.sh` script reports the build's progress. + When the command completes successfully, you should see the following + output: + + ---> Making bundle: ubuntu (in bundles/1.5.0-dev/ubuntu) + Created package {:path=>"lxc-docker-1.5.0-dev_1.5.0~dev~git20150223.181106.0.1ab0d23_amd64.deb"} + Created package {:path=>"lxc-docker_1.5.0~dev~git20150223.181106.0.1ab0d23_amd64.deb"} + +5. List all the contents of the `binary` directory. + + root@5f8630b873fe:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary/ + docker docker-1.5.0-dev docker-1.5.0-dev.md5 docker-1.5.0-dev.sha256 + + You should see that the `binary` directory, just as it sounds, contains the + binaries you just made. + + +6. Copy the `docker` binary to `/usr/bin` in your container. + + root@5f8630b873fe:/go/src/github.com/docker/docker# cp bundles/1.5.0-dev/binary/docker /usr/bin + +7. Inside your container, check your Docker version. + + root@5f8630b873fe:/go/src/github.com/docker/docker# docker --version + Docker version 1.5.0-dev, build 6e728fb + + Inside the container you are running a development version.
This is the version + on the current branch; it reflects the value of the `VERSION` file at the + root of your `docker-fork` repository. + +8. Start a `docker` daemon running inside your container. + + root@5f8630b873fe:/go/src/github.com/docker/docker# docker -dD + + The `-dD` flag starts the daemon in debug mode; you'll find this useful + when debugging your code. + +9. Bring up one of the terminals on your local host. + + +10. List your containers and look for the container running the `dry-run-test` image. + + $ docker ps + +
    CONTAINER ID        IMAGE                 COMMAND                 CREATED             STATUS              PORTS    NAMES
    474f07652525        dry-run-test:latest   "hack/dind /bin/bash    14 minutes ago      Up 14 minutes                tender_shockley
+ + In this example, the container's name is `tender_shockley`; yours will be + different. + +11. From the terminal, start another shell on your Docker development container. + + $ docker exec -it tender_shockley bash + + At this point, you have two terminals both with a shell open into your + development container. One terminal is running a debug session. The other + terminal is displaying a `bash` prompt. + +12. At the prompt, test the Docker client by running the `hello-world` container. + + root@9337c96e017a:/go/src/github.com/docker/docker# docker run hello-world + + You should see the image load and return. Meanwhile, you + can see the calls made via the debug session in your other terminal. + + ![List example](/project/images/three_running.png) + + +## Restart a container with your source + +At this point, you have experienced the "Docker inception" technique. That is, +you have: + +* built a Docker image from the Docker repository +* created and started a Docker development container from that image +* built a Docker binary inside of your Docker development container +* launched a `docker` daemon using your newly compiled binary +* called the `docker` client to run a `hello-world` container inside + your development container + +When you really get to developing code though, you'll want to iterate code +changes and builds inside the container. For that you need to mount your local +Docker repository source into your Docker container. Try that now. + +1. If you haven't already, exit out of any `bash` shells in your running Docker +container. + + If you have followed this guide exactly, exiting your `bash` shells stops + the running container. You can use the `docker ps` command to verify the + development container is stopped. All of your terminals should be at the + local host prompt. + +2. Choose a terminal and make sure you are in your `docker-fork` repository.
+ + $ pwd + /Users/mary/go/src/github.com/moxiegirl/docker-fork + + Your location will be different because it reflects your environment. + +3. Create a container using `dry-run-test` but this time mount your repository onto the `/go` directory inside the container. + + $ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash + + When you pass `pwd`, the shell resolves it to your current directory. + +4. From inside the container, list your `binary` directory. + + root@074626fc4b43:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary + ls: cannot access binary: No such file or directory + + Your `dry-run-test` image does not retain any of the changes you made inside + the container. This is the expected behavior for a container. + +5. In a fresh terminal on your local host, change to the `docker-fork` root. + + $ cd ~/repos/docker-fork/ + +6. Create a fresh binary but this time use the `make` command. + + $ make BINDDIR=. binary + + The `BINDDIR` flag is only necessary on Mac OS X but it won't hurt to pass + it on the Linux command line. The `make` command, like the `make.sh` script + inside the container, reports its progress. When the make succeeds, it + returns the location of the new binary. + + +7. Back in the terminal running the container, list your `binary` directory. + + root@074626fc4b43:/go/src/github.com/docker/docker# ls bundles/1.5.0-dev/binary + docker docker-1.5.0-dev docker-1.5.0-dev.md5 docker-1.5.0-dev.sha256 + + The compiled binaries created from your repository on your local host are + now available inside your running Docker development container. + +8. Repeat the steps you ran in the previous procedure.
+ + * copy the binary inside the development container using + `cp bundles/1.5.0-dev/binary/docker /usr/bin` + * start `docker -dD` to launch the Docker daemon inside the container + * run `docker ps` on your local host to get the development container's name + * connect to your running container with `docker exec -it container_name bash` + * use the `docker run hello-world` command to create and run a container + inside your development container + +## Where to go next + +Congratulations, you have successfully achieved Docker inception. At this point, +you've set up your development environment and verified almost all the essential +processes you need to contribute. Of course, before you start contributing, +[you'll need to learn one more piece of the development environment, the test framework](/project/test-and-docs/). diff --git a/docs/sources/project/set-up-git.md b/docs/sources/project/set-up-git.md new file mode 100644 index 0000000000..ba42c81006 --- /dev/null +++ b/docs/sources/project/set-up-git.md @@ -0,0 +1,238 @@ +page_title: Configure Git for contributing +page_description: Describes how to set up your local machine and repository +page_keywords: GitHub account, repository, clone, fork, branch, upstream, Git, Go, make, + +# Configure Git for contributing + +Work through this page to configure Git and a repository you'll use throughout +the Contributor Guide. The work you do further in the guide depends on the work +you do here. + +## Fork and clone the Docker code + +Before contributing, you first fork the Docker code repository. A fork copies +a repository at a particular point in time. GitHub tracks for you where a fork +originates. + +As you make contributions, you change your fork's code. When you are ready, +you make a pull request back to the original Docker repository. If you aren't +familiar with this workflow, don't worry, this guide walks you through all the +steps. + +To fork and clone Docker: + +1.
Open a browser and log into GitHub with your account. + +2. Go to the docker/docker repository. + +3. Click the "Fork" button in the upper right corner of the GitHub interface. + + ![Branch Signature](/project/images/fork_docker.png) + + GitHub forks the repository to your GitHub account. The original + `docker/docker` repository becomes a new fork `YOUR_ACCOUNT/docker` under + your account. + +4. Copy your fork's clone URL from GitHub. + + GitHub allows you to use HTTPS or SSH protocols for clones. You can use the + `git` command line or clients like Subversion to clone a repository. + + ![Copy clone URL](/project/images/copy_url.png) + + This guide assumes you are using the HTTPS protocol and the `git` command + line. If you are comfortable with SSH and some other tool, feel free to use + that instead. You'll need to convert what you see in the guide to what is + appropriate to your tool. + +5. Open a terminal window on your local host and change to your home directory. + + $ cd ~ + +6. Create a `repos` directory. + + $ mkdir repos + +7. Change into your `repos` directory. + + $ cd repos + +8. Clone the fork to your local host into a repository called `docker-fork`. + + $ git clone https://github.com/moxiegirl/docker.git docker-fork + + Naming your local repo `docker-fork` should help make these instructions + easier to follow; experienced coders don't typically change the name. + +9. Change directory into your new `docker-fork` directory. + + $ cd docker-fork + + Take a moment to familiarize yourself with the repository's contents. List + the contents. + +## Set your signature and an upstream remote + +When you contribute to Docker, you must certify you agree with the +Developer Certificate of Origin. +You indicate your agreement by signing your `git` commits like this: + + Signed-off-by: Pat Smith + +To create a signature, you configure your username and email address in Git. +You can set these globally or locally on just your `docker-fork` repository.
+You must sign with your real name. We don't accept anonymous contributions or +contributions through pseudonyms. + +As you change code in your fork, you'll want to keep it in sync with the changes +others make in the `docker/docker` repository. To make syncing easier, you'll +also add a _remote_ called `upstream` that points to `docker/docker`. A remote +is just another project version hosted on the internet or a network. + +To configure your username, email, and add a remote: + +1. Change to the root of your `docker-fork` repository. + + $ cd docker-fork + +2. Set your `user.name` for the repository. + + $ git config --local user.name "FirstName LastName" + +3. Set your `user.email` for the repository. + + $ git config --local user.email "emailname@mycompany.com" + +4. Set your local repo to track changes upstream, on the `docker` repository. + + $ git remote add upstream https://github.com/docker/docker.git + +5. Check the result in your `git` configuration. + + $ git config --local -l + core.repositoryformatversion=0 + core.filemode=true + core.bare=false + core.logallrefupdates=true + remote.origin.url=https://github.com/moxiegirl/docker.git + remote.origin.fetch=+refs/heads/*:refs/remotes/origin/* + branch.master.remote=origin + branch.master.merge=refs/heads/master + user.name=Mary Anthony + user.email=mary@docker.com + remote.upstream.url=https://github.com/docker/docker.git + remote.upstream.fetch=+refs/heads/*:refs/remotes/upstream/* + + To list just the remotes, use: + + $ git remote -v + origin https://github.com/moxiegirl/docker.git (fetch) + origin https://github.com/moxiegirl/docker.git (push) + upstream https://github.com/docker/docker.git (fetch) + upstream https://github.com/docker/docker.git (push) + +## Create and push a branch + +As you change code in your fork, you make your changes on a repository branch. +The branch name should reflect what you are working on. In this section, you +create a branch, make a change, and push it up to your fork.
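With the `upstream` remote configured above, keeping your fork current is a short routine. This is a sketch assuming your local work is based on the `master` branch, as in this guide:

```shell
# Fetch the latest commits from the official docker/docker repository.
git fetch upstream

# Replay any local commits on top of the upstream master branch.
git checkout master
git rebase upstream/master

# Push the synced branch to your fork on GitHub.
git push origin master
```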
+
+This branch is just for testing your config for this guide. The changes are
+part of a dry run, so the branch name is going to be dry-run-test. To create
+and push the branch to your fork on GitHub:
+
+1. Open a terminal and go to the root of your `docker-fork`.
+
+        $ cd docker-fork
+
+2. Create a `dry-run-test` branch.
+
+        $ git checkout -b dry-run-test
+
+    This command creates the branch and switches the repository to it.
+
+3. Verify you are in your new branch.
+
+        $ git branch
+        * dry-run-test
+          master
+
+    The current branch has an * (asterisk) marker. So, this result shows you
+    are on the right branch.
+
+4. Create a `TEST.md` file in the repository's root.
+
+        $ touch TEST.md
+
+5. Edit the file and add your email and location.
+
+    ![Add your information](/project/images/contributor-edit.png)
+
+    You can use any text editor you are comfortable with.
+
+6. Save and close the file.
+
+7. Check the status of your branch.
+
+        $ git status
+        On branch dry-run-test
+        Untracked files:
+          (use "git add ..." to include in what will be committed)
+
+            TEST.md
+
+        nothing added to commit but untracked files present (use "git add" to track)
+
+    You've only changed the one file. It is untracked so far by git.
+
+8. Add your file.
+
+        $ git add TEST.md
+
+    That is the only _staged_ file. Staged is a fancy word for work that Git is
+    tracking.
+
+9. Sign and commit your change.
+
+        $ git commit -s -m "Making a dry run test."
+        [dry-run-test 6e728fb] Making a dry run test
+         1 file changed, 1 insertion(+)
+         create mode 100644 TEST.md
+
+    Commit messages should have a short summary sentence of no more than 50
+    characters. Optionally, you can also include a more detailed explanation
+    after the summary. Separate the summary from any explanation with an empty
+    line.
+
+10. Push your changes to GitHub.
+
+        $ git push --set-upstream origin dry-run-test
+        Username for 'https://github.com': moxiegirl
+        Password for 'https://moxiegirl@github.com':
+
+    Git prompts you for your GitHub username and password. Then, the command
+    returns a result.
+
+        Counting objects: 13, done.
+        Compressing objects: 100% (2/2), done.
+        Writing objects: 100% (3/3), 320 bytes | 0 bytes/s, done.
+        Total 3 (delta 1), reused 0 (delta 0)
+        To https://github.com/moxiegirl/docker.git
+         * [new branch]      dry-run-test -> dry-run-test
+        Branch dry-run-test set up to track remote branch dry-run-test from origin.
+
+11. Open your browser to GitHub.
+
+12. Navigate to your Docker fork.
+
+13. Make sure the `dry-run-test` branch exists, that it has your commit, and the
+commit is signed.
+
+    ![Branch Signature](/project/images/branch-sig.png)
+
+## Where to go next
+
+Congratulations, you have finished configuring both your local host environment
+and Git for contributing. In the next section you'll [learn how to set up and
+work in a Docker development container](/project/set-up-dev-env/).
diff --git a/docs/sources/project/software-required.md b/docs/sources/project/software-required.md
new file mode 100644
index 0000000000..476cbbc2ca
--- /dev/null
+++ b/docs/sources/project/software-required.md
@@ -0,0 +1,91 @@
+page_title: Get the required software
+page_description: Describes the software required to contribute to Docker
+page_keywords: GitHub account, repository, Docker, Git, Go, make,
+
+# Get the required software
+
+Before you begin contributing, you must have:
+
+* a GitHub account
+* `git`
+* `make`
+* `docker`
+
+You'll notice that `go`, the language that Docker is written in, is not listed.
+That's because you don't need it installed; Docker's development environment
+provides it for you. You'll learn more about the development environment later.
+
+### Get a GitHub account
+
+To contribute to the Docker project, you will need a GitHub account. A free
+account is fine.
All the Docker project repositories are public and visible to everyone.
+
+You should also have some experience using both the GitHub application and `git`
+on the command line.
+
+### Install git
+
+Install `git` on your local system. You can check if `git` is already on your
+system and properly installed with the following command:
+
+    $ git --version
+
+
+This documentation is written using `git` version 2.2.2. Your version may be
+different depending on your OS.
+
+### Install make
+
+Install `make`. You can check if `make` is on your system with the following
+command:
+
+    $ make -v
+
+This documentation is written using GNU Make 3.81. Your version may be different
+depending on your OS.
+
+### Install or upgrade Docker
+
+If you haven't already, install the Docker software using the
+instructions for your operating system.
+If you have an existing installation, check your version and make sure you have
+the latest Docker.
+
+To check if `docker` is already installed on Linux:
+
+    $ docker --version
+    Docker version 1.5.0, build a8a31ef
+
+On Mac OS X or Windows, you should have installed Boot2Docker, which includes
+Docker. You'll need to verify both Boot2Docker and Docker. This documentation
+was written on OS X using the following versions.
+
+    $ boot2docker version
+    Boot2Docker-cli version: v1.5.0
+    Git commit: ccd9032
+
+    $ docker --version
+    Docker version 1.5.0, build a8a31ef
+
+## Linux users and sudo
+
+This guide assumes you have added your user to the `docker` group on your
+system. To check, list the group's contents:
+
+    $ getent group docker
+    docker:x:999:ubuntu
+
+If the command returns no matches, you have two choices. You can preface this
+guide's `docker` commands with `sudo` as you work. Alternatively, you can add
+your user to the `docker` group as follows:
+
+    $ sudo usermod -aG docker ubuntu
+
+You must log out and back in for this modification to take effect.
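The group check above can also be scripted. This hedged sketch mirrors the `getent` test; it assumes a Unix-like system where `id` is available and that your Docker install uses the conventional `docker` group name:

```shell
# Check whether the current user belongs to the "docker" group.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  docker_group=yes
  echo "docker group member: no sudo needed for docker commands"
else
  docker_group=no
  echo "not a docker group member: prefix docker commands with sudo"
fi
```

Either way the script tells you whether you can follow this guide's `docker` commands as written or need to prepend `sudo`.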
+ + +## Where to go next + +In the next section, you'll [learn how to set up and configure Git for +contributing to Docker](/project/set-up-git/). diff --git a/docs/sources/project/test-and-docs.md b/docs/sources/project/test-and-docs.md new file mode 100644 index 0000000000..d586ea2c3c --- /dev/null +++ b/docs/sources/project/test-and-docs.md @@ -0,0 +1,296 @@ +page_title: Run tests and test documentation +page_description: Describes Docker's testing infrastructure +page_keywords: make test, make docs, Go tests, gofmt, contributing, running tests + +# Run tests and test documentation + +Contributing includes testing your changes. If you change the Docker code, you +may need to add a new test or modify an existing one. Your contribution could +even be adding tests to Docker. For this reason, you need to know a little +about Docker's test infrastructure. + +Many contributors contribute documentation only. Or, a contributor makes a code +contribution that changes how Docker behaves and that change needs +documentation. For these reasons, you also need to know how to build, view, and +test the Docker documentation. + +In this section, you run tests in the `dry-run-test` branch of your Docker +fork. If you have followed along in this guide, you already have this branch. +If you don't have this branch, you can create it or simply use another of your +branches. + +## Understand testing at Docker + +Docker tests use the Go language's test framework. In this framework, files +whose names end in `_test.go` contain test code; you'll find test files like +this throughout the Docker repo. Use these files for inspiration when writing +your own tests. For information on Go's test framework, see Go's testing package +documentation and the go test help. + +You are responsible for _unit testing_ your contribution when you add new or +change existing Docker code. 
A unit test is a piece of code that invokes a
+single, small piece of code (a _unit of work_) to verify the unit works as
+expected.
+
+Depending on your contribution, you may need to add _integration tests_. These
+are tests that combine two or more work units into one component. These work
+units each have unit tests and then, together, integration tests that test the
+interface between the components. The `integration` and `integration-cli`
+directories in the Docker repository contain integration test code.
+
+Testing is its own specialty. If you aren't familiar with testing techniques,
+there is a lot of information available to you on the Web. For now, you should
+understand that the Docker maintainers may ask you to write a new test or
+change an existing one.
+
+### Run tests on your local host
+
+Before submitting any code change, you should run the entire Docker test suite.
+The `Makefile` contains a target for the entire test suite. The target's name
+is simply `test`. The `Makefile` contains several targets for testing:
+
+| Target                 | What this target does                                     |
+|------------------------|-----------------------------------------------------------|
+| `test`                 | Run all the tests.                                        |
+| `test-unit`            | Run just the unit tests.                                  |
+| `test-integration`     | Run just integration tests.                               |
+| `test-integration-cli` | Run the test for the integration command line interface.  |
+| `test-docker-py`       | Run the tests for the Docker API client.                  |
+| `docs-test`            | Run the documentation test build.                         |
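The entries in the table are ordinary `make` targets, so you invoke each one as `make <target>`. The throwaway Makefile below only illustrates that calling pattern: the echoed messages are made up, the real recipes live in `docker/docker`'s own Makefile, and GNU Make 3.82 or later is assumed for `.RECIPEPREFIX`:

```shell
# Illustrative only: a fake Makefile whose "test" target fans out to
# sub-targets, mimicking how you invoke the named targets in the table.
dir=$(mktemp -d)
cd "$dir"
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
test: test-unit test-integration
>@echo "ran full suite"
test-unit:
>@echo "ran unit tests"
test-integration:
>@echo "ran integration tests"
EOF
make test        # runs the prerequisite targets, then the "test" recipe
```

Requesting one target, such as `make test-unit`, runs only that recipe; requesting `test` runs its prerequisites first.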
+
+Run the entire test suite on your current repository:
+
+1. Open a terminal on your local host.
+
+2. Change to the root of your Docker repository.
+
+        $ cd docker-fork
+
+3. Make sure you are in your development branch.
+
+        $ git checkout dry-run-test
+
+4. Run the `make test` command.
+
+        $ make test
+
+    This command does several things. It creates a temporary container for
+    testing. Inside that container, `make`:
+
+    * creates a new binary
+    * cross-compiles all the binaries for the various operating systems
+    * runs all the tests in the system
+
+    It can take several minutes to run all the tests. When they complete
+    successfully, the output concludes with something like this:
+
+
+        [PASSED]: top - sleep process should be listed in privileged mode
+        [PASSED]: version - verify that it works and that the output is properly formatted
+        PASS
+        coverage: 70.8% of statements
+        ---> Making bundle: test-docker-py (in bundles/1.5.0-dev/test-docker-py)
+        +++ exec docker --daemon --debug --host unix:///go/src/github.com/docker/docker/bundles/1.5.0-dev/test-docker-py/docker.sock --storage-driver vfs --exec-driver native --pidfile /go/src/github.com/docker/docker/bundles/1.5.0-dev/test-docker-py/docker.pid
+        .................................................................
+        ----------------------------------------------------------------------
+        Ran 65 tests in 89.266s
+
+
+### Run test targets inside the development container
+
+If you are working inside a Docker development container, you use the
+`hack/make.sh` script to run tests. The `hack/make.sh` script doesn't
+have a single target that runs all the tests. Instead, you provide a single
+command line with multiple targets that does the same thing.
+
+Try this now.
+
+1. Open a terminal and change to the `docker-fork` root.
+
+2. Start a Docker development image.
+
+    If you are following along with this guide, you should have a
+    `dry-run-test` image.
+
+        $ docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash
+
+3. Run the tests using the `hack/make.sh` script.
+
+        root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh dynbinary binary cross test-unit test-integration test-integration-cli test-docker-py
+
+    The tests run just as they did within your local host.
+
+
+Of course, you can also run a subset of these targets. For example, to run
+just the unit tests:
+
+    root@5f8630b873fe:/go/src/github.com/docker/docker# hack/make.sh dynbinary binary cross test-unit
+
+Most test targets require that you build these precursor targets first:
+`dynbinary binary cross`
+
+
+## Running individual or multiple named tests
+
+You can use the `TESTFLAGS` environment variable to run a single test. The
+flag's value is passed as arguments to the `go test` command. For example, from
+your local host you can run the `TestBuild` test with this command:
+
+    $ TESTFLAGS='-test.run ^TestBuild$' make test
+
+To run the same test inside your Docker development container, you do this:
+
+    root@5f8630b873fe:/go/src/github.com/docker/docker# TESTFLAGS='-run ^TestBuild$' hack/make.sh
+
+## If tests under Boot2Docker fail due to space errors
+
+Running the tests requires about 2GB of memory. If you are running your
+container on bare metal, that is, you are not running with Boot2Docker, your
+Docker development container is able to take the memory it requires directly
+from your local host.
+
+If you are running Docker using Boot2Docker, the VM uses 2048MB by default.
+This means you can exceed the memory of your VM running tests in a Boot2Docker
+environment.
When the test suite runs out of memory, it returns errors similar
+to the following:
+
+    server.go:1302 Error: Insertion failed because database is full: database or
+    disk is full
+
+    utils_test.go:179: Error copy: exit status 1 (cp: writing
+    '/tmp/docker-testd5c9-[...]': No space left on device
+
+To increase the memory on your VM, you need to reinitialize the Boot2Docker VM
+with new memory settings.
+
+1. Stop all running containers.
+
+2. View the current memory setting.
+
+        $ boot2docker info
+        {
+            "Name": "boot2docker-vm",
+            "UUID": "491736fd-4075-4be7-a6f5-1d4cdcf2cc74",
+            "Iso": "/Users/mary/.boot2docker/boot2docker.iso",
+            "State": "running",
+            "CPUs": 8,
+            "Memory": 2048,
+            "VRAM": 8,
+            "CfgFile": "/Users/mary/VirtualBox VMs/boot2docker-vm/boot2docker-vm.vbox",
+            "BaseFolder": "/Users/mary/VirtualBox VMs/boot2docker-vm",
+            "OSType": "",
+            "Flag": 0,
+            "BootOrder": null,
+            "DockerPort": 0,
+            "SSHPort": 2022,
+            "SerialFile": "/Users/mary/.boot2docker/boot2docker-vm.sock"
+        }
+
+
+3. Delete your existing `boot2docker` profile.
+
+        $ boot2docker delete
+
+4. Reinitialize `boot2docker` and specify a higher memory.
+
+        $ boot2docker init -m 5555
+
+5. Verify the memory was reset.
+
+        $ boot2docker info
+
+6. Restart your container and try your test again.
+
+
+## Build and test the documentation
+
+The Docker documentation source files are under `docs/sources`. The content is
+written using extended Markdown. We use the static site generator MkDocs to
+build Docker's documentation. Of course, you don't need to install this
+generator to build the documentation; it is included in the container.
+
+You should always check your documentation for grammar and spelling. The best
+way to do this is with an online grammar checker.
+
+When you change a documentation source file, you should test your change
+locally to make sure your content is there and any links work correctly. You
+can build the documentation from your local host.
The build starts a container +and loads the documentation into a server. As long as this container runs, you +can browse the docs. + +1. In a terminal, change to the root of your `docker-fork` repository. + + $ cd ~/repos/dry-run-test + +2. Make sure you are in your feature branch. + + $ git status + On branch dry-run-test + Your branch is up-to-date with 'origin/dry-run-test'. + nothing to commit, working directory clean + +3. Build the documentation. + + $ make docs + + When the build completes, you'll see a final output message similar to the + following: + + Successfully built ee7fe7553123 + docker run --rm -it -e AWS_S3_BUCKET -e NOCACHE -p 8000:8000 "docker-docs:dry-run-test" mkdocs serve + Running at: http://0.0.0.0:8000/ + Live reload enabled. + Hold ctrl+c to quit. + +4. Enter the URL in your browser. + + If you are running Boot2Docker, replace the default localhost address + (0.0.0.0) with your DOCKERHOST value. You can get this value at any time by + entering `boot2docker ip` at the command line. + +5. Once in the documentation, look for the red notice to verify you are seeing the correct build. + + ![Beta documentation](/project/images/red_notice.png) + +6. Navigate to your new or changed document. + +7. Review both the content and the links. + +8. Return to your terminal and exit out of the running documentation container. + + +## Where to go next + +Congratulations, you have successfully completed the basics you need to +understand the Docker test framework. In the next steps, you use what you have +learned so far to [contribute to Docker by working on an +issue](/project/make-a-contribution/). 
diff --git a/docs/sources/project/who-written-for.md b/docs/sources/project/who-written-for.md new file mode 100644 index 0000000000..e3b761a460 --- /dev/null +++ b/docs/sources/project/who-written-for.md @@ -0,0 +1,57 @@ +page_title: README first +page_description: Introduction to project contribution at Docker +page_keywords: Gordon, introduction, turtle, machine, libcontainer, how to + +# README first + +This section of the documentation contains a guide for Docker users who want to +contribute code or documentation to the Docker project. As a community, we +share rules of behavior and interaction. Make sure you are familiar with the community guidelines before continuing. + +## Where and what you can contribute + +The Docker project consists of not just one but several repositories on GitHub. +So, in addition to the `docker/docker` repository, there is the +`docker/libcontainer` repo, the `docker/machine` repo, and several more. +Contribute to any of these and you contribute to the Docker project. + +Not all Docker repositories use the Go language. Also, each repository has its +own focus area. So, if you are an experienced contributor, think about +contributing to a Docker repository that has a language or a focus area you are +familiar with. + +If you are new to the open source community, to Docker, or to formal +programming, you should start out contributing to the `docker/docker` +repository. Why? Because this guide is written for that repository specifically. + +Finally, code or documentation isn't the only way to contribute. You can report +an issue, add to discussions in our community channel, write a blog post, or +take a usability test. You can even propose your own type of contribution. +Right now we don't have a lot written about this yet, so just email + if this type of contributing interests you. + +## A turtle is involved + +![Gordon](/project/images/gordon.jpeg) + +Enough said. 
+
+## How to use this guide
+
+This is written for the distracted, the overworked, the sloppy reader with fair
+`git` skills and a failing memory for the GitHub GUI. The guide attempts to
+explain how to use the Docker environment as precisely, predictably, and
+procedurally as possible.
+
+Users who are new to the Docker development environment should start by setting
+up their environment. Then, they should try a simple code change. After that,
+they should find something to work on or propose a totally new change.
+
+If you are a programming prodigy, you may still find this documentation useful.
+Please feel free to skim past information you find obvious or boring.
+
+## How to get started
+
+Start by [getting the software you need to contribute](/project/software-required/).
diff --git a/docs/sources/project/work-issue.md b/docs/sources/project/work-issue.md
new file mode 100644
index 0000000000..68d2ed750f
--- /dev/null
+++ b/docs/sources/project/work-issue.md
@@ -0,0 +1,203 @@
+page_title: Work on your issue
+page_description: Basic workflow for Docker contributions
+page_keywords: contribute, pull request, review, workflow, white-belt, black-belt, squash, commit
+
+
+# Work on your issue
+
+The work you do for your issue depends on the specific issue you picked.
+This section gives you a step-by-step workflow. Where appropriate, it provides
+command examples.
+
+However, this is a generalized workflow; depending on your issue, you may
+repeat steps or even skip some. How much time the work takes depends on you:
+you could spend days or just 30 minutes.
+
+## How to work on your local branch
+
+Follow this workflow as you work:
+
+1. Review the appropriate style guide.
+
+    If you are changing code, review the coding style guide. Changing
+    documentation? Review the documentation style guide.
+
+2. Make changes in your feature branch.
+
+    You created your feature branch in the last section. Here you use the
+    development container.
If you are making a code change, you can mount your
+    source into a development container and iterate that way. For documentation
+    alone, you can work on your local host.
+
+    Review the details of working with a container if you forgot them.
+
+
+3. Test your changes as you work.
+
+    If you have followed along with the guide, you know the `make test` target
+    runs the entire test suite and `make docs` builds the documentation. If you
+    forgot the other test targets, see the documentation for testing both code
+    and documentation.
+
+4. For code changes, add unit tests if appropriate.
+
+    If you add new functionality or change existing functionality, you should
+    also add a unit test. Use the existing test files for inspiration. Aren't
+    sure if you need tests? Skip this step; you can add them later in the
+    process if necessary.
+
+5. Format your source files correctly.
+
+    | File type | How to format |
+    |-----------|---------------|
+    | `.go` | Format `.go` files using the `gofmt` command. For example, if you edited the `docker.go` file, run `gofmt -s -w docker.go`. Most file editors have a plugin to format for you; check your editor's documentation. |
+    | `.md` and non-`.go` files | Wrap lines to 80 characters. |
+
+6. List your changes.
+
+        $ git status
+        On branch 11038-fix-rhel-link
+        Changes not staged for commit:
+          (use "git add ..." to update what will be committed)
+          (use "git checkout -- ..." to discard changes in working directory)
+
+            modified:   docs/sources/installation/mac.md
+            modified:   docs/sources/installation/rhel.md
+
+    The `status` command lists what changed in the repository. Make sure you see
+    the changes you expect.
+
+7. Add your change to Git.
+
+        $ git add docs/sources/installation/mac.md
+        $ git add docs/sources/installation/rhel.md
+
+
+8. Commit your changes, making sure you use the `-s` flag to sign your work.
+
+        $ git commit -s -m "Fixing RHEL link"
+
+9. Push your change to your repository.
+
+        $ git push origin 11038-fix-rhel-link
+        Username for 'https://github.com': moxiegirl
+        Password for 'https://moxiegirl@github.com':
+        Counting objects: 60, done.
+        Compressing objects: 100% (7/7), done.
+        Writing objects: 100% (7/7), 582 bytes | 0 bytes/s, done.
+        Total 7 (delta 6), reused 0 (delta 0)
+        To https://github.com/moxiegirl/docker.git
+         * [new branch]      11038-fix-rhel-link -> 11038-fix-rhel-link
+        Branch 11038-fix-rhel-link set up to track remote branch 11038-fix-rhel-link from origin.
+
+    The first time you push a change, you must specify the branch. Later, you
+    can just do this:
+
+        git push origin
+
+## Review your branch on GitHub
+
+After you push a new branch, you should verify it on GitHub:
+
+1. Open your browser to GitHub.
+
+2. Go to your Docker fork.
+
+3. Select your branch from the dropdown.
+
+    ![Find branch](/project/images/locate_branch.png)
+
+4. Use the "Compare" button to view the differences between your branch and master.
+
+    Depending on how long you've been working on your branch, your branch may
+    be behind Docker's upstream repository.
+
+5. Review the commits.
+
+    Make sure your branch only shows the work you've done.
+
+## Pull and rebase frequently
+
+You should pull and rebase frequently as you work.
+
+1. Return to the terminal on your local machine.
+
+2. Make sure you are in your branch.
+
+        $ git checkout 11038-fix-rhel-link
+
+3. Fetch all the changes from the `master` branch of the `upstream` remote.
+
+        $ git fetch upstream master
+
+    This command says get all the changes from the `master` branch belonging to
+    the `upstream` remote.
+
+4. Rebase your local branch on top of Docker's `upstream/master` branch.
+
+        $ git rebase -i upstream/master
+
+    This command starts an interactive rebase to merge code from Docker's
+    `upstream/master` branch into your local branch. If you aren't familiar or
+    comfortable with rebase, you can learn more about rebasing on the web.
+
+5. Rebase opens an editor with a list of commits.
+
+        pick 1a79f55 Tweak some of the other text for grammar
+        pick 53e4983 Fix a link
+        pick 3ce07bb Add a new line about RHEL
+
+    If you run into trouble, `git rebase --abort` removes any changes and gets
+    you back to where you started.
+
+6. Replace the `pick` keyword with `squash` on all but the first commit.
+
+        pick 1a79f55 Tweak some of the other text for grammar
+        squash 53e4983 Fix a link
+        squash 3ce07bb Add a new line about RHEL
+
+    After you save and close the file, `git` opens your editor again to edit
+    the commit message.
+
+7. Edit and save your commit message.
+
+    Make sure you include your signature.
+
+8. Push any changes to your fork on GitHub.
+
+        $ git push origin 11038-fix-rhel-link
+
+
+## Where to go next
+
+At this point, you should understand how to work on an issue. In the next
+section, you [learn how to make a pull request](/project/create-pr/).
diff --git a/docs/sources/reference/api/docker_remote_api.md b/docs/sources/reference/api/docker_remote_api.md
index 4f844f4549..122546cf75 100644
--- a/docs/sources/reference/api/docker_remote_api.md
+++ b/docs/sources/reference/api/docker_remote_api.md
@@ -57,6 +57,25 @@
This endpoint now returns `Os`, `Arch` and `KernelVersion`.

**New!**
You can set ulimit settings to be used within the container.
+`GET /info`
+
+**New!**
+This endpoint now returns `SystemTime`, `HttpProxy`, `HttpsProxy` and `NoProxy`.
+
+`GET /images/json`
+
+**New!**
+Added a `RepoDigests` field to include image digest information.
+
+`POST /build`
+
+**New!**
+Builds can now set resource constraints for all containers created for the build.
+
+**New!**
+`CgroupParent` can be passed in the host config to set up container cgroups under a specific cgroup.
+
+
## v1.17

### Full Documentation

@@ -65,15 +84,32 @@ You can set ulimit settings to be used within the container.

### What's new

+The build supports the `LABEL` command. Use this to add metadata
+to an image. For example, you could add data describing the content of an image.
+
+`LABEL "com.example.vendor"="ACME Incorporated"`
+
+**New!**
`POST /containers/(id)/attach` and `POST /exec/(id)/start`

**New!**
Docker client now hints potential proxies about connection hijacking using HTTP Upgrade headers.

+`POST /containers/create`
+
+**New!**
+You can set labels on container creation to describe the container.
+
+`GET /containers/json`
+
+**New!**
+The endpoint returns the labels associated with the containers (`Labels`).
+
`GET /containers/(id)/json`

**New!**
This endpoint now returns the list of current execs associated with the container (`ExecIDs`).
+This endpoint now returns the container labels (`Config.Labels`).

`POST /containers/(id)/rename`

@@ -92,6 +128,12 @@ root filesystem as read only.

**New!**
This endpoint returns a live stream of a container's resource usage statistics.

+`GET /images/json`
+
+**New!**
+This endpoint now returns the labels associated with each image (`Labels`).
+
+
## v1.16

### Full Documentation
diff --git a/docs/sources/reference/api/docker_remote_api_v1.18.md b/docs/sources/reference/api/docker_remote_api_v1.18.md
index d5d39fb6f2..3ebddb7d13 100644
--- a/docs/sources/reference/api/docker_remote_api_v1.18.md
+++ b/docs/sources/reference/api/docker_remote_api_v1.18.md
@@ -113,10 +113,6 @@ Create a container
         "Hostname": "",
         "Domainname": "",
         "User": "",
-        "Memory": 0,
-        "MemorySwap": 0,
-        "CpuShares": 512,
-        "Cpuset": "0,1",
         "AttachStdin": false,
         "AttachStdout": true,
         "AttachStderr": true,
@@ -129,6 +125,11 @@ Create a container
         ],
         "Entrypoint": "",
         "Image": "ubuntu",
+        "Labels": {
+            "com.example.vendor": "Acme",
+            "com.example.license": "GPL",
+            "com.example.version": "1.0"
+        },
         "Volumes": {
             "/tmp": {}
         },
@@ -143,6 +144,10 @@ Create a container
         "Binds": ["/tmp:/tmp"],
         "Links": ["redis3:redis"],
         "LxcConf": {"lxc.utsname":"docker"},
+        "Memory": 0,
+        "MemorySwap": 0,
+        "CpuShares": 512,
+        "CpusetCpus": "0,1",
         "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] },
         "PublishAllPorts": false,
         "Privileged": false,
@@ -156,7 +161,9 @@ Create a container
         "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 },
         "NetworkMode": "bridge",
         "Devices": [],
-        "Ulimits": [{}]
+        "Ulimits": [{}],
+        "LogConfig": { "Type": "json-file", "Config": {} },
+        "CgroupParent": ""
     }
}
@@ -182,7 +189,8 @@ Json Parameters:
      always use this with `memory`, and make the value larger than `memory`.
- **CpuShares** - An integer value containing the CPU Shares for container
  (ie. the relative weight vs othercontainers).
- **CpuSet** - String value containg the cgroups Cpuset to use.
+- **Cpuset** - The same as CpusetCpus, but deprecated, please don't use.
+- **CpusetCpus** - String value containing the cgroups CpusetCpus to use.
- **AttachStdin** - Boolean value, attaches to stdin.
- **AttachStdout** - Boolean value, attaches to stdout.
- **AttachStderr** - Boolean value, attaches to stderr.
@@ -190,12 +198,13 @@ Json Parameters:
- **OpenStdin** - Boolean value, opens stdin,
- **StdinOnce** - Boolean value, close stdin after the 1 attached client disconnects.
- **Env** - A list of environment variables in the form of `VAR=value`
+- **Labels** - Adds a map of labels to a container. To specify a map: `{"key":"value"[,"key2":"value2"]}`
- **Cmd** - Command to run specified as a string or an array of strings.
- **Entrypoint** - Set the entrypoint for the container a a string or an array
  of strings
- **Image** - String value containing the image name to use for the container
- **Volumes** – An object mapping mountpoint paths (strings) inside the
-  container to empty objects.
+  container to empty objects.
- **WorkingDir** - A string value containing the working dir for commands to
  run in.
- **NetworkDisabled** - Boolean value, when true disables neworking for the
@@ -248,6 +257,11 @@ Json Parameters:
- **Ulimits** - A list of ulimits to be set in the container, specified as
  `{ "Name": , "Soft": , "Hard": }`, for example:
  `Ulimits: { "Name": "nofile", "Soft": 1024, "Hard", 2048 }}`
+- **LogConfig** - Log configuration for the container, in the format
+  `{ "Type": "<driver_name>", "Config": {"key1": "val1"} }`.
+  Available types: `json-file`, `none`.
+  `json-file` is the default logging driver.
+- **CgroupParent** - Path to cgroups under which the cgroup for the container will be created. If the path is not absolute, the path is considered to be relative to the cgroups path of the init process. Cgroups will be created if they do not already exist.
Query Parameters:

@@ -292,8 +306,6 @@ Return low-level information on the container `id`
             "-c",
             "exit 9"
         ],
-        "CpuShares": 0,
-        "Cpuset": "",
         "Domainname": "",
         "Entrypoint": null,
         "Env": [
@@ -302,9 +314,12 @@ Return low-level information on the container `id`
         "ExposedPorts": null,
         "Hostname": "ba033ac44011",
         "Image": "ubuntu",
+        "Labels": {
+            "com.example.vendor": "Acme",
+            "com.example.license": "GPL",
+            "com.example.version": "1.0"
+        },
         "MacAddress": "",
-        "Memory": 0,
-        "MemorySwap": 0,
         "NetworkDisabled": false,
         "OnBuild": null,
         "OpenStdin": false,
@@ -324,6 +339,8 @@ Return low-level information on the container `id`
         "CapAdd": null,
         "CapDrop": null,
         "ContainerIDFile": "",
+        "CpusetCpus": "",
+        "CpuShares": 0,
         "Devices": [],
         "Dns": null,
         "DnsSearch": null,
@@ -331,6 +348,8 @@ Return low-level information on the container `id`
         "IpcMode": "",
         "Links": null,
         "LxcConf": [],
+        "Memory": 0,
+        "MemorySwap": 0,
         "NetworkMode": "bridge",
         "PortBindings": {},
         "Privileged": false,
@@ -340,6 +359,7 @@ Return low-level information on the container `id`
             "MaximumRetryCount": 2,
             "Name": "on-failure"
         },
+        "LogConfig": { "Type": "json-file", "Config": {} },
         "SecurityOpt": null,
         "VolumesFrom": null,
         "Ulimits": [{}]
@@ -436,6 +456,9 @@ Status Codes:

Get stdout and stderr logs from the container ``id``

+> **Note**:
+> This endpoint works only for containers with the `json-file` logging driver.
+
**Example request**:

       GET /containers/4fa6e0f0c678/logs?stderr=1&stdout=1&timestamps=1&follow=1&tail=10 HTTP/1.1
@@ -495,6 +518,12 @@ Inspect changes on container `id`'s filesystem
     }
]

+Values for `Kind`:
+
+- `0`: Modify
+- `1`: Add
+- `2`: Delete
+
Status Codes:

- **200** – no error
@@ -1036,6 +1065,45 @@ Status Codes:
     }
]

+**Example request, with digest information**:
+
+    GET /images/json?digests=1 HTTP/1.1
+
+**Example response, with digest information**:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+
+    [
+      {
+        "Created": 1420064636,
+        "Id": "4986bf8c15363d1c5d15512d5266f8777bfba4974ac56e3270e7760f6f0a8125",
+        "ParentId": "ea13149945cb6b1e746bf28032f02e9b5a793523481a0a18645fc77ad53c4ea2",
+        "RepoDigests": [
+          "localhost:5000/test/busybox@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf"
+        ],
+        "RepoTags": [
+          "localhost:5000/test/busybox:latest",
+          "playdate:latest"
+        ],
+        "Size": 0,
+        "VirtualSize": 2429728
+      }
+    ]
+
+The response shows a single image `Id` associated with two repositories
+(`RepoTags`): `localhost:5000/test/busybox` and `playdate`. A caller can use
+either of the `RepoTags` values `localhost:5000/test/busybox:latest` or
+`playdate:latest` to reference the image.
+
+You can also use `RepoDigests` values to reference an image. In this response,
+the array has only one reference and that is to the
+`localhost:5000/test/busybox` repository; the `playdate` repository has no
+digest. You can reference this digest using the value:
+`localhost:5000/test/busybox@sha256:cbbf2f9a99b47fc460d...`
+
+See the `docker run` and `docker build` commands for examples of digest and tag
+references on the command line.
Query Parameters: @@ -1090,6 +1158,10 @@ Query Parameters: - **pull** - attempt to pull the image even if an older image exists locally - **rm** - remove intermediate containers after a successful build (default behavior) - **forcerm** - always remove intermediate containers (includes rm) +- **memory** - Set memory limit for build +- **memswap** - Total memory (memory + swap), `-1` to disable swap +- **cpushares** - CPU shares (relative weight) +- **cpusetcpus** - CPUs in which to allow execution, e.g., `0-3`, `0,1` Request Headers: @@ -1167,8 +1239,6 @@ Return low-level information on the image `name` { "Hostname": "", "User": "", - "Memory": 0, - "MemorySwap": 0, "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, @@ -1180,6 +1250,11 @@ Return low-level information on the image `name` "Cmd": ["/bin/bash"], "Dns": null, "Image": "ubuntu", + "Labels": { + "com.example.vendor": "Acme", + "com.example.license": "GPL", + "com.example.version": "1.0" + }, "Volumes": null, "VolumesFrom": "", "WorkingDir": "" @@ -1446,6 +1521,7 @@ Display system-wide information "Debug":false, "NFd": 11, "NGoroutines":21, + "SystemTime": "2015-03-10T11:11:23.730591467-07:00", "NEventsListener":0, "InitPath":"/usr/bin/docker", "InitSha1":"", @@ -1455,6 +1531,9 @@ Display system-wide information "IPv4Forwarding":true, "Labels":["storage=ssd"], "DockerRootDir": "/var/lib/docker", + "HttpProxy": "http://test:test@localhost:8080", + "HttpsProxy": "https://test:test@localhost:8080", + "NoProxy": "9.81.1.160", "OperatingSystem": "Boot2Docker", } @@ -1530,10 +1609,6 @@ Create a new image from a container's changes "Hostname": "", "Domainname": "", "User": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 512, - "Cpuset": "0,1", "AttachStdin": false, "AttachStdout": true, "AttachStderr": true, @@ -1887,10 +1962,6 @@ Return low-level information about the exec command `id`. 
"Hostname" : "8f177a186b97", "Domainname" : "", "User" : "", - "Memory" : 0, - "MemorySwap" : 0, - "CpuShares" : 0, - "Cpuset" : "", "AttachStdin" : false, "AttachStdout" : false, "AttachStderr" : false, diff --git a/docs/sources/reference/builder.md b/docs/sources/reference/builder.md index 6ee41f5b76..6955d31e0e 100644 --- a/docs/sources/reference/builder.md +++ b/docs/sources/reference/builder.md @@ -192,6 +192,10 @@ Or FROM <image>:<tag> +Or + + FROM <image>@<digest> + The `FROM` instruction sets the [*Base Image*](/terms/image/#base-image) for subsequent instructions. As such, a valid `Dockerfile` must have `FROM` as its first instruction. The image can be any valid image – it is especially easy to start by **pulling an image** from the [*Public Repositories*]( multiple images. Simply make a note of the last image ID output by the commit before each new `FROM` command. -If no `tag` is given to the `FROM` instruction, `latest` is assumed. If the -used tag does not exist, an error will be returned. +The `tag` or `digest` values are optional. If you omit either of them, the builder +assumes `latest` by default. The builder returns an error if it cannot match +the `tag` value. ## MAINTAINER @@ -328,6 +333,36 @@ default specified in `CMD`. > the result; `CMD` does not execute anything at build time, but specifies > the intended command for the image. +## LABEL + + LABEL <key>=<value> <key>=<value> <key>=<value> ... + +The `LABEL` instruction adds metadata to an image. A `LABEL` is a +key-value pair. To include spaces within a `LABEL` value, use quotes and +backslashes as you would in command-line parsing. + + LABEL "com.example.vendor"="ACME Incorporated" + +An image can have more than one label. To specify multiple labels, separate each +key-value pair by an EOL. + + LABEL com.example.label-without-value + LABEL com.example.label-with-value="foo" + LABEL version="1.0" + LABEL description="This text illustrates \ + that label-values can span multiple lines." 
+ +Docker recommends combining labels in a single `LABEL` instruction where +possible. Each `LABEL` instruction produces a new layer which can result in an +inefficient image if you use many labels. This example results in four image +layers. + +Labels are additive, including `LABEL`s in `FROM` images. As the system +encounters and then applies a new label, new `key`s override any previous labels +with identical keys. + +To view an image's labels, use the `docker inspect` command. + ## EXPOSE EXPOSE <port> [<port>...] @@ -777,14 +812,28 @@ If you then run `docker stop test`, the container will not exit cleanly - the VOLUME ["/data"] -The `VOLUME` instruction will create a mount point with the specified name -and mark it as holding externally mounted volumes from native host or other +The `VOLUME` instruction creates a mount point with the specified name +and marks it as holding externally mounted volumes from native host or other containers. The value can be a JSON array, `VOLUME ["/var/log/"]`, or a plain string with multiple arguments, such as `VOLUME /var/log` or `VOLUME /var/log /var/db`. For more information/examples and mounting instructions via the -Docker client, refer to [*Share Directories via Volumes*](/userguide/dockervolumes/#volume) +Docker client, refer to +[*Share Directories via Volumes*](/userguide/dockervolumes/#volume) documentation. +The `docker run` command initializes the newly created volume with any data +that exists at the specified location within the base image. For example, +consider the following Dockerfile snippet: + + FROM ubuntu + RUN mkdir /myvol + RUN echo "hello world" > /myvol/greeting + VOLUME /myvol + +This Dockerfile results in an image that causes `docker run` to +create a new mount point at `/myvol` and copy the `greeting` file +into the newly created volume. + > **Note**: > The list is parsed as a JSON array, which means that > you must use double-quotes (") around words not single-quotes ('). 
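The JSON-array note above can be checked with any JSON parser; an illustrative check in Python (not part of the builder itself):

```python
import json

# Double-quoted values form a valid JSON array, as the JSON form of
# VOLUME requires:
assert json.loads('["/var/log/", "/data"]') == ["/var/log/", "/data"]

# Single quotes are not valid JSON, so this form is rejected:
try:
    json.loads("['/data']")
except json.JSONDecodeError:
    print("single-quoted form rejected")
```

The same rule applies to the other instructions that accept a JSON-array form, such as `CMD` and `ENTRYPOINT`.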
@@ -893,6 +942,7 @@ For example you might add something like this: FROM ubuntu MAINTAINER Victor Vieux + LABEL Description="This image is used to start the foobar executable" Vendor="ACME Products" Version="1.0" RUN apt-get update && apt-get install -y inotify-tools nginx apache2 openssh-server # Firefox over VNC diff --git a/docs/sources/reference/commandline/cli.md b/docs/sources/reference/commandline/cli.md index eb61872dae..322f5f401e 100644 --- a/docs/sources/reference/commandline/cli.md +++ b/docs/sources/reference/commandline/cli.md @@ -2,7 +2,7 @@ page_title: Command Line Interface page_description: Docker's CLI command description and usage page_keywords: Docker, Docker documentation, CLI, command line -# Command Line +# Docker Command Line {{ include "no-remote-sudo.md" }} @@ -74,7 +74,6 @@ expect an integer, and they can only be specified once. A self-sufficient runtime for linux containers. Options: - --api-enable-cors=false Enable CORS headers in the remote API --api-cors-header="" Set CORS headers in the remote API -b, --bridge="" Attach containers to a network bridge --bip="" Specify network bridge IP @@ -98,6 +97,7 @@ expect an integer, and they can only be specified once. 
--ipv6=false Enable IPv6 networking -l, --log-level="info" Set the logging level --label=[] Set key=value labels to the daemon + --log-driver="json-file" Container's logging driver (json-file/none) --mtu=0 Set the containers network MTU -p, --pidfile="/var/run/docker.pid" Path to use for daemon PID file --registry-mirror=[] Preferred Docker registry mirror @@ -508,13 +508,17 @@ is returned by the `docker attach` command to its caller too: Build a new image from the source code at PATH - -f, --file="" Name of the Dockerfile(Default is 'Dockerfile') + -f, --file="" Name of the Dockerfile (Default is 'PATH/Dockerfile') --force-rm=false Always remove intermediate containers --no-cache=false Do not use cache when building the image --pull=false Always attempt to pull a newer version of the image -q, --quiet=false Suppress the verbose output generated by the containers --rm=true Remove intermediate containers after a successful build -t, --tag="" Repository name (and optionally a tag) for the image + -m, --memory="" Memory limit for all build containers + --memory-swap="" Total memory (memory + swap), `-1` to disable swap + -c, --cpu-shares CPU shares (relative weight) + --cpuset-cpus="" CPUs in which to allow execution, e.g. `0-3`, `0,1` Builds Docker images from a Dockerfile and a "context". A build's context is the files located in the specified `PATH` or `URL`. The build process can @@ -538,6 +542,29 @@ If you use STDIN or specify a `URL`, the system places the contents into a file called `Dockerfile`, and any `-f`, `--file` option is ignored. In this scenario, there is no context. +### Return code + +On a successful build, a return code of `0` is returned. +When the build fails, a non-zero failure code is returned. + +The reason for the failure is output to `STDERR`: + +``` +$ docker build -t fail . 
+Sending build context to Docker daemon 2.048 kB +Sending build context to Docker daemon +Step 0 : FROM busybox + ---> 4986bf8c1536 +Step 1 : RUN exit 13 + ---> Running in e26670ec7a0a +INFO[0000] The command [/bin/sh -c exit 13] returned a non-zero code: 13 +$ echo $? +1 +``` + +### .dockerignore file + If a file named `.dockerignore` exists in the root of `PATH` then it is interpreted as a newline-separated list of exclusion patterns. Exclusion patterns match files or directories relative to `PATH` that @@ -727,8 +754,7 @@ If this behavior is undesired, set the 'p' option to false. The `--change` option will apply `Dockerfile` instructions to the image that is created. -Supported `Dockerfile` instructions: `CMD`, `ENTRYPOINT`, `ENV`, `EXPOSE`, -`ONBUILD`, `USER`, `VOLUME`, `WORKDIR` +Supported `Dockerfile` instructions: `ADD`|`CMD`|`ENTRYPOINT`|`ENV`|`EXPOSE`|`FROM`|`MAINTAINER`|`RUN`|`USER`|`LABEL`|`VOLUME`|`WORKDIR`|`COPY` #### Commit a container @@ -757,12 +783,14 @@ Supported `Dockerfile` instructions: `CMD`, `ENTRYPOINT`, `ENV`, `EXPOSE`, ## cp -Copy files/folders from a container's filesystem to the host -path. Paths are relative to the root of the filesystem. +Copy files or folders from a container's filesystem to the directory on the +host. Use '-' to write the data as a tar file to `STDOUT`. `CONTAINER:PATH` is +relative to the root of the container's filesystem. - Usage: docker cp CONTAINER:PATH HOSTPATH + Usage: docker cp CONTAINER:PATH HOSTDIR|- + + Copy files/folders from the PATH to the HOSTDIR. - Copy files/folders from the PATH to the HOSTPATH ## create @@ -777,8 +805,9 @@ Creates a new container. 
-c, --cpu-shares=0 CPU shares (relative weight) --cap-add=[] Add Linux capabilities --cap-drop=[] Drop Linux capabilities + --cgroup-parent="" Optional parent cgroup for the container --cidfile="" Write the container ID to the file - --cpuset="" CPUs in which to allow execution (0-3, 0,1) + --cpuset-cpus="" CPUs in which to allow execution (0-3, 0,1) --device=[] Add a host device to the container --dns=[] Set custom DNS servers --dns-search=[] Set custom DNS search domains @@ -789,7 +818,10 @@ Creates a new container. -h, --hostname="" Container host name -i, --interactive=false Keep STDIN open even if not attached --ipc="" IPC namespace to use + -l, --label=[] Set metadata on the container (e.g., --label=com.example.key=value) + --label-file=[] Read in a line delimited file of labels --link=[] Add link to another container + --log-driver="" Logging driver for container --lxc-conf=[] Add custom lxc options -m, --memory="" Memory limit --mac-address="" Container MAC address (e.g. 92:d0:c6:0a:29:33) @@ -799,8 +831,8 @@ Creates a new container. -p, --publish=[] Publish a container's port(s) to the host --privileged=false Give extended privileges to this container --read-only=false Mount the container's root filesystem as read only - --restart="" Restart policy to apply when a container exits - --security-opt=[] Security Options + --restart="no" Restart policy (no, on-failure[:max-retry], always) + --security-opt=[] Security options -t, --tty=false Allocate a pseudo-TTY -u, --user="" Username or UID -v, --volume=[] Bind mount a volume @@ -817,7 +849,8 @@ container at any point. This is useful when you want to set up a container configuration ahead of time so that it is ready to start when you need it. -Please see the [run command](#run) section for more details. +Please see the [run command](#run) section and the [Docker run reference]( +/reference/run/) for more details. 
#### Examples @@ -917,9 +950,10 @@ Using multiple filters will be handled as a *AND*; for example container 588a23dac085 *AND* the event type is *start* Current filters: - * event - * image - * container + +* container +* event +* image #### Examples @@ -988,6 +1022,12 @@ You'll need two shells for this example. $ sudo docker events --filter 'container=7805c1d35632' --filter 'event=stop' 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop + $ sudo docker events --filter 'container=container_1' --filter 'container=container_2' + 2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die + 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop + 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die + 2014-09-03T15:49:29.999999999Z07:00 7805c1d35632: (from redis:2.8) stop + ## exec Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...] @@ -1000,8 +1040,8 @@ You'll need two shells for this example. The `docker exec` command runs a new command in a running container. -The command started using `docker exec` will only run while the container's primary -process (`PID 1`) is running, and will not be restarted if the container is restarted. +The command started using `docker exec` only runs while the container's primary +process (`PID 1`) is running, and it is not restarted if the container is restarted. If the container is paused, then the `docker exec` command will fail with an error: @@ -1032,14 +1072,22 @@ This will create a new Bash session in the container `ubuntu_bash`. ## export - Usage: docker export CONTAINER + Usage: docker export [OPTIONS] CONTAINER - Export the contents of a filesystem as a tar archive to STDOUT + Export the contents of a filesystem to a tar archive (streamed to STDOUT by default) -For example: + -o, --output="" Write to a file, instead of STDOUT + + Produces a tarred repository to the standard output stream. 
+ + For example: $ sudo docker export red_panda > latest.tar + Or + + $ sudo docker export --output="latest.tar" red_panda + > **Note:** > `docker export` does not export the contents of volumes associated with the > container. If a volume is mounted on top of an existing directory in the @@ -1076,7 +1124,9 @@ To see how the `docker:latest` image was built: List images -a, --all=false Show all images (default hides intermediate images) + --digests=false Show digests -f, --filter=[] Filter output based on conditions provided + --help=false Print usage --no-trunc=false Don't truncate output -q, --quiet=false Only show numeric IDs @@ -1125,6 +1175,22 @@ uses up the `VIRTUAL SIZE` listed only once. tryout latest 2629d1fa0b81b222fca63371ca16cbf6a0772d07759ff80e8d1369b926940074 23 hours ago 131.5 MB 5ed6274db6ceb2397844896966ea239290555e74ef307030ebb01ff91b1914df 24 hours ago 1.089 GB +#### Listing image digests + +Images that use the v2 or later format have a content-addressable identifier +called a `digest`. As long as the input used to generate the image is +unchanged, the digest value is predictable. To list image digest values, use +the `--digests` flag: + + $ sudo docker images --digests | head + REPOSITORY TAG DIGEST IMAGE ID CREATED VIRTUAL SIZE + localhost:5000/test/busybox sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf 4986bf8c1536 9 weeks ago 2.43 MB + +When pushing or pulling to a 2.0 registry, the `push` or `pull` command +output includes the image digest. You can `pull` using a digest value. You can +also reference by digest in `create`, `run`, and `rmi` commands, as well as the +`FROM` image reference in a Dockerfile. + #### Filtering The filtering flag (`-f` or `--filter`) format is of "key=value". 
If there is more @@ -1132,6 +1198,7 @@ than one filter, then pass multiple flags (e.g., `--filter "foo=bar" --filter "b Current filters: * dangling (boolean - true or false) + * label (`label=<key>` or `label=<key>=<value>`) ##### Untagged images @@ -1238,9 +1305,13 @@ For example: Debug mode (client): true Fds: 10 Goroutines: 9 + System Time: Tue Mar 10 18:38:57 UTC 2015 EventsListeners: 0 Init Path: /usr/bin/docker Docker Root Dir: /var/lib/docker + Http Proxy: http://test:test@localhost:8080 + Https Proxy: https://test:test@localhost:8080 + No Proxy: 9.81.1.160 Username: svendowideit Registry: [https://index.docker.io/v1/] Labels: @@ -1388,6 +1459,9 @@ For example: -t, --timestamps=false Show timestamps --tail="all" Number of lines to show from the end of the logs +NOTE: this command is available only for containers with the `json-file` logging +driver. + The `docker logs` command batch-retrieves logs present at the time of execution. The `docker logs --follow` command will continue streaming the new output from @@ -1522,6 +1596,10 @@ use `docker pull`: $ sudo docker pull debian:testing # will pull the image named debian:testing and any intermediate # layers it is based on. + $ sudo docker pull debian@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf + # will pull the image from the debian repository with the digest + # sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf + # and any intermediate layers it is based on. # (Typically the empty `scratch` image, a MAINTAINER layer, # and the un-tarred base). $ sudo docker pull --all-tags centos @@ -1593,9 +1671,9 @@ deleted. #### Removing tagged images -Images can be removed either by their short or long IDs, or their image -names. If an image has more than one name, each of them needs to be -removed before the image is removed. +You can remove an image using its short or long ID, its tag, or its digest. 
If +an image has one or more tag or digest reference, you must remove all of them +before the image is removed. $ sudo docker images REPOSITORY TAG IMAGE ID CREATED SIZE @@ -1604,21 +1682,35 @@ removed before the image is removed. test2 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) $ sudo docker rmi fd484f19954f - Error: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories + Error: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories, use -f to force 2013/12/11 05:47:16 Error: failed to remove one or more images $ sudo docker rmi test1 - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 + Untagged: test1:latest $ sudo docker rmi test2 - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 + Untagged: test2:latest $ sudo docker images REPOSITORY TAG IMAGE ID CREATED SIZE test latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB) $ sudo docker rmi test - Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 + Untagged: test:latest Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8 +An image pulled by digest has no tag associated with it: + + $ sudo docker images --digests + REPOSITORY TAG DIGEST IMAGE ID CREATED VIRTUAL SIZE + localhost:5000/test/busybox sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf 4986bf8c1536 9 weeks ago 2.43 MB + +To remove an image using its digest: + + $ sudo docker rmi localhost:5000/test/busybox@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf + Untagged: localhost:5000/test/busybox@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf + Deleted: 4986bf8c15363d1c5d15512d5266f8777bfba4974ac56e3270e7760f6f0a8125 + Deleted: ea13149945cb6b1e746bf28032f02e9b5a793523481a0a18645fc77ad53c4ea2 + Deleted: df7546f9f060a2268024c8a230d8639878585defcc1bc6f79d2728a13957871b + ## run Usage: docker run [OPTIONS] IMAGE 
[COMMAND] [ARG...] @@ -1631,7 +1723,7 @@ removed before the image is removed. --cap-add=[] Add Linux capabilities --cap-drop=[] Drop Linux capabilities --cidfile="" Write the container ID to the file - --cpuset="" CPUs in which to allow execution (0-3, 0,1) + --cpuset-cpus="" CPUs in which to allow execution (0-3, 0,1) -d, --detach=false Run container in background and print container ID --device=[] Add a host device to the container --dns=[] Set custom DNS servers @@ -1645,8 +1737,11 @@ removed before the image is removed. -i, --interactive=false Keep STDIN open even if not attached --ipc="" IPC namespace to use --link=[] Add link to another container + --log-driver="" Logging driver for container --lxc-conf=[] Add custom lxc options -m, --memory="" Memory limit + -l, --label=[] Set metadata on the container (e.g., --label=com.example.key=value) + --label-file=[] Read in a file of labels (EOL delimited) --mac-address="" Container MAC address (e.g. 92:d0:c6:0a:29:33) --memory-swap="" Total memory (memory + swap), '-1' to disable swap --name="" Assign a name to the container @@ -1656,7 +1751,7 @@ removed before the image is removed. --pid="" PID namespace to use --privileged=false Give extended privileges to this container --read-only=false Mount the container's root filesystem as read only - --restart="" Restart policy to apply when a container exits + --restart="no" Restart policy (no, on-failure[:max-retry], always) --rm=false Automatically remove the container when it exits --security-opt=[] Security Options --sig-proxy=true Proxy received signals to the process @@ -1817,8 +1912,39 @@ An example of a file passed with `--env-file` $ sudo docker run --name console -t -i ubuntu bash -This will create and run a new container with the container name being -`console`. +A label is a `key=value` pair that applies metadata to a container. 
To label a container with two labels: + + $ sudo docker run -l my-label --label com.example.foo=bar ubuntu bash + +The `my-label` key doesn't specify a value, so the label defaults to an empty +string (`""`). To add multiple labels, repeat the label flag (`-l` or `--label`). + +The `key=value` must be unique to avoid overwriting the label value. If you +specify labels with identical keys but different values, each subsequent value +overwrites the previous. Docker uses the last `key=value` you supply. + +Use the `--label-file` flag to load multiple labels from a file. Delimit each +label in the file with an EOL mark. The example below loads labels from a +labels file in the current directory: + + $ sudo docker run --label-file ./labels ubuntu bash + +The label-file format is similar to the format for loading environment +variables. (Unlike environment variables, labels are not visible to processes +running inside a container.) The following example illustrates a label-file +format: + + com.example.label1="a label" + + # this is a comment + com.example.label2=another\ label + com.example.label3 + +You can load multiple label-files by supplying multiple `--label-file` flags. + +For additional information on working with labels, see [*Labels - custom +metadata in Docker*](/userguide/labels-custom-metadata/) in the Docker User +Guide. $ sudo docker run --link /redis:redis --name console ubuntu bash @@ -1928,41 +2054,56 @@ application change: #### Restart Policies -Using the `--restart` flag on Docker run you can specify a restart policy for -how a container should or should not be restarted on exit. +Use Docker's `--restart` to specify a container's *restart policy*. A restart +policy controls whether the Docker daemon restarts a container after exit. +Docker supports the following restart policies: -An ever increasing delay (double the previous delay, starting at 100 milliseconds) -is added before each restart to prevent flooding the server. 
This means the daemaon -will wait for 100 mS, then 200 mS, 400, 800, 1600, and so on until either the -`on-failure` limit is hit, or when you `docker stop` or even `docker rm -f` -the container. - -When a restart policy is active on a container, it will be shown in `docker ps` -as either `Up` or `Restarting` in `docker ps`. It can also be useful to use -`docker events` to see the restart policy in effect. - -** no ** - Do not restart the container when it exits. - -** on-failure ** - Restart the container only if it exits with a non zero exit status. - -** always ** - Always restart the container regardless of the exit status. - -You can also specify the maximum amount of times Docker will try to -restart the container when using the ** on-failure ** policy. The -default is that Docker will try forever to restart the container.
+<table>
+  <thead>
+    <tr>
+      <th>Policy</th>
+      <th>Result</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>no</td>
+      <td>
+        Do not automatically restart the container when it exits. This is the
+        default.
+      </td>
+    </tr>
+    <tr>
+      <td>
+        on-failure[:max-retries]
+      </td>
+      <td>
+        Restart only if the container exits with a non-zero exit status.
+        Optionally, limit the number of restart retries the Docker
+        daemon attempts.
+      </td>
+    </tr>
+    <tr>
+      <td>always</td>
+      <td>
+        Always restart the container regardless of the exit status.
+        When you specify always, the Docker daemon will try to restart
+        the container indefinitely.
+      </td>
+    </tr>
+  </tbody>
+</table>
$ sudo docker run --restart=always redis -This will run the `redis` container with a restart policy of ** always ** so that if -the container exits, Docker will restart it. +This will run the `redis` container with a restart policy of **always** +so that if the container exits, Docker will restart it. - $ sudo docker run --restart=on-failure:10 redis - -This will run the `redis` container with a restart policy of ** -on-failure ** and a maximum restart count of 10. If the `redis` -container exits with a non-zero exit status more than 10 times in a row -Docker will abort trying to restart the container. Providing a maximum -restart limit is only valid for the ** on-failure ** policy. +More detailed information on restart policies can be found in the +[Restart Policies (--restart)](/reference/run/#restart-policies-restart) section +of the Docker run reference page. ### Adding entries to a container hosts file @@ -2063,7 +2204,7 @@ more details on finding shared images from the command line. Usage: docker start [OPTIONS] CONTAINER [CONTAINER...] - Restart a stopped container + Start one or more stopped containers -a, --attach=false Attach STDOUT/STDERR and forward signals -i, --interactive=false Attach container's STDIN diff --git a/docs/sources/reference/run.md b/docs/sources/reference/run.md index 8faf9ad77c..052a35823e 100644 --- a/docs/sources/reference/run.md +++ b/docs/sources/reference/run.md @@ -2,6 +2,12 @@ page_title: Docker run reference page_description: Configure containers at runtime page_keywords: docker, run, configure, runtime + + # Docker run reference **Docker runs processes in isolated containers**. When an operator @@ -18,7 +24,7 @@ other `docker` command. The basic `docker run` command takes this form: - $ sudo docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] + $ sudo docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...] To learn how to interpret the types of `[OPTIONS]`, see [*Option types*](/reference/commandline/cli/#option-types). 
@@ -50,8 +56,9 @@ following options. - [Container Identification](#container-identification) - [Name (--name)](#name-name) - [PID Equivalent](#pid-equivalent) - - [IPC Settings](#ipc-settings) + - [IPC Settings (--ipc)](#ipc-settings-ipc) - [Network Settings](#network-settings) + - [Restart Policies (--restart)](#restart-policies-restart) - [Clean Up (--rm)](#clean-up-rm) - [Runtime Constraints on CPU and Memory](#runtime-constraints-on-cpu-and-memory) - [Runtime Privilege, Linux Capabilities, and LXC Configuration](#runtime-privilege-linux-capabilities-and-lxc-configuration) @@ -126,16 +133,23 @@ programs might write out their process ID to a file (you've seen them as PID files): --cidfile="": Write the container ID to the file - + ### Image[:tag] While not strictly a means of identifying a container, you can specify a version of an image you'd like to run the container with by adding `image[:tag]` to the command. For example, `docker run ubuntu:14.04`. -## PID Settings +### Image[@digest] + +Images using the v2 or later image format have a content-addressable identifier +called a digest. As long as the input used to generate the image is unchanged, +the digest value is predictable and referenceable. + +## PID Settings (--pid) --pid="" : Set the PID (Process) Namespace mode for the container, 'host': use the host's PID namespace inside the container + By default, all containers have the PID namespace enabled. PID namespace provides separation of processes. The PID Namespace removes the @@ -153,13 +167,16 @@ within the container. This command would allow you to use `strace` inside the container on pid 1234 on the host. 
-## IPC Settings +## IPC Settings (--ipc) + --ipc="" : Set the IPC mode for the container, - 'container:<name|id>': reuses another container's IPC namespace - 'host': use the host's IPC namespace inside the container + 'container:<name|id>': reuses another container's IPC namespace + 'host': use the host's IPC namespace inside the container + By default, all containers have the IPC namespace enabled. -IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores and message queues. +IPC (POSIX/SysV IPC) namespace provides separation of named shared memory +segments, semaphores and message queues. Shared memory segments are used to accelerate inter-process communication at memory speed, rather than through pipes or through the network stack. Shared @@ -173,10 +190,10 @@ of the containers. --dns=[] : Set custom dns servers for the container --net="bridge" : Set the Network mode for the container - 'bridge': creates a new network stack for the container on the docker bridge - 'none': no networking for this container - 'container:<name|id>': reuses another container network stack - 'host': use the host network stack inside the container + 'bridge': creates a new network stack for the container on the docker bridge + 'none': no networking for this container + 'container:<name|id>': reuses another container network stack + 'host': use the host network stack inside the container --add-host="" : Add a line to /etc/hosts (host:IP) --mac-address="" : Sets the container's Ethernet device's MAC address @@ -195,10 +212,41 @@ explicitly by providing a MAC via the `--mac-address` parameter (format: Supported networking modes are: 
-* container - use another container's network stack
+<table>
+  <thead>
+    <tr>
+      <th>Mode</th>
+      <th>Description</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>none</td>
+      <td>
+        No networking in the container.
+      </td>
+    </tr>
+    <tr>
+      <td>bridge (default)</td>
+      <td>
+        Connect the container to the bridge via veth interfaces.
+      </td>
+    </tr>
+    <tr>
+      <td>host</td>
+      <td>
+        Use the host's network stack inside the container.
+      </td>
+    </tr>
+    <tr>
+      <td>container:&lt;name|id&gt;</td>
+      <td>
+        Use the network stack of another container, specified via
+        its *name* or *id*.
+      </td>
+    </tr>
+  </tbody>
+</table>
#### Mode: none @@ -226,6 +274,9 @@ container. The container's hostname will match the hostname on the host system. Publishing ports and linking to other containers will not work when sharing the host's network stack. +> **Note**: `--net="host"` gives the container full access to local system +> services such as D-bus and is therefore considered insecure. + #### Mode: container With the networking mode set to `container` a container will share the @@ -256,6 +307,99 @@ container itself as well as `localhost` and a few other common things. The ::1 localhost ip6-localhost ip6-loopback 86.75.30.9 db-static +## Restart policies (--restart) + +Using the `--restart` flag on Docker run you can specify a restart policy for +how a container should or should not be restarted on exit. + +When a restart policy is active on a container, it will be shown as either `Up` +or `Restarting` in [`docker ps`](/reference/commandline/cli/#ps). It can also be +useful to use [`docker events`](/reference/commandline/cli/#events) to see the +restart policy in effect. + +Docker supports the following restart policies: + + + + + + + + + + + + + + + + + + + + + + +
+<table>
+  <thead>
+    <tr>
+      <th>Policy</th>
+      <th>Result</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>no</td>
+      <td>
+        Do not automatically restart the container when it exits. This is the
+        default.
+      </td>
+    </tr>
+    <tr>
+      <td>on-failure[:max-retries]</td>
+      <td>
+        Restart only if the container exits with a non-zero exit status.
+        Optionally, limit the number of restart retries the Docker
+        daemon attempts.
+      </td>
+    </tr>
+    <tr>
+      <td>always</td>
+      <td>
+        Always restart the container regardless of the exit status.
+        When you specify always, the Docker daemon will try to restart
+        the container indefinitely.
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+An ever-increasing delay (double the previous delay, starting at 100
+milliseconds) is added before each restart to prevent flooding the server.
+This means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600,
+and so on until either the `on-failure` limit is hit, or when you `docker stop`
+or `docker rm -f` the container.
+
+If a container is successfully restarted (the container is started and runs
+for at least 10 seconds), the delay is reset to its default value of 100 ms.
+
+You can specify the maximum number of times Docker will try to restart the
+container when using the **on-failure** policy. The default is that Docker
+will try forever to restart the container. The number of (attempted) restarts
+for a container can be obtained via [`docker inspect`](
+/reference/commandline/cli/#inspect). For example, to get the number of restarts
+for container "my-container":
+
+    $ sudo docker inspect -f "{{ .RestartCount }}" my-container
+    # 2
+
+Or, to get the last time the container was (re)started:
+
+    $ docker inspect -f "{{ .State.StartedAt }}" my-container
+    # 2015-03-04T23:47:07.691840179Z
+
+You cannot set any restart policy in combination with
+["clean up (--rm)"](#clean-up-rm). Setting both `--restart` and `--rm`
+results in an error.
+
+### Examples
+
+    $ sudo docker run --restart=always redis
+
+This will run the `redis` container with a restart policy of **always**
+so that if the container exits, Docker will restart it.
+
+    $ sudo docker run --restart=on-failure:10 redis
+
+This will run the `redis` container with a restart policy of **on-failure**
+and a maximum restart count of 10. If the `redis` container exits with a
+non-zero exit status more than 10 times in a row, Docker will abort trying to
+restart the container. Providing a maximum restart limit is only valid for the
+**on-failure** policy.
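The doubling delay described above is easy to sketch. The following is an illustration only, not Docker's actual implementation (the daemon's real logic lives in its restart-handling code):

```python
# Illustration of the restart delay schedule described above: the delay
# starts at 100 ms and doubles before each subsequent restart.
# (Sketch only; this is not Docker's actual implementation.)

def backoff_delays(restarts, base_ms=100):
    """Return the delay in milliseconds applied before each restart."""
    delays = []
    delay = base_ms
    for _ in range(restarts):
        delays.append(delay)
        delay *= 2  # double the previous delay
    return delays

print(backoff_delays(5))  # [100, 200, 400, 800, 1600]
```

A successful run of at least 10 seconds resets the delay back to `base_ms`, as noted above.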
+
## Clean up (--rm)

By default a container's file system persists even after the container
@@ -311,42 +455,92 @@ container:

     -m="": Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
     -memory-swap="": Total memory limit (memory + swap, format: <number><optional unit>, where unit = b, k, m or g)
-    -c=0 : CPU shares (relative weight)
+    -c, --cpu-shares=0         CPU shares (relative weight)
+
+### Memory constraints

We have four ways to set memory usage:
-
- - memory=inf, memory-swap=inf (not specify any of them)
-   There is no memory limit, you can use as much as you want.
-
- - memory=L
+
+<table>
+  <thead>
+    <tr>
+      <th>Option</th>
+      <th>Result</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>memory=inf, memory-swap=inf (default)</td>
+      <td>
+        There is no memory limit for the container. The container can use
+        as much memory as needed.
+      </td>
+    </tr>
+    <tr>
+      <td>memory=L&lt;inf, memory-swap=inf</td>
+      <td>
+        (specify memory and set memory-swap as -1) The container is
+        not allowed to use more than L bytes of memory, but can use as much swap
+        as is needed (if the host supports swap memory).
+      </td>
+    </tr>
+    <tr>
+      <td>memory=L&lt;inf, memory-swap=2*L</td>
+      <td>
+        (specify memory without memory-swap) The container is not allowed to
+        use more than L bytes of memory, swap *plus* memory usage is double
+        of that.
+      </td>
+    </tr>
+    <tr>
+      <td>memory=L&lt;inf, memory-swap=S&lt;inf, L&lt;=S</td>
+      <td>
+        (specify both memory and memory-swap) The container is not allowed to
+        use more than L bytes of memory, swap *plus* memory usage is limited
+        by S.
+      </td>
+    </tr>
+  </tbody>
+</table>
-
- - memory=L
> you can use `--lxc-conf` to set a container's IP address, but this will not be
> reflected in the `/etc/hosts` file.

+## Logging drivers (--log-driver)
+
+You can specify a different logging driver for the container than for the
+daemon.
+
+### Logging driver: none
+
+Disables any logging for the container. `docker logs` won't be available with
+this driver.
+
+### Logging driver: json-file
+
+Default logging driver for Docker. Writes JSON messages to file.
`docker logs` +command is available only for this logging driver + ## Overriding Dockerfile image defaults When a developer builds an image from a [*Dockerfile*](/reference/builder) @@ -476,7 +681,7 @@ Dockerfile instruction and how the operator can override that setting. Recall the optional `COMMAND` in the Docker commandline: - $ sudo docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...] + $ sudo docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...] This command is optional because the person who created the `IMAGE` may have already provided a default `COMMAND` using the Dockerfile `CMD` @@ -541,10 +746,11 @@ developer, the operator has three choices: start the server container with `-P` or `-p,` or start the client container with `--link`. If the operator uses `-P` or `-p` then Docker will make the exposed port -accessible on the host and the ports will be available to any client -that can reach the host. When using `-P`, Docker will bind the exposed -ports to a random port on the host between 49153 and 65535. To find the -mapping between the host ports and the exposed ports, use `docker port`. +accessible on the host and the ports will be available to any client that can +reach the host. When using `-P`, Docker will bind the exposed port to a random +port on the host within an *ephemeral port range* defined by +`/proc/sys/net/ipv4/ip_local_port_range`. To find the mapping between the host +ports and the exposed ports, use `docker port`. If the operator uses `--link` when starting the new client container, then the client container can access the exposed port via a private @@ -556,34 +762,32 @@ client container to help indicate which interface and port to use. When a new container is created, Docker will set the following environment variables automatically: - - - - +
-    <table>
-     <tr>
-      <th>Variable</th>
-      <th>Value</th>
-     </tr>
+<table>
+ <tr>
+  <th>Variable</th>
+  <th>Value</th>
+ </tr>
-     <tr>
-      <td>HOME</td>
-      <td>Set based on the value of USER</td>
-     </tr>
+ <tr>
+  <td>HOME</td>
+  <td>Set based on the value of USER</td>
+ </tr>
-     <tr>
-      <td>HOSTNAME</td>
-      <td>The hostname associated with the container</td>
-     </tr>
+ <tr>
+  <td>HOSTNAME</td>
+  <td>The hostname associated with the container</td>
+ </tr>
-     <tr>
-      <td>PATH</td>
-      <td>Includes popular directories, such as :<br>
-       /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</td>
-     </tr>
+ <tr>
+  <td>PATH</td>
+  <td>Includes popular directories, such as:<br>
+      /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</td>
+ </tr>
-     <tr>
-      <td>TERM</td>
-      <td>xterm if the container is allocated a psuedo-TTY</td>
-     </tr>
+ <tr>
+  <td>TERM</td>
+  <td>xterm if the container is allocated a pseudo-TTY</td>
+ </tr>
-    </table>
+</table>
@@ -619,7 +823,7 @@ container running Redis:

    # The redis-name container exposed port 6379
    $ sudo docker ps
-    CONTAINER ID        IMAGE                       COMMAND                CREATED             STATUS              PORTS               NAMES
+    CONTAINER ID        IMAGE                        COMMAND                CREATED             STATUS              PORTS               NAMES
    4241164edf6f        $ dockerfiles/redis:latest   /redis-stable/src/re   5 seconds ago       Up 4 seconds        6379/tcp            redis-name
    # Note that there are no public ports exposed since we didn't use -p or -P
@@ -661,7 +865,7 @@ If you restart the source container (`servicename` in this case), the recipient
container's `/etc/hosts` entry will be automatically updated.

> **Note**:
-> Unlike host entries in the `/ets/hosts` file, IP addresses stored in the
+> Unlike host entries in the `/etc/hosts` file, IP addresses stored in the
> environment variables are not automatically updated if the source container is
> restarted. We recommend using the host entries in `/etc/hosts` to resolve the
> IP address of linked containers.
diff --git a/docs/sources/release-notes.md b/docs/sources/release-notes.md
index e6c0ec5d49..37ae6761aa 100644
--- a/docs/sources/release-notes.md
+++ b/docs/sources/release-notes.md
@@ -15,6 +15,9 @@ For a complete list of patches, fixes, and other improvements, see the

*New Features*

+* [1.6] The Docker daemon will no longer ignore unknown commands
+  while processing a `Dockerfile`. Instead it will generate an error and halt
+  processing.
* The Docker daemon now has support for IPv6 networking between containers
  and on the `docker0` bridge. For more information see the
  [IPv6 networking reference](/articles/networking/#ipv6).
@@ -22,7 +25,7 @@ For a complete list of patches, fixes, and other improvements, see the
  container to writing to volumes
  [PR# 10093](https://github.com/docker/docker/pull/10093).
* A new `docker stats CONTAINERID` command has been added to allow users to
  view a continuously updating stream of container resource usage statistics.
See the - [`stats` command line reference](/reference/commandline/cli/#stats) and the + [`stats` command line reference](/reference/commandline/cli/#stats) and the [container `stats` API reference](/reference/api/docker_remote_api_v1.17/#get-container-stats-based-on-resource-usage). **Note**: this feature is only enabled for the `libcontainer` exec-driver at this point. * Users can now specify the file to use as the `Dockerfile` by running diff --git a/docs/sources/static_files/contributors.png b/docs/sources/static_files/contributors.png new file mode 100644 index 0000000000..63c0a0c09b Binary files /dev/null and b/docs/sources/static_files/contributors.png differ diff --git a/docs/sources/userguide/dockerizing.md b/docs/sources/userguide/dockerizing.md index 6f56a56955..cc7bc8e1c1 100644 --- a/docs/sources/userguide/dockerizing.md +++ b/docs/sources/userguide/dockerizing.md @@ -101,7 +101,7 @@ Again we can do this with the `docker run` command: $ sudo docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done" 1e5535038e285177d5214659a068137486f96ee5c2e85a4ac52dc83f2ebe4147 -Wait what? Where's our "Hello world" Let's look at what we've run here. +Wait, what? Where's our "hello world" output? Let's look at what we've run here. It should look pretty familiar. We ran `docker run` but this time we specified a flag: `-d`. The `-d` flag tells Docker to run the container and put it in the background, to daemonize it. @@ -187,7 +187,7 @@ Excellent. Our container has been stopped. # Next steps -Now we've seen how simple it is to get started with Docker let's learn how to +Now we've seen how simple it is to get started with Docker. Let's learn how to do some more advanced tasks. Go to [Working With Containers](/userguide/usingdocker). 
diff --git a/docs/sources/userguide/dockerlinks.md b/docs/sources/userguide/dockerlinks.md index 0a88092ae0..79ba17900e 100644 --- a/docs/sources/userguide/dockerlinks.md +++ b/docs/sources/userguide/dockerlinks.md @@ -25,10 +25,10 @@ container that ran a Python Flask application: > Docker can have a variety of network configurations. You can see more > information on Docker networking [here](/articles/networking/). -When that container was created, the `-P` flag was used to automatically map any -network ports inside it to a random high port from the range 49153 -to 65535 on our Docker host. Next, when `docker ps` was run, you saw that -port 5000 in the container was bound to port 49155 on the host. +When that container was created, the `-P` flag was used to automatically map +any network port inside it to a random high port within an *ephemeral port +range* on your Docker host. Next, when `docker ps` was run, you saw that port +5000 in the container was bound to port 49155 on the host. $ sudo docker ps nostalgic_morse CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES diff --git a/docs/sources/userguide/dockervolumes.md b/docs/sources/userguide/dockervolumes.md index 5a75f5b366..d533224656 100644 --- a/docs/sources/userguide/dockervolumes.md +++ b/docs/sources/userguide/dockervolumes.md @@ -21,19 +21,21 @@ Docker. A *data volume* is a specially-designated directory within one or more containers that bypasses the [*Union File -System*](/terms/layer/#union-file-system) to provide several useful features for -persistent or shared data: +System*](/terms/layer/#union-file-system). 
Data volumes provide several
+useful features for persistent or shared data:

-- Volumes are initialized when a container is created
-- Data volumes can be shared and reused between containers
-- Changes to a data volume are made directly
-- Changes to a data volume will not be included when you update an image
-- Data volumes persist even if the container itself is deleted
+- Volumes are initialized when a container is created. If the container's
+  base image contains data at the specified mount point, that data is
+  copied into the new volume.
+- Data volumes can be shared and reused among containers.
+- Changes to a data volume are made directly.
+- Changes to a data volume will not be included when you update an image.
+- Data volumes persist even if the container itself is deleted.

Data volumes are designed to persist data, independent of the container's life
-cycle. Docker will therefore *never* automatically delete volumes when you remove
-a container, nor will it "garbage collect" volumes that are no longer referenced
-by a container.
+cycle. Docker therefore *never* automatically deletes volumes when you remove
+a container, nor will it "garbage collect" volumes that are no longer
+referenced by a container.

### Adding a data volume
@@ -122,7 +124,7 @@ Let's create a new named container with a volume to share. While this container
doesn't run an application, it reuses the `training/postgres` image so that all
containers are using layers in common, saving disk space.

-    $ sudo docker create -v /dbdata --name dbdata training/postgres
+    $ sudo docker create -v /dbdata --name dbdata training/postgres /bin/true

You can then use the `--volumes-from` flag to mount the `/dbdata` volume in
another container.
@@ -132,6 +134,11 @@ And another:

    $ sudo docker run -d --volumes-from dbdata --name db2 training/postgres

+In this case, if the `postgres` image contained a directory called `/dbdata`,
+then mounting the volumes from the `dbdata` container hides the
+`/dbdata` files from the `postgres` image. The result is that only the files
+from the `dbdata` container are visible.
+
You can use multiple `--volumes-from` parameters to bring together multiple
data volumes from multiple containers.

diff --git a/docs/sources/userguide/labels-custom-metadata.md b/docs/sources/userguide/labels-custom-metadata.md
new file mode 100644
index 0000000000..7cf25c0609
--- /dev/null
+++ b/docs/sources/userguide/labels-custom-metadata.md
@@ -0,0 +1,190 @@
+page_title: Apply custom metadata
+page_description: Learn how to work with custom metadata in Docker, using labels.
+page_keywords: Usage, user guide, labels, metadata, docker, documentation, examples, annotating
+
+# Apply custom metadata
+
+You can apply metadata to your images, containers, or daemons via
+labels. Metadata can serve a wide range of uses. Use labels to add notes or
+licensing information to an image or to identify a host.
+
+A label is a `<key>` / `<value>` pair. Docker stores the label values as
+*strings*. You can specify multiple labels but each `<key>` / `<value>` must be
+unique to avoid overwriting. If you specify the same `key` several times but with
+different values, newer labels overwrite previous labels. Docker uses
+the last `key=value` you supply.
+
+>**Note:** Support for daemon-labels was added in Docker 1.4.1. Labels on
+>containers and images are new in Docker 1.6.0.
+
+## Label keys (namespaces)
+
+Docker puts no hard restrictions on the label `key` you use. However, labels
+with simple keys can conflict.
For example, you can categorize your images by chip architecture using an
+"architecture" label:
+
+    LABEL architecture="amd64"
+
+    LABEL architecture="ARMv7"
+
+But another user might label images by building architectural style:
+
+    LABEL architecture="Art Nouveau"
+
+To prevent naming conflicts, Docker namespaces label keys using a reverse domain
+notation. Use the following guidelines to name your keys:
+
+- All (third-party) tools should prefix their keys with the
+  reverse DNS notation of a domain controlled by the author. For
+  example, `com.example.some-label`.
+
+- The `com.docker.*`, `io.docker.*` and `com.dockerproject.*` namespaces are
+  reserved for Docker's internal use.
+
+- Keys should only consist of lower-cased alphanumeric characters,
+  dots and dashes (for example, `[a-z0-9-.]`).
+
+- Keys should start *and* end with an alphanumeric character.
+
+- Keys may not contain consecutive dots or dashes.
+
+- Keys *without* namespace (dots) are reserved for CLI use. This allows
+  end-users to add metadata to their containers and images without having to
+  type cumbersome namespaces on the command-line.
+
+
+These are guidelines and Docker does not *enforce* them. Failing to follow
+these guidelines can result in conflicting labels. If you're building a tool
+that uses labels, you *should* use namespaces for your label keys.
+
+
+## Store structured data in labels
+
+Label values can contain any data type that can be stored as a string.
For
+example, consider this JSON:
+
+
+    {
+        "Description": "A containerized foobar",
+        "Usage": "docker run --rm example/foobar [args]",
+        "License": "GPL",
+        "Version": "0.0.1-beta",
+        "aBoolean": true,
+        "aNumber" : 0.01234,
+        "aNestedArray": ["a", "b", "c"]
+    }
+
+You can store this structure in a label by serializing it to a string first:
+
+    LABEL com.example.image-specs="{\"Description\":\"A containerized foobar\",\"Usage\":\"docker run --rm example\\/foobar [args]\",\"License\":\"GPL\",\"Version\":\"0.0.1-beta\",\"aBoolean\":true,\"aNumber\":0.01234,\"aNestedArray\":[\"a\",\"b\",\"c\"]}"
+
+While it is *possible* to store structured data in label values, Docker treats
+this data as a 'regular' string. This means that Docker doesn't offer ways to
+query (filter) based on nested properties. If your tool needs to filter on
+nested properties, the tool itself should implement this.
+
+
+## Add labels to images; the `LABEL` instruction
+
+Adding labels to an image:
+
+
+    LABEL [<namespace>.]<key>[=<value>] ...
+
+The `LABEL` instruction adds a label to your image, optionally setting its value.
+Use surrounding quotes or backslashes for labels that contain
+white space characters:
+
+    LABEL vendor=ACME\ Incorporated
+    LABEL com.example.version.is-beta
+    LABEL com.example.version="0.0.1-beta"
+    LABEL com.example.release-date="2015-02-12"
+
+The `LABEL` instruction supports setting multiple labels in a single instruction
+using this notation:
+
+    LABEL com.example.version="0.0.1-beta" com.example.release-date="2015-02-12"
+
+Wrapping is allowed by using a backslash (`\`) as a continuation marker:
+
+    LABEL vendor=ACME\ Incorporated \
+          com.example.is-beta \
+          com.example.version="0.0.1-beta" \
+          com.example.release-date="2015-02-12"
+
+Docker recommends you add multiple labels in a single `LABEL` instruction. Using
+individual instructions for each label can result in an inefficient image. This
+is because each `LABEL` instruction in a Dockerfile produces a new image layer.
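The serialization step above can be sketched in a few lines. This is a hypothetical helper for tooling that generates Dockerfile lines; Docker itself only sees the final escaped string:

```python
import json

# Sketch: serialize structured data to a compact JSON string, then emit a
# Dockerfile LABEL line with embedded quotes escaped. Hypothetical helper;
# Docker just stores whatever string the LABEL instruction supplies.

specs = {
    "Description": "A containerized foobar",
    "License": "GPL",
    "Version": "0.0.1-beta",
}

value = json.dumps(specs, separators=(",", ":"))  # single line, no spaces
label_line = 'LABEL com.example.image-specs="{}"'.format(value.replace('"', '\\"'))

print(label_line)
```

Because the label value is still just a string, a consumer has to `json.loads` it back before it can inspect any nested properties, as noted above.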
+
+You can view the labels via the `docker inspect` command:
+
+    $ docker inspect 4fa6e0f0c678
+
+    ...
+    "Labels": {
+        "vendor": "ACME Incorporated",
+        "com.example.is-beta": "",
+        "com.example.version": "0.0.1-beta",
+        "com.example.release-date": "2015-02-12"
+    }
+    ...
+
+    $ docker inspect -f "{{json .Labels }}" 4fa6e0f0c678
+
+    {"vendor":"ACME Incorporated","com.example.is-beta":"","com.example.version":"0.0.1-beta","com.example.release-date":"2015-02-12"}
+
+
+## Query labels
+
+Besides storing metadata, you can filter images and containers by label. To list
+all running containers that have the `com.example.is-beta` label:
+
+    # List all running containers that have a `com.example.is-beta` label
+    $ docker ps --filter "label=com.example.is-beta"
+
+List all running containers with a `color` label of `blue`:
+
+    $ docker ps --filter "label=color=blue"
+
+List all images with a `vendor` label of `ACME`:
+
+    $ docker images --filter "label=vendor=ACME"
+
+
+## Daemon labels
+
+
+    docker -d \
+      --dns 8.8.8.8 \
+      --dns 8.8.4.4 \
+      -H unix:///var/run/docker.sock \
+      --label com.example.environment="production" \
+      --label com.example.storage="ssd"
+
+These labels appear as part of the `docker info` output for the daemon:
+
+    docker -D info
+    Containers: 12
+    Images: 672
+    Storage Driver: aufs
+     Root Dir: /var/lib/docker/aufs
+     Backing Filesystem: extfs
+     Dirs: 697
+    Execution Driver: native-0.2
+    Kernel Version: 3.13.0-32-generic
+    Operating System: Ubuntu 14.04.1 LTS
+    CPUs: 1
+    Total Memory: 994.1 MiB
+    Name: docker.example.com
+    ID: RC3P:JTCT:32YS:XYSB:YUBG:VFED:AAJZ:W3YW:76XO:D7NN:TEVU:UCRW
+    Debug mode (server): false
+    Debug mode (client): true
+    Fds: 11
+    Goroutines: 14
+    EventsListeners: 0
+    Init Path: /usr/bin/docker
+    Docker Root Dir: /var/lib/docker
+    WARNING: No swap limit support
+    Labels:
+     com.example.environment=production
+     com.example.storage=ssd
diff --git a/docs/sources/userguide/usingdocker.md b/docs/sources/userguide/usingdocker.md
index 12a6b6fb2f..8d57def4ed
100644
--- a/docs/sources/userguide/usingdocker.md
+++ b/docs/sources/userguide/usingdocker.md
@@ -154,11 +154,11 @@ ports exposed in our image to our host.

In this case Docker has exposed port 5000 (the default Python Flask
port) on port 49155.

-Network port bindings are very configurable in Docker. In our last
-example the `-P` flag is a shortcut for `-p 5000` that maps port 5000
-inside the container to a high port (from the range 49153 to 65535) on
-the local Docker host. We can also bind Docker containers to specific
-ports using the `-p` flag, for example:
+Network port bindings are very configurable in Docker. In our last example the
+`-P` flag is a shortcut for `-p 5000` that maps port 5000 inside the container
+to a high port (from the *ephemeral port range*, which typically ranges from
+32768 to 61000) on the local Docker host. We can also bind Docker containers to
+specific ports using the `-p` flag, for example:

    $ sudo docker run -d -p 5000:5000 training/webapp python app.py

diff --git a/engine/MAINTAINERS b/engine/MAINTAINERS
deleted file mode 100644
index aee10c8421..0000000000
--- a/engine/MAINTAINERS
+++ /dev/null
@@ -1 +0,0 @@
-Solomon Hykes (@shykes)
diff --git a/engine/env.go b/engine/env.go
index f370e95ed0..a671f13c6b 100644
--- a/engine/env.go
+++ b/engine/env.go
@@ -7,6 +7,7 @@ import (
	"io"
	"strconv"
	"strings"
+	"time"

	"github.com/docker/docker/utils"
)
@@ -69,6 +70,15 @@ func (env *Env) SetBool(key string, value bool) {
	}
}

+func (env *Env) GetTime(key string) (time.Time, error) {
+	t, err := time.Parse(time.RFC3339Nano, env.Get(key))
+	return t, err
+}
+
+func (env *Env) SetTime(key string, t time.Time) {
+	env.Set(key, t.Format(time.RFC3339Nano))
+}
+
func (env *Env) GetInt(key string) int {
	return int(env.GetInt64(key))
}
diff --git a/engine/env_test.go b/engine/env_test.go
index b0caca9cbd..2ed99d0fea 100644
--- a/engine/env_test.go
+++ b/engine/env_test.go
@@ -4,6 +4,7 @@ import (
	"bytes"
	"encoding/json"
	"testing"
+	"time"
"github.com/docker/docker/pkg/testutils" ) @@ -94,6 +95,27 @@ func TestSetenvBool(t *testing.T) { } } +func TestSetenvTime(t *testing.T) { + job := mkJob(t, "dummy") + + now := time.Now() + job.SetenvTime("foo", now) + if val, err := job.GetenvTime("foo"); err != nil { + t.Fatalf("GetenvTime failed to parse: %v", err) + } else { + nowStr := now.Format(time.RFC3339) + valStr := val.Format(time.RFC3339) + if nowStr != valStr { + t.Fatalf("GetenvTime returns incorrect value: %s, Expected: %s", valStr, nowStr) + } + } + + job.Setenv("bar", "Obviously I'm not a date") + if val, err := job.GetenvTime("bar"); err == nil { + t.Fatalf("GetenvTime was supposed to fail, instead returned: %s", val) + } +} + func TestSetenvInt(t *testing.T) { job := mkJob(t, "dummy") diff --git a/engine/job.go b/engine/job.go index 6c11b13446..4b2befb425 100644 --- a/engine/job.go +++ b/engine/job.go @@ -145,6 +145,14 @@ func (job *Job) SetenvBool(key string, value bool) { job.env.SetBool(key, value) } +func (job *Job) GetenvTime(key string) (value time.Time, err error) { + return job.env.GetTime(key) +} + +func (job *Job) SetenvTime(key string, value time.Time) { + job.env.SetTime(key, value) +} + func (job *Job) GetenvSubEnv(key string) *Env { return job.env.GetSubEnv(key) } diff --git a/engine/job_test.go b/engine/job_test.go index 67e723988e..9f8c76095c 100644 --- a/engine/job_test.go +++ b/engine/job_test.go @@ -58,7 +58,7 @@ func TestJobStderrString(t *testing.T) { eng := New() // FIXME: test multiple combinations of output and status eng.Register("say_something_in_stderr", func(job *Job) Status { - job.Errorf("Warning, something might happen\nHere it comes!\nOh no...\nSomething happened\n") + job.Errorf("Something might happen\nHere it comes!\nOh no...\nSomething happened\n") return StatusOK }) diff --git a/engine/streams.go b/engine/streams.go index ec703c96fa..216fb8980a 100644 --- a/engine/streams.go +++ b/engine/streams.go @@ -5,7 +5,9 @@ import ( "fmt" "io" "io/ioutil" + "strings" 
"sync" + "unicode" ) type Output struct { @@ -16,25 +18,25 @@ type Output struct { } // Tail returns the n last lines of a buffer -// stripped out of the last \n, if any +// stripped out of trailing white spaces, if any. +// // if n <= 0, returns an empty string func Tail(buffer *bytes.Buffer, n int) string { if n <= 0 { return "" } - bytes := buffer.Bytes() - if len(bytes) > 0 && bytes[len(bytes)-1] == '\n' { - bytes = bytes[:len(bytes)-1] - } - for i := buffer.Len() - 2; i >= 0; i-- { - if bytes[i] == '\n' { + s := strings.TrimRightFunc(buffer.String(), unicode.IsSpace) + i := len(s) - 1 + for ; i >= 0 && n > 0; i-- { + if s[i] == '\n' { n-- if n == 0 { - return string(bytes[i+1:]) + break } } } - return string(bytes) + // when i == -1, return the whole string which is s[0:] + return s[i+1:] } // NewOutput returns a new Output object with no destinations attached. diff --git a/engine/streams_test.go b/engine/streams_test.go index 5cfd5d0e6c..476a721baf 100644 --- a/engine/streams_test.go +++ b/engine/streams_test.go @@ -111,6 +111,11 @@ func TestTail(t *testing.T) { "Two\nThree", "One\nTwo\nThree", } + tests["One\nTwo\n\n\n"] = []string{ + "", + "Two", + "One\nTwo", + } for input, outputs := range tests { for n, expectedOutput := range outputs { output := Tail(bytes.NewBufferString(input), n) diff --git a/events/events.go b/events/events.go index 0951f7099d..559bf687e7 100644 --- a/events/events.go +++ b/events/events.go @@ -1,7 +1,10 @@ package events import ( + "bytes" "encoding/json" + "io" + "strings" "sync" "time" @@ -112,11 +115,23 @@ func writeEvent(job *engine.Job, event *utils.JSONMessage, eventFilters filters. 
if v == field { return false } + if strings.Contains(field, ":") { + image := strings.Split(field, ":") + if image[0] == v { + return false + } + } } return true } - if isFiltered(event.Status, eventFilters["event"]) || isFiltered(event.From, eventFilters["image"]) || isFiltered(event.ID, eventFilters["container"]) { + //incoming container filter can be name,id or partial id, convert and replace as a full container id + for i, cn := range eventFilters["container"] { + eventFilters["container"][i] = GetContainerId(job.Eng, cn) + } + + if isFiltered(event.Status, eventFilters["event"]) || isFiltered(event.From, eventFilters["image"]) || + isFiltered(event.ID, eventFilters["container"]) { return nil } @@ -196,3 +211,20 @@ func (e *Events) unsubscribe(l listener) bool { e.mu.Unlock() return false } + +func GetContainerId(eng *engine.Engine, name string) string { + var buf bytes.Buffer + job := eng.Job("container_inspect", name) + + var outStream io.Writer + + outStream = &buf + job.Stdout.Set(outStream) + + if err := job.Run(); err != nil { + return "" + } + var out struct{ ID string } + json.NewDecoder(&buf).Decode(&out) + return out.ID +} diff --git a/graph/MAINTAINERS b/graph/MAINTAINERS deleted file mode 100644 index e409454b5e..0000000000 --- a/graph/MAINTAINERS +++ /dev/null @@ -1,5 +0,0 @@ -Solomon Hykes (@shykes) -Victor Vieux (@vieux) -Michael Crosby (@crosbymichael) -Cristian Staretu (@unclejack) -Tibor Vass (@tiborvass) diff --git a/graph/graph.go b/graph/graph.go index e8917a9c7d..ecb52a0c5a 100644 --- a/graph/graph.go +++ b/graph/graph.go @@ -18,6 +18,7 @@ import ( "github.com/docker/docker/image" "github.com/docker/docker/pkg/archive" "github.com/docker/docker/pkg/common" + "github.com/docker/docker/pkg/progressreader" "github.com/docker/docker/pkg/truncindex" "github.com/docker/docker/runconfig" "github.com/docker/docker/utils" @@ -210,9 +211,17 @@ func (graph *Graph) TempLayerArchive(id string, sf *utils.StreamFormatter, outpu if err != nil { return 
nil, err } - progress := utils.ProgressReader(a, 0, output, sf, false, common.TruncateID(id), "Buffering to disk") - defer progress.Close() - return archive.NewTempArchive(progress, tmp) + progressReader := progressreader.New(progressreader.Config{ + In: a, + Out: output, + Formatter: sf, + Size: 0, + NewLines: false, + ID: common.TruncateID(id), + Action: "Buffering to disk", + }) + defer progressReader.Close() + return archive.NewTempArchive(progressReader, tmp) } // Mktemp creates a temporary sub-directory inside the graph's filesystem. diff --git a/graph/history.go b/graph/history.go index 356340673f..7f5063e912 100644 --- a/graph/history.go +++ b/graph/history.go @@ -5,6 +5,7 @@ import ( "github.com/docker/docker/engine" "github.com/docker/docker/image" + "github.com/docker/docker/utils" ) func (s *TagStore) CmdHistory(job *engine.Job) engine.Status { @@ -24,7 +25,7 @@ func (s *TagStore) CmdHistory(job *engine.Job) engine.Status { if _, exists := lookupMap[id]; !exists { lookupMap[id] = []string{} } - lookupMap[id] = append(lookupMap[id], name+":"+tag) + lookupMap[id] = append(lookupMap[id], utils.ImageReference(name, tag)) } } diff --git a/graph/import.go b/graph/import.go index 41f4b4f3f1..44b1ecbd57 100644 --- a/graph/import.go +++ b/graph/import.go @@ -9,6 +9,7 @@ import ( log "github.com/Sirupsen/logrus" "github.com/docker/docker/engine" "github.com/docker/docker/pkg/archive" + "github.com/docker/docker/pkg/progressreader" "github.com/docker/docker/runconfig" "github.com/docker/docker/utils" ) @@ -48,7 +49,15 @@ func (s *TagStore) CmdImport(job *engine.Job) engine.Status { if err != nil { return job.Error(err) } - progressReader := utils.ProgressReader(resp.Body, int(resp.ContentLength), job.Stdout, sf, true, "", "Importing") + progressReader := progressreader.New(progressreader.Config{ + In: resp.Body, + Out: job.Stdout, + Formatter: sf, + Size: int(resp.ContentLength), + NewLines: true, + ID: "", + Action: "Importing", + }) defer progressReader.Close() 
archive = progressReader } @@ -79,7 +88,7 @@ func (s *TagStore) CmdImport(job *engine.Job) engine.Status { job.Stdout.Write(sf.FormatStatus("", img.ID)) logID := img.ID if tag != "" { - logID += ":" + tag + logID = utils.ImageReference(logID, tag) } if err = job.Eng.Job("log", "import", logID, "").Run(); err != nil { log.Errorf("Error logging event 'import' for %s: %s", logID, err) diff --git a/graph/list.go b/graph/list.go index 49d4072be5..9f7bccdfaa 100644 --- a/graph/list.go +++ b/graph/list.go @@ -1,7 +1,6 @@ package graph import ( - "fmt" "log" "path" "strings" @@ -9,15 +8,20 @@ import ( "github.com/docker/docker/engine" "github.com/docker/docker/image" "github.com/docker/docker/pkg/parsers/filters" + "github.com/docker/docker/utils" ) -var acceptedImageFilterTags = map[string]struct{}{"dangling": {}} +var acceptedImageFilterTags = map[string]struct{}{ + "dangling": {}, + "label": {}, +} func (s *TagStore) CmdImages(job *engine.Job) engine.Status { var ( allImages map[string]*image.Image err error filt_tagged = true + filt_label = false ) imageFilters, err := filters.FromParam(job.Getenv("filters")) @@ -38,6 +42,8 @@ func (s *TagStore) CmdImages(job *engine.Job) engine.Status { } } + _, filt_label = imageFilters["label"] + if job.GetenvBool("all") && filt_tagged { allImages, err = s.graph.Map() } else { @@ -48,34 +54,51 @@ func (s *TagStore) CmdImages(job *engine.Job) engine.Status { } lookup := make(map[string]*engine.Env) s.Lock() - for name, repository := range s.Repositories { + for repoName, repository := range s.Repositories { if job.Getenv("filter") != "" { - if match, _ := path.Match(job.Getenv("filter"), name); !match { + if match, _ := path.Match(job.Getenv("filter"), repoName); !match { continue } } - for tag, id := range repository { + for ref, id := range repository { + imgRef := utils.ImageReference(repoName, ref) image, err := s.graph.Get(id) if err != nil { - log.Printf("Warning: couldn't load %s from %s/%s: %s", id, name, tag, err) + 
log.Printf("Warning: couldn't load %s from %s: %s", id, imgRef, err) continue } if out, exists := lookup[id]; exists { if filt_tagged { - out.SetList("RepoTags", append(out.GetList("RepoTags"), fmt.Sprintf("%s:%s", name, tag))) + if utils.DigestReference(ref) { + out.SetList("RepoDigests", append(out.GetList("RepoDigests"), imgRef)) + } else { // Tag Ref. + out.SetList("RepoTags", append(out.GetList("RepoTags"), imgRef)) + } } } else { // get the boolean list for if only the untagged images are requested delete(allImages, id) + if !imageFilters.MatchKVList("label", image.ContainerConfig.Labels) { + continue + } if filt_tagged { out := &engine.Env{} out.SetJson("ParentId", image.Parent) - out.SetList("RepoTags", []string{fmt.Sprintf("%s:%s", name, tag)}) out.SetJson("Id", image.ID) out.SetInt64("Created", image.Created.Unix()) out.SetInt64("Size", image.Size) out.SetInt64("VirtualSize", image.GetParentsSize(0)+image.Size) + out.SetJson("Labels", image.ContainerConfig.Labels) + + if utils.DigestReference(ref) { + out.SetList("RepoTags", []string{}) + out.SetList("RepoDigests", []string{imgRef}) + } else { + out.SetList("RepoTags", []string{imgRef}) + out.SetList("RepoDigests", []string{}) + } + lookup[id] = out } } @@ -90,15 +113,20 @@ func (s *TagStore) CmdImages(job *engine.Job) engine.Status { } // Display images which aren't part of a repository/tag - if job.Getenv("filter") == "" { + if job.Getenv("filter") == "" || filt_label { for _, image := range allImages { + if !imageFilters.MatchKVList("label", image.ContainerConfig.Labels) { + continue + } out := &engine.Env{} out.SetJson("ParentId", image.Parent) out.SetList("RepoTags", []string{":"}) + out.SetList("RepoDigests", []string{"@"}) out.SetJson("Id", image.ID) out.SetInt64("Created", image.Created.Unix()) out.SetInt64("Size", image.Size) out.SetInt64("VirtualSize", image.GetParentsSize(0)+image.Size) + out.SetJson("Labels", image.ContainerConfig.Labels) outs.Add(out) } } diff --git a/graph/manifest.go 
b/graph/manifest.go index 4b9f9a631b..3b1d825576 100644 --- a/graph/manifest.go +++ b/graph/manifest.go @@ -4,119 +4,21 @@ import ( "bytes" "encoding/json" "fmt" - "io" - "io/ioutil" log "github.com/Sirupsen/logrus" + "github.com/docker/distribution/digest" "github.com/docker/docker/engine" - "github.com/docker/docker/pkg/tarsum" "github.com/docker/docker/registry" - "github.com/docker/docker/runconfig" + "github.com/docker/docker/utils" "github.com/docker/libtrust" ) -func (s *TagStore) newManifest(localName, remoteName, tag string) ([]byte, error) { - manifest := ®istry.ManifestData{ - Name: remoteName, - Tag: tag, - SchemaVersion: 1, - } - localRepo, err := s.Get(localName) - if err != nil { - return nil, err - } - if localRepo == nil { - return nil, fmt.Errorf("Repo does not exist: %s", localName) - } - - // Get the top-most layer id which the tag points to - layerId, exists := localRepo[tag] - if !exists { - return nil, fmt.Errorf("Tag does not exist for %s: %s", localName, tag) - } - layersSeen := make(map[string]bool) - - layer, err := s.graph.Get(layerId) - if err != nil { - return nil, err - } - manifest.Architecture = layer.Architecture - manifest.FSLayers = make([]*registry.FSLayer, 0, 4) - manifest.History = make([]*registry.ManifestHistory, 0, 4) - var metadata runconfig.Config - if layer.Config != nil { - metadata = *layer.Config - } - - for ; layer != nil; layer, err = layer.GetParent() { - if err != nil { - return nil, err - } - - if layersSeen[layer.ID] { - break - } - if layer.Config != nil && metadata.Image != layer.ID { - err = runconfig.Merge(&metadata, layer.Config) - if err != nil { - return nil, err - } - } - - checksum, err := layer.GetCheckSum(s.graph.ImageRoot(layer.ID)) - if err != nil { - return nil, fmt.Errorf("Error getting image checksum: %s", err) - } - if tarsum.VersionLabelForChecksum(checksum) != tarsum.Version1.String() { - archive, err := layer.TarLayer() - if err != nil { - return nil, err - } - - defer archive.Close() - - 
tarSum, err := tarsum.NewTarSum(archive, true, tarsum.Version1) - if err != nil { - return nil, err - } - if _, err := io.Copy(ioutil.Discard, tarSum); err != nil { - return nil, err - } - - checksum = tarSum.Sum(nil) - - // Save checksum value - if err := layer.SaveCheckSum(s.graph.ImageRoot(layer.ID), checksum); err != nil { - return nil, err - } - } - - jsonData, err := layer.RawJson() - if err != nil { - return nil, fmt.Errorf("Cannot retrieve the path for {%s}: %s", layer.ID, err) - } - - manifest.FSLayers = append(manifest.FSLayers, ®istry.FSLayer{BlobSum: checksum}) - - layersSeen[layer.ID] = true - - manifest.History = append(manifest.History, ®istry.ManifestHistory{V1Compatibility: string(jsonData)}) - } - - manifestBytes, err := json.MarshalIndent(manifest, "", " ") - if err != nil { - return nil, err - } - - return manifestBytes, nil -} - // loadManifest loads a manifest from a byte array and verifies its content. // The signature must be verified or an error is returned. If the manifest // contains no signatures by a trusted key for the name in the manifest, the // image is not considered verified. The parsed manifest object and a boolean // for whether the manifest is verified is returned. 
-func (s *TagStore) loadManifest(eng *engine.Engine, manifestBytes []byte) (*registry.ManifestData, bool, error) { +func (s *TagStore) loadManifest(eng *engine.Engine, manifestBytes []byte, dgst, ref string) (*registry.ManifestData, bool, error) { sig, err := libtrust.ParsePrettySignature(manifestBytes, "signatures") if err != nil { return nil, false, fmt.Errorf("error parsing payload: %s", err) @@ -132,6 +34,31 @@ func (s *TagStore) loadManifest(eng *engine.Engine, manifestBytes []byte) (*regi return nil, false, fmt.Errorf("error retrieving payload: %s", err) } + var manifestDigest digest.Digest + + if dgst != "" { + manifestDigest, err = digest.ParseDigest(dgst) + if err != nil { + return nil, false, fmt.Errorf("invalid manifest digest from registry: %s", err) + } + + dgstVerifier, err := digest.NewDigestVerifier(manifestDigest) + if err != nil { + return nil, false, fmt.Errorf("unable to verify manifest digest from registry: %s", err) + } + + dgstVerifier.Write(payload) + + if !dgstVerifier.Verified() { + computedDigest, _ := digest.FromBytes(payload) + return nil, false, fmt.Errorf("unable to verify manifest digest: registry has %q, computed %q", manifestDigest, computedDigest) + } + } + + if utils.DigestReference(ref) && ref != manifestDigest.String() { + return nil, false, fmt.Errorf("mismatching image manifest digest: got %q, expected %q", manifestDigest, ref) + } + var manifest registry.ManifestData if err := json.Unmarshal(payload, &manifest); err != nil { return nil, false, fmt.Errorf("error unmarshalling manifest: %s", err) diff --git a/graph/manifest_test.go b/graph/manifest_test.go index 6084ace776..9137041827 100644 --- a/graph/manifest_test.go +++ b/graph/manifest_test.go @@ -2,11 +2,16 @@ package graph import ( "encoding/json" + "fmt" + "io" + "io/ioutil" "os" "testing" "github.com/docker/docker/image" + "github.com/docker/docker/pkg/tarsum" "github.com/docker/docker/registry" + "github.com/docker/docker/runconfig" "github.com/docker/docker/utils" ) 
@@ -17,6 +22,102 @@ const ( testManifestTag = "manifesttest" ) +func (s *TagStore) newManifest(localName, remoteName, tag string) ([]byte, error) { + manifest := ®istry.ManifestData{ + Name: remoteName, + Tag: tag, + SchemaVersion: 1, + } + localRepo, err := s.Get(localName) + if err != nil { + return nil, err + } + if localRepo == nil { + return nil, fmt.Errorf("Repo does not exist: %s", localName) + } + + // Get the top-most layer id which the tag points to + layerId, exists := localRepo[tag] + if !exists { + return nil, fmt.Errorf("Tag does not exist for %s: %s", localName, tag) + } + layersSeen := make(map[string]bool) + + layer, err := s.graph.Get(layerId) + if err != nil { + return nil, err + } + manifest.Architecture = layer.Architecture + manifest.FSLayers = make([]*registry.FSLayer, 0, 4) + manifest.History = make([]*registry.ManifestHistory, 0, 4) + var metadata runconfig.Config + if layer.Config != nil { + metadata = *layer.Config + } + + for ; layer != nil; layer, err = layer.GetParent() { + if err != nil { + return nil, err + } + + if layersSeen[layer.ID] { + break + } + if layer.Config != nil && metadata.Image != layer.ID { + err = runconfig.Merge(&metadata, layer.Config) + if err != nil { + return nil, err + } + } + + checksum, err := layer.GetCheckSum(s.graph.ImageRoot(layer.ID)) + if err != nil { + return nil, fmt.Errorf("Error getting image checksum: %s", err) + } + if tarsum.VersionLabelForChecksum(checksum) != tarsum.Version1.String() { + archive, err := layer.TarLayer() + if err != nil { + return nil, err + } + + defer archive.Close() + + tarSum, err := tarsum.NewTarSum(archive, true, tarsum.Version1) + if err != nil { + return nil, err + } + if _, err := io.Copy(ioutil.Discard, tarSum); err != nil { + return nil, err + } + + checksum = tarSum.Sum(nil) + + // Save checksum value + if err := layer.SaveCheckSum(s.graph.ImageRoot(layer.ID), checksum); err != nil { + return nil, err + } + } + + jsonData, err := layer.RawJson() + if err != nil { + 
return nil, fmt.Errorf("Cannot retrieve the path for {%s}: %s", layer.ID, err) + } + + manifest.FSLayers = append(manifest.FSLayers, ®istry.FSLayer{BlobSum: checksum}) + + layersSeen[layer.ID] = true + + manifest.History = append(manifest.History, ®istry.ManifestHistory{V1Compatibility: string(jsonData)}) + } + + manifestBytes, err := json.MarshalIndent(manifest, "", " ") + if err != nil { + return nil, err + } + + return manifestBytes, nil +} + func TestManifestTarsumCache(t *testing.T) { tmp, err := utils.TestDirectory("") if err != nil { diff --git a/graph/pull.go b/graph/pull.go index bbf887fb75..c01152a248 100644 --- a/graph/pull.go +++ b/graph/pull.go @@ -11,17 +11,18 @@ import ( "time" log "github.com/Sirupsen/logrus" + "github.com/docker/distribution/digest" "github.com/docker/docker/engine" "github.com/docker/docker/image" "github.com/docker/docker/pkg/common" - "github.com/docker/docker/pkg/tarsum" + "github.com/docker/docker/pkg/progressreader" "github.com/docker/docker/registry" "github.com/docker/docker/utils" ) func (s *TagStore) CmdPull(job *engine.Job) engine.Status { if n := len(job.Args); n != 1 && n != 2 { - return job.Errorf("Usage: %s IMAGE [TAG]", job.Name) + return job.Errorf("Usage: %s IMAGE [TAG|DIGEST]", job.Name) } var ( @@ -45,7 +46,7 @@ func (s *TagStore) CmdPull(job *engine.Job) engine.Status { job.GetenvJson("authConfig", authConfig) job.GetenvJson("metaHeaders", &metaHeaders) - c, err := s.poolAdd("pull", repoInfo.LocalName+":"+tag) + c, err := s.poolAdd("pull", utils.ImageReference(repoInfo.LocalName, tag)) if err != nil { if c != nil { // Another pull of the same repository is already taking place; just wait for it to finish @@ -55,7 +56,7 @@ func (s *TagStore) CmdPull(job *engine.Job) engine.Status { } return job.Error(err) } - defer s.poolRemove("pull", repoInfo.LocalName+":"+tag) + defer s.poolRemove("pull", utils.ImageReference(repoInfo.LocalName, tag)) log.Debugf("pulling image from host %q with remote name %q", 
repoInfo.Index.Name, repoInfo.RemoteName) endpoint, err := repoInfo.GetEndpoint() @@ -70,7 +71,7 @@ func (s *TagStore) CmdPull(job *engine.Job) engine.Status { logName := repoInfo.LocalName if tag != "" { - logName += ":" + tag + logName = utils.ImageReference(logName, tag) } if len(repoInfo.Index.Mirrors) == 0 && ((repoInfo.Official && repoInfo.Index.Official) || endpoint.Version == registry.APIVersion2) { @@ -112,7 +113,7 @@ func (s *TagStore) pullRepository(r *registry.Session, out io.Writer, repoInfo * repoData, err := r.GetRepositoryData(repoInfo.RemoteName) if err != nil { if strings.Contains(err.Error(), "HTTP code: 404") { - return fmt.Errorf("Error: image %s:%s not found", repoInfo.RemoteName, askedTag) + return fmt.Errorf("Error: image %s not found", utils.ImageReference(repoInfo.RemoteName, askedTag)) } // Unexpected HTTP error return err @@ -258,7 +259,7 @@ func (s *TagStore) pullRepository(r *registry.Session, out io.Writer, repoInfo * requestedTag := repoInfo.CanonicalName if len(askedTag) > 0 { - requestedTag = repoInfo.CanonicalName + ":" + askedTag + requestedTag = utils.ImageReference(repoInfo.CanonicalName, askedTag) } WriteStatus(requestedTag, out, sf, layers_downloaded) return nil @@ -337,7 +338,15 @@ func (s *TagStore) pullImage(r *registry.Session, out io.Writer, imgID, endpoint defer layer.Close() err = s.graph.Register(img, - utils.ProgressReader(layer, imgSize, out, sf, false, common.TruncateID(id), "Downloading")) + progressreader.New(progressreader.Config{ + In: layer, + Out: out, + Formatter: sf, + Size: imgSize, + NewLines: false, + ID: common.TruncateID(id), + Action: "Downloading", + })) if terr, ok := err.(net.Error); ok && terr.Timeout() && j < retries { time.Sleep(time.Duration(j) * 500 * time.Millisecond) continue @@ -366,6 +375,7 @@ func WriteStatus(requestedTag string, out io.Writer, sf *utils.StreamFormatter, type downloadInfo struct { imgJSON []byte img *image.Image + digest digest.Digest tmpFile *os.File length int64 
downloaded bool @@ -412,7 +422,7 @@ func (s *TagStore) pullV2Repository(eng *engine.Engine, r *registry.Session, out requestedTag := repoInfo.CanonicalName if len(tag) > 0 { - requestedTag = repoInfo.CanonicalName + ":" + tag + requestedTag = utils.ImageReference(repoInfo.CanonicalName, tag) } WriteStatus(requestedTag, out, sf, layersDownloaded) return nil @@ -420,12 +430,15 @@ func (s *TagStore) pullV2Repository(eng *engine.Engine, r *registry.Session, out func (s *TagStore) pullV2Tag(eng *engine.Engine, r *registry.Session, out io.Writer, endpoint *registry.Endpoint, repoInfo *registry.RepositoryInfo, tag string, sf *utils.StreamFormatter, parallel bool, auth *registry.RequestAuthorization) (bool, error) { log.Debugf("Pulling tag from V2 registry: %q", tag) - manifestBytes, err := r.GetV2ImageManifest(endpoint, repoInfo.RemoteName, tag, auth) + + manifestBytes, manifestDigest, err := r.GetV2ImageManifest(endpoint, repoInfo.RemoteName, tag, auth) if err != nil { return false, err } - manifest, verified, err := s.loadManifest(eng, manifestBytes) + // loadManifest ensures that the manifest payload has the expected digest + // if the tag is a digest reference. 
+ manifest, verified, err := s.loadManifest(eng, manifestBytes, manifestDigest, tag) if err != nil { return false, fmt.Errorf("error verifying manifest: %s", err) } @@ -435,7 +448,7 @@ func (s *TagStore) pullV2Tag(eng *engine.Engine, r *registry.Session, out io.Wri } if verified { - log.Printf("Image manifest for %s:%s has been verified", repoInfo.CanonicalName, tag) + log.Printf("Image manifest for %s has been verified", utils.ImageReference(repoInfo.CanonicalName, tag)) } out.Write(sf.FormatStatus(tag, "Pulling from %s", repoInfo.CanonicalName)) @@ -459,11 +472,12 @@ func (s *TagStore) pullV2Tag(eng *engine.Engine, r *registry.Session, out io.Wri continue } - chunks := strings.SplitN(sumStr, ":", 2) - if len(chunks) < 2 { - return false, fmt.Errorf("expected 2 parts in the sumStr, got %#v", chunks) + dgst, err := digest.ParseDigest(sumStr) + if err != nil { + return false, err } - sumType, checksum := chunks[0], chunks[1] + downloads[i].digest = dgst + out.Write(sf.FormatProgress(common.TruncateID(img.ID), "Pulling fs layer", nil)) downloadFunc := func(di *downloadInfo) error { @@ -484,24 +498,33 @@ func (s *TagStore) pullV2Tag(eng *engine.Engine, r *registry.Session, out io.Wri return err } - r, l, err := r.GetV2ImageBlobReader(endpoint, repoInfo.RemoteName, sumType, checksum, auth) + r, l, err := r.GetV2ImageBlobReader(endpoint, repoInfo.RemoteName, di.digest.Algorithm(), di.digest.Hex(), auth) if err != nil { return err } defer r.Close() - // Wrap the reader with the appropriate TarSum reader. 
- tarSumReader, err := tarsum.NewTarSumForLabel(r, true, sumType) + verifier, err := digest.NewDigestVerifier(di.digest) if err != nil { - return fmt.Errorf("unable to wrap image blob reader with TarSum: %s", err) + return err } - io.Copy(tmpFile, utils.ProgressReader(ioutil.NopCloser(tarSumReader), int(l), out, sf, false, common.TruncateID(img.ID), "Downloading")) + if _, err := io.Copy(tmpFile, progressreader.New(progressreader.Config{ + In: ioutil.NopCloser(io.TeeReader(r, verifier)), + Out: out, + Formatter: sf, + Size: int(l), + NewLines: false, + ID: common.TruncateID(img.ID), + Action: "Downloading", + })); err != nil { + return fmt.Errorf("unable to copy v2 image blob data: %s", err) + } out.Write(sf.FormatProgress(common.TruncateID(img.ID), "Verifying Checksum", nil)) - if finalChecksum := tarSumReader.Sum(nil); !strings.EqualFold(finalChecksum, sumStr) { - log.Infof("Image verification failed: checksum mismatch - expected %q but got %q", sumStr, finalChecksum) + if !verifier.Verified() { + log.Infof("Image verification failed: checksum mismatch for %q", di.digest.String()) verified = false } @@ -530,7 +553,7 @@ func (s *TagStore) pullV2Tag(eng *engine.Engine, r *registry.Session, out io.Wri } } - var layersDownloaded bool + var tagUpdated bool for i := len(downloads) - 1; i >= 0; i-- { d := &downloads[i] if d.err != nil { @@ -546,7 +569,14 @@ func (s *TagStore) pullV2Tag(eng *engine.Engine, r *registry.Session, out io.Wri d.tmpFile.Seek(0, 0) if d.tmpFile != nil { err = s.graph.Register(d.img, - utils.ProgressReader(d.tmpFile, int(d.length), out, sf, false, common.TruncateID(d.img.ID), "Extracting")) + progressreader.New(progressreader.Config{ + In: d.tmpFile, + Out: out, + Formatter: sf, + Size: int(d.length), + ID: common.TruncateID(d.img.ID), + Action: "Extracting", + })) if err != nil { return false, err } @@ -554,20 +584,44 @@ func (s *TagStore) pullV2Tag(eng *engine.Engine, r *registry.Session, out io.Wri // FIXME: Pool release here for parallel tag 
pull (ensures any downloads block until fully extracted) } out.Write(sf.FormatProgress(common.TruncateID(d.img.ID), "Pull complete", nil)) - layersDownloaded = true + tagUpdated = true } else { out.Write(sf.FormatProgress(common.TruncateID(d.img.ID), "Already exists", nil)) } } - if verified && layersDownloaded { - out.Write(sf.FormatStatus(repoInfo.CanonicalName+":"+tag, "The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.")) + // Check for new tag if no layers downloaded + if !tagUpdated { + repo, err := s.Get(repoInfo.LocalName) + if err != nil { + return false, err + } + if repo != nil { + if _, exists := repo[tag]; !exists { + tagUpdated = true + } + } } - if err = s.Set(repoInfo.LocalName, tag, downloads[0].img.ID, true); err != nil { - return false, err + if verified && tagUpdated { + out.Write(sf.FormatStatus(utils.ImageReference(repoInfo.CanonicalName, tag), "The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.")) } - return layersDownloaded, nil + if manifestDigest != "" { + out.Write(sf.FormatStatus("", "Digest: %s", manifestDigest)) + } + + if utils.DigestReference(tag) { + if err = s.SetDigest(repoInfo.LocalName, tag, downloads[0].img.ID); err != nil { + return false, err + } + } else { + // only set the repository/tag -> image ID mapping when pulling by tag (i.e. 
not by digest) + if err = s.Set(repoInfo.LocalName, tag, downloads[0].img.ID, true); err != nil { + return false, err + } + } + + return tagUpdated, nil } diff --git a/graph/push.go b/graph/push.go index bce413549c..5a4f0d1de9 100644 --- a/graph/push.go +++ b/graph/push.go @@ -1,7 +1,8 @@ package graph import ( - "bytes" + "crypto/sha256" + "encoding/json" "errors" "fmt" "io" @@ -12,10 +13,13 @@ import ( "sync" log "github.com/Sirupsen/logrus" + "github.com/docker/distribution/digest" "github.com/docker/docker/engine" "github.com/docker/docker/image" "github.com/docker/docker/pkg/common" + "github.com/docker/docker/pkg/progressreader" "github.com/docker/docker/registry" + "github.com/docker/docker/runconfig" "github.com/docker/docker/utils" "github.com/docker/libtrust" ) @@ -32,8 +36,15 @@ func (s *TagStore) getImageList(localRepo map[string]string, requestedTag string for tag, id := range localRepo { if requestedTag != "" && requestedTag != tag { + // Include only the requested tag. continue } + + if utils.DigestReference(tag) { + // Ignore digest references. 
+ continue + } + var imageListForThisTag []string tagsByImage[id] = append(tagsByImage[id], tag) @@ -69,21 +80,19 @@ func (s *TagStore) getImageList(localRepo map[string]string, requestedTag string return imageList, tagsByImage, nil } -func (s *TagStore) getImageTags(localName, askedTag string) ([]string, error) { - localRepo, err := s.Get(localName) - if err != nil { - return nil, err - } +func (s *TagStore) getImageTags(localRepo map[string]string, askedTag string) ([]string, error) { log.Debugf("Checking %s against %#v", askedTag, localRepo) if len(askedTag) > 0 { - if _, ok := localRepo[askedTag]; !ok { - return nil, fmt.Errorf("Tag does not exist for %s:%s", localName, askedTag) + if _, ok := localRepo[askedTag]; !ok || utils.DigestReference(askedTag) { + return nil, fmt.Errorf("Tag does not exist: %s", askedTag) } return []string{askedTag}, nil } var tags []string for tag := range localRepo { - tags = append(tags, tag) + if !utils.DigestReference(tag) { + tags = append(tags, tag) + } } return tags, nil } @@ -259,7 +268,16 @@ func (s *TagStore) pushImage(r *registry.Session, out io.Writer, imgID, ep strin // Send the layer log.Debugf("rendered layer for %s of [%d] size", imgData.ID, layerData.Size) - checksum, checksumPayload, err := r.PushImageLayerRegistry(imgData.ID, utils.ProgressReader(layerData, int(layerData.Size), out, sf, false, common.TruncateID(imgData.ID), "Pushing"), ep, token, jsonRaw) + checksum, checksumPayload, err := r.PushImageLayerRegistry(imgData.ID, + progressreader.New(progressreader.Config{ + In: layerData, + Out: out, + Formatter: sf, + Size: int(layerData.Size), + NewLines: false, + ID: common.TruncateID(imgData.ID), + Action: "Pushing", + }), ep, token, jsonRaw) if err != nil { return "", err } @@ -274,14 +292,7 @@ func (s *TagStore) pushImage(r *registry.Session, out io.Writer, imgID, ep strin return imgData.Checksum, nil } -func (s *TagStore) pushV2Repository(r *registry.Session, eng *engine.Engine, out io.Writer, repoInfo 
*registry.RepositoryInfo, tag string, sf *utils.StreamFormatter) error { - if repoInfo.Official { - j := eng.Job("trust_update_base") - if err := j.Run(); err != nil { - log.Errorf("error updating trust base graph: %s", err) - } - } - +func (s *TagStore) pushV2Repository(r *registry.Session, localRepo Repository, out io.Writer, repoInfo *registry.RepositoryInfo, tag string, sf *utils.StreamFormatter) error { endpoint, err := r.V2RegistryEndpoint(repoInfo.Index) if err != nil { if repoInfo.Index.Official { @@ -291,7 +302,7 @@ func (s *TagStore) pushV2Repository(r *registry.Session, eng *engine.Engine, out return fmt.Errorf("error getting registry endpoint: %s", err) } - tags, err := s.getImageTags(repoInfo.LocalName, tag) + tags, err := s.getImageTags(localRepo, tag) if err != nil { return err } @@ -305,8 +316,102 @@ func (s *TagStore) pushV2Repository(r *registry.Session, eng *engine.Engine, out } for _, tag := range tags { + log.Debugf("Pushing repository: %s:%s", repoInfo.CanonicalName, tag) + + layerId, exists := localRepo[tag] + if !exists { + return fmt.Errorf("tag does not exist: %s", tag) + } + + layer, err := s.graph.Get(layerId) + if err != nil { + return err + } + + m := ®istry.ManifestData{ + SchemaVersion: 1, + Name: repoInfo.RemoteName, + Tag: tag, + Architecture: layer.Architecture, + } + var metadata runconfig.Config + if layer.Config != nil { + metadata = *layer.Config + } + + layersSeen := make(map[string]bool) + layers := []*image.Image{layer} + for ; layer != nil; layer, err = layer.GetParent() { + if err != nil { + return err + } + + if layersSeen[layer.ID] { + break + } + layers = append(layers, layer) + layersSeen[layer.ID] = true + } + m.FSLayers = make([]*registry.FSLayer, len(layers)) + m.History = make([]*registry.ManifestHistory, len(layers)) + + // Schema version 1 requires layer ordering from top to root + for i, layer := range layers { + log.Debugf("Pushing layer: %s", layer.ID) + + if layer.Config != nil && metadata.Image != layer.ID 
{ + err = runconfig.Merge(&metadata, layer.Config) + if err != nil { + return err + } + } + jsonData, err := layer.RawJson() + if err != nil { + return fmt.Errorf("cannot retrieve the path for %s: %s", layer.ID, err) + } + + checksum, err := layer.GetCheckSum(s.graph.ImageRoot(layer.ID)) + if err != nil { + return fmt.Errorf("error getting image checksum: %s", err) + } + + var exists bool + if len(checksum) > 0 { + sumParts := strings.SplitN(checksum, ":", 2) + if len(sumParts) < 2 { + return fmt.Errorf("Invalid checksum: %s", checksum) + } + + // Call mount blob + exists, err = r.HeadV2ImageBlob(endpoint, repoInfo.RemoteName, sumParts[0], sumParts[1], auth) + if err != nil { + out.Write(sf.FormatProgress(common.TruncateID(layer.ID), "Image push failed", nil)) + return err + } + } + if !exists { + if cs, err := s.pushV2Image(r, layer, endpoint, repoInfo.RemoteName, sf, out, auth); err != nil { + return err + } else if cs != checksum { + // Cache new checksum + if err := layer.SaveCheckSum(s.graph.ImageRoot(layer.ID), cs); err != nil { + return err + } + checksum = cs + } + } else { + out.Write(sf.FormatProgress(common.TruncateID(layer.ID), "Image already exists", nil)) + } + m.FSLayers[i] = ®istry.FSLayer{BlobSum: checksum} + m.History[i] = ®istry.ManifestHistory{V1Compatibility: string(jsonData)} + } + + if err := checkValidManifest(m); err != nil { + return fmt.Errorf("invalid manifest: %s", err) + } + log.Debugf("Pushing %s:%s to v2 repository", repoInfo.LocalName, tag) - mBytes, err := s.newManifest(repoInfo.LocalName, repoInfo.RemoteName, tag) + mBytes, err := json.MarshalIndent(m, "", " ") if err != nil { return err } @@ -325,99 +430,65 @@ func (s *TagStore) pushV2Repository(r *registry.Session, eng *engine.Engine, out } log.Infof("Signed manifest for %s:%s using daemon's key: %s", repoInfo.LocalName, tag, s.trustKey.KeyID()) - manifestBytes := string(signedBody) - - manifest, verified, err := s.loadManifest(eng, signedBody) - if err != nil { - return 
fmt.Errorf("error verifying manifest: %s", err) - } - - if err := checkValidManifest(manifest); err != nil { - return fmt.Errorf("invalid manifest: %s", err) - } - - if verified { - log.Infof("Pushing verified image, key %s is registered for %q", s.trustKey.KeyID(), repoInfo.RemoteName) - } - - for i := len(manifest.FSLayers) - 1; i >= 0; i-- { - var ( - sumStr = manifest.FSLayers[i].BlobSum - imgJSON = []byte(manifest.History[i].V1Compatibility) - ) - - sumParts := strings.SplitN(sumStr, ":", 2) - if len(sumParts) < 2 { - return fmt.Errorf("Invalid checksum: %s", sumStr) - } - manifestSum := sumParts[1] - - img, err := image.NewImgJSON(imgJSON) - if err != nil { - return fmt.Errorf("Failed to parse json: %s", err) - } - - // Call mount blob - exists, err := r.HeadV2ImageBlob(endpoint, repoInfo.RemoteName, sumParts[0], manifestSum, auth) - if err != nil { - out.Write(sf.FormatProgress(common.TruncateID(img.ID), "Image push failed", nil)) - return err - } - - if !exists { - if err := s.pushV2Image(r, img, endpoint, repoInfo.RemoteName, sumParts[0], manifestSum, sf, out, auth); err != nil { - return err - } - } else { - out.Write(sf.FormatProgress(common.TruncateID(img.ID), "Image already exists", nil)) - } - } - // push the manifest - if err := r.PutV2ImageManifest(endpoint, repoInfo.RemoteName, tag, bytes.NewReader([]byte(manifestBytes)), auth); err != nil { + digest, err := r.PutV2ImageManifest(endpoint, repoInfo.RemoteName, tag, signedBody, mBytes, auth) + if err != nil { return err } + + out.Write(sf.FormatStatus("", "Digest: %s", digest)) } return nil } // PushV2Image pushes the image content to the v2 registry, first buffering the contents to disk -func (s *TagStore) pushV2Image(r *registry.Session, img *image.Image, endpoint *registry.Endpoint, imageName, sumType, sumStr string, sf *utils.StreamFormatter, out io.Writer, auth *registry.RequestAuthorization) error { +func (s *TagStore) pushV2Image(r *registry.Session, img *image.Image, endpoint 
*registry.Endpoint, imageName string, sf *utils.StreamFormatter, out io.Writer, auth *registry.RequestAuthorization) (string, error) { out.Write(sf.FormatProgress(common.TruncateID(img.ID), "Buffering to Disk", nil)) image, err := s.graph.Get(img.ID) if err != nil { - return err + return "", err } arch, err := image.TarLayer() if err != nil { - return err + return "", err } defer arch.Close() tf, err := s.graph.newTempFile() if err != nil { - return err + return "", err } defer func() { tf.Close() os.Remove(tf.Name()) }() - size, err := bufferToFile(tf, arch) + h := sha256.New() + size, err := bufferToFile(tf, io.TeeReader(arch, h)) if err != nil { - return err + return "", err } + dgst := digest.NewDigest("sha256", h) // Send the layer log.Debugf("rendered layer for %s of [%d] size", img.ID, size) - if err := r.PutV2ImageBlob(endpoint, imageName, sumType, sumStr, utils.ProgressReader(tf, int(size), out, sf, false, common.TruncateID(img.ID), "Pushing"), auth); err != nil { + if err := r.PutV2ImageBlob(endpoint, imageName, dgst.Algorithm(), dgst.Hex(), + progressreader.New(progressreader.Config{ + In: tf, + Out: out, + Formatter: sf, + Size: int(size), + NewLines: false, + ID: common.TruncateID(img.ID), + Action: "Pushing", + }), auth); err != nil { out.Write(sf.FormatProgress(common.TruncateID(img.ID), "Image push failed", nil)) - return err + return "", err } out.Write(sf.FormatProgress(common.TruncateID(img.ID), "Image successfully pushed", nil)) - return nil + return dgst.String(), nil } // FIXME: Allow to interrupt current push when new push of same image is done. 
@@ -457,17 +528,6 @@ func (s *TagStore) CmdPush(job *engine.Job) engine.Status { return job.Error(err) } - if endpoint.Version == registry.APIVersion2 { - err := s.pushV2Repository(r, job.Eng, job.Stdout, repoInfo, tag, sf) - if err == nil { - return engine.StatusOK - } - - if err != ErrV2RegistryUnavailable { - return job.Errorf("Error pushing to registry: %s", err) - } - } - reposLen := 1 if tag == "" { reposLen = len(s.Repositories[repoInfo.LocalName]) @@ -478,6 +538,18 @@ func (s *TagStore) CmdPush(job *engine.Job) engine.Status { if !exists { return job.Errorf("Repository does not exist: %s", repoInfo.LocalName) } + + if endpoint.Version == registry.APIVersion2 { + err := s.pushV2Repository(r, localRepo, job.Stdout, repoInfo, tag, sf) + if err == nil { + return engine.StatusOK + } + + if err != ErrV2RegistryUnavailable { + return job.Errorf("Error pushing to registry: %s", err) + } + } + if err := s.pushRepository(r, job.Stdout, repoInfo, localRepo, tag, sf); err != nil { return job.Error(err) } diff --git a/graph/tags.go b/graph/tags.go index 465ae7f353..5d26b8cfba 100644 --- a/graph/tags.go +++ b/graph/tags.go @@ -2,6 +2,7 @@ package graph import ( "encoding/json" + "errors" "fmt" "io/ioutil" "os" @@ -15,13 +16,16 @@ import ( "github.com/docker/docker/pkg/common" "github.com/docker/docker/pkg/parsers" "github.com/docker/docker/registry" + "github.com/docker/docker/utils" "github.com/docker/libtrust" ) const DEFAULTTAG = "latest" var ( + //FIXME these 2 regexes also exist in registry/v2/regexp.go validTagName = regexp.MustCompile(`^[\w][\w.-]{0,127}$`) + validDigest = regexp.MustCompile(`[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+`) ) type TagStore struct { @@ -107,20 +111,31 @@ func (store *TagStore) reload() error { func (store *TagStore) LookupImage(name string) (*image.Image, error) { // FIXME: standardize on returning nil when the image doesn't exist, and err for everything else // (so we can pass all errors here) - repos, tag := parsers.ParseRepositoryTag(name) - if 
tag == "" { - tag = DEFAULTTAG + repoName, ref := parsers.ParseRepositoryTag(name) + if ref == "" { + ref = DEFAULTTAG } - img, err := store.GetImage(repos, tag) - store.Lock() - defer store.Unlock() + var ( + err error + img *image.Image + ) + + img, err = store.GetImage(repoName, ref) if err != nil { return nil, err - } else if img == nil { - if img, err = store.graph.Get(name); err != nil { - return nil, err - } } + + if img != nil { + return img, err + } + + // name must be an image ID. + store.Lock() + defer store.Unlock() + if img, err = store.graph.Get(name); err != nil { + return nil, err + } + return img, nil } @@ -132,7 +147,7 @@ func (store *TagStore) ByID() map[string][]string { byID := make(map[string][]string) for repoName, repository := range store.Repositories { for tag, id := range repository { - name := repoName + ":" + tag + name := utils.ImageReference(repoName, tag) if _, exists := byID[id]; !exists { byID[id] = []string{name} } else { @@ -171,32 +186,35 @@ func (store *TagStore) DeleteAll(id string) error { return nil } -func (store *TagStore) Delete(repoName, tag string) (bool, error) { +func (store *TagStore) Delete(repoName, ref string) (bool, error) { store.Lock() defer store.Unlock() deleted := false if err := store.reload(); err != nil { return false, err } + repoName = registry.NormalizeLocalName(repoName) - if r, exists := store.Repositories[repoName]; exists { - if tag != "" { - if _, exists2 := r[tag]; exists2 { - delete(r, tag) - if len(r) == 0 { - delete(store.Repositories, repoName) - } - deleted = true - } else { - return false, fmt.Errorf("No such tag: %s:%s", repoName, tag) - } - } else { - delete(store.Repositories, repoName) - deleted = true - } - } else { + + if ref == "" { + // Delete the whole repository. 
+ delete(store.Repositories, repoName) + return true, store.save() + } + + repoRefs, exists := store.Repositories[repoName] + if !exists { return false, fmt.Errorf("No such repository: %s", repoName) } + + if _, exists := repoRefs[ref]; exists { + delete(repoRefs, ref) + if len(repoRefs) == 0 { + delete(store.Repositories, repoName) + } + deleted = true + } + return deleted, store.save() } @@ -234,6 +252,40 @@ func (store *TagStore) Set(repoName, tag, imageName string, force bool) error { return store.save() } +// SetDigest creates a digest reference to an image ID. +func (store *TagStore) SetDigest(repoName, digest, imageName string) error { + img, err := store.LookupImage(imageName) + if err != nil { + return err + } + + if err := validateRepoName(repoName); err != nil { + return err + } + + if err := validateDigest(digest); err != nil { + return err + } + + store.Lock() + defer store.Unlock() + if err := store.reload(); err != nil { + return err + } + + repoName = registry.NormalizeLocalName(repoName) + repoRefs, exists := store.Repositories[repoName] + if !exists { + repoRefs = Repository{} + store.Repositories[repoName] = repoRefs + } else if oldID, exists := repoRefs[digest]; exists && oldID != img.ID { + return fmt.Errorf("Conflict: Digest %s is already set to image %s", digest, oldID) + } + + repoRefs[digest] = img.ID + return store.save() +} + func (store *TagStore) Get(repoName string) (Repository, error) { store.Lock() defer store.Unlock() @@ -247,24 +299,29 @@ func (store *TagStore) Get(repoName string) (Repository, error) { return nil, nil } -func (store *TagStore) GetImage(repoName, tagOrID string) (*image.Image, error) { +func (store *TagStore) GetImage(repoName, refOrID string) (*image.Image, error) { repo, err := store.Get(repoName) - store.Lock() - defer store.Unlock() + if err != nil { return nil, err - } else if repo == nil { + } + if repo == nil { return nil, nil } - if revision, exists := repo[tagOrID]; exists { - return 
store.graph.Get(revision) + + store.Lock() + defer store.Unlock() + if imgID, exists := repo[refOrID]; exists { + return store.graph.Get(imgID) } + // If no matching tag is found, search through images for a matching image id for _, revision := range repo { - if strings.HasPrefix(revision, tagOrID) { + if strings.HasPrefix(revision, refOrID) { return store.graph.Get(revision) } } + return nil, nil } @@ -275,7 +332,7 @@ func (store *TagStore) GetRepoRefs() map[string][]string { for name, repository := range store.Repositories { for tag, id := range repository { shortID := common.TruncateID(id) - reporefs[shortID] = append(reporefs[shortID], fmt.Sprintf("%s:%s", name, tag)) + reporefs[shortID] = append(reporefs[shortID], utils.ImageReference(name, tag)) } } store.Unlock() @@ -293,10 +350,10 @@ func validateRepoName(name string) error { return nil } -// Validate the name of a tag +// ValidateTagName validates the name of a tag func ValidateTagName(name string) error { if name == "" { - return fmt.Errorf("Tag name can't be empty") + return fmt.Errorf("tag name can't be empty") } if !validTagName.MatchString(name) { return fmt.Errorf("Illegal tag name (%s): only [A-Za-z0-9_.-] are allowed, minimum 1, maximum 128 in length", name) @@ -304,6 +361,16 @@ func ValidateTagName(name string) error { return nil } +func validateDigest(dgst string) error { + if dgst == "" { + return errors.New("digest can't be empty") + } + if !validDigest.MatchString(dgst) { + return fmt.Errorf("illegal digest (%s): must be of the form [a-zA-Z0-9-_+.]+:[a-fA-F0-9]+", dgst) + } + return nil +} + func (store *TagStore) poolAdd(kind, key string) (chan struct{}, error) { store.Lock() defer store.Unlock() diff --git a/graph/tags_unit_test.go b/graph/tags_unit_test.go index 58ad8ed878..c1a686bbc4 100644 --- a/graph/tags_unit_test.go +++ b/graph/tags_unit_test.go @@ -21,6 +21,8 @@ const ( testPrivateImageName = "127.0.0.1:8000/privateapp" testPrivateImageID = 
"5bc255f8699e4ee89ac4469266c3d11515da88fdcbde45d7b069b636ff4efd81" testPrivateImageIDShort = "5bc255f8699e" + testPrivateImageDigest = "sha256:bc8813ea7b3603864987522f02a76101c17ad122e1c46d790efc0fca78ca7bfb" + testPrivateImageTag = "sometag" ) func fakeTar() (io.Reader, error) { @@ -83,6 +85,9 @@ func mkTestTagStore(root string, t *testing.T) *TagStore { if err := store.Set(testPrivateImageName, "", testPrivateImageID, false); err != nil { t.Fatal(err) } + if err := store.SetDigest(testPrivateImageName, testPrivateImageDigest, testPrivateImageID); err != nil { + t.Fatal(err) + } return store } @@ -128,6 +133,10 @@ func TestLookupImage(t *testing.T) { "fail:fail", } + digestLookups := []string{ + testPrivateImageName + "@" + testPrivateImageDigest, + } + for _, name := range officialLookups { if img, err := store.LookupImage(name); err != nil { t.Errorf("Error looking up %s: %s", name, err) @@ -155,6 +164,16 @@ func TestLookupImage(t *testing.T) { t.Errorf("Expected 0 image, 1 found: %s", name) } } + + for _, name := range digestLookups { + if img, err := store.LookupImage(name); err != nil { + t.Errorf("Error looking up %s: %s", name, err) + } else if img == nil { + t.Errorf("Expected 1 image, none found: %s", name) + } else if img.ID != testPrivateImageID { + t.Errorf("Expected ID '%s' found '%s'", testPrivateImageID, img.ID) + } + } } func TestValidTagName(t *testing.T) { @@ -174,3 +193,24 @@ func TestInvalidTagName(t *testing.T) { } } } + +func TestValidateDigest(t *testing.T) { + tests := []struct { + input string + expectError bool + }{ + {"", true}, + {"latest", true}, + {"a:b", false}, + {"aZ0124-.+:bY852-_.+=", false}, + {"#$%#$^:$%^#$%", true}, + } + + for i, test := range tests { + err := validateDigest(test.input) + gotError := err != nil + if e, a := test.expectError, gotError; e != a { + t.Errorf("%d: with input %s, expected error=%t, got %t: %s", i, test.input, test.expectError, gotError, err) + } + } +} diff --git a/hack b/hack deleted file mode 
120000 index e3f094ee63..0000000000 --- a/hack +++ /dev/null @@ -1 +0,0 @@ -project \ No newline at end of file diff --git a/project/dind b/hack/dind similarity index 100% rename from project/dind rename to hack/dind diff --git a/project/generate-authors.sh b/hack/generate-authors.sh similarity index 83% rename from project/generate-authors.sh rename to hack/generate-authors.sh index 4bd60364a4..e78a97f962 100755 --- a/project/generate-authors.sh +++ b/hack/generate-authors.sh @@ -8,7 +8,7 @@ cd "$(dirname "$(readlink -f "$BASH_SOURCE")")/.." { cat <<-'EOH' # This file lists all individuals having contributed content to the repository. - # For how it is generated, see `project/generate-authors.sh`. + # For how it is generated, see `hack/generate-authors.sh`. EOH echo git log --format='%aN <%aE>' | LC_ALL=C.UTF-8 sort -uf diff --git a/project/install.sh b/hack/install.sh similarity index 88% rename from project/install.sh rename to hack/install.sh index 2b5f2f41cb..5d8caaba09 100755 --- a/project/install.sh +++ b/hack/install.sh @@ -125,14 +125,21 @@ case "$lsb_dist" in # aufs is preferred over devicemapper; try to ensure the driver is available. if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then - kern_extras="linux-image-extra-$(uname -r)" + if uname -r | grep -q -- '-generic' && dpkg -l 'linux-image-*-generic' | grep -q '^ii' 2>/dev/null; then + kern_extras="linux-image-extra-$(uname -r) linux-image-extra-virtual" - apt_get_update - ( set -x; $sh_c 'sleep 3; apt-get install -y -q '"$kern_extras" ) || true + apt_get_update + ( set -x; $sh_c 'sleep 3; apt-get install -y -q '"$kern_extras" ) || true - if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then - echo >&2 'Warning: tried to install '"$kern_extras"' (for AUFS)' - echo >&2 ' but we still have no AUFS. Docker may not work. Proceeding anyways!' + if ! grep -q aufs /proc/filesystems && ! 
$sh_c 'modprobe aufs'; then + echo >&2 'Warning: tried to install '"$kern_extras"' (for AUFS)' + echo >&2 ' but we still have no AUFS. Docker may not work. Proceeding anyways!' + ( set -x; sleep 10 ) + fi + else + echo >&2 'Warning: current kernel is not supported by the linux-image-extra-virtual' + echo >&2 ' package. We have no AUFS support. Consider installing the packages' + echo >&2 ' linux-image-virtual kernel and linux-image-extra-virtual for AUFS support.' ( set -x; sleep 10 ) fi fi diff --git a/project/make.sh b/hack/make.sh similarity index 99% rename from project/make.sh rename to hack/make.sh index ec880df39c..0db70a750e 100755 --- a/project/make.sh +++ b/hack/make.sh @@ -101,7 +101,7 @@ fi # Use these flags when compiling the tests and final binary IAMSTATIC='true' -source "$(dirname "$BASH_SOURCE")/make/.dockerversion" +source "$(dirname "$BASH_SOURCE")/make/.go-autogen" LDFLAGS='-w' LDFLAGS_STATIC='-linkmode external' diff --git a/project/make/.dockerinit b/hack/make/.dockerinit similarity index 94% rename from project/make/.dockerinit rename to hack/make/.dockerinit index 0f51fce0ac..fceba7db92 100644 --- a/project/make/.dockerinit +++ b/hack/make/.dockerinit @@ -2,7 +2,7 @@ set -e IAMSTATIC="true" -source "$(dirname "$BASH_SOURCE")/.dockerversion" +source "$(dirname "$BASH_SOURCE")/.go-autogen" # dockerinit still needs to be a static binary, even if docker is dynamic go build \ @@ -14,6 +14,7 @@ go build \ -extldflags \"$EXTLDFLAGS_STATIC\" " \ ./dockerinit + echo "Created binary: $DEST/dockerinit-$VERSION" ln -sf "dockerinit-$VERSION" "$DEST/dockerinit" diff --git a/hack/make/.dockerinit-gccgo b/hack/make/.dockerinit-gccgo new file mode 100644 index 0000000000..592a4152c8 --- /dev/null +++ b/hack/make/.dockerinit-gccgo @@ -0,0 +1,30 @@ +#!/bin/bash +set -e + +IAMSTATIC="true" +source "$(dirname "$BASH_SOURCE")/.go-autogen" + +# dockerinit still needs to be a static binary, even if docker is dynamic +go build --compiler=gccgo \ + -o 
"$DEST/dockerinit-$VERSION" \ + "${BUILDFLAGS[@]}" \ + --gccgoflags " + -g + -Wl,--no-export-dynamic + $EXTLDFLAGS_STATIC_DOCKER + " \ + ./dockerinit + +echo "Created binary: $DEST/dockerinit-$VERSION" +ln -sf "dockerinit-$VERSION" "$DEST/dockerinit" + +sha1sum= +if command -v sha1sum &> /dev/null; then + sha1sum=sha1sum +else + echo >&2 'error: cannot find sha1sum command or equivalent' + exit 1 +fi + +# sha1 our new dockerinit to ensure separate docker and dockerinit always run in a perfect pair compiled for one another +export DOCKER_INITSHA1="$($sha1sum $DEST/dockerinit-$VERSION | cut -d' ' -f1)" diff --git a/project/make/.ensure-emptyfs b/hack/make/.ensure-emptyfs similarity index 100% rename from project/make/.ensure-emptyfs rename to hack/make/.ensure-emptyfs diff --git a/hack/make/.ensure-frozen-images b/hack/make/.ensure-frozen-images new file mode 100644 index 0000000000..379f738495 --- /dev/null +++ b/hack/make/.ensure-frozen-images @@ -0,0 +1,37 @@ +#!/bin/bash +set -e + +# this list should match roughly what's in the Dockerfile (minus the explicit image IDs, of course) +images=( + busybox:latest + hello-world:frozen +) + +if ! docker inspect "${images[@]}" &> /dev/null; then + hardCodedDir='/docker-frozen-images' + if [ -d "$hardCodedDir" ]; then + ( set -x; tar -cC "$hardCodedDir" . | docker load ) + else + dir="$DEST/frozen-images" + # extract the exact "RUN download-frozen-image.sh" line from the Dockerfile itself for consistency + # NOTE: this will fail if either "curl" is not installed or if the Dockerfile is not available/readable + awk ' + $1 == "RUN" && $2 == "./contrib/download-frozen-image.sh" { + for (i = 2; i < NF; i++) + printf ( $i == "'"$hardCodedDir"'" ? "'"$dir"'" : $i ) " "; + print $NF; + if (/\\$/) { + inCont = 1; + next; + } + } + inCont { + print; + if (!/\\$/) { + inCont = 0; + } + } + ' Dockerfile | sh -x + ( set -x; tar -cC "$dir" . 
| docker load ) + fi +fi diff --git a/hack/make/.ensure-httpserver b/hack/make/.ensure-httpserver new file mode 100644 index 0000000000..38659edaba --- /dev/null +++ b/hack/make/.ensure-httpserver @@ -0,0 +1,15 @@ +#!/bin/bash +set -e + +# Build a Go static web server on top of busybox image +# and compile it for target daemon + +dir="$DEST/httpserver" +mkdir -p "$dir" +( + cd "$dir" + GOOS=linux GOARCH=amd64 go build -o httpserver github.com/docker/docker/contrib/httpserver + cp ../../../../contrib/httpserver/Dockerfile . + docker build -qt httpserver . > /dev/null +) +rm -rf "$dir" diff --git a/project/make/.dockerversion b/hack/make/.go-autogen similarity index 75% rename from project/make/.dockerversion rename to hack/make/.go-autogen index c96ff065bd..1b48f611ef 100644 --- a/project/make/.dockerversion +++ b/hack/make/.go-autogen @@ -1,6 +1,7 @@ #!/bin/bash rm -rf autogen + mkdir -p autogen/dockerversion cat > autogen/dockerversion/dockerversion.go < autogen/dockerversion/static.go < /dev/null; then echo >&2 'error: binary or dynbinary must be run before .integration-daemon-start' diff --git a/project/make/.integration-daemon-stop b/hack/make/.integration-daemon-stop similarity index 100% rename from project/make/.integration-daemon-stop rename to hack/make/.integration-daemon-stop diff --git a/project/make/.validate b/hack/make/.validate similarity index 100% rename from project/make/.validate rename to hack/make/.validate diff --git a/project/make/README.md b/hack/make/README.md similarity index 82% rename from project/make/README.md rename to hack/make/README.md index 9cc88d441c..6574b0efe6 100644 --- a/project/make/README.md +++ b/hack/make/README.md @@ -4,11 +4,11 @@ Each script is named after the bundle it creates. 
They should not be called directly - instead, pass it as argument to make.sh, for example: ``` -./project/make.sh test -./project/make.sh binary ubuntu +./hack/make.sh test +./hack/make.sh binary ubuntu # Or to run all bundles: -./project/make.sh +./hack/make.sh ``` To add a bundle: diff --git a/project/make/binary b/hack/make/binary old mode 100755 new mode 100644 similarity index 91% rename from project/make/binary rename to hack/make/binary index edc38a640c..0f57ea0d69 --- a/project/make/binary +++ b/hack/make/binary @@ -11,7 +11,7 @@ if [[ "$(uname -s)" == CYGWIN* ]]; then DEST=$(cygpath -mw $DEST) fi -source "$(dirname "$BASH_SOURCE")/.dockerversion" +source "$(dirname "$BASH_SOURCE")/.go-autogen" go build \ -o "$DEST/$BINARY_FULLNAME" \ @@ -21,6 +21,7 @@ go build \ $LDFLAGS_STATIC_DOCKER " \ ./docker + echo "Created binary: $DEST/$BINARY_FULLNAME" ln -sf "$BINARY_FULLNAME" "$DEST/docker$BINARY_EXTENSION" diff --git a/project/make/cover b/hack/make/cover similarity index 100% rename from project/make/cover rename to hack/make/cover diff --git a/project/make/cross b/hack/make/cross similarity index 100% rename from project/make/cross rename to hack/make/cross diff --git a/project/make/dynbinary b/hack/make/dynbinary similarity index 100% rename from project/make/dynbinary rename to hack/make/dynbinary diff --git a/hack/make/dyngccgo b/hack/make/dyngccgo new file mode 100644 index 0000000000..a76e9c5b56 --- /dev/null +++ b/hack/make/dyngccgo @@ -0,0 +1,23 @@ +#!/bin/bash +set -e + +DEST=$1 + +if [ -z "$DOCKER_CLIENTONLY" ]; then + source "$(dirname "$BASH_SOURCE")/.dockerinit-gccgo" + + hash_files "$DEST/dockerinit-$VERSION" +else + # DOCKER_CLIENTONLY must be truthy, so we don't need to bother with dockerinit :) + export DOCKER_INITSHA1="" +fi +# DOCKER_INITSHA1 is exported so that other bundlescripts can easily access it later without recalculating it + +( + export IAMSTATIC="false" + export EXTLDFLAGS_STATIC_DOCKER='' + export LDFLAGS_STATIC_DOCKER='' + 
export BUILDFLAGS=( "${BUILDFLAGS[@]/netgo /}" ) # disable netgo, since we don't need it for a dynamic binary + export BUILDFLAGS=( "${BUILDFLAGS[@]/static_build /}" ) # we're not building a "static" binary here + source "$(dirname "$BASH_SOURCE")/gccgo" +) diff --git a/hack/make/gccgo b/hack/make/gccgo new file mode 100644 index 0000000000..c85d2fbda5 --- /dev/null +++ b/hack/make/gccgo @@ -0,0 +1,25 @@ +#!/bin/bash +set -e + +DEST=$1 +BINARY_NAME="docker-$VERSION" +BINARY_EXTENSION="$(binary_extension)" +BINARY_FULLNAME="$BINARY_NAME$BINARY_EXTENSION" + +source "$(dirname "$BASH_SOURCE")/.go-autogen" + +go build -compiler=gccgo \ + -o "$DEST/$BINARY_FULLNAME" \ + "${BUILDFLAGS[@]}" \ + -gccgoflags " + -g + $EXTLDFLAGS_STATIC_DOCKER + -Wl,--no-export-dynamic + -ldl + " \ + ./docker + +echo "Created binary: $DEST/$BINARY_FULLNAME" +ln -sf "$BINARY_FULLNAME" "$DEST/docker$BINARY_EXTENSION" + +hash_files "$DEST/$BINARY_FULLNAME" diff --git a/project/make/test-docker-py b/hack/make/test-docker-py similarity index 100% rename from project/make/test-docker-py rename to hack/make/test-docker-py diff --git a/project/make/test-integration b/hack/make/test-integration similarity index 100% rename from project/make/test-integration rename to hack/make/test-integration diff --git a/project/make/test-integration-cli b/hack/make/test-integration-cli similarity index 85% rename from project/make/test-integration-cli rename to hack/make/test-integration-cli index 5ea3be3872..3ef41d919e 100644 --- a/project/make/test-integration-cli +++ b/hack/make/test-integration-cli @@ -16,7 +16,8 @@ bundle_test_integration_cli() { # even and especially on test failures didFail= if ! 
{ - source "$(dirname "$BASH_SOURCE")/.ensure-busybox" + source "$(dirname "$BASH_SOURCE")/.ensure-frozen-images" + source "$(dirname "$BASH_SOURCE")/.ensure-httpserver" source "$(dirname "$BASH_SOURCE")/.ensure-emptyfs" bundle_test_integration_cli diff --git a/project/make/test-unit b/hack/make/test-unit similarity index 100% rename from project/make/test-unit rename to hack/make/test-unit diff --git a/project/make/tgz b/hack/make/tgz similarity index 100% rename from project/make/tgz rename to hack/make/tgz diff --git a/project/make/ubuntu b/hack/make/ubuntu similarity index 100% rename from project/make/ubuntu rename to hack/make/ubuntu diff --git a/project/make/validate-dco b/hack/make/validate-dco similarity index 100% rename from project/make/validate-dco rename to hack/make/validate-dco diff --git a/project/make/validate-gofmt b/hack/make/validate-gofmt similarity index 100% rename from project/make/validate-gofmt rename to hack/make/validate-gofmt diff --git a/project/make/validate-toml b/hack/make/validate-toml similarity index 100% rename from project/make/validate-toml rename to hack/make/validate-toml diff --git a/project/release.sh b/hack/release.sh similarity index 98% rename from project/release.sh rename to hack/release.sh index f0adc8be60..da95808c5a 100755 --- a/project/release.sh +++ b/hack/release.sh @@ -190,6 +190,13 @@ release_build() { linux) s3Os=Linux ;; + windows) + s3Os=Windows + binary+='.exe' + if [ "$latestBase" ]; then + latestBase+='.exe' + fi + ;; *) echo >&2 "error: can't convert $s3Os to an appropriate value for 'uname -s'" exit 1 diff --git a/project/vendor.sh b/hack/vendor.sh similarity index 78% rename from project/vendor.sh rename to hack/vendor.sh index 634e17602c..b3ba928a05 100755 --- a/project/vendor.sh +++ b/hack/vendor.sh @@ -53,7 +53,7 @@ clone hg code.google.com/p/gosqlite 74691fb6f837 clone git github.com/docker/libtrust 230dfd18c232 -clone git github.com/Sirupsen/logrus v0.6.0 +clone git github.com/Sirupsen/logrus 
v0.6.6 clone git github.com/go-fsnotify/fsnotify v1.0.4 @@ -68,9 +68,15 @@ if [ "$1" = '--go' ]; then mv tmp-tar src/code.google.com/p/go/src/pkg/archive/tar fi -# this commit is from docker_1.5 branch in libcontainer, pls delete that branch when you'll update libcontainer again -clone git github.com/docker/libcontainer 2d3b5af7486f1a4e80a5ed91859d309b4eebf80c +# get digest package from distribution +clone git github.com/docker/distribution d957768537c5af40e4f4cd96871f7b2bde9e2923 +mv src/github.com/docker/distribution/digest tmp-digest +rm -rf src/github.com/docker/distribution +mkdir -p src/github.com/docker/distribution +mv tmp-digest src/github.com/docker/distribution/digest + +clone git github.com/docker/libcontainer 4a72e540feb67091156b907c4700e580a99f5a9d # see src/github.com/docker/libcontainer/update-vendor.sh which is the "source of truth" for libcontainer deps (just like this file) rm -rf src/github.com/docker/libcontainer/vendor -eval "$(grep '^clone ' src/github.com/docker/libcontainer/update-vendor.sh | grep -v 'github.com/codegangsta/cli')" +eval "$(grep '^clone ' src/github.com/docker/libcontainer/update-vendor.sh | grep -v 'github.com/codegangsta/cli' | grep -v 'github.com/Sirupsen/logrus')" # we exclude "github.com/codegangsta/cli" here because it's only needed for "nsinit", which Docker doesn't include diff --git a/integration-cli/MAINTAINERS b/integration-cli/MAINTAINERS deleted file mode 100644 index 6dde4769d7..0000000000 --- a/integration-cli/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Cristian Staretu (@unclejack) diff --git a/integration-cli/docker_api_containers_test.go b/integration-cli/docker_api_containers_test.go index c0ea97acb9..e717eca574 100644 --- a/integration-cli/docker_api_containers_test.go +++ b/integration-cli/docker_api_containers_test.go @@ -307,7 +307,7 @@ func TestGetContainerStats(t *testing.T) { t.Fatal("stream was not closed after container was removed") case sr := <-bc: if sr.err != nil { - t.Fatal(err) + 
t.Fatal(sr.err) } dec := json.NewDecoder(bytes.NewBuffer(sr.body)) @@ -320,6 +320,32 @@ func TestGetContainerStats(t *testing.T) { logDone("container REST API - check GET containers/stats") } +func TestGetStoppedContainerStats(t *testing.T) { + defer deleteAllContainers() + var ( + name = "statscontainer" + runCmd = exec.Command(dockerBinary, "create", "--name", name, "busybox", "top") + ) + out, _, err := runCommandWithOutput(runCmd) + if err != nil { + t.Fatalf("Error on container creation: %v, output: %q", err, out) + } + + go func() { + // We'll never get a return from sockRequest for GET stats as of now; + // just send the request and see if a panic or error happens on the daemon side. + _, err := sockRequest("GET", "/containers/"+name+"/stats", nil) + if err != nil { + t.Fatal(err) + } + }() + + // allow some time to send request and let daemon deal with it + time.Sleep(1 * time.Second) + + logDone("container REST API - check GET stopped containers/stats") +} + func TestBuildApiDockerfilePath(t *testing.T) { // Test to make sure we stop people from trying to leave the // build context when specifying the path to the dockerfile @@ -357,6 +383,7 @@ func TestBuildApiDockerFileRemote(t *testing.T) { server, err := fakeStorage(map[string]string{ "testD": `FROM busybox COPY * /tmp/ +RUN find / -name ba* RUN find /tmp/`, }) if err != nil { @@ -364,14 +391,16 @@ RUN find /tmp/`, } defer server.Close() - buf, err := sockRequestRaw("POST", "/build?dockerfile=baz&remote="+server.URL+"/testD", nil, "application/json") + buf, err := sockRequestRaw("POST", "/build?dockerfile=baz&remote="+server.URL()+"/testD", nil, "application/json") if err != nil { t.Fatalf("Build failed: %s", err) } + // Make sure Dockerfile exists. 
+ // Make sure 'baz' doesn't exist ANYWHERE despite being mentioned in the URL out := string(buf) if !strings.Contains(out, "/tmp/Dockerfile") || - strings.Contains(out, "/tmp/baz") { + strings.Contains(out, "baz") { t.Fatalf("Incorrect output: %s", out) } @@ -382,7 +411,7 @@ func TestBuildApiLowerDockerfile(t *testing.T) { git, err := fakeGIT("repo", map[string]string{ "dockerfile": `FROM busybox RUN echo from dockerfile`, - }) + }, false) if err != nil { t.Fatal(err) } @@ -407,7 +436,7 @@ func TestBuildApiBuildGitWithF(t *testing.T) { RUN echo from baz`, "Dockerfile": `FROM busybox RUN echo from Dockerfile`, - }) + }, false) if err != nil { t.Fatal(err) } @@ -428,12 +457,13 @@ RUN echo from Dockerfile`, } func TestBuildApiDoubleDockerfile(t *testing.T) { + testRequires(t, UnixCli) // dockerfile overwrites Dockerfile on Windows git, err := fakeGIT("repo", map[string]string{ "Dockerfile": `FROM busybox RUN echo from Dockerfile`, "dockerfile": `FROM busybox RUN echo from dockerfile`, - }) + }, false) if err != nil { t.Fatal(err) } diff --git a/integration-cli/docker_cli_build_test.go b/integration-cli/docker_cli_build_test.go index ac1ac14775..6a83595746 100644 --- a/integration-cli/docker_cli_build_test.go +++ b/integration-cli/docker_cli_build_test.go @@ -8,14 +8,12 @@ import ( "io/ioutil" "os" "os/exec" - "path" "path/filepath" "reflect" "regexp" "runtime" "strconv" "strings" - "syscall" "testing" "text/template" "time" @@ -645,9 +643,10 @@ func TestBuildCacheADD(t *testing.T) { t.Fatal(err) } defer server.Close() + if _, err := buildImage(name, fmt.Sprintf(`FROM scratch - ADD %s/robots.txt /`, server.URL), + ADD %s/robots.txt /`, server.URL()), true); err != nil { t.Fatal(err) } @@ -657,7 +656,7 @@ func TestBuildCacheADD(t *testing.T) { deleteImages(name) _, out, err := buildImageWithOut(name, fmt.Sprintf(`FROM scratch - ADD %s/index.html /`, server.URL), + ADD %s/index.html /`, server.URL()), true) if err != nil { t.Fatal(err) @@ -669,6 +668,73 @@ func 
TestBuildCacheADD(t *testing.T) { logDone("build - build two images with remote ADD") } +func TestBuildLastModified(t *testing.T) { + name := "testbuildlastmodified" + defer deleteImages(name) + + server, err := fakeStorage(map[string]string{ + "file": "hello", + }) + if err != nil { + t.Fatal(err) + } + defer server.Close() + + var out, out2 string + + dFmt := `FROM busybox +ADD %s/file / +RUN ls -le /file` + + dockerfile := fmt.Sprintf(dFmt, server.URL()) + + if _, out, err = buildImageWithOut(name, dockerfile, false); err != nil { + t.Fatal(err) + } + + originMTime := regexp.MustCompile(`root.*/file.*\n`).FindString(out) + // Make sure our regexp is correct + if strings.Index(originMTime, "/file") < 0 { + t.Fatalf("Missing ls info on 'file':\n%s", out) + } + + // Build it again and make sure the mtime of the file didn't change. + // Wait a few seconds to make sure the time changed enough to notice + time.Sleep(2 * time.Second) + + if _, out2, err = buildImageWithOut(name, dockerfile, false); err != nil { + t.Fatal(err) + } + + newMTime := regexp.MustCompile(`root.*/file.*\n`).FindString(out2) + if newMTime != originMTime { + t.Fatalf("MTime changed:\nOrigin:%s\nNew:%s", originMTime, newMTime) + } + + // Now 'touch' the file and make sure the timestamp DID change this time + // Create a new fakeStorage instead of just using Add() to help windows + server, err = fakeStorage(map[string]string{ + "file": "hello", + }) + if err != nil { + t.Fatal(err) + } + defer server.Close() + + dockerfile = fmt.Sprintf(dFmt, server.URL()) + + if _, out2, err = buildImageWithOut(name, dockerfile, false); err != nil { + t.Fatal(err) + } + + newMTime = regexp.MustCompile(`root.*/file.*\n`).FindString(out2) + if newMTime == originMTime { + t.Fatalf("MTime didn't change:\nOrigin:%s\nNew:%s", originMTime, newMTime) + } + + logDone("build - use Last-Modified header") +} + func TestBuildSixtySteps(t *testing.T) { name := "foobuildsixtysteps" defer deleteImages(name) @@ -690,15 +756,15 @@ 
func TestBuildSixtySteps(t *testing.T) { func TestBuildAddSingleFileToRoot(t *testing.T) { name := "testaddimg" defer deleteImages(name) - ctx, err := fakeContext(`FROM busybox + ctx, err := fakeContext(fmt.Sprintf(`FROM busybox RUN echo 'dockerio:x:1001:1001::/bin:/bin/false' >> /etc/passwd RUN echo 'dockerio:x:1001:' >> /etc/group RUN touch /exists RUN chown dockerio.dockerio /exists ADD test_file / RUN [ $(ls -l /test_file | awk '{print $3":"$4}') = 'root:root' ] -RUN [ $(ls -l /test_file | awk '{print $1}') = '-rw-r--r--' ] -RUN [ $(ls -l /exists | awk '{print $3":"$4}') = 'dockerio:dockerio' ]`, +RUN [ $(ls -l /test_file | awk '{print $1}') = '%s' ] +RUN [ $(ls -l /exists | awk '{print $3":"$4}') = 'dockerio:dockerio' ]`, expectedFileChmod), map[string]string{ "test_file": "test1", }) @@ -797,7 +863,7 @@ RUN [ $(ls -l /exists/test_file4 | awk '{print $3":"$4}') = 'root:root' ] RUN [ $(ls -l /exists/robots.txt | awk '{print $3":"$4}') = 'root:root' ] RUN [ $(ls -l /exists/exists_file | awk '{print $3":"$4}') = 'dockerio:dockerio' ] -`, server.URL), +`, server.URL()), map[string]string{ "test_file1": "test1", "test_file2": "test2", @@ -1084,6 +1150,7 @@ func TestBuildCopyWildcard(t *testing.T) { t.Fatal(err) } defer server.Close() + ctx, err := fakeContext(fmt.Sprintf(`FROM busybox COPY file*.txt /tmp/ RUN ls /tmp/file1.txt /tmp/file2.txt @@ -1093,7 +1160,7 @@ func TestBuildCopyWildcard(t *testing.T) { RUN mkdir /tmp2 ADD dir/*dir %s/robots.txt /tmp2/ RUN ls /tmp2/nest_nest_file /tmp2/robots.txt - `, server.URL), + `, server.URL()), map[string]string{ "file1.txt": "test1", "file2.txt": "test2", @@ -1263,7 +1330,7 @@ RUN [ $(ls -l /exists/test_file | awk '{print $3":"$4}') = 'root:root' ]`, func TestBuildAddWholeDirToRoot(t *testing.T) { name := "testaddwholedirtoroot" defer deleteImages(name) - ctx, err := fakeContext(`FROM busybox + ctx, err := fakeContext(fmt.Sprintf(`FROM busybox RUN echo 'dockerio:x:1001:1001::/bin:/bin/false' >> /etc/passwd RUN echo 
'dockerio:x:1001:' >> /etc/group RUN touch /exists @@ -1272,8 +1339,8 @@ ADD test_dir /test_dir RUN [ $(ls -l / | grep test_dir | awk '{print $3":"$4}') = 'root:root' ] RUN [ $(ls -l / | grep test_dir | awk '{print $1}') = 'drwxr-xr-x' ] RUN [ $(ls -l /test_dir/test_file | awk '{print $3":"$4}') = 'root:root' ] -RUN [ $(ls -l /test_dir/test_file | awk '{print $1}') = '-rw-r--r--' ] -RUN [ $(ls -l /exists | awk '{print $3":"$4}') = 'dockerio:dockerio' ]`, +RUN [ $(ls -l /test_dir/test_file | awk '{print $1}') = '%s' ] +RUN [ $(ls -l /exists | awk '{print $3":"$4}') = 'dockerio:dockerio' ]`, expectedFileChmod), map[string]string{ "test_dir/test_file": "test1", }) @@ -1336,15 +1403,15 @@ RUN [ $(ls -l /usr/bin/suidbin | awk '{print $1}') = '-rwsr-xr-x' ]`, func TestBuildCopySingleFileToRoot(t *testing.T) { name := "testcopysinglefiletoroot" defer deleteImages(name) - ctx, err := fakeContext(`FROM busybox + ctx, err := fakeContext(fmt.Sprintf(`FROM busybox RUN echo 'dockerio:x:1001:1001::/bin:/bin/false' >> /etc/passwd RUN echo 'dockerio:x:1001:' >> /etc/group RUN touch /exists RUN chown dockerio.dockerio /exists COPY test_file / RUN [ $(ls -l /test_file | awk '{print $3":"$4}') = 'root:root' ] -RUN [ $(ls -l /test_file | awk '{print $1}') = '-rw-r--r--' ] -RUN [ $(ls -l /exists | awk '{print $3":"$4}') = 'dockerio:dockerio' ]`, +RUN [ $(ls -l /test_file | awk '{print $1}') = '%s' ] +RUN [ $(ls -l /exists | awk '{print $3":"$4}') = 'dockerio:dockerio' ]`, expectedFileChmod), map[string]string{ "test_file": "test1", }) @@ -1496,7 +1563,7 @@ RUN [ $(ls -l /exists/test_file | awk '{print $3":"$4}') = 'root:root' ]`, func TestBuildCopyWholeDirToRoot(t *testing.T) { name := "testcopywholedirtoroot" defer deleteImages(name) - ctx, err := fakeContext(`FROM busybox + ctx, err := fakeContext(fmt.Sprintf(`FROM busybox RUN echo 'dockerio:x:1001:1001::/bin:/bin/false' >> /etc/passwd RUN echo 'dockerio:x:1001:' >> /etc/group RUN touch /exists @@ -1505,8 +1572,8 @@ COPY test_dir 
/test_dir RUN [ $(ls -l / | grep test_dir | awk '{print $3":"$4}') = 'root:root' ] RUN [ $(ls -l / | grep test_dir | awk '{print $1}') = 'drwxr-xr-x' ] RUN [ $(ls -l /test_dir/test_file | awk '{print $3":"$4}') = 'root:root' ] -RUN [ $(ls -l /test_dir/test_file | awk '{print $1}') = '-rw-r--r--' ] -RUN [ $(ls -l /exists | awk '{print $3":"$4}') = 'dockerio:dockerio' ]`, +RUN [ $(ls -l /test_dir/test_file | awk '{print $1}') = '%s' ] +RUN [ $(ls -l /exists | awk '{print $3":"$4}') = 'dockerio:dockerio' ]`, expectedFileChmod), map[string]string{ "test_dir/test_file": "test1", }) @@ -2343,6 +2410,33 @@ func TestBuildExposeUpperCaseProto(t *testing.T) { logDone("build - expose port with upper case proto") } +func TestBuildExposeHostPort(t *testing.T) { + // start building docker file with ip:hostPort:containerPort + name := "testbuildexpose" + expected := "map[5678/tcp:map[]]" + defer deleteImages(name) + _, out, err := buildImageWithOut(name, + `FROM scratch + EXPOSE 192.168.1.2:2375:5678`, + true) + if err != nil { + t.Fatal(err) + } + + if !strings.Contains(out, "to map host ports to container ports (ip:hostPort:containerPort) is deprecated.") { + t.Fatal("Missing warning message") + } + + res, err := inspectField(name, "Config.ExposedPorts") + if err != nil { + t.Fatal(err) + } + if res != expected { + t.Fatalf("Exposed ports %s, expected %s", res, expected) + } + logDone("build - ignore exposing host's port") +} + func TestBuildEmptyEntrypointInheritance(t *testing.T) { name := "testbuildentrypointinheritance" name2 := "testbuildentrypointinheritance2" @@ -2831,10 +2925,11 @@ func TestBuildADDRemoteFileWithCache(t *testing.T) { t.Fatal(err) } defer server.Close() + id1, err := buildImage(name, fmt.Sprintf(`FROM scratch MAINTAINER dockerio - ADD %s/baz /usr/lib/baz/quux`, server.URL), + ADD %s/baz /usr/lib/baz/quux`, server.URL()), true) if err != nil { t.Fatal(err) @@ -2842,7 +2937,7 @@ func TestBuildADDRemoteFileWithCache(t *testing.T) { id2, err := 
buildImage(name, fmt.Sprintf(`FROM scratch MAINTAINER dockerio - ADD %s/baz /usr/lib/baz/quux`, server.URL), + ADD %s/baz /usr/lib/baz/quux`, server.URL()), true) if err != nil { t.Fatal(err) } @@ -2864,10 +2959,11 @@ func TestBuildADDRemoteFileWithoutCache(t *testing.T) { t.Fatal(err) } defer server.Close() + id1, err := buildImage(name, fmt.Sprintf(`FROM scratch MAINTAINER dockerio - ADD %s/baz /usr/lib/baz/quux`, server.URL), + ADD %s/baz /usr/lib/baz/quux`, server.URL()), true) if err != nil { t.Fatal(err) } @@ -2875,7 +2971,7 @@ func TestBuildADDRemoteFileWithoutCache(t *testing.T) { id2, err := buildImage(name2, fmt.Sprintf(`FROM scratch MAINTAINER dockerio - ADD %s/baz /usr/lib/baz/quux`, server.URL), + ADD %s/baz /usr/lib/baz/quux`, server.URL()), false) if err != nil { t.Fatal(err) } @@ -2894,7 +2990,8 @@ func TestBuildADDRemoteFileMTime(t *testing.T) { defer deleteImages(name, name2, name3, name4) - server, err := fakeStorage(map[string]string{"baz": "hello"}) + files := map[string]string{"baz": "hello"} + server, err := fakeStorage(files) if err != nil { t.Fatal(err) } @@ -2902,7 +2999,7 @@ func TestBuildADDRemoteFileMTime(t *testing.T) { ctx, err := fakeContext(fmt.Sprintf(`FROM scratch MAINTAINER dockerio - ADD %s/baz /usr/lib/baz/quux`, server.URL), nil) + ADD %s/baz /usr/lib/baz/quux`, server.URL()), nil) if err != nil { t.Fatal(err) } @@ -2921,15 +3018,26 @@ func TestBuildADDRemoteFileMTime(t *testing.T) { t.Fatal("The cache should have been used but wasn't - #1") } - // Now set baz's times to anything else and redo the build + // Now create a different server with same contents (causes different mtime) // This time the cache should not be used - bazPath := path.Join(server.FakeContext.Dir, "baz") - err = syscall.UtimesNano(bazPath, make([]syscall.Timespec, 2)) - if err != nil { - t.Fatalf("Error setting mtime on %q: %v", bazPath, err) - } - id3, err := buildImageFromContext(name3, ctx, true) + // allow some time for clock to pass as mtime precision is only 
1s + time.Sleep(2 * time.Second) + + server2, err := fakeStorage(files) + if err != nil { + t.Fatal(err) + } + defer server2.Close() + + ctx2, err := fakeContext(fmt.Sprintf(`FROM scratch + MAINTAINER dockerio + ADD %s/baz /usr/lib/baz/quux`, server2.URL()), nil) + if err != nil { + t.Fatal(err) + } + defer ctx2.Close() + id3, err := buildImageFromContext(name3, ctx2, true) if err != nil { t.Fatal(err) } @@ -2938,7 +3046,7 @@ func TestBuildADDRemoteFileMTime(t *testing.T) { } // And for good measure do it again and make sure cache is used this time - id4, err := buildImageFromContext(name4, ctx, true) + id4, err := buildImageFromContext(name4, ctx2, true) if err != nil { t.Fatal(err) } @@ -2958,10 +3066,11 @@ func TestBuildADDLocalAndRemoteFilesWithCache(t *testing.T) { t.Fatal(err) } defer server.Close() + ctx, err := fakeContext(fmt.Sprintf(`FROM scratch MAINTAINER dockerio ADD foo /usr/lib/bla/bar - ADD %s/baz /usr/lib/baz/quux`, server.URL), + ADD %s/baz /usr/lib/baz/quux`, server.URL()), map[string]string{ "foo": "hello world", }) @@ -3047,10 +3156,11 @@ func TestBuildADDLocalAndRemoteFilesWithoutCache(t *testing.T) { t.Fatal(err) } defer server.Close() + ctx, err := fakeContext(fmt.Sprintf(`FROM scratch MAINTAINER dockerio ADD foo /usr/lib/bla/bar - ADD %s/baz /usr/lib/baz/quux`, server.URL), + ADD %s/baz /usr/lib/baz/quux`, server.URL()), map[string]string{ "foo": "hello world", }) @@ -3897,7 +4007,7 @@ RUN cat /existing-directory-trailing-slash/test/foo | grep Hi` if err := ioutil.WriteFile(filepath.Join(tmpDir, "Dockerfile"), []byte(dockerfile), 0644); err != nil { t.Fatalf("failed to open destination dockerfile: %v", err) } - return &FakeContext{Dir: tmpDir} + return fakeContextFromDir(tmpDir) }() defer ctx.Close() @@ -3948,7 +4058,7 @@ func TestBuildAddTarXz(t *testing.T) { if err := ioutil.WriteFile(filepath.Join(tmpDir, "Dockerfile"), []byte(dockerfile), 0644); err != nil { t.Fatalf("failed to open destination dockerfile: %v", err) } - return 
&FakeContext{Dir: tmpDir} + return fakeContextFromDir(tmpDir) }() defer ctx.Close() @@ -4008,7 +4118,7 @@ func TestBuildAddTarXzGz(t *testing.T) { if err := ioutil.WriteFile(filepath.Join(tmpDir, "Dockerfile"), []byte(dockerfile), 0644); err != nil { t.Fatalf("failed to open destination dockerfile: %v", err) } - return &FakeContext{Dir: tmpDir} + return fakeContextFromDir(tmpDir) }() defer ctx.Close() @@ -4029,7 +4139,7 @@ func TestBuildFromGIT(t *testing.T) { RUN [ -f /first ] MAINTAINER docker`, "first": "test git data", - }) + }, true) if err != nil { t.Fatal(err) } @@ -4233,16 +4343,16 @@ func TestBuildCmdJSONNoShDashC(t *testing.T) { logDone("build - cmd should not have /bin/sh -c for json") } -func TestBuildIgnoreInvalidInstruction(t *testing.T) { +func TestBuildErrorInvalidInstruction(t *testing.T) { name := "testbuildignoreinvalidinstruction" defer deleteImages(name) out, _, err := buildImageWithOut(name, "FROM busybox\nfoo bar", true) - if err != nil { - t.Fatal(err, out) + if err == nil { + t.Fatalf("Should have failed: %s", out) } - logDone("build - ignore invalid Dockerfile instruction") + logDone("build - error invalid Dockerfile instruction") } func TestBuildEntrypointInheritance(t *testing.T) { @@ -4345,7 +4455,7 @@ func TestBuildExoticShellInterpolation(t *testing.T) { _, err := buildImage(name, ` FROM busybox - + ENV SOME_VAR a.b.c RUN [ "$SOME_VAR" = 'a.b.c' ] @@ -4431,6 +4541,78 @@ func TestBuildWithTabs(t *testing.T) { logDone("build - with tabs") } +func TestBuildLabels(t *testing.T) { + name := "testbuildlabel" + expected := `{"License":"GPL","Vendor":"Acme"}` + defer deleteImages(name) + _, err := buildImage(name, + `FROM busybox + LABEL Vendor=Acme + LABEL License GPL`, + true) + if err != nil { + t.Fatal(err) + } + res, err := inspectFieldJSON(name, "Config.Labels") + if err != nil { + t.Fatal(err) + } + if res != expected { + t.Fatalf("Labels %s, expected %s", res, expected) + } + logDone("build - label") +} + +func TestBuildLabelsCache(t 
*testing.T) { + name := "testbuildlabelcache" + defer deleteImages(name) + + id1, err := buildImage(name, + `FROM busybox + LABEL Vendor=Acme`, false) + if err != nil { + t.Fatalf("Build 1 should have worked: %v", err) + } + + id2, err := buildImage(name, + `FROM busybox + LABEL Vendor=Acme`, true) + if err != nil || id1 != id2 { + t.Fatalf("Build 2 should have worked & used cache(%s,%s): %v", id1, id2, err) + } + + id2, err = buildImage(name, + `FROM busybox + LABEL Vendor=Acme1`, true) + if err != nil || id1 == id2 { + t.Fatalf("Build 3 should have worked & NOT used cache(%s,%s): %v", id1, id2, err) + } + + id2, err = buildImage(name, + `FROM busybox + LABEL Vendor Acme`, true) // Note: " " and "=" should be same + if err != nil || id1 != id2 { + t.Fatalf("Build 4 should have worked & used cache(%s,%s): %v", id1, id2, err) + } + + // Now make sure the cache isn't used by mistake + id1, err = buildImage(name, + `FROM busybox + LABEL f1=b1 f2=b2`, false) + if err != nil { + t.Fatalf("Build 5 should have worked: %q", err) + } + + id2, err = buildImage(name, + `FROM busybox + LABEL f1="b1 f2=b2"`, true) + if err != nil || id1 == id2 { + t.Fatalf("Build 6 should have worked & NOT used the cache(%s,%s): %q", id1, id2, err) + } + + logDone("build - label cache") +} + func TestBuildStderr(t *testing.T) { // This test just makes sure that no non-error output goes // to stderr @@ -4520,7 +4702,7 @@ func TestBuildSymlinkBreakout(t *testing.T) { }) w.Close() f.Close() - if _, err := buildImageFromContext(name, &FakeContext{Dir: ctx}, false); err != nil { + if _, err := buildImageFromContext(name, fakeContextFromDir(ctx), false); err != nil { t.Fatal(err) } if _, err := os.Lstat(filepath.Join(tmpdir, "inject")); err == nil { @@ -4700,6 +4882,7 @@ func TestBuildRenamedDockerfile(t *testing.T) { } func TestBuildFromMixedcaseDockerfile(t *testing.T) { + testRequires(t, UnixCli) // Dockerfile overwrites dockerfile on windows defer deleteImages("test1") ctx, err := 
fakeContext(`FROM busybox @@ -4725,6 +4908,7 @@ func TestBuildFromMixedcaseDockerfile(t *testing.T) { } func TestBuildWithTwoDockerfiles(t *testing.T) { + testRequires(t, UnixCli) // Dockerfile overwrites dockerfile on windows defer deleteImages("test1") ctx, err := fakeContext(`FROM busybox @@ -4771,7 +4955,7 @@ RUN echo from Dockerfile`, // Make sure that -f is ignored and that we don't use the Dockerfile // that's in the current dir - out, _, err := dockerCmdInDir(t, ctx.Dir, "build", "-f", "baz", "-t", "test1", server.URL+"/baz") + out, _, err := dockerCmdInDir(t, ctx.Dir, "build", "-f", "baz", "-t", "test1", server.URL()+"/baz") if err != nil { t.Fatalf("Failed to build: %s\n%s", out, err) } @@ -4930,8 +5114,8 @@ func TestBuildSpaces(t *testing.T) { } // Skip over the times - e1 := err1.Error()[strings.Index(err1.Error(), `level="`):] - e2 := err2.Error()[strings.Index(err1.Error(), `level="`):] + e1 := err1.Error()[strings.Index(err1.Error(), `level=`):] + e2 := err2.Error()[strings.Index(err1.Error(), `level=`):] // Ignore whitespace since that's what were verifying doesn't change stuff if strings.Replace(e1, " ", "", -1) != strings.Replace(e2, " ", "", -1) { @@ -4944,8 +5128,8 @@ func TestBuildSpaces(t *testing.T) { } // Skip over the times - e1 = err1.Error()[strings.Index(err1.Error(), `level="`):] - e2 = err2.Error()[strings.Index(err1.Error(), `level="`):] + e1 = err1.Error()[strings.Index(err1.Error(), `level=`):] + e2 = err2.Error()[strings.Index(err1.Error(), `level=`):] // Ignore whitespace since that's what were verifying doesn't change stuff if strings.Replace(e1, " ", "", -1) != strings.Replace(e2, " ", "", -1) { @@ -4958,8 +5142,8 @@ func TestBuildSpaces(t *testing.T) { } // Skip over the times - e1 = err1.Error()[strings.Index(err1.Error(), `level="`):] - e2 = err2.Error()[strings.Index(err1.Error(), `level="`):] + e1 = err1.Error()[strings.Index(err1.Error(), `level=`):] + e2 = err2.Error()[strings.Index(err1.Error(), `level=`):] // Ignore 
whitespace since that's what were verifying doesn't change stuff if strings.Replace(e1, " ", "", -1) != strings.Replace(e2, " ", "", -1) { @@ -5115,3 +5299,101 @@ func TestBuildNotVerbose(t *testing.T) { logDone("build - not verbose") } + +func TestBuildRUNoneJSON(t *testing.T) { + name := "testbuildrunonejson" + + defer deleteAllContainers() + defer deleteImages(name) + + ctx, err := fakeContext(`FROM hello-world:frozen +RUN [ "/hello" ]`, map[string]string{}) + if err != nil { + t.Fatal(err) + } + defer ctx.Close() + + buildCmd := exec.Command(dockerBinary, "build", "--no-cache", "-t", name, ".") + buildCmd.Dir = ctx.Dir + out, _, err := runCommandWithOutput(buildCmd) + if err != nil { + t.Fatalf("failed to build the image: %s, %v", out, err) + } + + if !strings.Contains(out, "Hello from Docker") { + t.Fatalf("bad output: %s", out) + } + + logDone("build - RUN with one JSON arg") +} + +func TestBuildResourceConstraintsAreUsed(t *testing.T) { + name := "testbuildresourceconstraints" + defer deleteAllContainers() + defer deleteImages(name) + + ctx, err := fakeContext(` + FROM hello-world:frozen + RUN ["/hello"] + `, map[string]string{}) + if err != nil { + t.Fatal(err) + } + + cmd := exec.Command(dockerBinary, "build", "--rm=false", "--memory=64m", "--memory-swap=-1", "--cpuset-cpus=1", "--cpu-shares=100", "-t", name, ".") + cmd.Dir = ctx.Dir + + out, _, err := runCommandWithOutput(cmd) + if err != nil { + t.Fatal(err, out) + } + out, _, err = dockerCmd(t, "ps", "-lq") + if err != nil { + t.Fatal(err, out) + } + + cID := stripTrailingCharacters(out) + + type hostConfig struct { + Memory float64 // Use float64 here since the json decoder sees it that way + MemorySwap int + CpusetCpus string + CpuShares int + } + + cfg, err := inspectFieldJSON(cID, "HostConfig") + if err != nil { + t.Fatal(err) + } + + var c1 hostConfig + if err := json.Unmarshal([]byte(cfg), &c1); err != nil { + t.Fatal(err, cfg) + } + mem := int64(c1.Memory) + if mem != 67108864 || c1.MemorySwap != 
-1 || c1.CpusetCpus != "1" || c1.CpuShares != 100 { + t.Fatalf("resource constraints not set properly:\nMemory: %d, MemSwap: %d, CpusetCpus: %s, CpuShares: %d", + mem, c1.MemorySwap, c1.CpusetCpus, c1.CpuShares) + } + + // Make sure constraints aren't saved to image + _, _, err = dockerCmd(t, "run", "--name=test", name) + if err != nil { + t.Fatal(err) + } + cfg, err = inspectFieldJSON("test", "HostConfig") + if err != nil { + t.Fatal(err) + } + var c2 hostConfig + if err := json.Unmarshal([]byte(cfg), &c2); err != nil { + t.Fatal(err, cfg) + } + mem = int64(c2.Memory) + if mem == 67108864 || c2.MemorySwap == -1 || c2.CpusetCpus == "1" || c2.CpuShares == 100 { + t.Fatalf("resource constraints leaked from build:\nMemory: %d, MemSwap: %d, CpusetCpus: %s, CpuShares: %d", + mem, c2.MemorySwap, c2.CpusetCpus, c2.CpuShares) + } + + logDone("build - resource constraints applied") +} diff --git a/integration-cli/docker_cli_by_digest_test.go b/integration-cli/docker_cli_by_digest_test.go new file mode 100644 index 0000000000..24ebf0cf70 --- /dev/null +++ b/integration-cli/docker_cli_by_digest_test.go @@ -0,0 +1,535 @@ +package main + +import ( + "fmt" + "os/exec" + "regexp" + "strings" + "testing" + + "github.com/docker/docker/utils" +) + +var ( + repoName = fmt.Sprintf("%v/dockercli/busybox-by-dgst", privateRegistryURL) + digestRegex = regexp.MustCompile("Digest: ([^\n]+)") +) + +func setupImage() (string, error) { + return setupImageWithTag("latest") +} + +func setupImageWithTag(tag string) (string, error) { + containerName := "busyboxbydigest" + + c := exec.Command(dockerBinary, "run", "-d", "-e", "digest=1", "--name", containerName, "busybox") + if _, err := runCommand(c); err != nil { + return "", err + } + + // tag the image to upload it to the private registry + repoAndTag := utils.ImageReference(repoName, tag) + c = exec.Command(dockerBinary, "commit", containerName, repoAndTag) + if out, _, err := runCommandWithOutput(c); err != nil { + return "", fmt.Errorf("image 
tagging failed: %s, %v", out, err) + } + defer deleteImages(repoAndTag) + + // delete the container as we don't need it any more + if err := deleteContainer(containerName); err != nil { + return "", err + } + + // push the image + c = exec.Command(dockerBinary, "push", repoAndTag) + out, _, err := runCommandWithOutput(c) + if err != nil { + return "", fmt.Errorf("pushing the image to the private registry has failed: %s, %v", out, err) + } + + // delete our local repo that we previously tagged + c = exec.Command(dockerBinary, "rmi", repoAndTag) + if out, _, err := runCommandWithOutput(c); err != nil { + return "", fmt.Errorf("error deleting images prior to real test: %s, %v", out, err) + } + + // the push output includes "Digest: ", so find that + matches := digestRegex.FindStringSubmatch(out) + if len(matches) != 2 { + return "", fmt.Errorf("unable to parse digest from push output: %s", out) + } + pushDigest := matches[1] + + return pushDigest, nil +} + +func TestPullByTagDisplaysDigest(t *testing.T) { + defer setupRegistry(t)() + + pushDigest, err := setupImage() + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + + // pull from the registry using the tag + c := exec.Command(dockerBinary, "pull", repoName) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by tag: %s, %v", out, err) + } + defer deleteImages(repoName) + + // the pull output includes "Digest: ", so find that + matches := digestRegex.FindStringSubmatch(out) + if len(matches) != 2 { + t.Fatalf("unable to parse digest from pull output: %s", out) + } + pullDigest := matches[1] + + // make sure the pushed and pull digests match + if pushDigest != pullDigest { + t.Fatalf("push digest %q didn't match pull digest %q", pushDigest, pullDigest) + } + + logDone("by_digest - pull by tag displays digest") +} + +func TestPullByDigest(t *testing.T) { + defer setupRegistry(t)() + + pushDigest, err := setupImage() + if err != nil { + t.Fatalf("error setting up 
image: %v", err) + } + + // pull from the registry using the @ reference + imageReference := fmt.Sprintf("%s@%s", repoName, pushDigest) + c := exec.Command(dockerBinary, "pull", imageReference) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + defer deleteImages(imageReference) + + // the pull output includes "Digest: ", so find that + matches := digestRegex.FindStringSubmatch(out) + if len(matches) != 2 { + t.Fatalf("unable to parse digest from pull output: %s", out) + } + pullDigest := matches[1] + + // make sure the pushed and pull digests match + if pushDigest != pullDigest { + t.Fatalf("push digest %q didn't match pull digest %q", pushDigest, pullDigest) + } + + logDone("by_digest - pull by digest") +} + +func TestCreateByDigest(t *testing.T) { + defer setupRegistry(t)() + + pushDigest, err := setupImage() + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + + imageReference := fmt.Sprintf("%s@%s", repoName, pushDigest) + + containerName := "createByDigest" + c := exec.Command(dockerBinary, "create", "--name", containerName, imageReference) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error creating by digest: %s, %v", out, err) + } + defer deleteContainer(containerName) + + res, err := inspectField(containerName, "Config.Image") + if err != nil { + t.Fatalf("failed to get Config.Image: %s, %v", out, err) + } + if res != imageReference { + t.Fatalf("unexpected Config.Image: %s (expected %s)", res, imageReference) + } + + logDone("by_digest - create by digest") +} + +func TestRunByDigest(t *testing.T) { + defer setupRegistry(t)() + + pushDigest, err := setupImage() + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + + imageReference := fmt.Sprintf("%s@%s", repoName, pushDigest) + + containerName := "runByDigest" + c := exec.Command(dockerBinary, "run", "--name", containerName, imageReference, "sh", "-c", "echo found=$digest") + 
out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error run by digest: %s, %v", out, err) + } + defer deleteContainer(containerName) + + foundRegex := regexp.MustCompile("found=([^\n]+)") + matches := foundRegex.FindStringSubmatch(out) + if len(matches) != 2 { + t.Fatalf("error locating expected 'found=1' output: %s", out) + } + if matches[1] != "1" { + t.Fatalf("Expected %q, got %q", "1", matches[1]) + } + + res, err := inspectField(containerName, "Config.Image") + if err != nil { + t.Fatalf("failed to get Config.Image: %s, %v", out, err) + } + if res != imageReference { + t.Fatalf("unexpected Config.Image: %s (expected %s)", res, imageReference) + } + + logDone("by_digest - run by digest") +} + +func TestRemoveImageByDigest(t *testing.T) { + defer setupRegistry(t)() + + digest, err := setupImage() + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + + imageReference := fmt.Sprintf("%s@%s", repoName, digest) + + // pull from the registry using the @ reference + c := exec.Command(dockerBinary, "pull", imageReference) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + + // make sure inspect runs ok + if _, err := inspectField(imageReference, "Id"); err != nil { + t.Fatalf("failed to inspect image: %v", err) + } + + // do the delete + if err := deleteImages(imageReference); err != nil { + t.Fatalf("unexpected error deleting image: %v", err) + } + + // try to inspect again - it should error this time + if _, err := inspectField(imageReference, "Id"); err == nil { + t.Fatalf("unexpected nil err trying to inspect what should be a non-existent image") + } else if !strings.Contains(err.Error(), "No such image") { + t.Fatalf("expected 'No such image' output, got %v", err) + } + + logDone("by_digest - remove image by digest") +} + +func TestBuildByDigest(t *testing.T) { + defer setupRegistry(t)() + + digest, err := setupImage() + if err != nil { + t.Fatalf("error 
setting up image: %v", err) + } + + imageReference := fmt.Sprintf("%s@%s", repoName, digest) + + // pull from the registry using the @ reference + c := exec.Command(dockerBinary, "pull", imageReference) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + + // get the image id + imageID, err := inspectField(imageReference, "Id") + if err != nil { + t.Fatalf("error getting image id: %v", err) + } + + // do the build + name := "buildbydigest" + defer deleteImages(name) + _, err = buildImage(name, fmt.Sprintf( + `FROM %s + CMD ["/bin/echo", "Hello World"]`, imageReference), + true) + if err != nil { + t.Fatal(err) + } + + // get the build's image id + res, err := inspectField(name, "Config.Image") + if err != nil { + t.Fatal(err) + } + // make sure they match + if res != imageID { + t.Fatalf("Image %s, expected %s", res, imageID) + } + + logDone("by_digest - build by digest") +} + +func TestTagByDigest(t *testing.T) { + defer setupRegistry(t)() + + digest, err := setupImage() + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + + imageReference := fmt.Sprintf("%s@%s", repoName, digest) + + // pull from the registry using the @ reference + c := exec.Command(dockerBinary, "pull", imageReference) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + + // tag it + tag := "tagbydigest" + c = exec.Command(dockerBinary, "tag", imageReference, tag) + if _, err := runCommand(c); err != nil { + t.Fatalf("unexpected error tagging: %v", err) + } + + expectedID, err := inspectField(imageReference, "Id") + if err != nil { + t.Fatalf("error getting original image id: %v", err) + } + + tagID, err := inspectField(tag, "Id") + if err != nil { + t.Fatalf("error getting tagged image id: %v", err) + } + + if tagID != expectedID { + t.Fatalf("expected image id %q, got %q", expectedID, tagID) + } + + logDone("by_digest - tag by digest") +} + 
+func TestListImagesWithoutDigests(t *testing.T) { + defer setupRegistry(t)() + + digest, err := setupImage() + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + + imageReference := fmt.Sprintf("%s@%s", repoName, digest) + + // pull from the registry using the @ reference + c := exec.Command(dockerBinary, "pull", imageReference) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + + c = exec.Command(dockerBinary, "images") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error listing images: %s, %v", out, err) + } + + if strings.Contains(out, "DIGEST") { + t.Fatalf("list output should not have contained DIGEST header: %s", out) + } + + logDone("by_digest - list images - digest header not displayed by default") +} + +func TestListImagesWithDigests(t *testing.T) { + defer setupRegistry(t)() + defer deleteImages(repoName+":tag1", repoName+":tag2") + + // setup image1 + digest1, err := setupImageWithTag("tag1") + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + imageReference1 := fmt.Sprintf("%s@%s", repoName, digest1) + defer deleteImages(imageReference1) + t.Logf("imageReference1 = %s", imageReference1) + + // pull image1 by digest + c := exec.Command(dockerBinary, "pull", imageReference1) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + + // list images + c = exec.Command(dockerBinary, "images", "--digests") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error listing images: %s, %v", out, err) + } + + // make sure repo shown, tag=, digest = $digest1 + re1 := regexp.MustCompile(`\s*` + repoName + `\s*\s*` + digest1 + `\s`) + if !re1.MatchString(out) { + t.Fatalf("expected %q: %s", re1.String(), out) + } + + // setup image2 + digest2, err := setupImageWithTag("tag2") + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + 
imageReference2 := fmt.Sprintf("%s@%s", repoName, digest2) + defer deleteImages(imageReference2) + t.Logf("imageReference2 = %s", imageReference2) + + // pull image1 by digest + c = exec.Command(dockerBinary, "pull", imageReference1) + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + + // pull image2 by digest + c = exec.Command(dockerBinary, "pull", imageReference2) + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + + // list images + c = exec.Command(dockerBinary, "images", "--digests") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error listing images: %s, %v", out, err) + } + + // make sure repo shown, tag=, digest = $digest1 + if !re1.MatchString(out) { + t.Fatalf("expected %q: %s", re1.String(), out) + } + + // make sure repo shown, tag=, digest = $digest2 + re2 := regexp.MustCompile(`\s*` + repoName + `\s*\s*` + digest2 + `\s`) + if !re2.MatchString(out) { + t.Fatalf("expected %q: %s", re2.String(), out) + } + + // pull tag1 + c = exec.Command(dockerBinary, "pull", repoName+":tag1") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling tag1: %s, %v", out, err) + } + + // list images + c = exec.Command(dockerBinary, "images", "--digests") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error listing images: %s, %v", out, err) + } + + // make sure image 1 has repo, tag, AND repo, , digest + reWithTag1 := regexp.MustCompile(`\s*` + repoName + `\s*tag1\s*\s`) + reWithDigest1 := regexp.MustCompile(`\s*` + repoName + `\s*\s*` + digest1 + `\s`) + if !reWithTag1.MatchString(out) { + t.Fatalf("expected %q: %s", reWithTag1.String(), out) + } + if !reWithDigest1.MatchString(out) { + t.Fatalf("expected %q: %s", reWithDigest1.String(), out) + } + // make sure image 2 has repo, , digest + if !re2.MatchString(out) { + t.Fatalf("expected %q: %s", re2.String(), 
out) + } + + // pull tag 2 + c = exec.Command(dockerBinary, "pull", repoName+":tag2") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling tag2: %s, %v", out, err) + } + + // list images + c = exec.Command(dockerBinary, "images", "--digests") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error listing images: %s, %v", out, err) + } + + // make sure image 1 has repo, tag, digest + if !reWithTag1.MatchString(out) { + t.Fatalf("expected %q: %s", re1.String(), out) + } + + // make sure image 2 has repo, tag, digest + reWithTag2 := regexp.MustCompile(`\s*` + repoName + `\s*tag2\s*\s`) + reWithDigest2 := regexp.MustCompile(`\s*` + repoName + `\s*\s*` + digest2 + `\s`) + if !reWithTag2.MatchString(out) { + t.Fatalf("expected %q: %s", reWithTag2.String(), out) + } + if !reWithDigest2.MatchString(out) { + t.Fatalf("expected %q: %s", reWithDigest2.String(), out) + } + + // list images + c = exec.Command(dockerBinary, "images", "--digests") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error listing images: %s, %v", out, err) + } + + // make sure image 1 has repo, tag, digest + if !reWithTag1.MatchString(out) { + t.Fatalf("expected %q: %s", re1.String(), out) + } + // make sure image 2 has repo, tag, digest + if !reWithTag2.MatchString(out) { + t.Fatalf("expected %q: %s", re2.String(), out) + } + // make sure busybox has tag, but not digest + busyboxRe := regexp.MustCompile(`\s*busybox\s*latest\s*\s`) + if !busyboxRe.MatchString(out) { + t.Fatalf("expected %q: %s", busyboxRe.String(), out) + } + + logDone("by_digest - list images with digests") +} + +func TestDeleteImageByIDOnlyPulledByDigest(t *testing.T) { + defer setupRegistry(t)() + + pushDigest, err := setupImage() + if err != nil { + t.Fatalf("error setting up image: %v", err) + } + + // pull from the registry using the @ reference + imageReference := fmt.Sprintf("%s@%s", repoName, pushDigest) + c := exec.Command(dockerBinary, "pull", 
imageReference) + out, _, err := runCommandWithOutput(c) + if err != nil { + t.Fatalf("error pulling by digest: %s, %v", out, err) + } + // just in case... + defer deleteImages(imageReference) + + imageID, err := inspectField(imageReference, ".Id") + if err != nil { + t.Fatalf("error inspecting image id: %v", err) + } + + c = exec.Command(dockerBinary, "rmi", imageID) + if _, err := runCommand(c); err != nil { + t.Fatalf("error deleting image by id: %v", err) + } + + logDone("by_digest - delete image by id only pulled by digest") +} diff --git a/integration-cli/docker_cli_cp_test.go b/integration-cli/docker_cli_cp_test.go index ecda53526a..db5f363882 100644 --- a/integration-cli/docker_cli_cp_test.go +++ b/integration-cli/docker_cli_cp_test.go @@ -8,6 +8,7 @@ import ( "os/exec" "path" "path/filepath" + "strings" "testing" ) @@ -528,3 +529,30 @@ func TestCpToDot(t *testing.T) { } logDone("cp - to dot path") } + +func TestCpToStdout(t *testing.T) { + out, exitCode, err := dockerCmd(t, "run", "-d", "busybox", "/bin/sh", "-c", "echo lololol > /test") + if err != nil || exitCode != 0 { + t.Fatalf("failed to create a container:%s\n%s", out, err) + } + + cID := stripTrailingCharacters(out) + defer deleteContainer(cID) + + out, _, err = dockerCmd(t, "wait", cID) + if err != nil || stripTrailingCharacters(out) != "0" { + t.Fatalf("failed to set up container:%s\n%s", out, err) + } + + out, _, err = runCommandPipelineWithOutput( + exec.Command(dockerBinary, "cp", cID+":/test", "-"), + exec.Command("tar", "-vtf", "-")) + if err != nil { + t.Fatalf("Failed to run commands: %s", err) + } + + if !strings.Contains(out, "test") || !strings.Contains(out, "-rw") { + t.Fatalf("Missing file from tar TOC:\n%s", out) + } + logDone("cp - to stdout") +} diff --git a/integration-cli/docker_cli_create_test.go b/integration-cli/docker_cli_create_test.go index fe402caa4a..e32400e603 100644 --- a/integration-cli/docker_cli_create_test.go +++ b/integration-cli/docker_cli_create_test.go @@ -4,6 
+4,7 @@ import ( "encoding/json" "os" "os/exec" + "reflect" "testing" "time" @@ -249,3 +250,57 @@ func TestCreateVolumesCreated(t *testing.T) { logDone("create - volumes are created") } + +func TestCreateLabels(t *testing.T) { + name := "test_create_labels" + expected := map[string]string{"k1": "v1", "k2": "v2"} + if out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "create", "--name", name, "-l", "k1=v1", "--label", "k2=v2", "busybox")); err != nil { + t.Fatal(out, err) + } + + actual := make(map[string]string) + err := inspectFieldAndMarshall(name, "Config.Labels", &actual) + if err != nil { + t.Fatal(err) + } + + if !reflect.DeepEqual(expected, actual) { + t.Fatalf("Expected %s got %s", expected, actual) + } + + deleteAllContainers() + + logDone("create - labels") +} + +func TestCreateLabelFromImage(t *testing.T) { + imageName := "testcreatebuildlabel" + defer deleteImages(imageName) + _, err := buildImage(imageName, + `FROM busybox + LABEL k1=v1 k2=v2`, + true) + if err != nil { + t.Fatal(err) + } + + name := "test_create_labels_from_image" + expected := map[string]string{"k2": "x", "k3": "v3", "k1": "v1"} + if out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "create", "--name", name, "-l", "k2=x", "--label", "k3=v3", imageName)); err != nil { + t.Fatal(out, err) + } + + actual := make(map[string]string) + err = inspectFieldAndMarshall(name, "Config.Labels", &actual) + if err != nil { + t.Fatal(err) + } + + if !reflect.DeepEqual(expected, actual) { + t.Fatalf("Expected %s got %s", expected, actual) + } + + deleteAllContainers() + + logDone("create - labels from image") +} diff --git a/integration-cli/docker_cli_daemon_test.go b/integration-cli/docker_cli_daemon_test.go index 31f55296a4..49b43c2f28 100644 --- a/integration-cli/docker_cli_daemon_test.go +++ b/integration-cli/docker_cli_daemon_test.go @@ -11,6 +11,7 @@ import ( "path/filepath" "strings" "testing" + "time" "github.com/docker/libtrust" ) @@ -244,7 +245,7 @@ func 
TestDaemonLoggingLevel(t *testing.T) { } d.Stop() content, _ := ioutil.ReadFile(d.logFile.Name()) - if !strings.Contains(string(content), `level="debug"`) { + if !strings.Contains(string(content), `level=debug`) { t.Fatalf(`Missing level="debug" in log file:\n%s`, string(content)) } @@ -254,7 +255,7 @@ func TestDaemonLoggingLevel(t *testing.T) { } d.Stop() content, _ = ioutil.ReadFile(d.logFile.Name()) - if strings.Contains(string(content), `level="debug"`) { + if strings.Contains(string(content), `level=debug`) { t.Fatalf(`Should not have level="debug" in log file:\n%s`, string(content)) } @@ -264,7 +265,7 @@ func TestDaemonLoggingLevel(t *testing.T) { } d.Stop() content, _ = ioutil.ReadFile(d.logFile.Name()) - if !strings.Contains(string(content), `level="debug"`) { + if !strings.Contains(string(content), `level=debug`) { t.Fatalf(`Missing level="debug" in log file using -D:\n%s`, string(content)) } @@ -274,7 +275,7 @@ func TestDaemonLoggingLevel(t *testing.T) { } d.Stop() content, _ = ioutil.ReadFile(d.logFile.Name()) - if !strings.Contains(string(content), `level="debug"`) { + if !strings.Contains(string(content), `level=debug`) { t.Fatalf(`Missing level="debug" in log file using --debug:\n%s`, string(content)) } @@ -284,7 +285,7 @@ func TestDaemonLoggingLevel(t *testing.T) { } d.Stop() content, _ = ioutil.ReadFile(d.logFile.Name()) - if !strings.Contains(string(content), `level="debug"`) { + if !strings.Contains(string(content), `level=debug`) { t.Fatalf(`Missing level="debug" in log file when using both --debug and --log-level=fatal:\n%s`, string(content)) } @@ -408,7 +409,7 @@ func TestDaemonKeyMigration(t *testing.T) { } // Simulate an older daemon (pre 1.3) coming up with volumes specified in containers -// without corrosponding volume json +// without corresponding volume json func TestDaemonUpgradeWithVolumes(t *testing.T) { d := NewDaemon(t) @@ -481,6 +482,33 @@ func TestDaemonUpgradeWithVolumes(t *testing.T) { logDone("daemon - volumes from old(pre 
1.3) daemon work") } +// GH#11320 - verify that the daemon exits on failure properly +// Note that this explicitly tests the conflict of {-b,--bridge} and {--bip} options as the means +// to get a daemon init failure; no other tests for -b/--bip conflict are therefore required +func TestDaemonExitOnFailure(t *testing.T) { + d := NewDaemon(t) + defer d.Stop() + + //attempt to start daemon with incorrect flags (we know -b and --bip conflict) + if err := d.Start("--bridge", "nosuchbridge", "--bip", "1.1.1.1"); err != nil { + //verify we got the right error + if !strings.Contains(err.Error(), "Daemon exited and never started") { + t.Fatalf("Expected daemon not to start, got %v", err) + } + // look in the log and make sure we got the message that daemon is shutting down + runCmd := exec.Command("grep", "Shutting down daemon due to", d.LogfileName()) + if out, _, err := runCommandWithOutput(runCmd); err != nil { + t.Fatalf("Expected 'shutting down daemon due to error' message; but doesn't exist in log: %q, err: %v", out, err) + } + } else { + //if we didn't get an error and the daemon is running, this is a failure + d.Stop() + t.Fatal("Conflicting options should cause the daemon to error out with a failure") + } + + logDone("daemon - verify no start on daemon init errors") +} + func TestDaemonUlimitDefaults(t *testing.T) { testRequires(t, NativeExecDriver) d := NewDaemon(t) @@ -534,3 +562,241 @@ func TestDaemonUlimitDefaults(t *testing.T) { logDone("daemon - default ulimits are applied") } + +// #11315 +func TestDaemonRestartRenameContainer(t *testing.T) { + d := NewDaemon(t) + if err := d.StartWithBusybox(); err != nil { + t.Fatal(err) + } + + if out, err := d.Cmd("run", "--name=test", "busybox"); err != nil { + t.Fatal(err, out) + } + + if out, err := d.Cmd("rename", "test", "test2"); err != nil { + t.Fatal(err, out) + } + + if err := d.Restart(); err != nil { + t.Fatal(err) + } + + if out, err := d.Cmd("start", "test2"); err != nil { + t.Fatal(err, out) + } + + 
logDone("daemon - rename persists through daemon restart") +} + +func TestDaemonLoggingDriverDefault(t *testing.T) { + d := NewDaemon(t) + + if err := d.StartWithBusybox(); err != nil { + t.Fatal(err) + } + defer d.Stop() + + out, err := d.Cmd("run", "-d", "busybox", "echo", "testline") + if err != nil { + t.Fatal(out, err) + } + id := strings.TrimSpace(out) + + if out, err := d.Cmd("wait", id); err != nil { + t.Fatal(out, err) + } + logPath := filepath.Join(d.folder, "graph", "containers", id, id+"-json.log") + + if _, err := os.Stat(logPath); err != nil { + t.Fatal(err) + } + f, err := os.Open(logPath) + if err != nil { + t.Fatal(err) + } + var res struct { + Log string `json:"log"` + Stream string `json:"stream"` + Time time.Time `json:"time"` + } + if err := json.NewDecoder(f).Decode(&res); err != nil { + t.Fatal(err) + } + if res.Log != "testline\n" { + t.Fatalf("Unexpected log line: %q, expected: %q", res.Log, "testline\n") + } + if res.Stream != "stdout" { + t.Fatalf("Unexpected stream: %q, expected: %q", res.Stream, "stdout") + } + if !time.Now().After(res.Time) { + t.Fatalf("Log time %v in future", res.Time) + } + logDone("daemon - default 'json-file' logging driver") +} + +func TestDaemonLoggingDriverDefaultOverride(t *testing.T) { + d := NewDaemon(t) + + if err := d.StartWithBusybox(); err != nil { + t.Fatal(err) + } + defer d.Stop() + + out, err := d.Cmd("run", "-d", "--log-driver=none", "busybox", "echo", "testline") + if err != nil { + t.Fatal(out, err) + } + id := strings.TrimSpace(out) + + if out, err := d.Cmd("wait", id); err != nil { + t.Fatal(out, err) + } + logPath := filepath.Join(d.folder, "graph", "containers", id, id+"-json.log") + + if _, err := os.Stat(logPath); err == nil || !os.IsNotExist(err) { + t.Fatalf("%s shouldn't exist, error on Stat: %s", logPath, err) + } + logDone("daemon - default logging driver override in run") +} + +func TestDaemonLoggingDriverNone(t *testing.T) { + d := NewDaemon(t) + + if err := 
d.StartWithBusybox("--log-driver=none"); err != nil { + t.Fatal(err) + } + defer d.Stop() + + out, err := d.Cmd("run", "-d", "busybox", "echo", "testline") + if err != nil { + t.Fatal(out, err) + } + id := strings.TrimSpace(out) + if out, err := d.Cmd("wait", id); err != nil { + t.Fatal(out, err) + } + + logPath := filepath.Join(d.folder, "graph", "containers", id, id+"-json.log") + + if _, err := os.Stat(logPath); err == nil || !os.IsNotExist(err) { + t.Fatalf("%s shouldn't exist, error on Stat: %s", logPath, err) + } + logDone("daemon - 'none' logging driver") +} + +func TestDaemonLoggingDriverNoneOverride(t *testing.T) { + d := NewDaemon(t) + + if err := d.StartWithBusybox("--log-driver=none"); err != nil { + t.Fatal(err) + } + defer d.Stop() + + out, err := d.Cmd("run", "-d", "--log-driver=json-file", "busybox", "echo", "testline") + if err != nil { + t.Fatal(out, err) + } + id := strings.TrimSpace(out) + + if out, err := d.Cmd("wait", id); err != nil { + t.Fatal(out, err) + } + logPath := filepath.Join(d.folder, "graph", "containers", id, id+"-json.log") + + if _, err := os.Stat(logPath); err != nil { + t.Fatal(err) + } + f, err := os.Open(logPath) + if err != nil { + t.Fatal(err) + } + var res struct { + Log string `json:"log"` + Stream string `json:"stream"` + Time time.Time `json:"time"` + } + if err := json.NewDecoder(f).Decode(&res); err != nil { + t.Fatal(err) + } + if res.Log != "testline\n" { + t.Fatalf("Unexpected log line: %q, expected: %q", res.Log, "testline\n") + } + if res.Stream != "stdout" { + t.Fatalf("Unexpected stream: %q, expected: %q", res.Stream, "stdout") + } + if !time.Now().After(res.Time) { + t.Fatalf("Log time %v in future", res.Time) + } + logDone("daemon - 'none' logging driver override in run") +} + +func TestDaemonLoggingDriverNoneLogsError(t *testing.T) { + d := NewDaemon(t) + + if err := d.StartWithBusybox("--log-driver=none"); err != nil { + t.Fatal(err) + } + defer d.Stop() + + out, err := d.Cmd("run", "-d", "busybox", "echo", 
"testline") + if err != nil { + t.Fatal(out, err) + } + id := strings.TrimSpace(out) + out, err = d.Cmd("logs", id) + if err == nil { + t.Fatalf("Logs should fail with \"none\" driver") + } + if !strings.Contains(out, `\"logs\" command is supported only for \"json-file\" logging driver`) { + t.Fatalf("There should be error about non-json-file driver, got %s", out) + } + logDone("daemon - logs not available for non-json-file drivers") +} + +func TestDaemonDots(t *testing.T) { + defer deleteAllContainers() + d := NewDaemon(t) + if err := d.StartWithBusybox(); err != nil { + t.Fatal(err) + } + + // Now create 4 containers + if _, err := d.Cmd("create", "busybox"); err != nil { + t.Fatalf("Error creating container: %q", err) + } + if _, err := d.Cmd("create", "busybox"); err != nil { + t.Fatalf("Error creating container: %q", err) + } + if _, err := d.Cmd("create", "busybox"); err != nil { + t.Fatalf("Error creating container: %q", err) + } + if _, err := d.Cmd("create", "busybox"); err != nil { + t.Fatalf("Error creating container: %q", err) + } + + d.Stop() + + d.Start("--log-level=debug") + d.Stop() + content, _ := ioutil.ReadFile(d.logFile.Name()) + if strings.Contains(string(content), "....") { + t.Fatalf("Debug level should not have ....\n%s", string(content)) + } + + d.Start("--log-level=error") + d.Stop() + content, _ = ioutil.ReadFile(d.logFile.Name()) + if strings.Contains(string(content), "....") { + t.Fatalf("Error level should not have ....\n%s", string(content)) + } + + d.Start("--log-level=info") + d.Stop() + content, _ = ioutil.ReadFile(d.logFile.Name()) + if !strings.Contains(string(content), "....") { + t.Fatalf("Info level should have ....\n%s", string(content)) + } + + logDone("daemon - test dots on INFO") +} diff --git a/integration-cli/docker_cli_events_test.go b/integration-cli/docker_cli_events_test.go index 8f632b6ec6..a74ce15fb4 100644 --- a/integration-cli/docker_cli_events_test.go +++ b/integration-cli/docker_cli_events_test.go @@ -45,7 
+45,7 @@ func TestEventsContainerFailStartDie(t *testing.T) { t.Fatalf("Container run with command blerg should have failed, but it did not") } - eventsCmd = exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", time.Now().Unix())) + eventsCmd = exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", daemonTime(t).Unix())) out, _, _ = runCommandWithOutput(eventsCmd) events := strings.Split(out, "\n") if len(events) <= 1 { @@ -70,7 +70,7 @@ func TestEventsLimit(t *testing.T) { for i := 0; i < 30; i++ { dockerCmd(t, "run", "busybox", "echo", strconv.Itoa(i)) } - eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", time.Now().Unix())) + eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", daemonTime(t).Unix())) out, _, _ := runCommandWithOutput(eventsCmd) events := strings.Split(out, "\n") nEvents := len(events) - 1 @@ -82,7 +82,7 @@ func TestEventsLimit(t *testing.T) { func TestEventsContainerEvents(t *testing.T) { dockerCmd(t, "run", "--rm", "busybox", "true") - eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", time.Now().Unix())) + eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", daemonTime(t).Unix())) out, exitCode, err := runCommandWithOutput(eventsCmd) if exitCode != 0 || err != nil { t.Fatalf("Failed to get events with exit code %d: %s", exitCode, err) @@ -125,7 +125,7 @@ func TestEventsImageUntagDelete(t *testing.T) { if err := deleteImages(name); err != nil { t.Fatal(err) } - eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", time.Now().Unix())) + eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", daemonTime(t).Unix())) out, exitCode, err := runCommandWithOutput(eventsCmd) if exitCode != 0 || err != nil { t.Fatalf("Failed to get events with exit code %d: %s", exitCode, err) @@ -148,7 +148,7 @@ 
func TestEventsImageUntagDelete(t *testing.T) { } func TestEventsImagePull(t *testing.T) { - since := time.Now().Unix() + since := daemonTime(t).Unix() defer deleteImages("hello-world") @@ -159,7 +159,7 @@ func TestEventsImagePull(t *testing.T) { eventsCmd := exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), - fmt.Sprintf("--until=%d", time.Now().Unix())) + fmt.Sprintf("--until=%d", daemonTime(t).Unix())) out, _, _ := runCommandWithOutput(eventsCmd) events := strings.Split(strings.TrimSpace(out), "\n") @@ -174,7 +174,7 @@ func TestEventsImagePull(t *testing.T) { func TestEventsImageImport(t *testing.T) { defer deleteAllContainers() - since := time.Now().Unix() + since := daemonTime(t).Unix() runCmd := exec.Command(dockerBinary, "run", "-d", "busybox", "true") out, _, err := runCommandWithOutput(runCmd) @@ -193,7 +193,7 @@ func TestEventsImageImport(t *testing.T) { eventsCmd := exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), - fmt.Sprintf("--until=%d", time.Now().Unix())) + fmt.Sprintf("--until=%d", daemonTime(t).Unix())) out, _, _ = runCommandWithOutput(eventsCmd) events := strings.Split(strings.TrimSpace(out), "\n") @@ -219,7 +219,7 @@ func TestEventsFilters(t *testing.T) { } } - since := time.Now().Unix() + since := daemonTime(t).Unix() out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "run", "--rm", "busybox", "true")) if err != nil { t.Fatal(out, err) @@ -228,13 +228,13 @@ func TestEventsFilters(t *testing.T) { if err != nil { t.Fatal(out, err) } - out, _, err = runCommandWithOutput(exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), fmt.Sprintf("--until=%d", time.Now().Unix()), "--filter", "event=die")) + out, _, err = runCommandWithOutput(exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), fmt.Sprintf("--until=%d", daemonTime(t).Unix()), "--filter", "event=die")) if err != nil { t.Fatalf("Failed to get events: %s", err) } parseEvents(out, "die") - out, _, err = 
runCommandWithOutput(exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), fmt.Sprintf("--until=%d", time.Now().Unix()), "--filter", "event=die", "--filter", "event=start")) + out, _, err = runCommandWithOutput(exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), fmt.Sprintf("--until=%d", daemonTime(t).Unix()), "--filter", "event=die", "--filter", "event=start")) if err != nil { t.Fatalf("Failed to get events: %s", err) } @@ -248,3 +248,141 @@ func TestEventsFilters(t *testing.T) { logDone("events - filters") } + +func TestEventsFilterImageName(t *testing.T) { + since := daemonTime(t).Unix() + defer deleteAllContainers() + + out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "run", "--name", "container_1", "-d", "busybox", "true")) + if err != nil { + t.Fatal(out, err) + } + container1 := stripTrailingCharacters(out) + + out, _, err = runCommandWithOutput(exec.Command(dockerBinary, "run", "--name", "container_2", "-d", "busybox", "true")) + if err != nil { + t.Fatal(out, err) + } + container2 := stripTrailingCharacters(out) + + for _, s := range []string{"busybox", "busybox:latest"} { + eventsCmd := exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), fmt.Sprintf("--until=%d", daemonTime(t).Unix()), "--filter", fmt.Sprintf("image=%s", s)) + out, _, err := runCommandWithOutput(eventsCmd) + if err != nil { + t.Fatalf("Failed to get events, error: %s(%s)", err, out) + } + events := strings.Split(out, "\n") + events = events[:len(events)-1] + if len(events) == 0 { + t.Fatalf("Expected events but found none for the image busybox:latest") + } + count1 := 0 + count2 := 0 + for _, e := range events { + if strings.Contains(e, container1) { + count1++ + } else if strings.Contains(e, container2) { + count2++ + } + } + if count1 == 0 || count2 == 0 { + t.Fatalf("Expected events from each container but got %d from %s and %d from %s", count1, container1, count2, container2) + } + } + + logDone("events - filters 
using image") +} + +func TestEventsFilterContainerID(t *testing.T) { + since := daemonTime(t).Unix() + defer deleteAllContainers() + + out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "run", "-d", "busybox", "true")) + if err != nil { + t.Fatal(out, err) + } + container1 := stripTrailingCharacters(out) + + out, _, err = runCommandWithOutput(exec.Command(dockerBinary, "run", "-d", "busybox", "true")) + if err != nil { + t.Fatal(out, err) + } + container2 := stripTrailingCharacters(out) + + for _, s := range []string{container1, container2, container1[:12], container2[:12]} { + eventsCmd := exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), fmt.Sprintf("--until=%d", daemonTime(t).Unix()), "--filter", fmt.Sprintf("container=%s", s)) + out, _, err := runCommandWithOutput(eventsCmd) + if err != nil { + t.Fatalf("Failed to get events, error: %s(%s)", err, out) + } + events := strings.Split(out, "\n") + events = events[:len(events)-1] + if len(events) == 0 || len(events) > 3 { + t.Fatalf("Expected 3 events, got %d: %v", len(events), events) + } + createEvent := strings.Fields(events[0]) + if createEvent[len(createEvent)-1] != "create" { + t.Fatalf("first event should be create, not %#v", createEvent) + } + if len(events) > 1 { + startEvent := strings.Fields(events[1]) + if startEvent[len(startEvent)-1] != "start" { + t.Fatalf("second event should be start, not %#v", startEvent) + } + } + if len(events) == 3 { + dieEvent := strings.Fields(events[len(events)-1]) + if dieEvent[len(dieEvent)-1] != "die" { + t.Fatalf("event should be die, not %#v", dieEvent) + } + } + } + + logDone("events - filters using container id") +} + +func TestEventsFilterContainerName(t *testing.T) { + since := daemonTime(t).Unix() + defer deleteAllContainers() + + _, _, err := runCommandWithOutput(exec.Command(dockerBinary, "run", "--name", "container_1", "busybox", "true")) + if err != nil { + t.Fatal(err) + } + + _, _, err = 
runCommandWithOutput(exec.Command(dockerBinary, "run", "--name", "container_2", "busybox", "true")) + if err != nil { + t.Fatal(err) + } + + for _, s := range []string{"container_1", "container_2"} { + eventsCmd := exec.Command(dockerBinary, "events", fmt.Sprintf("--since=%d", since), fmt.Sprintf("--until=%d", daemonTime(t).Unix()), "--filter", fmt.Sprintf("container=%s", s)) + out, _, err := runCommandWithOutput(eventsCmd) + if err != nil { + t.Fatalf("Failed to get events, error : %s(%s)", err, out) + } + events := strings.Split(out, "\n") + events = events[:len(events)-1] + if len(events) == 0 || len(events) > 3 { + t.Fatalf("Expected 3 events, got %d: %v", len(events), events) + } + createEvent := strings.Fields(events[0]) + if createEvent[len(createEvent)-1] != "create" { + t.Fatalf("first event should be create, not %#v", createEvent) + } + if len(events) > 1 { + startEvent := strings.Fields(events[1]) + if startEvent[len(startEvent)-1] != "start" { + t.Fatalf("second event should be start, not %#v", startEvent) + } + } + if len(events) == 3 { + dieEvent := strings.Fields(events[len(events)-1]) + if dieEvent[len(dieEvent)-1] != "die" { + t.Fatalf("event should be die, not %#v", dieEvent) + } + } + } + + logDone("events - filters using container name") +} diff --git a/integration-cli/docker_cli_events_unix_test.go b/integration-cli/docker_cli_events_unix_test.go index fd6c434752..4e54283501 100644 --- a/integration-cli/docker_cli_events_unix_test.go +++ b/integration-cli/docker_cli_events_unix_test.go @@ -9,7 +9,6 @@ import ( "os" "os/exec" "testing" - "time" "unicode" "github.com/kr/pty" @@ -17,11 +16,8 @@ import ( // #5979 func TestEventsRedirectStdout(t *testing.T) { - - since := time.Now().Unix() - + since := daemonTime(t).Unix() dockerCmd(t, "run", "busybox", "true") - defer deleteAllContainers() file, err := ioutil.TempFile("", "") @@ -30,7 +26,7 @@ func TestEventsRedirectStdout(t *testing.T) { } defer os.Remove(file.Name()) - command := fmt.Sprintf("%s 
events --since=%d --until=%d > %s", dockerBinary, since, time.Now().Unix(), file.Name()) + command := fmt.Sprintf("%s events --since=%d --until=%d > %s", dockerBinary, since, daemonTime(t).Unix(), file.Name()) _, tty, err := pty.Open() if err != nil { t.Fatalf("Could not open pty: %v", err) diff --git a/integration-cli/docker_cli_exec_test.go b/integration-cli/docker_cli_exec_test.go index be8a042868..01adc43c08 100644 --- a/integration-cli/docker_cli_exec_test.go +++ b/integration-cli/docker_cli_exec_test.go @@ -60,7 +60,7 @@ func TestExecInteractiveStdinClose(t *testing.T) { out, err := cmd.CombinedOutput() if err != nil { - t.Fatal(err, out) + t.Fatal(err, string(out)) } if string(out) == "" { @@ -197,7 +197,7 @@ func TestExecAfterDaemonRestart(t *testing.T) { logDone("exec - exec running container after daemon restart") } -// Regresssion test for #9155, #9044 +// Regression test for #9155, #9044 func TestExecEnv(t *testing.T) { defer deleteAllContainers() @@ -538,7 +538,6 @@ func TestRunExecDir(t *testing.T) { id := strings.TrimSpace(out) execDir := filepath.Join(execDriverPath, id) stateFile := filepath.Join(execDir, "state.json") - contFile := filepath.Join(execDir, "container.json") { fi, err := os.Stat(execDir) @@ -552,10 +551,6 @@ func TestRunExecDir(t *testing.T) { if err != nil { t.Fatal(err) } - fi, err = os.Stat(contFile) - if err != nil { - t.Fatal(err) - } } stopCmd := exec.Command(dockerBinary, "stop", id) @@ -564,23 +559,12 @@ func TestRunExecDir(t *testing.T) { t.Fatal(err, out) } { - fi, err := os.Stat(execDir) - if err != nil { + _, err := os.Stat(execDir) + if err == nil { t.Fatal(err) } - if !fi.IsDir() { - t.Fatalf("%q must be a directory", execDir) - } - fi, err = os.Stat(stateFile) if err == nil { - t.Fatalf("Statefile %q is exists for stopped container!", stateFile) - } - if !os.IsNotExist(err) { - t.Fatalf("Error should be about non-existing, got %s", err) - } - fi, err = os.Stat(contFile) - if err == nil { - t.Fatalf("Container file %q 
is exists for stopped container!", contFile) + t.Fatalf("Exec directory %q exists for removed container!", execDir) } if !os.IsNotExist(err) { t.Fatalf("Error should be about non-existing, got %s", err) @@ -603,10 +587,6 @@ func TestRunExecDir(t *testing.T) { if err != nil { t.Fatal(err) } - fi, err = os.Stat(contFile) - if err != nil { - t.Fatal(err) - } } rmCmd := exec.Command(dockerBinary, "rm", "-f", id) out, _, err = runCommandWithOutput(rmCmd) diff --git a/integration-cli/docker_cli_export_import_test.go b/integration-cli/docker_cli_export_import_test.go index 224bb95bbf..5b2a016f14 100644 --- a/integration-cli/docker_cli_export_import_test.go +++ b/integration-cli/docker_cli_export_import_test.go @@ -1,6 +1,7 @@ package main import ( + "os" "os/exec" "strings" "testing" @@ -47,3 +48,52 @@ func TestExportContainerAndImportImage(t *testing.T) { logDone("export - export a container") logDone("import - import an image") } + +// Used to test output flag in the export command +func TestExportContainerWithOutputAndImportImage(t *testing.T) { + runCmd := exec.Command(dockerBinary, "run", "-d", "busybox", "true") + out, _, err := runCommandWithOutput(runCmd) + if err != nil { + t.Fatal("failed to create a container", out, err) + } + + cleanedContainerID := stripTrailingCharacters(out) + + inspectCmd := exec.Command(dockerBinary, "inspect", cleanedContainerID) + out, _, err = runCommandWithOutput(inspectCmd) + if err != nil { + t.Fatalf("output should've been a container id: %s %s ", cleanedContainerID, err) + } + + exportCmd := exec.Command(dockerBinary, "export", "--output=testexp.tar", cleanedContainerID) + if out, _, err = runCommandWithOutput(exportCmd); err != nil { + t.Fatalf("failed to export container: %s, %v", out, err) + } + + out, _, err = runCommandWithOutput(exec.Command("cat", "testexp.tar")) + if err != nil { + t.Fatal(out, err) + } + + importCmd := exec.Command(dockerBinary, "import", "-", "repo/testexp:v1") + importCmd.Stdin = strings.NewReader(out) 
+ out, _, err = runCommandWithOutput(importCmd) + if err != nil { + t.Fatalf("failed to import image: %s, %v", out, err) + } + + cleanedImageID := stripTrailingCharacters(out) + + inspectCmd = exec.Command(dockerBinary, "inspect", cleanedImageID) + if out, _, err = runCommandWithOutput(inspectCmd); err != nil { + t.Fatalf("output should've been an image id: %s, %v", out, err) + } + + deleteContainer(cleanedContainerID) + deleteImages("repo/testexp:v1") + + os.Remove("testexp.tar") + + logDone("export - export a container with output flag") + logDone("import - import an image with output flag") +} diff --git a/integration-cli/docker_cli_help_test.go b/integration-cli/docker_cli_help_test.go index 1c83204244..8fc5cd1aab 100644 --- a/integration-cli/docker_cli_help_test.go +++ b/integration-cli/docker_cli_help_test.go @@ -59,6 +59,11 @@ func TestHelpTextVerify(t *testing.T) { t.Fatalf("Line is too long(%d chars):\n%s", len(line), line) } + // All lines should not end with a space + if strings.HasSuffix(line, " ") { + t.Fatalf("Line should not end with a space: %s", line) + } + if scanForHome && strings.Contains(line, `=`+home) { t.Fatalf("Line should use '%q' instead of %q:\n%s", homedir.GetShortcutString(), home, line) } @@ -130,6 +135,12 @@ func TestHelpTextVerify(t *testing.T) { if strings.HasPrefix(line, " -") && strings.HasSuffix(line, ".") { t.Fatalf("Help for %q should not end with a period: %s", cmd, line) } + + // Options should NOT end with a space + if strings.HasSuffix(line, " ") { + t.Fatalf("Help for %q should not end with a space: %s", cmd, line) + } + } } diff --git a/integration-cli/docker_cli_images_test.go b/integration-cli/docker_cli_images_test.go index bb24e9a347..694971191e 100644 --- a/integration-cli/docker_cli_images_test.go +++ b/integration-cli/docker_cli_images_test.go @@ -8,6 +8,8 @@ import ( "strings" "testing" "time" + + "github.com/docker/docker/pkg/common" ) func TestImagesEnsureImageIsListed(t *testing.T) { @@ -77,6 +79,60 @@ 
func TestImagesErrorWithInvalidFilterNameTest(t *testing.T) { logDone("images - invalid filter name check working") } +func TestImagesFilterLabel(t *testing.T) { + imageName1 := "images_filter_test1" + imageName2 := "images_filter_test2" + imageName3 := "images_filter_test3" + defer deleteAllContainers() + defer deleteImages(imageName1) + defer deleteImages(imageName2) + defer deleteImages(imageName3) + image1ID, err := buildImage(imageName1, + `FROM scratch + LABEL match me`, true) + if err != nil { + t.Fatal(err) + } + + image2ID, err := buildImage(imageName2, + `FROM scratch + LABEL match="me too"`, true) + if err != nil { + t.Fatal(err) + } + + image3ID, err := buildImage(imageName3, + `FROM scratch + LABEL nomatch me`, true) + if err != nil { + t.Fatal(err) + } + + cmd := exec.Command(dockerBinary, "images", "--no-trunc", "-q", "-f", "label=match") + out, _, err := runCommandWithOutput(cmd) + if err != nil { + t.Fatal(out, err) + } + out = strings.TrimSpace(out) + + if (!strings.Contains(out, image1ID) && !strings.Contains(out, image2ID)) || strings.Contains(out, image3ID) { + t.Fatalf("Expected ids %s,%s got %s", image1ID, image2ID, out) + } + + cmd = exec.Command(dockerBinary, "images", "--no-trunc", "-q", "-f", "label=match=me too") + out, _, err = runCommandWithOutput(cmd) + if err != nil { + t.Fatal(out, err) + } + out = strings.TrimSpace(out) + + if out != image2ID { + t.Fatalf("Expected %s got %s", image2ID, out) + } + + logDone("images - filter label") +} + func TestImagesFilterWhiteSpaceTrimmingAndLowerCasingWorking(t *testing.T) { imageName := "images_filter_test" defer deleteAllContainers() @@ -122,3 +178,44 @@ func TestImagesFilterWhiteSpaceTrimmingAndLowerCasingWorking(t *testing.T) { logDone("images - white space trimming and lower casing") } + +func TestImagesEnsureDanglingImageOnlyListedOnce(t *testing.T) { + defer deleteAllContainers() + + // create container 1 + c := exec.Command(dockerBinary, "run", "-d", "busybox", "true") + out, _, err := 
runCommandWithOutput(c) + if err != nil { + t.Fatalf("error running busybox: %s, %v", out, err) + } + containerId1 := strings.TrimSpace(out) + + // tag as foobox + c = exec.Command(dockerBinary, "commit", containerId1, "foobox") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error tagging foobox: %s", err) + } + imageId := common.TruncateID(strings.TrimSpace(out)) + defer deleteImages(imageId) + + // overwrite the tag, making the previous image dangling + c = exec.Command(dockerBinary, "tag", "-f", "busybox", "foobox") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("error tagging foobox: %s", err) + } + defer deleteImages("foobox") + + c = exec.Command(dockerBinary, "images", "-q", "-f", "dangling=true") + out, _, err = runCommandWithOutput(c) + if err != nil { + t.Fatalf("listing images failed with errors: %s, %v", out, err) + } + + if e, a := 1, strings.Count(out, imageId); e != a { + t.Fatalf("expected 1 dangling image, got %d: %s", a, out) + } + + logDone("images - dangling image only listed once") +} diff --git a/integration-cli/docker_cli_links_test.go b/integration-cli/docker_cli_links_test.go index 47635a1942..efee8d04e3 100644 --- a/integration-cli/docker_cli_links_test.go +++ b/integration-cli/docker_cli_links_test.go @@ -132,14 +132,14 @@ func TestLinksIpTablesRulesWhenLinkAndUnlink(t *testing.T) { childIP := findContainerIP(t, "child") parentIP := findContainerIP(t, "parent") - sourceRule := []string{"DOCKER", "-i", "docker0", "-o", "docker0", "-p", "tcp", "-s", childIP, "--sport", "80", "-d", parentIP, "-j", "ACCEPT"} - destinationRule := []string{"DOCKER", "-i", "docker0", "-o", "docker0", "-p", "tcp", "-s", parentIP, "--dport", "80", "-d", childIP, "-j", "ACCEPT"} - if !iptables.Exists(sourceRule...) || !iptables.Exists(destinationRule...) 
{ + sourceRule := []string{"-i", "docker0", "-o", "docker0", "-p", "tcp", "-s", childIP, "--sport", "80", "-d", parentIP, "-j", "ACCEPT"} + destinationRule := []string{"-i", "docker0", "-o", "docker0", "-p", "tcp", "-s", parentIP, "--dport", "80", "-d", childIP, "-j", "ACCEPT"} + if !iptables.Exists("filter", "DOCKER", sourceRule...) || !iptables.Exists("filter", "DOCKER", destinationRule...) { t.Fatal("Iptables rules not found") } dockerCmd(t, "rm", "--link", "parent/http") - if iptables.Exists(sourceRule...) || iptables.Exists(destinationRule...) { + if iptables.Exists("filter", "DOCKER", sourceRule...) || iptables.Exists("filter", "DOCKER", destinationRule...) { t.Fatal("Iptables rules should be removed when unlink") } diff --git a/integration-cli/docker_cli_pause_test.go b/integration-cli/docker_cli_pause_test.go index 1d57c5729d..f1ccde9cfc 100644 --- a/integration-cli/docker_cli_pause_test.go +++ b/integration-cli/docker_cli_pause_test.go @@ -5,7 +5,6 @@ import ( "os/exec" "strings" "testing" - "time" ) func TestPause(t *testing.T) { @@ -28,7 +27,7 @@ func TestPause(t *testing.T) { dockerCmd(t, "unpause", name) - eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", time.Now().Unix())) + eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", daemonTime(t).Unix())) out, _, _ = runCommandWithOutput(eventsCmd) events := strings.Split(out, "\n") if len(events) <= 1 { @@ -77,7 +76,7 @@ func TestPauseMultipleContainers(t *testing.T) { dockerCmd(t, append([]string{"unpause"}, containers...)...) 
- eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", time.Now().Unix())) + eventsCmd := exec.Command(dockerBinary, "events", "--since=0", fmt.Sprintf("--until=%d", daemonTime(t).Unix())) out, _, _ = runCommandWithOutput(eventsCmd) events := strings.Split(out, "\n") if len(events) <= len(containers)*3-2 { diff --git a/integration-cli/docker_cli_proxy_test.go b/integration-cli/docker_cli_proxy_test.go index 98129bdae8..b39dd5634d 100644 --- a/integration-cli/docker_cli_proxy_test.go +++ b/integration-cli/docker_cli_proxy_test.go @@ -8,8 +8,10 @@ import ( ) func TestCliProxyDisableProxyUnixSock(t *testing.T) { + testRequires(t, SameHostDaemon) // test is valid when DOCKER_HOST=unix://.. + cmd := exec.Command(dockerBinary, "info") - cmd.Env = appendDockerHostEnv([]string{"HTTP_PROXY=http://127.0.0.1:9999"}) + cmd.Env = appendBaseEnv([]string{"HTTP_PROXY=http://127.0.0.1:9999"}) if out, _, err := runCommandWithOutput(cmd); err != nil { t.Fatal(err, out) diff --git a/integration-cli/docker_cli_ps_test.go b/integration-cli/docker_cli_ps_test.go index d5f9d00dcb..c2e108576d 100644 --- a/integration-cli/docker_cli_ps_test.go +++ b/integration-cli/docker_cli_ps_test.go @@ -412,6 +412,74 @@ func TestPsListContainersFilterName(t *testing.T) { logDone("ps - test ps filter name") } +func TestPsListContainersFilterLabel(t *testing.T) { + // start container + runCmd := exec.Command(dockerBinary, "run", "-d", "-l", "match=me", "-l", "second=tag", "busybox") + out, _, err := runCommandWithOutput(runCmd) + if err != nil { + t.Fatal(out, err) + } + firstID := stripTrailingCharacters(out) + + // start another container + runCmd = exec.Command(dockerBinary, "run", "-d", "-l", "match=me too", "busybox") + if out, _, err = runCommandWithOutput(runCmd); err != nil { + t.Fatal(out, err) + } + secondID := stripTrailingCharacters(out) + + // start third container + runCmd = exec.Command(dockerBinary, "run", "-d", "-l", "nomatch=me", "busybox") + if out, 
_, err = runCommandWithOutput(runCmd); err != nil { + t.Fatal(out, err) + } + thirdID := stripTrailingCharacters(out) + + // filter containers by exact match + runCmd = exec.Command(dockerBinary, "ps", "-a", "-q", "--no-trunc", "--filter=label=match=me") + if out, _, err = runCommandWithOutput(runCmd); err != nil { + t.Fatal(out, err) + } + containerOut := strings.TrimSpace(out) + if containerOut != firstID { + t.Fatalf("Expected id %s, got %s for label filter, output: %q", firstID, containerOut, out) + } + + // filter containers by two labels + runCmd = exec.Command(dockerBinary, "ps", "-a", "-q", "--no-trunc", "--filter=label=match=me", "--filter=label=second=tag") + if out, _, err = runCommandWithOutput(runCmd); err != nil { + t.Fatal(out, err) + } + containerOut = strings.TrimSpace(out) + if containerOut != firstID { + t.Fatalf("Expected id %s, got %s for label filter, output: %q", firstID, containerOut, out) + } + + // filter containers by two labels, but expect not found because of AND behavior + runCmd = exec.Command(dockerBinary, "ps", "-a", "-q", "--no-trunc", "--filter=label=match=me", "--filter=label=second=tag-no") + if out, _, err = runCommandWithOutput(runCmd); err != nil { + t.Fatal(out, err) + } + containerOut = strings.TrimSpace(out) + if containerOut != "" { + t.Fatalf("Expected nothing, got %s for label filter, output: %q", containerOut, out) + } + + // filter containers by exact key + runCmd = exec.Command(dockerBinary, "ps", "-a", "-q", "--no-trunc", "--filter=label=match") + if out, _, err = runCommandWithOutput(runCmd); err != nil { + t.Fatal(out, err) + } + containerOut = strings.TrimSpace(out) + if (!strings.Contains(containerOut, firstID) || !strings.Contains(containerOut, secondID)) || strings.Contains(containerOut, thirdID) { + t.Fatalf("Expected ids %s,%s, got %s for label filter, output: %q", firstID, secondID, containerOut, out) + } + + deleteAllContainers() + + logDone("ps - test ps filter label") +} + func 
TestPsListContainersFilterExited(t *testing.T) { defer deleteAllContainers() diff --git a/integration-cli/docker_cli_push_test.go b/integration-cli/docker_cli_push_test.go index ee1c2bc076..f1274ba706 100644 --- a/integration-cli/docker_cli_push_test.go +++ b/integration-cli/docker_cli_push_test.go @@ -17,7 +17,7 @@ func TestPushBusyboxImage(t *testing.T) { defer setupRegistry(t)() repoName := fmt.Sprintf("%v/dockercli/busybox", privateRegistryURL) - // tag the image to upload it tot he private registry + // tag the image to upload it to the private registry tagCmd := exec.Command(dockerBinary, "tag", "busybox", repoName) if out, _, err := runCommandWithOutput(tagCmd); err != nil { t.Fatalf("image tagging failed: %s, %v", out, err) @@ -45,7 +45,7 @@ func TestPushUntagged(t *testing.T) { repoName := fmt.Sprintf("%v/dockercli/busybox", privateRegistryURL) - expected := "No tags to push" + expected := "Repository does not exist" pushCmd := exec.Command(dockerBinary, "push", repoName) if out, _, err := runCommandWithOutput(pushCmd); err == nil { t.Fatalf("pushing the image to the private registry should have failed: outuput %q", out) diff --git a/integration-cli/docker_cli_restart_test.go b/integration-cli/docker_cli_restart_test.go index b0f63d6e18..7b97c0725b 100644 --- a/integration-cli/docker_cli_restart_test.go +++ b/integration-cli/docker_cli_restart_test.go @@ -214,3 +214,33 @@ func TestRestartPolicyOnFailure(t *testing.T) { logDone("restart - recording restart policy name for --restart=on-failure") } + +// a good container with --restart=on-failure:3 +// MaximumRetryCount!=0; RestartCount=0 +func TestContainerRestartwithGoodContainer(t *testing.T) { + defer deleteAllContainers() + out, err := exec.Command(dockerBinary, "run", "-d", "--restart=on-failure:3", "busybox", "true").CombinedOutput() + if err != nil { + t.Fatal(string(out), err) + } + id := strings.TrimSpace(string(out)) + if err := waitInspect(id, "{{ .State.Restarting }} {{ .State.Running }}", "false 
false", 5); err != nil { + t.Fatal(err) + } + count, err := inspectField(id, "RestartCount") + if err != nil { + t.Fatal(err) + } + if count != "0" { + t.Fatalf("Container was restarted %s times, expected %d", count, 0) + } + MaximumRetryCount, err := inspectField(id, "HostConfig.RestartPolicy.MaximumRetryCount") + if err != nil { + t.Fatal(err) + } + if MaximumRetryCount != "3" { + t.Fatalf("Container Maximum Retry Count is %s, expected %s", MaximumRetryCount, "3") + } + + logDone("restart - for a good container with restart policy, MaximumRetryCount is not 0 and RestartCount is 0") +} diff --git a/integration-cli/docker_cli_run_test.go b/integration-cli/docker_cli_run_test.go index 76ec09f16d..3cab284331 100644 --- a/integration-cli/docker_cli_run_test.go +++ b/integration-cli/docker_cli_run_test.go @@ -144,18 +144,17 @@ func TestRunLeakyFileDescriptors(t *testing.T) { logDone("run - check file descriptor leakage") } -// it should be possible to ping Google DNS resolver +// it should be possible to lookup Google DNS // this will fail when Internet access is unavailable -func TestRunPingGoogle(t *testing.T) { +func TestRunLookupGoogleDns(t *testing.T) { defer deleteAllContainers() - runCmd := exec.Command(dockerBinary, "run", "busybox", "ping", "-c", "1", "8.8.8.8") - out, _, _, err := runCommandWithStdoutStderr(runCmd) + out, _, _, err := runCommandWithStdoutStderr(exec.Command(dockerBinary, "run", "busybox", "nslookup", "google.com")) if err != nil { t.Fatalf("failed to run container: %v, output: %q", err, out) } - logDone("run - ping 8.8.8.8") + logDone("run - nslookup google.com") } // the exit code should be 0 @@ -380,6 +379,39 @@ func TestRunLinksContainerWithContainerId(t *testing.T) { logDone("run - use a container id to link target work") } +func TestRunLinkToContainerNetMode(t *testing.T) { + defer deleteAllContainers() + + cmd := exec.Command(dockerBinary, "run", "--name", "test", "-d", "busybox", "top") + out, _, err := runCommandWithOutput(cmd) + if 
err != nil { + t.Fatalf("failed to run container: %v, output: %q", err, out) + } + cmd = exec.Command(dockerBinary, "run", "--name", "parent", "-d", "--net=container:test", "busybox", "top") + out, _, err = runCommandWithOutput(cmd) + if err != nil { + t.Fatalf("failed to run container: %v, output: %q", err, out) + } + cmd = exec.Command(dockerBinary, "run", "-d", "--link=parent:parent", "busybox", "top") + out, _, err = runCommandWithOutput(cmd) + if err != nil { + t.Fatalf("failed to run container: %v, output: %q", err, out) + } + + cmd = exec.Command(dockerBinary, "run", "--name", "child", "-d", "--net=container:parent", "busybox", "top") + out, _, err = runCommandWithOutput(cmd) + if err != nil { + t.Fatalf("failed to run container: %v, output: %q", err, out) + } + cmd = exec.Command(dockerBinary, "run", "-d", "--link=child:child", "busybox", "top") + out, _, err = runCommandWithOutput(cmd) + if err != nil { + t.Fatalf("failed to run container: %v, output: %q", err, out) + } + + logDone("run - link to a container whose net mode is container succeeds") +} + // Regression test for #4741 func TestRunWithVolumesAsFiles(t *testing.T) { defer deleteAllContainers() @@ -869,7 +901,7 @@ func TestRunEnvironmentErase(t *testing.T) { defer deleteAllContainers() cmd := exec.Command(dockerBinary, "run", "-e", "FOO", "-e", "HOSTNAME", "busybox", "env") - cmd.Env = appendDockerHostEnv([]string{}) + cmd.Env = appendBaseEnv([]string{}) out, _, err := runCommandWithOutput(cmd) if err != nil { @@ -908,7 +940,7 @@ func TestRunEnvironmentOverride(t *testing.T) { defer deleteAllContainers() cmd := exec.Command(dockerBinary, "run", "-e", "HOSTNAME", "-e", "HOME=/root2", "busybox", "env") - cmd.Env = appendDockerHostEnv([]string{"HOSTNAME=bar"}) + cmd.Env = appendBaseEnv([]string{"HOSTNAME=bar"}) out, _, err := runCommandWithOutput(cmd) if err != nil { @@ -1283,6 +1315,17 @@ func TestRunWithCpuset(t *testing.T) { logDone("run - cpuset 0") } +func TestRunWithCpusetCpus(t *testing.T) { + 
defer deleteAllContainers() + + cmd := exec.Command(dockerBinary, "run", "--cpuset-cpus", "0", "busybox", "true") + if code, err := runCommand(cmd); err != nil || code != 0 { + t.Fatalf("container should run successfully with cpuset-cpus of 0: %s", err) + } + + logDone("run - cpuset-cpus 0") +} + func TestRunDeviceNumbers(t *testing.T) { defer deleteAllContainers() @@ -1463,11 +1506,16 @@ func TestRunDnsOptions(t *testing.T) { cmd := exec.Command(dockerBinary, "run", "--dns=127.0.0.1", "--dns-search=mydomain", "busybox", "cat", "/etc/resolv.conf") - out, _, err := runCommandWithOutput(cmd) + out, stderr, _, err := runCommandWithStdoutStderr(cmd) if err != nil { t.Fatal(err, out) } + // The client will get a warning on stderr when setting DNS to a localhost address; verify this: + if !strings.Contains(stderr, "Localhost DNS setting") { + t.Fatalf("Expected warning on stderr about localhost resolver, but got %q", stderr) + } + actual := strings.Replace(strings.Trim(out, "\r\n"), "\n", " ", -1) if actual != "nameserver 127.0.0.1 search mydomain" { t.Fatalf("expected 'nameserver 127.0.0.1 search mydomain', but says: %q", actual) @@ -1475,7 +1523,7 @@ func TestRunDnsOptions(t *testing.T) { cmd = exec.Command(dockerBinary, "run", "--dns=127.0.0.1", "--dns-search=.", "busybox", "cat", "/etc/resolv.conf") - out, _, err = runCommandWithOutput(cmd) + out, _, _, err = runCommandWithStdoutStderr(cmd) if err != nil { t.Fatal(err, out) } @@ -1502,7 +1550,7 @@ func TestRunDnsOptionsBasedOnHostResolvConf(t *testing.T) { var out string cmd := exec.Command(dockerBinary, "run", "--dns=127.0.0.1", "busybox", "cat", "/etc/resolv.conf") - if out, _, err = runCommandWithOutput(cmd); err != nil { + if out, _, _, err = runCommandWithStdoutStderr(cmd); err != nil { t.Fatal(err, out) } @@ -2289,11 +2337,12 @@ func TestRunCidFileCleanupIfEmpty(t *testing.T) { } defer os.RemoveAll(tmpDir) tmpCidFile := path.Join(tmpDir, "cid") - cmd := exec.Command(dockerBinary, "run", "--cidfile", tmpCidFile, 
"scratch") + cmd := exec.Command(dockerBinary, "run", "--cidfile", tmpCidFile, "emptyfs") out, _, err := runCommandWithOutput(cmd) - t.Log(out) if err == nil { - t.Fatal("Run without command must fail") + t.Fatalf("Run without command must fail. out=%s", out) + } else if !strings.Contains(out, "No command specified") { + t.Fatalf("Run without command failed with wrong output. out=%s\nerr=%v", out, err) } if _, err := os.Stat(tmpCidFile); err == nil { @@ -3177,6 +3226,25 @@ func TestRunOOMExitCode(t *testing.T) { logDone("run - exit code on oom") } +func TestRunSetDefaultRestartPolicy(t *testing.T) { + defer deleteAllContainers() + runCmd := exec.Command(dockerBinary, "run", "-d", "--name", "test", "busybox", "top") + if out, _, err := runCommandWithOutput(runCmd); err != nil { + t.Fatalf("failed to run container: %v, output: %q", err, out) + } + cmd := exec.Command(dockerBinary, "inspect", "-f", "{{.HostConfig.RestartPolicy.Name}}", "test") + out, _, err := runCommandWithOutput(cmd) + if err != nil { + t.Fatalf("failed to inspect container: %v, output: %q", err, out) + } + out = strings.Trim(out, "\r\n") + if out != "no" { + t.Fatalf("Set default restart policy failed") + } + + logDone("run - set default restart policy success") +} + func TestRunRestartMaxRetries(t *testing.T) { defer deleteAllContainers() out, err := exec.Command(dockerBinary, "run", "-d", "--restart=on-failure:3", "busybox", "false").CombinedOutput() @@ -3194,6 +3262,13 @@ func TestRunRestartMaxRetries(t *testing.T) { if count != "3" { t.Fatalf("Container was restarted %s times, expected %d", count, 3) } + MaximumRetryCount, err := inspectField(id, "HostConfig.RestartPolicy.MaximumRetryCount") + if err != nil { + t.Fatal(err) + } + if MaximumRetryCount != "3" { + t.Fatalf("Container Maximum Retry Count is %s, expected %s", MaximumRetryCount, "3") + } logDone("run - test max-retries for --restart") } diff --git a/integration-cli/docker_cli_run_unix_test.go 
b/integration-cli/docker_cli_run_unix_test.go index 477325bf55..6fe416f9d2 100644 --- a/integration-cli/docker_cli_run_unix_test.go +++ b/integration-cli/docker_cli_run_unix_test.go @@ -7,6 +7,7 @@ import ( "io/ioutil" "os" "os/exec" + "path" "path/filepath" "strings" "testing" @@ -107,3 +108,85 @@ func TestRunWithUlimits(t *testing.T) { logDone("run - ulimits are set") } + +func getCgroupPaths(test string) map[string]string { + cgroupPaths := map[string]string{} + for _, line := range strings.Split(test, "\n") { + line = strings.TrimSpace(line) + if line == "" { + continue + } + parts := strings.Split(line, ":") + if len(parts) != 3 { + fmt.Printf("unexpected file format for /proc/self/cgroup - %q\n", line) + continue + } + cgroupPaths[parts[1]] = parts[2] + } + return cgroupPaths +} + +func TestRunContainerWithCgroupParent(t *testing.T) { + testRequires(t, NativeExecDriver) + defer deleteAllContainers() + + cgroupParent := "test" + data, err := ioutil.ReadFile("/proc/self/cgroup") + if err != nil { + t.Fatalf("failed to read '/proc/self/cgroup' - %v", err) + } + selfCgroupPaths := getCgroupPaths(string(data)) + selfMemCgroup, found := selfCgroupPaths["memory"] + if !found { + t.Fatalf("unable to find self memory cgroup path. CgroupsPath: %v", selfCgroupPaths) + } + + out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "run", "--cgroup-parent", cgroupParent, "--rm", "busybox", "cat", "/proc/self/cgroup")) + if err != nil { + t.Fatalf("unexpected failure when running container with --cgroup-parent option - %s\n%v", string(out), err) + } + cgroupPaths := getCgroupPaths(string(out)) + if len(cgroupPaths) == 0 { + t.Fatalf("unexpected output - %q", string(out)) + } + found = false + expectedCgroupPrefix := path.Join(selfMemCgroup, cgroupParent) + for _, path := range cgroupPaths { + if strings.HasPrefix(path, expectedCgroupPrefix) { + found = true + break + } + } + if !found { + t.Fatalf("unexpected cgroup paths. 
Expected at least one cgroup path to have prefix %q. Cgroup Paths: %v", expectedCgroupPrefix, cgroupPaths) + } + logDone("run - cgroup parent") +} + +func TestRunContainerWithCgroupParentAbsPath(t *testing.T) { + testRequires(t, NativeExecDriver) + defer deleteAllContainers() + + cgroupParent := "/cgroup-parent/test" + + out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "run", "--cgroup-parent", cgroupParent, "--rm", "busybox", "cat", "/proc/self/cgroup")) + if err != nil { + t.Fatalf("unexpected failure when running container with --cgroup-parent option - %s\n%v", string(out), err) + } + cgroupPaths := getCgroupPaths(string(out)) + if len(cgroupPaths) == 0 { + t.Fatalf("unexpected output - %q", string(out)) + } + found := false + for _, path := range cgroupPaths { + if strings.HasPrefix(path, cgroupParent) { + found = true + break + } + } + if !found { + t.Fatalf("unexpected cgroup paths. Expected at least one cgroup path to have prefix %q. Cgroup Paths: %v", cgroupParent, cgroupPaths) + } + + logDone("run - cgroup parent with absolute cgroup path") +} diff --git a/integration-cli/docker_cli_start_test.go b/integration-cli/docker_cli_start_test.go index 8604f3a338..01f0ef95a1 100644 --- a/integration-cli/docker_cli_start_test.go +++ b/integration-cli/docker_cli_start_test.go @@ -187,3 +187,56 @@ func TestStartPausedContainer(t *testing.T) { logDone("start - error should show if trying to start paused container") } + +func TestStartMultipleContainers(t *testing.T) { + defer deleteAllContainers() + // run a container named 'parent' and create two containers linked to `parent` + cmd := exec.Command(dockerBinary, "run", "-d", "--name", "parent", "busybox", "top") + if out, _, err := runCommandWithOutput(cmd); err != nil { + t.Fatal(out, err) + } + for _, container := range []string{"child_first", "child_second"} { + cmd = exec.Command(dockerBinary, "create", "--name", container, "--link", "parent:parent", "busybox", "top") + if out, _, err := 
runCommandWithOutput(cmd); err != nil { + t.Fatal(out, err) + } + } + + // stop 'parent' container + cmd = exec.Command(dockerBinary, "stop", "parent") + if out, _, err := runCommandWithOutput(cmd); err != nil { + t.Fatal(out, err) + } + cmd = exec.Command(dockerBinary, "inspect", "-f", "{{.State.Running}}", "parent") + out, _, err := runCommandWithOutput(cmd) + if err != nil { + t.Fatal(out, err) + } + out = strings.Trim(out, "\r\n") + if out != "false" { + t.Fatal("Container should be stopped") + } + + // start all three containers; container `child_first` starts first, which should fail + // container 'parent' starts second, then container 'child_second' starts + cmd = exec.Command(dockerBinary, "start", "child_first", "parent", "child_second") + out, _, err = runCommandWithOutput(cmd) + if !strings.Contains(out, "Cannot start container child_first") || err == nil { + t.Fatal("Expected error but got none") + } + + for container, expected := range map[string]string{"parent": "true", "child_first": "false", "child_second": "true"} { + cmd = exec.Command(dockerBinary, "inspect", "-f", "{{.State.Running}}", container) + out, _, err = runCommandWithOutput(cmd) + if err != nil { + t.Fatal(out, err) + } + out = strings.Trim(out, "\r\n") + if out != expected { + t.Fatal("Container running state wrong") + } + + } + + logDone("start - start multiple containers continue on one failed") +} diff --git a/integration-cli/docker_utils.go b/integration-cli/docker_utils.go index 228c45a6ad..10cc6c9218 100644 --- a/integration-cli/docker_utils.go +++ b/integration-cli/docker_utils.go @@ -267,6 +267,10 @@ func (d *Daemon) Cmd(name string, arg ...string) (string, error) { return string(b), err } +func (d *Daemon) LogfileName() string { + return d.logFile.Name() +} + func daemonHost() string { daemonUrlStr := "unix://" + api.DEFAULTUNIXSOCKET if daemonHostVar := os.Getenv("DOCKER_HOST"); daemonHostVar != "" { @@ -469,7 +473,7 @@ func dockerCmd(t *testing.T, args ...string) 
(string, int, error) { return out, status, err } -// execute a docker ocmmand with a timeout +// execute a docker command with a timeout func dockerCmdWithTimeout(timeout time.Duration, args ...string) (string, int, error) { out, status, err := runCommandWithOutputAndTimeout(exec.Command(dockerBinary, args...), timeout) if err != nil { @@ -559,7 +563,11 @@ func (f *FakeContext) Close() error { return os.RemoveAll(f.Dir) } -func fakeContext(dockerfile string, files map[string]string) (*FakeContext, error) { +func fakeContextFromDir(dir string) *FakeContext { + return &FakeContext{dir} +} + +func fakeContextWithFiles(files map[string]string) (*FakeContext, error) { tmp, err := ioutil.TempDir("", "fake-context") if err != nil { return nil, err @@ -567,78 +575,188 @@ func fakeContext(dockerfile string, files map[string]string) (*FakeContext, erro if err := os.Chmod(tmp, 0755); err != nil { return nil, err } - ctx := &FakeContext{tmp} + + ctx := fakeContextFromDir(tmp) for file, content := range files { if err := ctx.Add(file, content); err != nil { ctx.Close() return nil, err } } + return ctx, nil +} + +func fakeContextAddDockerfile(ctx *FakeContext, dockerfile string) error { if err := ctx.Add("Dockerfile", dockerfile); err != nil { ctx.Close() + return err + } + return nil +} + +func fakeContext(dockerfile string, files map[string]string) (*FakeContext, error) { + ctx, err := fakeContextWithFiles(files) + if err != nil { + ctx.Close() + return nil, err + } + if err := fakeContextAddDockerfile(ctx, dockerfile); err != nil { return nil, err } return ctx, nil } -type FakeStorage struct { +// FakeStorage is a static file server. It might be running locally or remotely +// on test host. 
+type FakeStorage interface { + Close() error + URL() string + CtxDir() string +} + +// fakeStorage returns either a local or remote (at daemon machine) file server +func fakeStorage(files map[string]string) (FakeStorage, error) { + ctx, err := fakeContextWithFiles(files) + if err != nil { + return nil, err + } + return fakeStorageWithContext(ctx) +} + +// fakeStorageWithContext returns either a local or remote (at daemon machine) file server +func fakeStorageWithContext(ctx *FakeContext) (FakeStorage, error) { + if isLocalDaemon { + return newLocalFakeStorage(ctx) + } + return newRemoteFileServer(ctx) +} + +// localFileStorage is a file storage on the running machine +type localFileStorage struct { *FakeContext *httptest.Server } -func (f *FakeStorage) Close() error { - f.Server.Close() - return f.FakeContext.Close() +func (s *localFileStorage) URL() string { + return s.Server.URL } -func fakeStorage(files map[string]string) (*FakeStorage, error) { - tmp, err := ioutil.TempDir("", "fake-storage") - if err != nil { - return nil, err - } - ctx := &FakeContext{tmp} - for file, content := range files { - if err := ctx.Add(file, content); err != nil { - ctx.Close() - return nil, err - } - } +func (s *localFileStorage) CtxDir() string { + return s.FakeContext.Dir +} + +func (s *localFileStorage) Close() error { + defer s.Server.Close() + return s.FakeContext.Close() +} + +func newLocalFakeStorage(ctx *FakeContext) (*localFileStorage, error) { handler := http.FileServer(http.Dir(ctx.Dir)) server := httptest.NewServer(handler) - return &FakeStorage{ + return &localFileStorage{ FakeContext: ctx, Server: server, }, nil } -func inspectField(name, field string) (string, error) { - format := fmt.Sprintf("{{.%s}}", field) +// remoteFileServer is a containerized static file server started on the remote +// testing machine to be used in URL-accepting docker build functionality. 
+type remoteFileServer struct { + host string // hostname/port web server is listening to on docker host e.g. 0.0.0.0:43712 + container string + image string + ctx *FakeContext +} + +func (f *remoteFileServer) URL() string { + u := url.URL{ + Scheme: "http", + Host: f.host} + return u.String() +} + +func (f *remoteFileServer) CtxDir() string { + return f.ctx.Dir +} + +func (f *remoteFileServer) Close() error { + defer func() { + if f.ctx != nil { + f.ctx.Close() + } + if f.image != "" { + deleteImages(f.image) + } + }() + if f.container == "" { + return nil + } + return deleteContainer(f.container) +} + +func newRemoteFileServer(ctx *FakeContext) (*remoteFileServer, error) { + var ( + image = fmt.Sprintf("fileserver-img-%s", strings.ToLower(makeRandomString(10))) + container = fmt.Sprintf("fileserver-cnt-%s", strings.ToLower(makeRandomString(10))) + ) + + // Build the image + if err := fakeContextAddDockerfile(ctx, `FROM httpserver +COPY . /static`); err != nil { + return nil, fmt.Errorf("Cannot add Dockerfile to context: %v", err) + } + if _, err := buildImageFromContext(image, ctx, false); err != nil { + return nil, fmt.Errorf("failed building file storage container image: %v", err) + } + + // Start the container + runCmd := exec.Command(dockerBinary, "run", "-d", "-P", "--name", container, image) + if out, ec, err := runCommandWithOutput(runCmd); err != nil { + return nil, fmt.Errorf("failed to start file storage container. 
ec=%v\nout=%s\nerr=%v", ec, out, err) + } + + // Find out the system assigned port + out, _, err := runCommandWithOutput(exec.Command(dockerBinary, "port", container, "80/tcp")) + if err != nil { + return nil, fmt.Errorf("failed to find container port: err=%v\nout=%s", err, out) + } + + return &remoteFileServer{ + container: container, + image: image, + host: strings.Trim(out, "\n"), + ctx: ctx}, nil +} + +func inspectFieldAndMarshall(name, field string, output interface{}) error { + str, err := inspectFieldJSON(name, field) + if err != nil { + return err + } + + return json.Unmarshal([]byte(str), output) +} + +func inspectFilter(name, filter string) (string, error) { + format := fmt.Sprintf("{{%s}}", filter) inspectCmd := exec.Command(dockerBinary, "inspect", "-f", format, name) out, exitCode, err := runCommandWithOutput(inspectCmd) if err != nil || exitCode != 0 { - return "", fmt.Errorf("failed to inspect %s: %s", name, out) + return "", fmt.Errorf("failed to inspect container %s: %s", name, out) } return strings.TrimSpace(out), nil } +func inspectField(name, field string) (string, error) { + return inspectFilter(name, fmt.Sprintf(".%s", field)) +} + func inspectFieldJSON(name, field string) (string, error) { - format := fmt.Sprintf("{{json .%s}}", field) - inspectCmd := exec.Command(dockerBinary, "inspect", "-f", format, name) - out, exitCode, err := runCommandWithOutput(inspectCmd) - if err != nil || exitCode != 0 { - return "", fmt.Errorf("failed to inspect %s: %s", name, out) - } - return strings.TrimSpace(out), nil + return inspectFilter(name, fmt.Sprintf("json .%s", field)) } func inspectFieldMap(name, path, field string) (string, error) { - format := fmt.Sprintf("{{index .%s %q}}", path, field) - inspectCmd := exec.Command(dockerBinary, "inspect", "-f", format, name) - out, exitCode, err := runCommandWithOutput(inspectCmd) - if err != nil || exitCode != 0 { - return "", fmt.Errorf("failed to inspect %s: %s", name, out) - } - return strings.TrimSpace(out), 
nil + return inspectFilter(name, fmt.Sprintf("index .%s %q", path, field)) } func getIDByName(name string) (string, error) { @@ -747,29 +865,40 @@ func buildImageFromPath(name, path string, useCache bool) (string, error) { return getIDByName(name) } -type FakeGIT struct { +type GitServer interface { + URL() string + Close() error +} + +type localGitServer struct { *httptest.Server - Root string +} + +func (r *localGitServer) Close() error { + r.Server.Close() + return nil +} + +func (r *localGitServer) URL() string { + return r.Server.URL +} + +type FakeGIT struct { + root string + server GitServer RepoURL string } func (g *FakeGIT) Close() { - g.Server.Close() - os.RemoveAll(g.Root) + g.server.Close() + os.RemoveAll(g.root) } -func fakeGIT(name string, files map[string]string) (*FakeGIT, error) { - tmp, err := ioutil.TempDir("", "fake-git-repo") +func fakeGIT(name string, files map[string]string, enforceLocalServer bool) (*FakeGIT, error) { + ctx, err := fakeContextWithFiles(files) if err != nil { return nil, err } - ctx := &FakeContext{tmp} - for file, content := range files { - if err := ctx.Add(file, content); err != nil { - ctx.Close() - return nil, err - } - } defer ctx.Close() curdir, err := os.Getwd() if err != nil { @@ -820,12 +949,23 @@ func fakeGIT(name string, files map[string]string) (*FakeGIT, error) { os.RemoveAll(root) return nil, err } - handler := http.FileServer(http.Dir(root)) - server := httptest.NewServer(handler) + + var server GitServer + if !enforceLocalServer { + // use fakeStorage server, which might be local or remote (at test daemon) + server, err = fakeStorageWithContext(fakeContextFromDir(root)) + if err != nil { + return nil, fmt.Errorf("cannot start fake storage: %v", err) + } + } else { + // always start a local http server on CLI test machine + httpServer := httptest.NewServer(http.FileServer(http.Dir(root))) + server = &localGitServer{httpServer} + } return &FakeGIT{ - Server: server, - Root: root, - RepoURL: 
fmt.Sprintf("%s/%s.git", server.URL, name), + root: root, + server: server, + RepoURL: fmt.Sprintf("%s/%s.git", server.URL(), name), }, nil } @@ -897,6 +1037,32 @@ func readContainerFileWithExec(containerId, filename string) ([]byte, error) { return []byte(out), err } +// daemonTime provides the current time on the daemon host +func daemonTime(t *testing.T) time.Time { + if isLocalDaemon { + return time.Now() + } + + body, err := sockRequest("GET", "/info", nil) + if err != nil { + t.Fatalf("daemonTime: failed to get /info: %v", err) + } + + type infoJSON struct { + SystemTime string + } + var info infoJSON + if err = json.Unmarshal(body, &info); err != nil { + t.Fatalf("unable to unmarshal /info response: %v", err) + } + + dt, err := time.Parse(time.RFC3339Nano, info.SystemTime) + if err != nil { + t.Fatal(err) + } + return dt +} + func setupRegistry(t *testing.T) func() { testRequires(t, RegistryHosting) reg, err := newTestRegistryV2(t) @@ -919,12 +1085,23 @@ func setupRegistry(t *testing.T) func() { return func() { reg.Close() } } -// appendDockerHostEnv adds given env slice DOCKER_HOST value if set in the -// environment. Useful when environment is cleared but we want to preserve DOCKER_HOST -// to execute tests against a remote daemon. -func appendDockerHostEnv(env []string) []string { - if dockerHost := os.Getenv("DOCKER_HOST"); dockerHost != "" { - env = append(env, fmt.Sprintf("DOCKER_HOST=%s", dockerHost)) +// appendBaseEnv appends the minimum set of environment variables to exec the +// docker cli binary for testing with correct configuration to the given env +// list. +func appendBaseEnv(env []string) []string { + preserveList := []string{ + // preserve remote test host + "DOCKER_HOST", + + // windows: requires preserving SystemRoot, otherwise dial tcp fails + // with "GetAddrInfoW: A non-recoverable error occurred during a database lookup." 
+ "SystemRoot", + } + + for _, key := range preserveList { + if val := os.Getenv(key); val != "" { + env = append(env, fmt.Sprintf("%s=%s", key, val)) + } } return env } diff --git a/integration-cli/test_vars_unix.go b/integration-cli/test_vars_unix.go index 988d3c4721..1ab8a5ca48 100644 --- a/integration-cli/test_vars_unix.go +++ b/integration-cli/test_vars_unix.go @@ -5,4 +5,6 @@ package main const ( // identifies if test suite is running on a unix platform isUnixCli = true + + expectedFileChmod = "-rw-r--r--" ) diff --git a/integration-cli/test_vars_windows.go b/integration-cli/test_vars_windows.go index f9ad163981..3cad4bceef 100644 --- a/integration-cli/test_vars_windows.go +++ b/integration-cli/test_vars_windows.go @@ -5,4 +5,7 @@ package main const ( // identifies if test suite is running on a unix platform isUnixCli = false + + // this is the expected file permission set on windows: gh#11047 + expectedFileChmod = "-rwx------" ) diff --git a/integration/MAINTAINERS b/integration/MAINTAINERS deleted file mode 100644 index ad2d2d2b31..0000000000 --- a/integration/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Tibor Vass (@tiborvass) -Cristian Staretu (@unclejack) diff --git a/integration/runtime_test.go b/integration/runtime_test.go index f421facde3..153c385627 100644 --- a/integration/runtime_test.go +++ b/integration/runtime_test.go @@ -307,7 +307,7 @@ func TestDaemonCreate(t *testing.T) { "conflictname", ) if _, _, err := daemon.Create(&runconfig.Config{Image: GetTestImage(daemon).ID, Cmd: []string{"ls", "-al"}}, &runconfig.HostConfig{}, testContainer.Name); err == nil || !strings.Contains(err.Error(), common.TruncateID(testContainer.ID)) { - t.Fatalf("Name conflict error doesn't include the correct short id. Message was: %s", err.Error()) + t.Fatalf("Name conflict error doesn't include the correct short id. 
Message was: %v", err) } // Make sure create with bad parameters returns an error diff --git a/integration/utils_test.go b/integration/utils_test.go index d5a068fe02..2e90e4f515 100644 --- a/integration/utils_test.go +++ b/integration/utils_test.go @@ -191,6 +191,7 @@ func newTestEngine(t Fataler, autorestart bool, root string) *engine.Engine { // otherwise NewDaemon will fail because of conflicting settings. InterContainerCommunication: true, TrustKeyPath: filepath.Join(root, "key.json"), + LogConfig: runconfig.LogConfig{Type: "json-file"}, } d, err := daemon.NewDaemon(cfg, eng) if err != nil { diff --git a/opts/opts.go b/opts/opts.go index cd720c9a92..e867c0a21d 100644 --- a/opts/opts.go +++ b/opts/opts.go @@ -211,7 +211,7 @@ func validateDomain(val string) (string, error) { return "", fmt.Errorf("%s is not a valid domain", val) } ns := domainRegexp.FindSubmatch([]byte(val)) - if len(ns) > 0 { + if len(ns) > 0 && len(ns[1]) < 255 { return string(ns[1]), nil } return "", fmt.Errorf("%s is not a valid domain", val) diff --git a/opts/opts_test.go b/opts/opts_test.go index 631d4c6b60..8370926da5 100644 --- a/opts/opts_test.go +++ b/opts/opts_test.go @@ -105,6 +105,7 @@ func TestValidateDnsSearch(t *testing.T) { `foo.bar-.baz`, `foo.-bar`, `foo.-bar.baz`, + `foo.bar.baz.this.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbethis.should.fail.on.long.name.beause.it.is.longer.thanisshouldbe`, } for _, domain := range valid { diff --git a/pkg/archive/MAINTAINERS b/pkg/archive/MAINTAINERS deleted file mode 100644 index 2aac7265d2..0000000000 --- a/pkg/archive/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Cristian Staretu (@unclejack) -Tibor Vass (@tiborvass) diff --git a/pkg/archive/archive.go b/pkg/archive/archive.go index bce66a505a..bfa6e18462 100644 --- a/pkg/archive/archive.go +++ b/pkg/archive/archive.go @@ -204,6 +204,7 @@ func (ta 
*tarAppender) addTarFile(path, name string) error { if err != nil { return err } + hdr.Mode = int64(chmodTarEntry(os.FileMode(hdr.Mode))) name, err = canonicalTarName(name, fi.IsDir()) if err != nil { @@ -696,6 +697,8 @@ func (archiver *Archiver) CopyFileWithTar(src, dst string) (err error) { return err } hdr.Name = filepath.Base(dst) + hdr.Mode = int64(chmodTarEntry(os.FileMode(hdr.Mode))) + tw := tar.NewWriter(w) defer tw.Close() if err := tw.WriteHeader(hdr); err != nil { diff --git a/pkg/archive/archive_unix.go b/pkg/archive/archive_unix.go index 8c7079f85a..cbce65e31d 100644 --- a/pkg/archive/archive_unix.go +++ b/pkg/archive/archive_unix.go @@ -4,6 +4,7 @@ package archive import ( "errors" + "os" "syscall" "github.com/docker/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar" @@ -16,6 +17,13 @@ func CanonicalTarNameForPath(p string) (string, error) { return p, nil // already unix-style } +// chmodTarEntry is used to adjust the file permissions used in tar header based +// on the platform the archival is done. + +func chmodTarEntry(perm os.FileMode) os.FileMode { + return perm // noop for unix as golang APIs provide perm bits correctly +} + func setHeaderForSpecialDevice(hdr *tar.Header, ta *tarAppender, name string, stat interface{}) (nlink uint32, inode uint64, err error) { s, ok := stat.(*syscall.Stat_t) diff --git a/pkg/archive/archive_unix_test.go b/pkg/archive/archive_unix_test.go index 52f28e20f0..18f45c480f 100644 --- a/pkg/archive/archive_unix_test.go +++ b/pkg/archive/archive_unix_test.go @@ -3,6 +3,7 @@ package archive import ( + "os" "testing" ) @@ -40,3 +41,20 @@ func TestCanonicalTarName(t *testing.T) { } } } + +func TestChmodTarEntry(t *testing.T) { + cases := []struct { + in, expected os.FileMode + }{ + {0000, 0000}, + {0777, 0777}, + {0644, 0644}, + {0755, 0755}, + {0444, 0444}, + } + for _, v := range cases { + if out := chmodTarEntry(v.in); out != v.expected { + t.Fatalf("wrong chmod. 
expected:%v got:%v", v.expected, out) + } + } +} diff --git a/pkg/archive/archive_windows.go b/pkg/archive/archive_windows.go index b95aa178d4..96a93ee7af 100644 --- a/pkg/archive/archive_windows.go +++ b/pkg/archive/archive_windows.go @@ -4,6 +4,7 @@ package archive import ( "fmt" + "os" "strings" "github.com/docker/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar" @@ -20,7 +21,19 @@ func CanonicalTarNameForPath(p string) (string, error) { if strings.Contains(p, "/") { return "", fmt.Errorf("windows path contains forward slash: %s", p) } - return strings.Replace(p, "\\", "/", -1), nil + return strings.Replace(p, string(os.PathSeparator), "/", -1), nil + +} + +// chmodTarEntry is used to adjust the file permissions used in tar header based +// on the platform the archival is done. +func chmodTarEntry(perm os.FileMode) os.FileMode { + // Clear r/w on grp/others: no precise equivalen of group/others on NTFS. + perm &= 0711 + // Add the x bit: make everything +x from windows + perm |= 0100 + + return perm } func setHeaderForSpecialDevice(hdr *tar.Header, ta *tarAppender, name string, stat interface{}) (nlink uint32, inode uint64, err error) { diff --git a/pkg/archive/archive_windows_test.go b/pkg/archive/archive_windows_test.go index 64f47264db..0c97a1040d 100644 --- a/pkg/archive/archive_windows_test.go +++ b/pkg/archive/archive_windows_test.go @@ -3,6 +3,7 @@ package archive import ( + "os" "testing" ) @@ -45,3 +46,20 @@ func TestCanonicalTarName(t *testing.T) { } } } + +func TestChmodTarEntry(t *testing.T) { + cases := []struct { + in, expected os.FileMode + }{ + {0000, 0100}, + {0777, 0711}, + {0644, 0700}, + {0755, 0711}, + {0444, 0500}, + } + for _, v := range cases { + if out := chmodTarEntry(v.in); out != v.expected { + t.Fatalf("wrong chmod. 
expected:%v got:%v", v.expected, out) + } + } +} diff --git a/pkg/archive/changes.go b/pkg/archive/changes.go index 9f16b76f16..f2ac2a3561 100644 --- a/pkg/archive/changes.go +++ b/pkg/archive/changes.go @@ -143,7 +143,7 @@ func Changes(layers []string, rw string) ([]Change, error) { type FileInfo struct { parent *FileInfo name string - stat *system.Stat + stat *system.Stat_t children map[string]*FileInfo capability []byte added bool diff --git a/pkg/devicemapper/MAINTAINERS b/pkg/devicemapper/MAINTAINERS deleted file mode 100644 index 4428dec019..0000000000 --- a/pkg/devicemapper/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Vincent Batts (@vbatts) diff --git a/pkg/directory/directory_linux.go b/pkg/directory/directory_linux.go new file mode 100644 index 0000000000..80fb9a8332 --- /dev/null +++ b/pkg/directory/directory_linux.go @@ -0,0 +1,39 @@ +// +build linux + +package directory + +import ( + "os" + "path/filepath" + "syscall" +) + +// Size walks a directory tree and returns its total size in bytes. +func Size(dir string) (size int64, err error) { + data := make(map[uint64]struct{}) + err = filepath.Walk(dir, func(d string, fileInfo os.FileInfo, e error) error { + // Ignore directory sizes + if fileInfo == nil { + return nil + } + + s := fileInfo.Size() + if fileInfo.IsDir() || s == 0 { + return nil + } + + // Check inode to handle hard links correctly + inode := fileInfo.Sys().(*syscall.Stat_t).Ino + // inode is not a uint64 on all platforms. Cast it to avoid issues. + if _, exists := data[uint64(inode)]; exists { + return nil + } + // inode is not a uint64 on all platforms. Cast it to avoid issues. 
+ data[uint64(inode)] = struct{}{} + + size += s + + return nil + }) + return +} diff --git a/pkg/directory/directory_test.go b/pkg/directory/directory_test.go new file mode 100644 index 0000000000..a8da1ac651 --- /dev/null +++ b/pkg/directory/directory_test.go @@ -0,0 +1,137 @@ +package directory + +import ( + "io/ioutil" + "os" + "testing" +) + +// Size of an empty directory should be 0 +func TestSizeEmpty(t *testing.T) { + var dir string + var err error + if dir, err = ioutil.TempDir(os.TempDir(), "testSizeEmptyDirectory"); err != nil { + t.Fatalf("failed to create directory: %s", err) + } + + var size int64 + if size, _ = Size(dir); size != 0 { + t.Fatalf("empty directory has size: %d", size) + } +} + +// Size of a directory with one empty file should be 0 +func TestSizeEmptyFile(t *testing.T) { + var dir string + var err error + if dir, err = ioutil.TempDir(os.TempDir(), "testSizeEmptyFile"); err != nil { + t.Fatalf("failed to create directory: %s", err) + } + + var file *os.File + if file, err = ioutil.TempFile(dir, "file"); err != nil { + t.Fatalf("failed to create file: %s", err) + } + + var size int64 + if size, _ = Size(file.Name()); size != 0 { + t.Fatalf("directory with one file has size: %d", size) + } +} + +// Size of a directory with one 5-byte file should be 5 +func TestSizeNonemptyFile(t *testing.T) { + var dir string + var err error + if dir, err = ioutil.TempDir(os.TempDir(), "testSizeNonemptyFile"); err != nil { + t.Fatalf("failed to create directory: %s", err) + } + + var file *os.File + if file, err = ioutil.TempFile(dir, "file"); err != nil { + t.Fatalf("failed to create file: %s", err) + } + + d := []byte{97, 98, 99, 100, 101} + file.Write(d) + + var size int64 + if size, _ = Size(file.Name()); size != 5 { + t.Fatalf("directory with one 5-byte file has size: %d", size) + } +} + +// Size of a directory with one empty directory should be 0 +func TestSizeNestedDirectoryEmpty(t *testing.T) { + var dir string + var err error + if dir, err = 
ioutil.TempDir(os.TempDir(), "testSizeNestedDirectoryEmpty"); err != nil { + t.Fatalf("failed to create directory: %s", err) + } + if dir, err = ioutil.TempDir(dir, "nested"); err != nil { + t.Fatalf("failed to create nested directory: %s", err) + } + + var size int64 + if size, _ = Size(dir); size != 0 { + t.Fatalf("directory with one empty directory has size: %d", size) + } +} + +// Test directory with 1 file and 1 empty directory +func TestSizeFileAndNestedDirectoryEmpty(t *testing.T) { + var dir string + var err error + if dir, err = ioutil.TempDir(os.TempDir(), "testSizeFileAndNestedDirectoryEmpty"); err != nil { + t.Fatalf("failed to create directory: %s", err) + } + if dir, err = ioutil.TempDir(dir, "nested"); err != nil { + t.Fatalf("failed to create nested directory: %s", err) + } + + var file *os.File + if file, err = ioutil.TempFile(dir, "file"); err != nil { + t.Fatalf("failed to create file: %s", err) + } + + d := []byte{100, 111, 99, 107, 101, 114} + file.Write(d) + + var size int64 + if size, _ = Size(dir); size != 6 { + t.Fatalf("directory with 6-byte file and empty directory has size: %d", size) + } +} + +// Test directory with 1 file and 1 non-empty directory +func TestSizeFileAndNestedDirectoryNonempty(t *testing.T) { + var dir, dirNested string + var err error + if dir, err = ioutil.TempDir(os.TempDir(), "TestSizeFileAndNestedDirectoryNonempty"); err != nil { + t.Fatalf("failed to create directory: %s", err) + } + if dirNested, err = ioutil.TempDir(dir, "nested"); err != nil { + t.Fatalf("failed to create nested directory: %s", err) + } + + var file *os.File + if file, err = ioutil.TempFile(dir, "file"); err != nil { + t.Fatalf("failed to create file: %s", err) + } + + data := []byte{100, 111, 99, 107, 101, 114} + file.Write(data) + + var nestedFile *os.File + if nestedFile, err = ioutil.TempFile(dirNested, "file"); err != nil { + t.Fatalf("failed to create file in nested directory: %s", err) + } + + nestedData := []byte{100, 111, 99, 107, 101, 
114} + nestedFile.Write(nestedData) + + var size int64 + if size, _ = Size(dir); size != 12 { + t.Fatalf("directory with 6-byte file and nested directory with 6-byte file has size: %d", size) + } +} diff --git a/pkg/directory/directory_windows.go b/pkg/directory/directory_windows.go new file mode 100644 index 0000000000..7a9f8cb68c --- /dev/null +++ b/pkg/directory/directory_windows.go @@ -0,0 +1,28 @@ +// +build windows + +package directory + +import ( + "os" + "path/filepath" +) + +// Size walks a directory tree and returns its total size in bytes. +func Size(dir string) (size int64, err error) { + err = filepath.Walk(dir, func(d string, fileInfo os.FileInfo, e error) error { + // Ignore directory sizes + if fileInfo == nil { + return nil + } + + s := fileInfo.Size() + if fileInfo.IsDir() || s == 0 { + return nil + } + + size += s + + return nil + }) + return +} diff --git a/pkg/graphdb/MAINTAINERS b/pkg/graphdb/MAINTAINERS deleted file mode 100644 index 1e998f8ac1..0000000000 --- a/pkg/graphdb/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Michael Crosby (@crosbymichael) diff --git a/pkg/homedir/MAINTAINERS b/pkg/homedir/MAINTAINERS deleted file mode 100644 index 82733b88f7..0000000000 --- a/pkg/homedir/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Ahmet Alp Balkan (@ahmetalpbalkan) diff --git a/pkg/homedir/homedir.go b/pkg/homedir/homedir.go index 3ffb297559..61137a8f5d 100644 --- a/pkg/homedir/homedir.go +++ b/pkg/homedir/homedir.go @@ -3,6 +3,8 @@ package homedir import ( "os" "runtime" + + "github.com/docker/libcontainer/user" ) // Key returns the env var name for the user's home dir based on @@ -18,7 +20,13 @@ func Key() string { // environment variables depending on the target operating system. // Returned path should be used with "path/filepath" to form new paths. 
func Get() string { - return os.Getenv(Key()) + home := os.Getenv(Key()) + if home == "" && runtime.GOOS != "windows" { + if u, err := user.CurrentUser(); err == nil { + return u.Home + } + } + return home } // GetShortcutString returns the string that is shortcut to user's home directory diff --git a/pkg/httputils/MAINTAINERS b/pkg/httputils/MAINTAINERS deleted file mode 100644 index 6dde4769d7..0000000000 --- a/pkg/httputils/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Cristian Staretu (@unclejack) diff --git a/pkg/iptables/MAINTAINERS b/pkg/iptables/MAINTAINERS deleted file mode 100644 index 134b02a071..0000000000 --- a/pkg/iptables/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Michael Crosby (@crosbymichael) -Jessie Frazelle (@jfrazelle) diff --git a/pkg/iptables/iptables.go b/pkg/iptables/iptables.go index 010c99b15c..3e083a43ad 100644 --- a/pkg/iptables/iptables.go +++ b/pkg/iptables/iptables.go @@ -21,6 +21,7 @@ const ( Insert Action = "-I" Nat Table = "nat" Filter Table = "filter" + Mangle Table = "mangle" ) var ( @@ -82,7 +83,7 @@ func NewChain(name, bridge string, table Table) (*Chain, error) { preroute := []string{ "-m", "addrtype", "--dst-type", "LOCAL"} - if !Exists(preroute...) { + if !Exists(Nat, "PREROUTING", preroute...) { if err := c.Prerouting(Append, preroute...); err != nil { return nil, fmt.Errorf("Failed to inject docker in PREROUTING chain: %s", err) } @@ -91,17 +92,17 @@ func NewChain(name, bridge string, table Table) (*Chain, error) { "-m", "addrtype", "--dst-type", "LOCAL", "!", "--dst", "127.0.0.0/8"} - if !Exists(output...) { + if !Exists(Nat, "OUTPUT", output...) { if err := c.Output(Append, output...); err != nil { return nil, fmt.Errorf("Failed to inject docker in OUTPUT chain: %s", err) } } case Filter: - link := []string{"FORWARD", + link := []string{ "-o", c.Bridge, "-j", c.Name} - if !Exists(link...) { - insert := append([]string{string(Insert)}, link...) + if !Exists(Filter, "FORWARD", link...) 
{ + insert := append([]string{string(Insert), "FORWARD"}, link...) if output, err := Raw(insert...); err != nil { return nil, err } else if len(output) != 0 { @@ -242,19 +243,25 @@ func (c *Chain) Remove() error { } // Check if a rule exists -func Exists(args ...string) bool { +func Exists(table Table, chain string, rule ...string) bool { + if string(table) == "" { + table = Filter + } + // iptables -C, --check option was added in v.1.4.11 // http://ftp.netfilter.org/pub/iptables/changes-iptables-1.4.11.txt // try -C // if exit status is 0 then return true, the rule exists - if _, err := Raw(append([]string{"-C"}, args...)...); err == nil { + if _, err := Raw(append([]string{ + "-t", string(table), "-C", chain}, rule...)...); err == nil { return true } - // parse iptables-save for the rule - rule := strings.Replace(strings.Join(args, " "), "-t nat ", "", -1) - existingRules, _ := exec.Command("iptables-save").Output() + // parse "iptables -S" for the rule (this checks rules in a specific chain + // in a specific table) + rule_string := strings.Join(rule, " ") + existingRules, _ := exec.Command("iptables", "-t", string(table), "-S", chain).Output() // regex to replace ips in rule // because MASQUERADE rule will not be exactly what was passed @@ -262,7 +269,7 @@ func Exists(args ...string) bool { return strings.Contains( re.ReplaceAllString(string(existingRules), "?"), - re.ReplaceAllString(rule, "?"), + re.ReplaceAllString(rule_string, "?"), ) } diff --git a/pkg/iptables/iptables_test.go b/pkg/iptables/iptables_test.go index 8aaf429c94..ced4262ce2 100644 --- a/pkg/iptables/iptables_test.go +++ b/pkg/iptables/iptables_test.go @@ -39,8 +39,7 @@ func TestForward(t *testing.T) { t.Fatal(err) } - dnatRule := []string{natChain.Name, - "-t", string(natChain.Table), + dnatRule := []string{ "!", "-i", filterChain.Bridge, "-d", ip.String(), "-p", proto, @@ -49,12 +48,11 @@ func TestForward(t *testing.T) { "--to-destination", dstAddr + ":" + strconv.Itoa(dstPort), } - if 
!Exists(dnatRule...) { + if !Exists(natChain.Table, natChain.Name, dnatRule...) { t.Fatalf("DNAT rule does not exist") } - filterRule := []string{filterChain.Name, - "-t", string(filterChain.Table), + filterRule := []string{ "!", "-i", filterChain.Bridge, "-o", filterChain.Bridge, "-d", dstAddr, @@ -63,12 +61,11 @@ func TestForward(t *testing.T) { "-j", "ACCEPT", } - if !Exists(filterRule...) { + if !Exists(filterChain.Table, filterChain.Name, filterRule...) { t.Fatalf("filter rule does not exist") } - masqRule := []string{"POSTROUTING", - "-t", string(natChain.Table), + masqRule := []string{ "-d", dstAddr, "-s", dstAddr, "-p", proto, @@ -76,7 +73,7 @@ func TestForward(t *testing.T) { "-j", "MASQUERADE", } - if !Exists(masqRule...) { + if !Exists(natChain.Table, "POSTROUTING", masqRule...) { t.Fatalf("MASQUERADE rule does not exist") } } @@ -94,8 +91,7 @@ func TestLink(t *testing.T) { t.Fatal(err) } - rule1 := []string{filterChain.Name, - "-t", string(filterChain.Table), + rule1 := []string{ "-i", filterChain.Bridge, "-o", filterChain.Bridge, "-p", proto, @@ -104,12 +100,11 @@ func TestLink(t *testing.T) { "--dport", strconv.Itoa(port), "-j", "ACCEPT"} - if !Exists(rule1...) { + if !Exists(filterChain.Table, filterChain.Name, rule1...) { t.Fatalf("rule1 does not exist") } - rule2 := []string{filterChain.Name, - "-t", string(filterChain.Table), + rule2 := []string{ "-i", filterChain.Bridge, "-o", filterChain.Bridge, "-p", proto, @@ -118,7 +113,7 @@ func TestLink(t *testing.T) { "--sport", strconv.Itoa(port), "-j", "ACCEPT"} - if !Exists(rule2...) { + if !Exists(filterChain.Table, filterChain.Name, rule2...) { t.Fatalf("rule2 does not exist") } } @@ -133,17 +128,16 @@ func TestPrerouting(t *testing.T) { t.Fatal(err) } - rule := []string{"PREROUTING", - "-t", string(Nat), + rule := []string{ "-j", natChain.Name} rule = append(rule, args...) - if !Exists(rule...) { + if !Exists(natChain.Table, "PREROUTING", rule...) 
{ t.Fatalf("rule does not exist") } - delRule := append([]string{"-D"}, rule...) + delRule := append([]string{"-D", "PREROUTING", "-t", string(Nat)}, rule...) if _, err = Raw(delRule...); err != nil { t.Fatal(err) } @@ -159,17 +153,17 @@ func TestOutput(t *testing.T) { t.Fatal(err) } - rule := []string{"OUTPUT", - "-t", string(natChain.Table), + rule := []string{ "-j", natChain.Name} rule = append(rule, args...) - if !Exists(rule...) { + if !Exists(natChain.Table, "OUTPUT", rule...) { t.Fatalf("rule does not exist") } - delRule := append([]string{"-D"}, rule...) + delRule := append([]string{"-D", "OUTPUT", "-t", + string(natChain.Table)}, rule...) if _, err = Raw(delRule...); err != nil { t.Fatal(err) } diff --git a/pkg/mflag/MAINTAINERS b/pkg/mflag/MAINTAINERS deleted file mode 100644 index e0f18f14f1..0000000000 --- a/pkg/mflag/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Victor Vieux (@vieux) diff --git a/pkg/mflag/flag.go b/pkg/mflag/flag.go index d02c7b1e01..b35692bfd3 100644 --- a/pkg/mflag/flag.go +++ b/pkg/mflag/flag.go @@ -506,15 +506,11 @@ func Set(name, value string) error { // otherwise, the default values of all defined flags in the set. 
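Editor's note: the iptables changes in this patch give `Exists` an explicit table and chain, and keep a fallback that compares the rule against the chain's `iptables -t <table> -S <chain>` listing with IP addresses normalized to `?`, because the kernel rewrites MASQUERADE rules. A runnable sketch of just that normalization step — the regexp below is a stand-in for illustration, not the one the package uses:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ipRe matches IPv4 addresses with an optional CIDR suffix.
var ipRe = regexp.MustCompile(`\b(\d{1,3}\.){3}\d{1,3}(/\d{1,2})?\b`)

// ruleExists reports whether rule appears in existingRules after both
// sides have their IPs replaced with "?", so kernel-rewritten
// addresses still compare equal.
func ruleExists(existingRules, rule string) bool {
	return strings.Contains(
		ipRe.ReplaceAllString(existingRules, "?"),
		ipRe.ReplaceAllString(rule, "?"),
	)
}

func main() {
	saved := "-A POSTROUTING -s 172.17.0.0/16 -j MASQUERADE\n"
	// matches even though the kernel rewrote the source address
	fmt.Println(ruleExists(saved, "-s 172.17.0.2/32 -j MASQUERADE")) // prints true
}
```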
func (f *FlagSet) PrintDefaults() { writer := tabwriter.NewWriter(f.Out(), 20, 1, 3, ' ', 0) - var home string - if runtime.GOOS != "windows" { - // Only do this on non-windows systems - home = homedir.Get() + home := homedir.Get() - // Don't substitute when HOME is / - if home == "/" { - home = "" - } + // Don't substitute when HOME is / + if runtime.GOOS != "windows" && home == "/" { + home = "" } f.VisitAll(func(flag *Flag) { format := " -%s=%s" diff --git a/pkg/mount/MAINTAINERS b/pkg/mount/MAINTAINERS deleted file mode 100644 index 1e998f8ac1..0000000000 --- a/pkg/mount/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Michael Crosby (@crosbymichael) diff --git a/pkg/networkfs/MAINTAINERS b/pkg/networkfs/MAINTAINERS deleted file mode 100644 index e0f18f14f1..0000000000 --- a/pkg/networkfs/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Victor Vieux (@vieux) diff --git a/pkg/networkfs/resolvconf/resolvconf.go b/pkg/networkfs/resolvconf/resolvconf.go index d88074f598..61f92d9ae5 100644 --- a/pkg/networkfs/resolvconf/resolvconf.go +++ b/pkg/networkfs/resolvconf/resolvconf.go @@ -23,11 +23,13 @@ var ( // For readability and sufficiency for Docker purposes this seemed more reasonable than a // 1000+ character regexp with exact and complete IPv6 validation ipv6Address = `([0-9A-Fa-f]{0,4}:){2,7}([0-9A-Fa-f]{0,4})` + ipLocalhost = `((127\.([0-9]{1,3}.){2}[0-9]{1,3})|(::1))` - localhostRegexp = regexp.MustCompile(`(?m)^nameserver\s+((127\.([0-9]{1,3}.){2}[0-9]{1,3})|(::1))\s*\n*`) - nsIPv6Regexp = regexp.MustCompile(`(?m)^nameserver\s+` + ipv6Address + `\s*\n*`) - nsRegexp = regexp.MustCompile(`^\s*nameserver\s*((` + ipv4Address + `)|(` + ipv6Address + `))\s*$`) - searchRegexp = regexp.MustCompile(`^\s*search\s*(([^\s]+\s*)*)$`) + localhostIPRegexp = regexp.MustCompile(ipLocalhost) + localhostNSRegexp = regexp.MustCompile(`(?m)^nameserver\s+` + ipLocalhost + `\s*\n*`) + nsIPv6Regexp = regexp.MustCompile(`(?m)^nameserver\s+` + ipv6Address + `\s*\n*`) + nsRegexp = 
regexp.MustCompile(`^\s*nameserver\s*((` + ipv4Address + `)|(` + ipv6Address + `))\s*$`) + searchRegexp = regexp.MustCompile(`^\s*search\s*(([^\s]+\s*)*)$`) ) var lastModified struct { @@ -87,7 +89,7 @@ func GetLastModified() ([]byte, string) { // It also returns a boolean to notify the caller if changes were made at all func FilterResolvDns(resolvConf []byte, ipv6Enabled bool) ([]byte, bool) { changed := false - cleanedResolvConf := localhostRegexp.ReplaceAll(resolvConf, []byte{}) + cleanedResolvConf := localhostNSRegexp.ReplaceAll(resolvConf, []byte{}) // if IPv6 is not enabled, also clean out any IPv6 address nameserver if !ipv6Enabled { cleanedResolvConf = nsIPv6Regexp.ReplaceAll(cleanedResolvConf, []byte{}) @@ -124,6 +126,13 @@ func getLines(input []byte, commentMarker []byte) [][]byte { return output } +// returns true if the IP string matches the localhost IP regular expression. +// Used for determining if nameserver settings are being passed which are +// localhost addresses +func IsLocalhost(ip string) bool { + return localhostIPRegexp.MatchString(ip) +} + // GetNameservers returns nameservers (if any) listed in /etc/resolv.conf func GetNameservers(resolvConf []byte) []string { nameservers := []string{} diff --git a/pkg/parsers/MAINTAINERS b/pkg/parsers/MAINTAINERS deleted file mode 100644 index 8c8902530a..0000000000 --- a/pkg/parsers/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Erik Hollensbe (@erikh) diff --git a/pkg/parsers/filters/parse.go b/pkg/parsers/filters/parse.go index 8b045a3098..9c056bb3cf 100644 --- a/pkg/parsers/filters/parse.go +++ b/pkg/parsers/filters/parse.go @@ -65,6 +65,38 @@ func FromParam(p string) (Args, error) { return args, nil } +func (filters Args) MatchKVList(field string, sources map[string]string) bool { + fieldValues := filters[field] + + //do not filter if there is no filter set or cannot determine filter + if len(fieldValues) == 0 { + return true + } + + if sources == nil || len(sources) == 0 { + return false + } + +outer: + 
for _, name2match := range fieldValues { + testKV := strings.SplitN(name2match, "=", 2) + + for k, v := range sources { + if len(testKV) == 1 { + if k == testKV[0] { + continue outer + } + } else if k == testKV[0] && v == testKV[1] { + continue outer + } + } + + return false + } + + return true +} + func (filters Args) Match(field, source string) bool { fieldValues := filters[field] diff --git a/pkg/parsers/parsers.go b/pkg/parsers/parsers.go index 6563190410..59e294dc22 100644 --- a/pkg/parsers/parsers.go +++ b/pkg/parsers/parsers.go @@ -62,11 +62,17 @@ func ParseTCPAddr(addr string, defaultAddr string) (string, error) { return fmt.Sprintf("tcp://%s:%d", host, p), nil } -// Get a repos name and returns the right reposName + tag +// Get a repos name and returns the right reposName + tag|digest // The tag can be confusing because of a port in a repository name. // Ex: localhost.localdomain:5000/samalba/hipache:latest +// Digest ex: localhost:5000/foo/bar@sha256:bc8813ea7b3603864987522f02a76101c17ad122e1c46d790efc0fca78ca7bfb func ParseRepositoryTag(repos string) (string, string) { - n := strings.LastIndex(repos, ":") + n := strings.Index(repos, "@") + if n >= 0 { + parts := strings.Split(repos, "@") + return parts[0], parts[1] + } + n = strings.LastIndex(repos, ":") if n < 0 { return repos, "" } diff --git a/pkg/parsers/parsers_test.go b/pkg/parsers/parsers_test.go index aac1e33e35..bc9a1e943c 100644 --- a/pkg/parsers/parsers_test.go +++ b/pkg/parsers/parsers_test.go @@ -49,18 +49,27 @@ func TestParseRepositoryTag(t *testing.T) { if repo, tag := ParseRepositoryTag("root:tag"); repo != "root" || tag != "tag" { t.Errorf("Expected repo: '%s' and tag: '%s', got '%s' and '%s'", "root", "tag", repo, tag) } + if repo, digest := ParseRepositoryTag("root@sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"); repo != "root" || digest != "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" { + t.Errorf("Expected repo: '%s' and digest: 
'%s', got '%s' and '%s'", "root", "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", repo, digest) + } if repo, tag := ParseRepositoryTag("user/repo"); repo != "user/repo" || tag != "" { t.Errorf("Expected repo: '%s' and tag: '%s', got '%s' and '%s'", "user/repo", "", repo, tag) } if repo, tag := ParseRepositoryTag("user/repo:tag"); repo != "user/repo" || tag != "tag" { t.Errorf("Expected repo: '%s' and tag: '%s', got '%s' and '%s'", "user/repo", "tag", repo, tag) } + if repo, digest := ParseRepositoryTag("user/repo@sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"); repo != "user/repo" || digest != "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" { + t.Errorf("Expected repo: '%s' and digest: '%s', got '%s' and '%s'", "user/repo", "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", repo, digest) + } if repo, tag := ParseRepositoryTag("url:5000/repo"); repo != "url:5000/repo" || tag != "" { t.Errorf("Expected repo: '%s' and tag: '%s', got '%s' and '%s'", "url:5000/repo", "", repo, tag) } if repo, tag := ParseRepositoryTag("url:5000/repo:tag"); repo != "url:5000/repo" || tag != "tag" { t.Errorf("Expected repo: '%s' and tag: '%s', got '%s' and '%s'", "url:5000/repo", "tag", repo, tag) } + if repo, digest := ParseRepositoryTag("url:5000/repo@sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"); repo != "url:5000/repo" || digest != "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" { + t.Errorf("Expected repo: '%s' and digest: '%s', got '%s' and '%s'", "url:5000/repo", "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", repo, digest) + } } func TestParsePortMapping(t *testing.T) { diff --git a/pkg/progressreader/progressreader.go b/pkg/progressreader/progressreader.go new file mode 100644 index 0000000000..730559e9fb --- /dev/null +++ b/pkg/progressreader/progressreader.go @@ -0,0 +1,69 @@ +package 
progressreader + +import ( + "io" +) + +type StreamFormatter interface { + FormatProg(string, string, interface{}) []byte + FormatStatus(string, string, ...interface{}) []byte + FormatError(error) []byte +} + +type PR_JSONProgress interface { + GetCurrent() int + GetTotal() int +} + +type JSONProg struct { + Current int + Total int +} + +func (j *JSONProg) GetCurrent() int { + return j.Current +} +func (j *JSONProg) GetTotal() int { + return j.Total +} + +// Reader with progress bar +type Config struct { + In io.ReadCloser // Stream to read from + Out io.Writer // Where to send progress bar to + Formatter StreamFormatter + Size int + Current int + LastUpdate int + NewLines bool + ID string + Action string +} + +func New(newReader Config) *Config { + return &newReader +} +func (config *Config) Read(p []byte) (n int, err error) { + read, err := config.In.Read(p) + config.Current += read + updateEvery := 1024 * 512 //512kB + if config.Size > 0 { + // Update progress for every 1% read if 1% < 512kB + if increment := int(0.01 * float64(config.Size)); increment < updateEvery { + updateEvery = increment + } + } + if config.Current-config.LastUpdate > updateEvery || err != nil { + config.Out.Write(config.Formatter.FormatProg(config.ID, config.Action, &JSONProg{Current: config.Current, Total: config.Size})) + config.LastUpdate = config.Current + } + // Send newline when complete + if config.NewLines && err != nil && read == 0 { + config.Out.Write(config.Formatter.FormatStatus("", "")) + } + return read, err +} +func (config *Config) Close() error { + config.Out.Write(config.Formatter.FormatProg(config.ID, config.Action, &JSONProg{Current: config.Current, Total: config.Size})) + return config.In.Close() +} diff --git a/pkg/proxy/MAINTAINERS b/pkg/proxy/MAINTAINERS deleted file mode 100644 index 8c8902530a..0000000000 --- a/pkg/proxy/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Erik Hollensbe (@erikh) diff --git a/pkg/reexec/MAINTAINERS b/pkg/reexec/MAINTAINERS deleted file mode 
100644 index e48a0c7d4d..0000000000 --- a/pkg/reexec/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Michael Crosby (@crosbymichael) diff --git a/pkg/reexec/reexec.go b/pkg/reexec/reexec.go index 774e71c76d..a5f01a26e3 100644 --- a/pkg/reexec/reexec.go +++ b/pkg/reexec/reexec.go @@ -35,8 +35,14 @@ func Self() string { name := os.Args[0] if filepath.Base(name) == name { if lp, err := exec.LookPath(name); err == nil { - name = lp + return lp } } + // handle conversion of relative paths to absolute + if absName, err := filepath.Abs(name); err == nil { + return absName + } + // if we couldn't get absolute name, return original + // (NOTE: Go only errors on Abs() if os.Getwd fails) return name } diff --git a/pkg/stdcopy/MAINTAINERS deleted file mode 100644 index 6dde4769d7..0000000000 --- a/pkg/stdcopy/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Cristian Staretu (@unclejack) diff --git a/pkg/symlink/MAINTAINERS deleted file mode 100644 index 51a41a5b60..0000000000 --- a/pkg/symlink/MAINTAINERS +++ /dev/null @@ -1,3 +0,0 @@ -Tibor Vass (@tiborvass) -Cristian Staretu (@unclejack) -Tianon Gravi (@tianon) diff --git a/pkg/sysinfo/MAINTAINERS deleted file mode 100644 index 68a97d2fc2..0000000000 --- a/pkg/sysinfo/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Michael Crosby (@crosbymichael) -Victor Vieux (@vieux) diff --git a/pkg/sysinfo/sysinfo.go b/pkg/sysinfo/sysinfo.go index 001111f43d..1d540d2e7d 100644 --- a/pkg/sysinfo/sysinfo.go +++ b/pkg/sysinfo/sysinfo.go @@ -20,20 +20,20 @@ func New(quiet bool) *SysInfo { sysInfo := &SysInfo{} if cgroupMemoryMountpoint, err := cgroups.FindCgroupMountpoint("memory"); err != nil { if !quiet { - log.Printf("WARNING: %s\n", err) + log.Warnf("%s", err) } } else { _, err1 := ioutil.ReadFile(path.Join(cgroupMemoryMountpoint, "memory.limit_in_bytes")) _, err2 := ioutil.ReadFile(path.Join(cgroupMemoryMountpoint, "memory.soft_limit_in_bytes")) sysInfo.MemoryLimit = err1 == nil 
&& err2 == nil if !sysInfo.MemoryLimit && !quiet { - log.Printf("WARNING: Your kernel does not support cgroup memory limit.") + log.Warnf("Your kernel does not support cgroup memory limit.") } _, err = ioutil.ReadFile(path.Join(cgroupMemoryMountpoint, "memory.memsw.limit_in_bytes")) sysInfo.SwapLimit = err == nil if !sysInfo.SwapLimit && !quiet { - log.Printf("WARNING: Your kernel does not support cgroup swap limit.") + log.Warnf("Your kernel does not support cgroup swap limit.") } } diff --git a/pkg/system/MAINTAINERS b/pkg/system/MAINTAINERS deleted file mode 100644 index 68a97d2fc2..0000000000 --- a/pkg/system/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Michael Crosby (@crosbymichael) -Victor Vieux (@vieux) diff --git a/pkg/system/lstat.go b/pkg/system/lstat.go index 9ef82d5523..6c1ed2e386 100644 --- a/pkg/system/lstat.go +++ b/pkg/system/lstat.go @@ -6,7 +6,7 @@ import ( "syscall" ) -func Lstat(path string) (*Stat, error) { +func Lstat(path string) (*Stat_t, error) { s := &syscall.Stat_t{} err := syscall.Lstat(path, s) if err != nil { diff --git a/pkg/system/lstat_windows.go b/pkg/system/lstat_windows.go index 213a7c7ade..801e756d8b 100644 --- a/pkg/system/lstat_windows.go +++ b/pkg/system/lstat_windows.go @@ -2,7 +2,7 @@ package system -func Lstat(path string) (*Stat, error) { +func Lstat(path string) (*Stat_t, error) { // should not be called on cli code path return nil, ErrNotSupportedPlatform } diff --git a/pkg/system/stat.go b/pkg/system/stat.go index 5d47494d21..186e85287d 100644 --- a/pkg/system/stat.go +++ b/pkg/system/stat.go @@ -4,7 +4,7 @@ import ( "syscall" ) -type Stat struct { +type Stat_t struct { mode uint32 uid uint32 gid uint32 @@ -13,30 +13,30 @@ type Stat struct { mtim syscall.Timespec } -func (s Stat) Mode() uint32 { +func (s Stat_t) Mode() uint32 { return s.mode } -func (s Stat) Uid() uint32 { +func (s Stat_t) Uid() uint32 { return s.uid } -func (s Stat) Gid() uint32 { +func (s Stat_t) Gid() uint32 { return s.gid } -func (s Stat) Rdev() 
uint64 { +func (s Stat_t) Rdev() uint64 { return s.rdev } -func (s Stat) Size() int64 { +func (s Stat_t) Size() int64 { return s.size } -func (s Stat) Mtim() syscall.Timespec { +func (s Stat_t) Mtim() syscall.Timespec { return s.mtim } -func (s Stat) GetLastModification() syscall.Timespec { +func (s Stat_t) GetLastModification() syscall.Timespec { return s.Mtim() } diff --git a/pkg/system/stat_linux.go b/pkg/system/stat_linux.go index 47cebef5cf..072728d0a1 100644 --- a/pkg/system/stat_linux.go +++ b/pkg/system/stat_linux.go @@ -4,11 +4,20 @@ import ( "syscall" ) -func fromStatT(s *syscall.Stat_t) (*Stat, error) { - return &Stat{size: s.Size, +func fromStatT(s *syscall.Stat_t) (*Stat_t, error) { + return &Stat_t{size: s.Size, mode: s.Mode, uid: s.Uid, gid: s.Gid, rdev: s.Rdev, mtim: s.Mtim}, nil } + +func Stat(path string) (*Stat_t, error) { + s := &syscall.Stat_t{} + err := syscall.Stat(path, s) + if err != nil { + return nil, err + } + return fromStatT(s) +} diff --git a/pkg/system/stat_unsupported.go b/pkg/system/stat_unsupported.go index c4d53e6cd6..66323eee21 100644 --- a/pkg/system/stat_unsupported.go +++ b/pkg/system/stat_unsupported.go @@ -6,8 +6,8 @@ import ( "syscall" ) -func fromStatT(s *syscall.Stat_t) (*Stat, error) { - return &Stat{size: s.Size, +func fromStatT(s *syscall.Stat_t) (*Stat_t, error) { + return &Stat_t{size: s.Size, mode: uint32(s.Mode), uid: s.Uid, gid: s.Gid, diff --git a/pkg/system/stat_windows.go b/pkg/system/stat_windows.go index 584e8940cc..42d29d6cca 100644 --- a/pkg/system/stat_windows.go +++ b/pkg/system/stat_windows.go @@ -7,6 +7,11 @@ import ( "syscall" ) -func fromStatT(s *syscall.Win32FileAttributeData) (*Stat, error) { +func fromStatT(s *syscall.Win32FileAttributeData) (*Stat_t, error) { return nil, errors.New("fromStatT should not be called on windows path") } + +func Stat(path string) (*Stat_t, error) { + // should not be called on cli code path + return nil, ErrNotSupportedPlatform +} diff --git a/pkg/systemd/MAINTAINERS 
b/pkg/systemd/MAINTAINERS deleted file mode 100644 index 51228b368a..0000000000 --- a/pkg/systemd/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Brandon Philips (@philips) diff --git a/pkg/tarsum/MAINTAINERS b/pkg/tarsum/MAINTAINERS deleted file mode 100644 index 9571a14a38..0000000000 --- a/pkg/tarsum/MAINTAINERS +++ /dev/null @@ -1,4 +0,0 @@ -Derek McGowan (github: dmcgowan) -Eric Windisch (github: ewindisch) -Josh Hawn (github: jlhawn) -Vincent Batts (github: vbatts) diff --git a/pkg/term/MAINTAINERS b/pkg/term/MAINTAINERS deleted file mode 100644 index aee10c8421..0000000000 --- a/pkg/term/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Solomon Hykes (@shykes) diff --git a/pkg/testutils/MAINTAINERS b/pkg/testutils/MAINTAINERS deleted file mode 100644 index f2e8c52e51..0000000000 --- a/pkg/testutils/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Solomon Hykes (@shykes) -Cristian Staretu (@unclejack) diff --git a/pkg/timeutils/MAINTAINERS b/pkg/timeutils/MAINTAINERS deleted file mode 100644 index 6dde4769d7..0000000000 --- a/pkg/timeutils/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Cristian Staretu (@unclejack) diff --git a/pkg/truncindex/MAINTAINERS b/pkg/truncindex/MAINTAINERS deleted file mode 100644 index 6dde4769d7..0000000000 --- a/pkg/truncindex/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -Cristian Staretu (@unclejack) diff --git a/pkg/units/MAINTAINERS b/pkg/units/MAINTAINERS deleted file mode 100644 index 96abeae570..0000000000 --- a/pkg/units/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Victor Vieux (@vieux) -Jessie Frazelle (@jfrazelle) diff --git a/project/MAINTAINERS b/project/MAINTAINERS deleted file mode 100644 index 15e4433e8b..0000000000 --- a/project/MAINTAINERS +++ /dev/null @@ -1,4 +0,0 @@ -Tianon Gravi (@tianon) -Cristian Staretu (@unclejack) -Tibor Vass (@tiborvass) -dind: Jerome Petazzoni (@jpetazzo) diff --git a/project/RELEASE-CHECKLIST.md b/project/RELEASE-CHECKLIST.md index d8e342f4cd..d9382b901c 100644 --- a/project/RELEASE-CHECKLIST.md +++ 
b/project/RELEASE-CHECKLIST.md @@ -217,7 +217,7 @@ We recommend announcing the release candidate on: - In a comment on the pull request to notify subscribed people on GitHub - The [docker-dev](https://groups.google.com/forum/#!forum/docker-dev) group - The [docker-maintainers](https://groups.google.com/a/dockerproject.org/forum/#!forum/maintainers) group -- Any social media that can get bring some attention to the release cabdidate +- Any social media that can bring some attention to the release candidate ### 7. Iterate on successive release candidates diff --git a/project/allmaintainers.sh b/project/allmaintainers.sh deleted file mode 100755 index 1ea5a9f743..0000000000 --- a/project/allmaintainers.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/sh - -find $1 -name MAINTAINERS -exec cat {} ';' | sed -E -e 's/^[^:]*: *(.*)$/\1/' | grep -E -v -e '^ *$' -e '^ *#.*$' | sort -u diff --git a/project/getmaintainer.sh b/project/getmaintainer.sh deleted file mode 100755 index ca532d42ec..0000000000 --- a/project/getmaintainer.sh +++ /dev/null @@ -1,62 +0,0 @@ -#!/usr/bin/env bash -set -e - -if [ $# -ne 1 ]; then - echo >&2 "Usage: $0 PATH" - echo >&2 "Show the primary and secondary maintainers for a given path" - exit 1 -fi - -set -e - -DEST=$1 -DESTFILE="" -if [ ! -d $DEST ]; then - DESTFILE=$(basename $DEST) - DEST=$(dirname $DEST) -fi - -MAINTAINERS=() -cd $DEST -while true; do - if [ -e ./MAINTAINERS ]; then - { - while read line; do - re='^([^:]*): *(.*)$' - file=$(echo $line | sed -E -n "s/$re/\1/p") - if [ ! -z "$file" ]; then - if [ "$file" = "$DESTFILE" ]; then - echo "Override: $line" - maintainer=$(echo $line | sed -E -n "s/$re/\2/p") - MAINTAINERS=("$maintainer" "${MAINTAINERS[@]}") - fi - else - MAINTAINERS+=("$line"); - fi - done; - } < MAINTAINERS - break - fi - if [ -d .git ]; then - break - fi - if [ "$(pwd)" = "/" ]; then - break - fi - cd .. 
-done - -PRIMARY="${MAINTAINERS[0]}" -PRIMARY_FIRSTNAME=$(echo $PRIMARY | cut -d' ' -f1) -LGTM_COUNT=${#MAINTAINERS[@]} -LGTM_COUNT=$((LGTM_COUNT%2 +1)) - -firstname() { - echo $1 | cut -d' ' -f1 -} - -echo "A pull request in $1 will need $LGTM_COUNT LGTM's to be merged." -echo "--- $PRIMARY is the PRIMARY MAINTAINER of $1." -for SECONDARY in "${MAINTAINERS[@]:1}"; do - echo "--- $SECONDARY" -done diff --git a/project/make/.ensure-busybox b/project/make/.ensure-busybox deleted file mode 100644 index 24ba3052db..0000000000 --- a/project/make/.ensure-busybox +++ /dev/null @@ -1,10 +0,0 @@ -#!/bin/bash -set -e - -if ! docker inspect busybox &> /dev/null; then - if [ -d /docker-busybox ]; then - ( set -x; docker build -t busybox /docker-busybox ) - else - ( set -x; docker pull busybox ) - fi -fi diff --git a/registry/MAINTAINERS b/registry/MAINTAINERS deleted file mode 100644 index fdb03ed573..0000000000 --- a/registry/MAINTAINERS +++ /dev/null @@ -1,5 +0,0 @@ -Sam Alba (@samalba) -Joffrey Fuhrer (@shin-) -Ken Cochrane (@kencochrane) -Vincent Batts (@vbatts) -Olivier Gambier (@dmp42) diff --git a/registry/auth.go b/registry/auth.go index 3207c87e82..bb91c95c00 100644 --- a/registry/auth.go +++ b/registry/auth.go @@ -1,6 +1,7 @@ package registry import ( + "crypto/tls" "encoding/base64" "encoding/json" "errors" @@ -70,10 +71,19 @@ func (auth *RequestAuthorization) getToken() (string, error) { return auth.tokenCache, nil } + tlsConfig := tls.Config{ + MinVersion: tls.VersionTLS10, + } + if !auth.registryEndpoint.IsSecure { + tlsConfig.InsecureSkipVerify = true + } + client := &http.Client{ Transport: &http.Transport{ DisableKeepAlives: true, - Proxy: http.ProxyFromEnvironment}, + Proxy: http.ProxyFromEnvironment, + TLSClientConfig: &tlsConfig, + }, CheckRedirect: AddRequiredHeadersToRedirectedRequests, } factory := HTTPRequestFactory(nil) @@ -362,10 +372,18 @@ func loginV1(authConfig *AuthConfig, registryEndpoint *Endpoint, factory *utils. 
func loginV2(authConfig *AuthConfig, registryEndpoint *Endpoint, factory *utils.HTTPRequestFactory) (string, error) { log.Debugf("attempting v2 login to registry endpoint %s", registryEndpoint) + tlsConfig := tls.Config{ + MinVersion: tls.VersionTLS10, + } + if !registryEndpoint.IsSecure { + tlsConfig.InsecureSkipVerify = true + } + client := &http.Client{ Transport: &http.Transport{ DisableKeepAlives: true, Proxy: http.ProxyFromEnvironment, + TLSClientConfig: &tlsConfig, }, CheckRedirect: AddRequiredHeadersToRedirectedRequests, } diff --git a/registry/config.go b/registry/config.go index 3d7e41e3e6..a706f17e6e 100644 --- a/registry/config.go +++ b/registry/config.go @@ -223,8 +223,8 @@ func validateRemoteName(remoteName string) error { if !validNamespaceChars.MatchString(namespace) { return fmt.Errorf("Invalid namespace name (%s). Only [a-z0-9-_] are allowed.", namespace) } - if len(namespace) < 4 || len(namespace) > 30 { - return fmt.Errorf("Invalid namespace name (%s). Cannot be fewer than 4 or more than 30 characters.", namespace) + if len(namespace) < 2 || len(namespace) > 255 { + return fmt.Errorf("Invalid namespace name (%s). Cannot be fewer than 2 or more than 255 characters.", namespace) } if strings.HasPrefix(namespace, "-") || strings.HasSuffix(namespace, "-") { return fmt.Errorf("Invalid namespace name (%s). Cannot begin or end with a hyphen.", namespace) diff --git a/registry/endpoint.go b/registry/endpoint.go index de9c1f867a..b1785e4fdc 100644 --- a/registry/endpoint.go +++ b/registry/endpoint.go @@ -11,6 +11,7 @@ import ( log "github.com/Sirupsen/logrus" "github.com/docker/docker/registry/v2" + "github.com/docker/docker/utils" ) // for mocking in unit tests @@ -133,24 +134,25 @@ func (e *Endpoint) Path(path string) string { func (e *Endpoint) Ping() (RegistryInfo, error) { // The ping logic to use is determined by the registry endpoint version. 
+ factory := HTTPRequestFactory(nil) switch e.Version { case APIVersion1: - return e.pingV1() + return e.pingV1(factory) case APIVersion2: - return e.pingV2() + return e.pingV2(factory) } // APIVersionUnknown // We should try v2 first... e.Version = APIVersion2 - regInfo, errV2 := e.pingV2() + regInfo, errV2 := e.pingV2(factory) if errV2 == nil { return regInfo, nil } // ... then fallback to v1. e.Version = APIVersion1 - regInfo, errV1 := e.pingV1() + regInfo, errV1 := e.pingV1(factory) if errV1 == nil { return regInfo, nil } @@ -159,7 +161,7 @@ func (e *Endpoint) Ping() (RegistryInfo, error) { return RegistryInfo{}, fmt.Errorf("unable to ping registry endpoint %s\nv2 ping attempt failed with error: %s\n v1 ping attempt failed with error: %s", e, errV2, errV1) } -func (e *Endpoint) pingV1() (RegistryInfo, error) { +func (e *Endpoint) pingV1(factory *utils.HTTPRequestFactory) (RegistryInfo, error) { log.Debugf("attempting v1 ping for registry endpoint %s", e) if e.String() == IndexServerAddress() { @@ -168,7 +170,7 @@ func (e *Endpoint) pingV1() (RegistryInfo, error) { return RegistryInfo{Standalone: false}, nil } - req, err := http.NewRequest("GET", e.Path("_ping"), nil) + req, err := factory.NewRequest("GET", e.Path("_ping"), nil) if err != nil { return RegistryInfo{Standalone: false}, err } @@ -213,10 +215,10 @@ func (e *Endpoint) pingV1() (RegistryInfo, error) { return info, nil } -func (e *Endpoint) pingV2() (RegistryInfo, error) { +func (e *Endpoint) pingV2(factory *utils.HTTPRequestFactory) (RegistryInfo, error) { log.Debugf("attempting v2 ping for registry endpoint %s", e) - req, err := http.NewRequest("GET", e.Path(""), nil) + req, err := factory.NewRequest("GET", e.Path(""), nil) if err != nil { return RegistryInfo{}, err } diff --git a/registry/registry_test.go b/registry/registry_test.go index 6bf31505eb..d96630d90e 100644 --- a/registry/registry_test.go +++ b/registry/registry_test.go @@ -751,6 +751,9 @@ func TestValidRemoteName(t *testing.T) { // Allow 
underscores everywhere (as opposed to hyphens). "____/____", + + // Username doc and image name docker being tested. + "doc/docker", } for _, repositoryName := range validRepositoryNames { if err := validateRemoteName(repositoryName); err != nil { @@ -776,11 +779,14 @@ func TestValidRemoteName(t *testing.T) { // Disallow consecutive hyphens. "dock--er/docker", - // Namespace too short. - "doc/docker", - // No repository. "docker/", + + // Namespace too short. + "d/docker", + + // Namespace too long. + "this_is_not_a_valid_namespace_because_its_length_is_greater_than_255_this_is_not_a_valid_namespace_because_its_length_is_greater_than_255_this_is_not_a_valid_namespace_because_its_length_is_greater_than_255_this_is_not_a_valid_namespace_because_its_length_is_greater_than_255/docker", } for _, repositoryName := range invalidRepositoryNames { if err := validateRemoteName(repositoryName); err == nil { diff --git a/registry/session.go b/registry/session.go index a668dfeaf5..470aeab4cb 100644 --- a/registry/session.go +++ b/registry/session.go @@ -281,7 +281,11 @@ func (r *Session) GetRepositoryData(remote string) (*RepositoryData, error) { // TODO: Right now we're ignoring checksums in the response body. // In the future, we need to use them to check image validity.
if res.StatusCode != 200 { - return nil, utils.NewHTTPRequestError(fmt.Sprintf("HTTP code: %d", res.StatusCode), res) + errBody, err := ioutil.ReadAll(res.Body) + if err != nil { + log.Debugf("Error reading response body: %s", err) + } + return nil, utils.NewHTTPRequestError(fmt.Sprintf("Error: Status %d trying to pull repository %s: %q", res.StatusCode, remote, errBody), res) } var tokens []string @@ -349,7 +353,7 @@ func (r *Session) PushImageChecksumRegistry(imgData *ImgData, registry string, t } else if jsonBody["error"] == "Image already exists" { return ErrAlreadyExists } - return fmt.Errorf("HTTP code %d while uploading metadata: %s", res.StatusCode, errBody) + return fmt.Errorf("HTTP code %d while uploading metadata: %q", res.StatusCode, errBody) } return nil } @@ -385,7 +389,7 @@ func (r *Session) PushImageJSONRegistry(imgData *ImgData, jsonRaw []byte, regist } else if jsonBody["error"] == "Image already exists" { return ErrAlreadyExists } - return utils.NewHTTPRequestError(fmt.Sprintf("HTTP code %d while uploading metadata: %s", res.StatusCode, errBody), res) + return utils.NewHTTPRequestError(fmt.Sprintf("HTTP code %d while uploading metadata: %q", res.StatusCode, errBody), res) } return nil } @@ -427,7 +431,7 @@ func (r *Session) PushImageLayerRegistry(imgID string, layer io.Reader, registry if err != nil { return "", "", utils.NewHTTPRequestError(fmt.Sprintf("HTTP code %d while uploading metadata and error when trying to parse response body: %s", res.StatusCode, err), res) } - return "", "", utils.NewHTTPRequestError(fmt.Sprintf("Received HTTP code %d while uploading layer: %s", res.StatusCode, errBody), res) + return "", "", utils.NewHTTPRequestError(fmt.Sprintf("Received HTTP code %d while uploading layer: %q", res.StatusCode, errBody), res) } checksumPayload = "sha256:" + hex.EncodeToString(h.Sum(nil)) @@ -510,9 +514,9 @@ func (r *Session) PushImageJSONIndex(remote string, imgList []*ImgData, validate if res.StatusCode != 200 && res.StatusCode != 
201 { errBody, err := ioutil.ReadAll(res.Body) if err != nil { - return nil, err + log.Debugf("Error reading response body: %s", err) } - return nil, utils.NewHTTPRequestError(fmt.Sprintf("Error: Status %d trying to push repository %s: %s", res.StatusCode, remote, errBody), res) + return nil, utils.NewHTTPRequestError(fmt.Sprintf("Error: Status %d trying to push repository %s: %q", res.StatusCode, remote, errBody), res) } if res.Header.Get("X-Docker-Token") != "" { tokens = res.Header["X-Docker-Token"] @@ -534,9 +538,9 @@ func (r *Session) PushImageJSONIndex(remote string, imgList []*ImgData, validate if res.StatusCode != 204 { errBody, err := ioutil.ReadAll(res.Body) if err != nil { - return nil, err + log.Debugf("Error reading response body: %s", err) } - return nil, utils.NewHTTPRequestError(fmt.Sprintf("Error: Status %d trying to push checksums %s: %s", res.StatusCode, remote, errBody), res) + return nil, utils.NewHTTPRequestError(fmt.Sprintf("Error: Status %d trying to push checksums %s: %q", res.StatusCode, remote, errBody), res) } } diff --git a/registry/session_v2.go b/registry/session_v2.go index da5371d83b..ec628ad115 100644 --- a/registry/session_v2.go +++ b/registry/session_v2.go @@ -1,6 +1,7 @@ package registry import ( + "bytes" "encoding/json" "fmt" "io" @@ -8,10 +9,13 @@ import ( "strconv" log "github.com/Sirupsen/logrus" + "github.com/docker/distribution/digest" "github.com/docker/docker/registry/v2" "github.com/docker/docker/utils" ) +const DockerDigestHeader = "Docker-Content-Digest" + func getV2Builder(e *Endpoint) *v2.URLBuilder { if e.URLBuilder == nil { e.URLBuilder = v2.NewURLBuilder(e.URL) @@ -63,10 +67,10 @@ func (r *Session) GetV2Authorization(ep *Endpoint, imageName string, readOnly bo // 1.c) if anything else, err // 2) PUT the created/signed manifest // -func (r *Session) GetV2ImageManifest(ep *Endpoint, imageName, tagName string, auth *RequestAuthorization) ([]byte, error) { +func (r *Session) GetV2ImageManifest(ep *Endpoint, 
imageName, tagName string, auth *RequestAuthorization) ([]byte, string, error) { routeURL, err := getV2Builder(ep).BuildManifestURL(imageName, tagName) if err != nil { - return nil, err + return nil, "", err } method := "GET" @@ -74,30 +78,31 @@ func (r *Session) GetV2ImageManifest(ep *Endpoint, imageName, tagName string, au req, err := r.reqFactory.NewRequest(method, routeURL, nil) if err != nil { - return nil, err + return nil, "", err } if err := auth.Authorize(req); err != nil { - return nil, err + return nil, "", err } res, _, err := r.doRequest(req) if err != nil { - return nil, err + return nil, "", err } defer res.Body.Close() if res.StatusCode != 200 { if res.StatusCode == 401 { - return nil, errLoginRequired + return nil, "", errLoginRequired } else if res.StatusCode == 404 { - return nil, ErrDoesNotExist + return nil, "", ErrDoesNotExist } - return nil, utils.NewHTTPRequestError(fmt.Sprintf("Server error: %d trying to fetch for %s:%s", res.StatusCode, imageName, tagName), res) + return nil, "", utils.NewHTTPRequestError(fmt.Sprintf("Server error: %d trying to fetch for %s:%s", res.StatusCode, imageName, tagName), res) } - buf, err := ioutil.ReadAll(res.Body) + manifestBytes, err := ioutil.ReadAll(res.Body) if err != nil { - return nil, fmt.Errorf("Error while reading the http response: %s", err) + return nil, "", fmt.Errorf("Error while reading the http response: %s", err) } - return buf, nil + + return manifestBytes, res.Header.Get(DockerDigestHeader), nil } // - Succeeded to head image blob (already exists) @@ -261,41 +266,58 @@ func (r *Session) PutV2ImageBlob(ep *Endpoint, imageName, sumType, sumStr string } // Finally Push the (signed) manifest of the blobs we've just pushed -func (r *Session) PutV2ImageManifest(ep *Endpoint, imageName, tagName string, manifestRdr io.Reader, auth *RequestAuthorization) error { +func (r *Session) PutV2ImageManifest(ep *Endpoint, imageName, tagName string, signedManifest, rawManifest []byte, auth 
*RequestAuthorization) (digest.Digest, error) { routeURL, err := getV2Builder(ep).BuildManifestURL(imageName, tagName) if err != nil { - return err + return "", err } method := "PUT" log.Debugf("[registry] Calling %q %s", method, routeURL) - req, err := r.reqFactory.NewRequest(method, routeURL, manifestRdr) + req, err := r.reqFactory.NewRequest(method, routeURL, bytes.NewReader(signedManifest)) if err != nil { - return err + return "", err } if err := auth.Authorize(req); err != nil { - return err + return "", err } res, _, err := r.doRequest(req) if err != nil { - return err + return "", err } defer res.Body.Close() // All 2xx and 3xx responses can be accepted for a put. if res.StatusCode >= 400 { if res.StatusCode == 401 { - return errLoginRequired + return "", errLoginRequired } errBody, err := ioutil.ReadAll(res.Body) if err != nil { - return err + return "", err } log.Debugf("Unexpected response from server: %q %#v", errBody, res.Header) - return utils.NewHTTPRequestError(fmt.Sprintf("Server error: %d trying to push %s:%s manifest", res.StatusCode, imageName, tagName), res) + return "", utils.NewHTTPRequestError(fmt.Sprintf("Server error: %d trying to push %s:%s manifest", res.StatusCode, imageName, tagName), res) } - return nil + hdrDigest, err := digest.ParseDigest(res.Header.Get(DockerDigestHeader)) + if err != nil { + return "", fmt.Errorf("invalid manifest digest from registry: %s", err) + } + + dgstVerifier, err := digest.NewDigestVerifier(hdrDigest) + if err != nil { + return "", fmt.Errorf("invalid manifest digest from registry: %s", err) + } + + dgstVerifier.Write(rawManifest) + + if !dgstVerifier.Verified() { + computedDigest, _ := digest.FromBytes(rawManifest) + return "", fmt.Errorf("unable to verify manifest digest: registry has %q, computed %q", hdrDigest, computedDigest) + } + + return hdrDigest, nil } type remoteTags struct { diff --git a/registry/v2/regexp.go b/registry/v2/regexp.go index b7e95b9ff3..07484dcd69 100644 --- 
a/registry/v2/regexp.go +++ b/registry/v2/regexp.go @@ -11,9 +11,12 @@ import "regexp" // separated by one period, dash or underscore. var RepositoryNameComponentRegexp = regexp.MustCompile(`[a-z0-9]+(?:[._-][a-z0-9]+)*`) -// RepositoryNameRegexp builds on RepositoryNameComponentRegexp to allow 2 to +// RepositoryNameRegexp builds on RepositoryNameComponentRegexp to allow 1 to // 5 path components, separated by a forward slash. -var RepositoryNameRegexp = regexp.MustCompile(`(?:` + RepositoryNameComponentRegexp.String() + `/){1,4}` + RepositoryNameComponentRegexp.String()) +var RepositoryNameRegexp = regexp.MustCompile(`(?:` + RepositoryNameComponentRegexp.String() + `/){0,4}` + RepositoryNameComponentRegexp.String()) // TagNameRegexp matches valid tag names. From docker/docker:graph/tags.go. var TagNameRegexp = regexp.MustCompile(`[\w][\w.-]{0,127}`) + +// DigestRegexp matches valid digest types. +var DigestRegexp = regexp.MustCompile(`[a-zA-Z0-9-_+.]+:[a-zA-Z0-9-_+.=]+`) diff --git a/registry/v2/routes.go b/registry/v2/routes.go index 08f36e2f71..de0a38fb81 100644 --- a/registry/v2/routes.go +++ b/registry/v2/routes.go @@ -33,11 +33,11 @@ func Router() *mux.Router { Path("/v2/"). Name(RouteNameBase) - // GET /v2//manifest/ Image Manifest Fetch the image manifest identified by name and tag. - // PUT /v2//manifest/ Image Manifest Upload the image manifest identified by name and tag. - // DELETE /v2//manifest/ Image Manifest Delete the image identified by name and tag. + // GET /v2//manifest/ Image Manifest Fetch the image manifest identified by name and reference where reference can be a tag or digest. + // PUT /v2//manifest/ Image Manifest Upload the image manifest identified by name and reference where reference can be a tag or digest. + // DELETE /v2//manifest/ Image Manifest Delete the image identified by name and reference where reference can be a tag or digest. router. 
- Path("/v2/{name:" + RepositoryNameRegexp.String() + "}/manifests/{tag:" + TagNameRegexp.String() + "}"). + Path("/v2/{name:" + RepositoryNameRegexp.String() + "}/manifests/{reference:" + TagNameRegexp.String() + "|" + DigestRegexp.String() + "}"). Name(RouteNameManifest) // GET /v2//tags/list Tags Fetch the tags under the repository identified by name. diff --git a/registry/v2/routes_test.go b/registry/v2/routes_test.go index 9969ebcc44..0191feed00 100644 --- a/registry/v2/routes_test.go +++ b/registry/v2/routes_test.go @@ -51,12 +51,20 @@ func TestRouter(t *testing.T) { RequestURI: "/v2/", Vars: map[string]string{}, }, + { + RouteName: RouteNameManifest, + RequestURI: "/v2/foo/manifests/bar", + Vars: map[string]string{ + "name": "foo", + "reference": "bar", + }, + }, { RouteName: RouteNameManifest, RequestURI: "/v2/foo/bar/manifests/tag", Vars: map[string]string{ - "name": "foo/bar", - "tag": "tag", + "name": "foo/bar", + "reference": "tag", }, }, { @@ -120,8 +128,8 @@ func TestRouter(t *testing.T) { RouteName: RouteNameManifest, RequestURI: "/v2/foo/bar/manifests/manifests/tags", Vars: map[string]string{ - "name": "foo/bar/manifests", - "tag": "tags", + "name": "foo/bar/manifests", + "reference": "tags", }, }, { diff --git a/registry/v2/urls.go b/registry/v2/urls.go index d1380b47ab..38fa98af01 100644 --- a/registry/v2/urls.go +++ b/registry/v2/urls.go @@ -74,11 +74,11 @@ func (ub *URLBuilder) BuildTagsURL(name string) (string, error) { return tagsURL.String(), nil } -// BuildManifestURL constructs a url for the manifest identified by name and tag. -func (ub *URLBuilder) BuildManifestURL(name, tag string) (string, error) { +// BuildManifestURL constructs a url for the manifest identified by name and reference. 
+func (ub *URLBuilder) BuildManifestURL(name, reference string) (string, error) { route := ub.cloneRoute(RouteNameManifest) - manifestURL, err := route.URL("name", name, "tag", tag) + manifestURL, err := route.URL("name", name, "reference", reference) if err != nil { return "", err } diff --git a/runconfig/compare.go b/runconfig/compare.go index 5c1bf46575..60a21a79c0 100644 --- a/runconfig/compare.go +++ b/runconfig/compare.go @@ -19,6 +19,7 @@ func Compare(a, b *Config) bool { } if len(a.Cmd) != len(b.Cmd) || len(a.Env) != len(b.Env) || + len(a.Labels) != len(b.Labels) || len(a.PortSpecs) != len(b.PortSpecs) || len(a.ExposedPorts) != len(b.ExposedPorts) || len(a.Entrypoint) != len(b.Entrypoint) || @@ -36,6 +37,11 @@ func Compare(a, b *Config) bool { return false } } + for k, v := range a.Labels { + if v != b.Labels[k] { + return false + } + } for i := 0; i < len(a.PortSpecs); i++ { if a.PortSpecs[i] != b.PortSpecs[i] { return false diff --git a/runconfig/config.go b/runconfig/config.go index ca5c3240b6..3e32a1e346 100644 --- a/runconfig/config.go +++ b/runconfig/config.go @@ -12,10 +12,10 @@ type Config struct { Hostname string Domainname string User string - Memory int64 // Memory limit (in bytes) - MemorySwap int64 // Total memory usage (memory + swap); set `-1' to disable swap - CpuShares int64 // CPU shares (relative weight vs. other containers) - Cpuset string // Cpuset 0-2, 0,1 + Memory int64 // FIXME: we keep it for backward compatibility, it has been moved to hostConfig. + MemorySwap int64 // FIXME: it has been moved to hostConfig. + CpuShares int64 // FIXME: it has been moved to hostConfig. + Cpuset string // FIXME: it has been moved to hostConfig and renamed to CpusetCpus. 
AttachStdin bool AttachStdout bool AttachStderr bool @@ -33,6 +33,8 @@ type Config struct { NetworkDisabled bool MacAddress string OnBuild []string + SecurityOpt []string + Labels map[string]string } func ContainerConfigFromJob(job *engine.Job) *Config { @@ -66,6 +68,9 @@ func ContainerConfigFromJob(job *engine.Job) *Config { if Cmd := job.GetenvList("Cmd"); Cmd != nil { config.Cmd = Cmd } + + job.GetenvJson("Labels", &config.Labels) + if Entrypoint := job.GetenvList("Entrypoint"); Entrypoint != nil { config.Entrypoint = Entrypoint } diff --git a/runconfig/hostconfig.go b/runconfig/hostconfig.go index 85db438b7d..84d636b5c4 100644 --- a/runconfig/hostconfig.go +++ b/runconfig/hostconfig.go @@ -99,10 +99,19 @@ type RestartPolicy struct { MaximumRetryCount int } +type LogConfig struct { + Type string + Config map[string]string +} + type HostConfig struct { Binds []string ContainerIDFile string LxcConf []utils.KeyValuePair + Memory int64 // Memory limit (in bytes) + MemorySwap int64 // Total memory usage (memory + swap); set `-1` to disable swap + CpuShares int64 // CPU shares (relative weight vs. other containers) + CpusetCpus string // CpusetCpus 0-2, 0,1 Privileged bool PortBindings nat.PortMap Links []string @@ -121,6 +130,8 @@ type HostConfig struct { SecurityOpt []string ReadonlyRootfs bool Ulimits []*ulimit.Ulimit + LogConfig LogConfig + CgroupParent string // Parent cgroup. } // This is used by the create command when you want to set both the @@ -141,26 +152,53 @@ func ContainerHostConfigFromJob(job *engine.Job) *HostConfig { if job.EnvExists("HostConfig") { hostConfig := HostConfig{} job.GetenvJson("HostConfig", &hostConfig) + + // FIXME: These are for backward compatibility, if people use these + // options with `HostConfig`, we should still make them workable. 
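The label-merging logic added to `runconfig.Merge` above takes the image's labels as the base and lets any user-set label override the image's value. As a standalone sketch:

```go
package main

import "fmt"

// mergeLabels reproduces the merge added in runconfig/merge.go: start from
// the image's labels, then copy user labels over them so user values win.
func mergeLabels(userLabels, imageLabels map[string]string) map[string]string {
	if userLabels == nil {
		userLabels = map[string]string{}
	}
	if imageLabels == nil {
		return userLabels
	}
	for k := range userLabels {
		imageLabels[k] = userLabels[k]
	}
	return imageLabels
}

func main() {
	merged := mergeLabels(
		map[string]string{"env": "prod"},
		map[string]string{"env": "dev", "team": "infra"},
	)
	fmt.Println(merged["env"], merged["team"]) // prints "prod infra"
}
```

Image-only labels survive the merge; conflicting keys take the user's value, matching how `Env` and `ExposedPorts` are merged elsewhere in the function.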
+ if job.EnvExists("Memory") && hostConfig.Memory == 0 { + hostConfig.Memory = job.GetenvInt64("Memory") + } + if job.EnvExists("MemorySwap") && hostConfig.MemorySwap == 0 { + hostConfig.MemorySwap = job.GetenvInt64("MemorySwap") + } + if job.EnvExists("CpuShares") && hostConfig.CpuShares == 0 { + hostConfig.CpuShares = job.GetenvInt64("CpuShares") + } + if job.EnvExists("Cpuset") && hostConfig.CpusetCpus == "" { + hostConfig.CpusetCpus = job.Getenv("Cpuset") + } + return &hostConfig } hostConfig := &HostConfig{ ContainerIDFile: job.Getenv("ContainerIDFile"), + Memory: job.GetenvInt64("Memory"), + MemorySwap: job.GetenvInt64("MemorySwap"), + CpuShares: job.GetenvInt64("CpuShares"), + CpusetCpus: job.Getenv("CpusetCpus"), Privileged: job.GetenvBool("Privileged"), PublishAllPorts: job.GetenvBool("PublishAllPorts"), NetworkMode: NetworkMode(job.Getenv("NetworkMode")), IpcMode: IpcMode(job.Getenv("IpcMode")), PidMode: PidMode(job.Getenv("PidMode")), ReadonlyRootfs: job.GetenvBool("ReadonlyRootfs"), + CgroupParent: job.Getenv("CgroupParent"), + } + + // FIXME: This is for backward compatibility, if people use `Cpuset` + // in json, make it workable, we will only pass hostConfig.CpusetCpus + // to execDriver. 
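The backward-compatibility rule applied to `Cpuset` above boils down to one fallback: the legacy field is consulted only when the new `CpusetCpus` is unset. A minimal sketch of that precedence:

```go
package main

import "fmt"

// resolveCpuset mirrors the fallback in ContainerHostConfigFromJob: prefer
// the new CpusetCpus value, and fall back to the legacy Cpuset field only
// when CpusetCpus is empty.
func resolveCpuset(legacyCpuset, cpusetCpus string) string {
	if cpusetCpus == "" && legacyCpuset != "" {
		return legacyCpuset
	}
	return cpusetCpus
}

func main() {
	fmt.Println(resolveCpuset("0-2", ""))    // prints "0-2": legacy value used
	fmt.Println(resolveCpuset("0-2", "0,1")) // prints "0,1": new field wins
}
```

The same guard pattern (`job.EnvExists(...) && field is zero`) is used for `Memory`, `MemorySwap`, and `CpuShares` above.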
+ if job.EnvExists("Cpuset") && hostConfig.CpusetCpus == "" { + hostConfig.CpusetCpus = job.Getenv("Cpuset") } job.GetenvJson("LxcConf", &hostConfig.LxcConf) job.GetenvJson("PortBindings", &hostConfig.PortBindings) job.GetenvJson("Devices", &hostConfig.Devices) job.GetenvJson("RestartPolicy", &hostConfig.RestartPolicy) - job.GetenvJson("Ulimits", &hostConfig.Ulimits) - + job.GetenvJson("LogConfig", &hostConfig.LogConfig) hostConfig.SecurityOpt = job.GetenvList("SecurityOpt") if Binds := job.GetenvList("Binds"); Binds != nil { hostConfig.Binds = Binds diff --git a/runconfig/merge.go b/runconfig/merge.go index 9bc4748446..9bbdc6ad25 100644 --- a/runconfig/merge.go +++ b/runconfig/merge.go @@ -84,6 +84,16 @@ func Merge(userConf, imageConf *Config) error { } } + if userConf.Labels == nil { + userConf.Labels = map[string]string{} + } + if imageConf.Labels != nil { + for l := range userConf.Labels { + imageConf.Labels[l] = userConf.Labels[l] + } + userConf.Labels = imageConf.Labels + } + if len(userConf.Entrypoint) == 0 { if len(userConf.Cmd) == 0 { userConf.Cmd = imageConf.Cmd diff --git a/runconfig/parse.go b/runconfig/parse.go index fb81759b2d..ccd8056cf9 100644 --- a/runconfig/parse.go +++ b/runconfig/parse.go @@ -31,6 +31,7 @@ func Parse(cmd *flag.FlagSet, args []string) (*Config, *HostConfig, *flag.FlagSe flVolumes = opts.NewListOpts(opts.ValidatePath) flLinks = opts.NewListOpts(opts.ValidateLink) flEnv = opts.NewListOpts(opts.ValidateEnv) + flLabels = opts.NewListOpts(opts.ValidateEnv) flDevices = opts.NewListOpts(opts.ValidatePath) ulimits = make(map[string]*ulimit.Ulimit) @@ -47,6 +48,7 @@ func Parse(cmd *flag.FlagSet, args []string) (*Config, *HostConfig, *flag.FlagSe flCapAdd = opts.NewListOpts(nil) flCapDrop = opts.NewListOpts(nil) flSecurityOpt = opts.NewListOpts(nil) + flLabelsFile = opts.NewListOpts(nil) flNetwork = cmd.Bool([]string{"#n", "#-networking"}, true, "Enable networking for this container") flPrivileged = cmd.Bool([]string{"#privileged", 
"-privileged"}, false, "Give extended privileges to this container") @@ -62,18 +64,22 @@ func Parse(cmd *flag.FlagSet, args []string) (*Config, *HostConfig, *flag.FlagSe flUser = cmd.String([]string{"u", "-user"}, "", "Username or UID (format: [:])") flWorkingDir = cmd.String([]string{"w", "-workdir"}, "", "Working directory inside the container") flCpuShares = cmd.Int64([]string{"c", "-cpu-shares"}, 0, "CPU shares (relative weight)") - flCpuset = cmd.String([]string{"-cpuset"}, "", "CPUs in which to allow execution (0-3, 0,1)") + flCpusetCpus = cmd.String([]string{"#-cpuset", "-cpuset-cpus"}, "", "CPUs in which to allow execution (0-3, 0,1)") flNetMode = cmd.String([]string{"-net"}, "bridge", "Set the Network mode for the container") flMacAddress = cmd.String([]string{"-mac-address"}, "", "Container MAC address (e.g. 92:d0:c6:0a:29:33)") flIpcMode = cmd.String([]string{"-ipc"}, "", "IPC namespace to use") - flRestartPolicy = cmd.String([]string{"-restart"}, "", "Restart policy to apply when a container exits") + flRestartPolicy = cmd.String([]string{"-restart"}, "no", "Restart policy to apply when a container exits") flReadonlyRootfs = cmd.Bool([]string{"-read-only"}, false, "Mount the container's root filesystem as read only") + flLoggingDriver = cmd.String([]string{"-log-driver"}, "", "Logging driver for container") + flCgroupParent = cmd.String([]string{"-cgroup-parent"}, "", "Optional parent cgroup for the container") ) cmd.Var(&flAttach, []string{"a", "-attach"}, "Attach to STDIN, STDOUT or STDERR") cmd.Var(&flVolumes, []string{"v", "-volume"}, "Bind mount a volume") cmd.Var(&flLinks, []string{"#link", "-link"}, "Add link to another container") cmd.Var(&flDevices, []string{"-device"}, "Add a host device to the container") + cmd.Var(&flLabels, []string{"l", "-label"}, "Set meta data on a container") + cmd.Var(&flLabelsFile, []string{"-label-file"}, "Read in a line delimited file of labels") cmd.Var(&flEnv, []string{"e", "-env"}, "Set environment variables") 
cmd.Var(&flEnvFile, []string{"-env-file"}, "Read in a file of environment variables") cmd.Var(&flPublish, []string{"p", "-publish"}, "Publish a container's port(s) to the host") @@ -243,16 +249,16 @@ func Parse(cmd *flag.FlagSet, args []string) (*Config, *HostConfig, *flag.FlagSe } // collect all the environment variables for the container - envVariables := []string{} - for _, ef := range flEnvFile.GetAll() { - parsedVars, err := opts.ParseEnvFile(ef) - if err != nil { - return nil, nil, cmd, err - } - envVariables = append(envVariables, parsedVars...) + envVariables, err := readKVStrings(flEnvFile.GetAll(), flEnv.GetAll()) + if err != nil { + return nil, nil, cmd, err + } + + // collect all the labels for the container + labels, err := readKVStrings(flLabelsFile.GetAll(), flLabels.GetAll()) + if err != nil { + return nil, nil, cmd, err } - // parse the '-e' and '--env' after, to allow override - envVariables = append(envVariables, flEnv.GetAll()...) ipcMode := IpcMode(*flIpcMode) if !ipcMode.Valid() { @@ -283,10 +289,10 @@ func Parse(cmd *flag.FlagSet, args []string) (*Config, *HostConfig, *flag.FlagSe Tty: *flTty, NetworkDisabled: !*flNetwork, OpenStdin: *flStdin, - Memory: flMemory, - MemorySwap: MemorySwap, - CpuShares: *flCpuShares, - Cpuset: *flCpuset, + Memory: flMemory, // FIXME: for backward compatibility + MemorySwap: MemorySwap, // FIXME: for backward compatibility + CpuShares: *flCpuShares, // FIXME: for backward compatibility + Cpuset: *flCpusetCpus, // FIXME: for backward compatibility AttachStdin: attachStdin, AttachStdout: attachStdout, AttachStderr: attachStderr, @@ -297,12 +303,17 @@ func Parse(cmd *flag.FlagSet, args []string) (*Config, *HostConfig, *flag.FlagSe MacAddress: *flMacAddress, Entrypoint: entrypoint, WorkingDir: *flWorkingDir, + Labels: convertKVStringsToMap(labels), } hostConfig := &HostConfig{ Binds: binds, ContainerIDFile: *flContainerIDFile, LxcConf: lxcConf, + Memory: flMemory, + MemorySwap: MemorySwap, + CpuShares: *flCpuShares, 
+ CpusetCpus: *flCpusetCpus, Privileged: *flPrivileged, PortBindings: portBindings, Links: flLinks.GetAll(), @@ -321,6 +332,8 @@ func Parse(cmd *flag.FlagSet, args []string) (*Config, *HostConfig, *flag.FlagSe SecurityOpt: flSecurityOpt.GetAll(), ReadonlyRootfs: *flReadonlyRootfs, Ulimits: flUlimits.GetList(), + LogConfig: LogConfig{Type: *flLoggingDriver}, + CgroupParent: *flCgroupParent, } // When allocating stdin in attached mode, close stdin at client disconnect @@ -330,6 +343,37 @@ func Parse(cmd *flag.FlagSet, args []string) (*Config, *HostConfig, *flag.FlagSe return config, hostConfig, cmd, nil } +// reads a file of line terminated key=value pairs and override that with override parameter +func readKVStrings(files []string, override []string) ([]string, error) { + envVariables := []string{} + for _, ef := range files { + parsedVars, err := opts.ParseEnvFile(ef) + if err != nil { + return nil, err + } + envVariables = append(envVariables, parsedVars...) + } + // parse the '-e' and '--env' after, to allow override + envVariables = append(envVariables, override...) 
+ + return envVariables, nil +} + +// converts ["key=value"] to {"key":"value"} +func convertKVStringsToMap(values []string) map[string]string { + result := make(map[string]string, len(values)) + for _, value := range values { + kv := strings.SplitN(value, "=", 2) + if len(kv) == 1 { + result[kv[0]] = "" + } else { + result[kv[0]] = kv[1] + } + } + + return result +} + // parseRestartPolicy returns the parsed policy or an error indicating what is incorrect func parseRestartPolicy(policy string) (RestartPolicy, error) { p := RestartPolicy{} diff --git a/utils/progressreader.go b/utils/progressreader.go deleted file mode 100644 index 87eae8ba73..0000000000 --- a/utils/progressreader.go +++ /dev/null @@ -1,55 +0,0 @@ -package utils - -import ( - "io" - "time" -) - -// Reader with progress bar -type progressReader struct { - reader io.ReadCloser // Stream to read from - output io.Writer // Where to send progress bar to - progress JSONProgress - lastUpdate int // How many bytes read at least update - ID string - action string - sf *StreamFormatter - newLine bool -} - -func (r *progressReader) Read(p []byte) (n int, err error) { - read, err := r.reader.Read(p) - r.progress.Current += read - updateEvery := 1024 * 512 //512kB - if r.progress.Total > 0 { - // Update progress for every 1% read if 1% < 512kB - if increment := int(0.01 * float64(r.progress.Total)); increment < updateEvery { - updateEvery = increment - } - } - if r.progress.Current-r.lastUpdate > updateEvery || err != nil { - r.output.Write(r.sf.FormatProgress(r.ID, r.action, &r.progress)) - r.lastUpdate = r.progress.Current - } - // Send newline when complete - if r.newLine && err != nil && read == 0 { - r.output.Write(r.sf.FormatStatus("", "")) - } - return read, err -} -func (r *progressReader) Close() error { - r.progress.Current = r.progress.Total - r.output.Write(r.sf.FormatProgress(r.ID, r.action, &r.progress)) - return r.reader.Close() -} -func ProgressReader(r io.ReadCloser, size int, output io.Writer, 
sf *StreamFormatter, newline bool, ID, action string) *progressReader { - return &progressReader{ - reader: r, - output: NewWriteFlusher(output), - ID: ID, - action: action, - progress: JSONProgress{Total: size, Start: time.Now().UTC().Unix()}, - sf: sf, - newLine: newline, - } -} diff --git a/utils/streamformatter.go b/utils/streamformatter.go index d0bc295bb3..e5b15f9835 100644 --- a/utils/streamformatter.go +++ b/utils/streamformatter.go @@ -3,6 +3,7 @@ package utils import ( "encoding/json" "fmt" + "github.com/docker/docker/pkg/progressreader" "io" ) @@ -54,7 +55,15 @@ func (sf *StreamFormatter) FormatError(err error) []byte { } return []byte("Error: " + err.Error() + streamNewline) } - +func (sf *StreamFormatter) FormatProg(id, action string, p interface{}) []byte { + switch progress := p.(type) { + case *JSONProgress: + return sf.FormatProgress(id, action, progress) + case progressreader.PR_JSONProgress: + return sf.FormatProgress(id, action, &JSONProgress{Current: progress.GetCurrent(), Total: progress.GetTotal()}) + } + return nil +} func (sf *StreamFormatter) FormatProgress(id, action string, progress *JSONProgress) []byte { if progress == nil { progress = &JSONProgress{} diff --git a/utils/utils.go b/utils/utils.go index cc3b499f65..540ae6f572 100644 --- a/utils/utils.go +++ b/utils/utils.go @@ -535,3 +535,20 @@ func (wc *WriteCounter) Write(p []byte) (count int, err error) { wc.Count += int64(count) return } + +// ImageReference combines `repo` and `ref` and returns a string representing +// the combination. If `ref` is a digest (meaning it's of the form +// :, the returned string is @. Otherwise, +// ref is assumed to be a tag, and the returned string is :. +func ImageReference(repo, ref string) string { + if DigestReference(ref) { + return repo + "@" + ref + } + return repo + ":" + ref +} + +// DigestReference returns true if ref is a digest reference; i.e. if it +// is of the form :. 
+func DigestReference(ref string) bool { + return strings.Contains(ref, ":") +} diff --git a/utils/utils_daemon.go b/utils/utils_daemon.go index 9989f05e31..3f8f4d569f 100644 --- a/utils/utils_daemon.go +++ b/utils/utils_daemon.go @@ -3,45 +3,14 @@ package utils import ( + "github.com/docker/docker/pkg/system" "os" - "path/filepath" - "syscall" ) -// TreeSize walks a directory tree and returns its total size in bytes. -func TreeSize(dir string) (size int64, err error) { - data := make(map[uint64]struct{}) - err = filepath.Walk(dir, func(d string, fileInfo os.FileInfo, e error) error { - // Ignore directory sizes - if fileInfo == nil { - return nil - } - - s := fileInfo.Size() - if fileInfo.IsDir() || s == 0 { - return nil - } - - // Check inode to handle hard links correctly - inode := fileInfo.Sys().(*syscall.Stat_t).Ino - // inode is not a uint64 on all platforms. Cast it to avoid issues. - if _, exists := data[uint64(inode)]; exists { - return nil - } - // inode is not a uint64 on all platforms. Cast it to avoid issues. - data[uint64(inode)] = struct{}{} - - size += s - - return nil - }) - return -} - // IsFileOwner checks whether the current user is the owner of the given file. 
func IsFileOwner(f string) bool { - if fileInfo, err := os.Stat(f); err == nil && fileInfo != nil { - if stat, ok := fileInfo.Sys().(*syscall.Stat_t); ok && int(stat.Uid) == os.Getuid() { + if fileInfo, err := system.Stat(f); err == nil && fileInfo != nil { + if int(fileInfo.Uid()) == os.Getuid() { return true } } diff --git a/utils/utils_daemon_test.go b/utils/utils_daemon_test.go new file mode 100644 index 0000000000..e8361489b7 --- /dev/null +++ b/utils/utils_daemon_test.go @@ -0,0 +1,26 @@ +package utils + +import ( + "os" + "path" + "testing" +) + +func TestIsFileOwner(t *testing.T) { + var err error + var file *os.File + + if file, err = os.Create(path.Join(os.TempDir(), "testIsFileOwner")); err != nil { + t.Fatalf("failed to create file: %s", err) + } + file.Close() + + if ok := IsFileOwner(path.Join(os.TempDir(), "testIsFileOwner")); !ok { + t.Fatalf("User should be owner of file") + } + + if err = os.Remove(path.Join(os.TempDir(), "testIsFileOwner")); err != nil { + t.Fatalf("failed to remove file: %s", err) + } + +} diff --git a/utils/utils_test.go b/utils/utils_test.go index ef1f7af03b..94303a0e96 100644 --- a/utils/utils_test.go +++ b/utils/utils_test.go @@ -122,3 +122,33 @@ func TestWriteCounter(t *testing.T) { t.Error("Wrong message written") } } + +func TestImageReference(t *testing.T) { + tests := []struct { + repo string + ref string + expected string + }{ + {"repo", "tag", "repo:tag"}, + {"repo", "sha256:c100b11b25d0cacd52c14e0e7bf525e1a4c0e6aec8827ae007055545909d1a64", "repo@sha256:c100b11b25d0cacd52c14e0e7bf525e1a4c0e6aec8827ae007055545909d1a64"}, + } + + for i, test := range tests { + actual := ImageReference(test.repo, test.ref) + if test.expected != actual { + t.Errorf("%d: expected %q, got %q", i, test.expected, actual) + } + } +} + +func TestDigestReference(t *testing.T) { + input := "sha256:c100b11b25d0cacd52c14e0e7bf525e1a4c0e6aec8827ae007055545909d1a64" + if !DigestReference(input) { + t.Errorf("Expected DigestReference=true for input 
%q", input) + } + + input = "latest" + if DigestReference(input) { + t.Errorf("Unexpected DigestReference=true for input %q", input) + } +} diff --git a/vendor/MAINTAINERS b/vendor/MAINTAINERS deleted file mode 120000 index 72e53509b2..0000000000 --- a/vendor/MAINTAINERS +++ /dev/null @@ -1 +0,0 @@ -../hack/MAINTAINERS \ No newline at end of file diff --git a/vendor/src/github.com/Sirupsen/logrus/.travis.yml b/vendor/src/github.com/Sirupsen/logrus/.travis.yml index d5a559f840..2d8c086617 100644 --- a/vendor/src/github.com/Sirupsen/logrus/.travis.yml +++ b/vendor/src/github.com/Sirupsen/logrus/.travis.yml @@ -2,8 +2,7 @@ language: go go: - 1.2 - 1.3 + - 1.4 - tip install: - - go get github.com/stretchr/testify - - go get github.com/stvp/go-udp-testing - - go get github.com/tobi/airbrake-go + - go get -t ./... diff --git a/vendor/src/github.com/Sirupsen/logrus/README.md b/vendor/src/github.com/Sirupsen/logrus/README.md index 01769c723f..e755e7c180 100644 --- a/vendor/src/github.com/Sirupsen/logrus/README.md +++ b/vendor/src/github.com/Sirupsen/logrus/README.md @@ -1,10 +1,11 @@ -# Logrus :walrus: [![Build Status](https://travis-ci.org/Sirupsen/logrus.svg?branch=master)](https://travis-ci.org/Sirupsen/logrus) +# Logrus :walrus: [![Build Status](https://travis-ci.org/Sirupsen/logrus.svg?branch=master)](https://travis-ci.org/Sirupsen/logrus) [![godoc reference](https://godoc.org/github.com/Sirupsen/logrus?status.png)][godoc] Logrus is a structured logger for Go (golang), completely API compatible with the standard library logger. [Godoc][godoc]. **Please note the Logrus API is not -yet stable (pre 1.0), the core API is unlikely change much but please version -control your Logrus to make sure you aren't fetching latest `master` on every -build.** +yet stable (pre 1.0). Logrus itself is completely stable and has been used in +many large deployments. 
The core API is unlikely to change much but please +version control your Logrus to make sure you aren't fetching latest `master` on +every build.** Nicely color-coded in development (when a TTY is attached, otherwise just plain text): @@ -33,7 +34,7 @@ ocean","size":10,"time":"2014-03-10 19:57:38.562264131 -0400 EDT"} With the default `log.Formatter = new(logrus.TextFormatter)` when a TTY is not attached, the output is compatible with the -[l2met](http://r.32k.io/l2met-introduction) format: +[logfmt](http://godoc.org/github.com/kr/logfmt) format: ```text time="2014-04-20 15:36:23.830442383 -0400 EDT" level="info" msg="A group of walrus emerges from the ocean" animal="walrus" size=10 @@ -206,11 +207,18 @@ import ( log "github.com/Sirupsen/logrus" "github.com/Sirupsen/logrus/hooks/airbrake" "github.com/Sirupsen/logrus/hooks/syslog" + "log/syslog" ) func init() { log.AddHook(new(logrus_airbrake.AirbrakeHook)) - log.AddHook(logrus_syslog.NewSyslogHook("udp", "localhost:514", syslog.LOG_INFO, "")) + + hook, err := logrus_syslog.NewSyslogHook("udp", "localhost:514", syslog.LOG_INFO, "") + if err != nil { + log.Error("Unable to connect to local syslog daemon") + } else { + log.AddHook(hook) + } } ``` @@ -228,6 +236,15 @@ func init() { * [`github.com/nubo/hiprus`](https://github.com/nubo/hiprus) Send errors to a channel in hipchat. +* [`github.com/sebest/logrusly`](https://github.com/sebest/logrusly) + Send logs to Loggly (https://www.loggly.com/) + +* [`github.com/johntdyer/slackrus`](https://github.com/johntdyer/slackrus) + Hook for Slack chat. + +* [`github.com/wercker/journalhook`](https://github.com/wercker/journalhook). + Hook for logging to `systemd-journald`. + #### Level logging Logrus has six logging levels: Debug, Info, Warning, Error, Fatal and Panic. 
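The level gating referenced throughout these hunks (e.g. `if entry.Logger.Level >= ErrorLevel` in entry.go) relies on logrus ordering its levels from most to least severe. A toy stand-in, using only the standard library, that shows the same comparison-based gating (the `Logger` type here is illustrative, not logrus itself):

```go
package main

import "fmt"

// Level mirrors logrus's ordering: the most severe level has the
// smallest value, so a message is emitted when its level compares
// <= the logger's configured threshold.
type Level uint8

const (
	PanicLevel Level = iota
	FatalLevel
	ErrorLevel
	WarnLevel
	InfoLevel
	DebugLevel
)

// Logger is a minimal stand-in that records emitted lines.
type Logger struct {
	Threshold Level
	Lines     []string
}

func (l *Logger) log(level Level, msg string) {
	if level <= l.Threshold {
		l.Lines = append(l.Lines, fmt.Sprintf("level=%d msg=%q", level, msg))
	}
}
```

With a threshold of `WarnLevel`, a `Debug` call is dropped while an `Error` call is recorded, which is the behavior the `SetLevel`/`GetLevel` additions in exported.go configure globally.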
@@ -307,7 +324,7 @@ The built-in logging formatters are: Third party logging formatters: -* [`zalgo`](https://github.com/aybabtme/logzalgo): invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦. +* [`zalgo`](https://github.com/aybabtme/logzalgo): invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦. You can define your formatter by implementing the `Formatter` interface, requiring a `Format` method. `Format` takes an `*Entry`. `entry.Data` is a @@ -332,10 +349,28 @@ func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) { } ``` +#### Logger as an `io.Writer` + +Logrus can be transormed into an `io.Writer`. That writer is the end of an `io.Pipe` and it is your responsibility to close it. + +```go +w := logger.Writer() +defer w.Close() + +srv := http.Server{ + // create a stdlib log.Logger that writes to + // logrus.Logger. + ErrorLog: log.New(w, "", 0), +} +``` + +Each line written to that writer will be printed the usual way, using formatters +and hooks. The level for those entries is `info`. + #### Rotation Log rotation is not provided with Logrus. Log rotation should be done by an -external program (like `logrotated(8)`) that can compress and delete old log +external program (like `logrotate(8)`) that can compress and delete old log entries. It should not be a feature of the application-level logger. diff --git a/vendor/src/github.com/Sirupsen/logrus/entry.go b/vendor/src/github.com/Sirupsen/logrus/entry.go index a77c4b0ed1..17fe6f707b 100644 --- a/vendor/src/github.com/Sirupsen/logrus/entry.go +++ b/vendor/src/github.com/Sirupsen/logrus/entry.go @@ -100,7 +100,7 @@ func (entry *Entry) log(level Level, msg string) { // panic() to use in Entry#Panic(), we avoid the allocation by checking // directly here. 
if level <= PanicLevel { - panic(reader.String()) + panic(entry) } } @@ -126,6 +126,10 @@ func (entry *Entry) Warn(args ...interface{}) { } } +func (entry *Entry) Warning(args ...interface{}) { + entry.Warn(args...) +} + func (entry *Entry) Error(args ...interface{}) { if entry.Logger.Level >= ErrorLevel { entry.log(ErrorLevel, fmt.Sprint(args...)) diff --git a/vendor/src/github.com/Sirupsen/logrus/entry_test.go b/vendor/src/github.com/Sirupsen/logrus/entry_test.go new file mode 100644 index 0000000000..98717df490 --- /dev/null +++ b/vendor/src/github.com/Sirupsen/logrus/entry_test.go @@ -0,0 +1,53 @@ +package logrus + +import ( + "bytes" + "fmt" + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestEntryPanicln(t *testing.T) { + errBoom := fmt.Errorf("boom time") + + defer func() { + p := recover() + assert.NotNil(t, p) + + switch pVal := p.(type) { + case *Entry: + assert.Equal(t, "kaboom", pVal.Message) + assert.Equal(t, errBoom, pVal.Data["err"]) + default: + t.Fatalf("want type *Entry, got %T: %#v", pVal, pVal) + } + }() + + logger := New() + logger.Out = &bytes.Buffer{} + entry := NewEntry(logger) + entry.WithField("err", errBoom).Panicln("kaboom") +} + +func TestEntryPanicf(t *testing.T) { + errBoom := fmt.Errorf("boom again") + + defer func() { + p := recover() + assert.NotNil(t, p) + + switch pVal := p.(type) { + case *Entry: + assert.Equal(t, "kaboom true", pVal.Message) + assert.Equal(t, errBoom, pVal.Data["err"]) + default: + t.Fatalf("want type *Entry, got %T: %#v", pVal, pVal) + } + }() + + logger := New() + logger.Out = &bytes.Buffer{} + entry := NewEntry(logger) + entry.WithField("err", errBoom).Panicf("kaboom %v", true) +} diff --git a/vendor/src/github.com/Sirupsen/logrus/examples/basic/basic.go b/vendor/src/github.com/Sirupsen/logrus/examples/basic/basic.go index 35945509c3..a1623ec003 100644 --- a/vendor/src/github.com/Sirupsen/logrus/examples/basic/basic.go +++ b/vendor/src/github.com/Sirupsen/logrus/examples/basic/basic.go @@ 
-9,9 +9,26 @@ var log = logrus.New() func init() { log.Formatter = new(logrus.JSONFormatter) log.Formatter = new(logrus.TextFormatter) // default + log.Level = logrus.DebugLevel } func main() { + defer func() { + err := recover() + if err != nil { + log.WithFields(logrus.Fields{ + "omg": true, + "err": err, + "number": 100, + }).Fatal("The ice breaks!") + } + }() + + log.WithFields(logrus.Fields{ + "animal": "walrus", + "number": 8, + }).Debug("Started observing beach") + log.WithFields(logrus.Fields{ "animal": "walrus", "size": 10, @@ -23,7 +40,11 @@ func main() { }).Warn("The group's number increased tremendously!") log.WithFields(logrus.Fields{ - "omg": true, - "number": 100, - }).Fatal("The ice breaks!") + "temperature": -4, + }).Debug("Temperature changes") + + log.WithFields(logrus.Fields{ + "animal": "orca", + "size": 9009, + }).Panic("It's over 9000!") } diff --git a/vendor/src/github.com/Sirupsen/logrus/exported.go b/vendor/src/github.com/Sirupsen/logrus/exported.go index 0e2d59f19a..a67e1b802d 100644 --- a/vendor/src/github.com/Sirupsen/logrus/exported.go +++ b/vendor/src/github.com/Sirupsen/logrus/exported.go @@ -9,6 +9,10 @@ var ( std = New() ) +func StandardLogger() *Logger { + return std +} + // SetOutput sets the standard logger output. func SetOutput(out io.Writer) { std.mu.Lock() @@ -30,6 +34,13 @@ func SetLevel(level Level) { std.Level = level } +// GetLevel returns the standard logger level. +func GetLevel() Level { + std.mu.Lock() + defer std.mu.Unlock() + return std.Level +} + // AddHook adds a hook to the standard logger hooks. 
func AddHook(hook Hook) { std.mu.Lock() diff --git a/vendor/src/github.com/Sirupsen/logrus/formatter.go b/vendor/src/github.com/Sirupsen/logrus/formatter.go index 74c49a0e0e..038ce9fd29 100644 --- a/vendor/src/github.com/Sirupsen/logrus/formatter.go +++ b/vendor/src/github.com/Sirupsen/logrus/formatter.go @@ -26,19 +26,19 @@ type Formatter interface { // // It's not exported because it's still using Data in an opinionated way. It's to // avoid code duplication between the two default formatters. -func prefixFieldClashes(entry *Entry) { - _, ok := entry.Data["time"] +func prefixFieldClashes(data Fields) { + _, ok := data["time"] if ok { - entry.Data["fields.time"] = entry.Data["time"] + data["fields.time"] = data["time"] } - _, ok = entry.Data["msg"] + _, ok = data["msg"] if ok { - entry.Data["fields.msg"] = entry.Data["msg"] + data["fields.msg"] = data["msg"] } - _, ok = entry.Data["level"] + _, ok = data["level"] if ok { - entry.Data["fields.level"] = entry.Data["level"] + data["fields.level"] = data["level"] } } diff --git a/vendor/src/github.com/Sirupsen/logrus/hooks/airbrake/airbrake.go b/vendor/src/github.com/Sirupsen/logrus/hooks/airbrake/airbrake.go index 880d21ecdc..75f4db1513 100644 --- a/vendor/src/github.com/Sirupsen/logrus/hooks/airbrake/airbrake.go +++ b/vendor/src/github.com/Sirupsen/logrus/hooks/airbrake/airbrake.go @@ -9,7 +9,7 @@ import ( // with the Airbrake API. You must set: // * airbrake.Endpoint // * airbrake.ApiKey -// * airbrake.Environment (only sends exceptions when set to "production") +// * airbrake.Environment // // Before using this hook, to send an error. Entries that trigger an Error, // Fatal or Panic should now include an "error" field to send to Airbrake. 
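The `prefixFieldClashes` refactor above changes the function to operate on a plain `Fields` map instead of mutating the `Entry`, which lets the JSON formatter work on a copy of the user's data. A small sketch of the renaming it performs (`prefixClashes` is an illustrative name for the same idea):

```go
package main

// Fields mirrors logrus.Fields.
type Fields map[string]interface{}

// prefixClashes sketches the refactored prefixFieldClashes: user
// fields named "time", "msg" or "level" are copied aside under a
// "fields." prefix, so the formatter can later write its own values
// for those keys without silently discarding the user's data.
func prefixClashes(data Fields) {
	for _, reserved := range []string{"time", "msg", "level"} {
		if v, ok := data[reserved]; ok {
			data["fields."+reserved] = v
		}
	}
}
```

This is exactly what the `TestFieldClashWithTime`/`Msg`/`Level` tests added later in this diff assert: the original value survives under `fields.time` while `time` holds the formatter's timestamp.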
diff --git a/vendor/src/github.com/Sirupsen/logrus/hooks/papertrail/papertrail.go b/vendor/src/github.com/Sirupsen/logrus/hooks/papertrail/papertrail.go index 48e2feaeb5..c0f10c1bda 100644 --- a/vendor/src/github.com/Sirupsen/logrus/hooks/papertrail/papertrail.go +++ b/vendor/src/github.com/Sirupsen/logrus/hooks/papertrail/papertrail.go @@ -30,7 +30,8 @@ func NewPapertrailHook(host string, port int, appName string) (*PapertrailHook, // Fire is called when a log event is fired. func (hook *PapertrailHook) Fire(entry *logrus.Entry) error { date := time.Now().Format(format) - payload := fmt.Sprintf("<22> %s %s: [%s] %s", date, hook.AppName, entry.Data["level"], entry.Message) + msg, _ := entry.String() + payload := fmt.Sprintf("<22> %s %s: %s", date, hook.AppName, msg) bytesWritten, err := hook.UDPConn.Write([]byte(payload)) if err != nil { diff --git a/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/README.md b/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/README.md new file mode 100644 index 0000000000..19e58bb457 --- /dev/null +++ b/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/README.md @@ -0,0 +1,61 @@ +# Sentry Hook for Logrus :walrus: + +[Sentry](https://getsentry.com) provides both self-hosted and hosted +solutions for exception tracking. +Both client and server are +[open source](https://github.com/getsentry/sentry). + +## Usage + +Every sentry application defined on the server gets a different +[DSN](https://www.getsentry.com/docs/). In the example below replace +`YOUR_DSN` with the one created for your application. 
+ +```go +import ( + "github.com/Sirupsen/logrus" + "github.com/Sirupsen/logrus/hooks/sentry" +) + +func main() { + log := logrus.New() + hook, err := logrus_sentry.NewSentryHook(YOUR_DSN, []logrus.Level{ + logrus.PanicLevel, + logrus.FatalLevel, + logrus.ErrorLevel, + }) + + if err == nil { + log.Hooks.Add(hook) + } +} +``` + +## Special fields + +Some logrus fields have a special meaning in this hook, +these are server_name and logger. +When logs are sent to sentry these fields are treated differently. +- server_name (also known as hostname) is the name of the server which +is logging the event (hostname.example.com) +- logger is the part of the application which is logging the event. +In go this usually means setting it to the name of the package. + +## Timeout + +`Timeout` is the time the sentry hook will wait for a response +from the sentry server. + +If this time elapses with no response from +the server an error will be returned. + +If `Timeout` is set to 0 the SentryHook will not wait for a reply +and will assume a correct delivery. + +The SentryHook has a default timeout of `100 milliseconds` when created +with a call to `NewSentryHook`. This can be changed by assigning a value to the `Timeout` field: + +```go +hook, _ := logrus_sentry.NewSentryHook(...) 
+hook.Timeout = 20*time.Second +``` diff --git a/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/sentry.go b/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/sentry.go new file mode 100644 index 0000000000..379f281c53 --- /dev/null +++ b/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/sentry.go @@ -0,0 +1,100 @@ +package logrus_sentry + +import ( + "fmt" + "time" + + "github.com/Sirupsen/logrus" + "github.com/getsentry/raven-go" +) + +var ( + severityMap = map[logrus.Level]raven.Severity{ + logrus.DebugLevel: raven.DEBUG, + logrus.InfoLevel: raven.INFO, + logrus.WarnLevel: raven.WARNING, + logrus.ErrorLevel: raven.ERROR, + logrus.FatalLevel: raven.FATAL, + logrus.PanicLevel: raven.FATAL, + } +) + +func getAndDel(d logrus.Fields, key string) (string, bool) { + var ( + ok bool + v interface{} + val string + ) + if v, ok = d[key]; !ok { + return "", false + } + + if val, ok = v.(string); !ok { + return "", false + } + delete(d, key) + return val, true +} + +// SentryHook delivers logs to a sentry server. +type SentryHook struct { + // Timeout sets the time to wait for a delivery error from the sentry server. + // If this is set to zero the server will not wait for any response and will + // consider the message correctly sent + Timeout time.Duration + + client *raven.Client + levels []logrus.Level +} + +// NewSentryHook creates a hook to be added to an instance of logger +// and initializes the raven client. +// This method sets the timeout to 100 milliseconds. 
+func NewSentryHook(DSN string, levels []logrus.Level) (*SentryHook, error) { + client, err := raven.NewClient(DSN, nil) + if err != nil { + return nil, err + } + return &SentryHook{100 * time.Millisecond, client, levels}, nil +} + +// Called when an event should be sent to sentry +// Special fields that sentry uses to give more information to the server +// are extracted from entry.Data (if they are found) +// These fields are: logger and server_name +func (hook *SentryHook) Fire(entry *logrus.Entry) error { + packet := &raven.Packet{ + Message: entry.Message, + Timestamp: raven.Timestamp(entry.Time), + Level: severityMap[entry.Level], + Platform: "go", + } + + d := entry.Data + + if logger, ok := getAndDel(d, "logger"); ok { + packet.Logger = logger + } + if serverName, ok := getAndDel(d, "server_name"); ok { + packet.ServerName = serverName + } + packet.Extra = map[string]interface{}(d) + + _, errCh := hook.client.Capture(packet, nil) + timeout := hook.Timeout + if timeout != 0 { + timeoutCh := time.After(timeout) + select { + case err := <-errCh: + return err + case <-timeoutCh: + return fmt.Errorf("no response from sentry server in %s", timeout) + } + } + return nil +} + +// Levels returns the available logging levels. 
+func (hook *SentryHook) Levels() []logrus.Level { + return hook.levels +} diff --git a/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/sentry_test.go b/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/sentry_test.go new file mode 100644 index 0000000000..45f18d1704 --- /dev/null +++ b/vendor/src/github.com/Sirupsen/logrus/hooks/sentry/sentry_test.go @@ -0,0 +1,97 @@ +package logrus_sentry + +import ( + "encoding/json" + "fmt" + "io/ioutil" + "net/http" + "net/http/httptest" + "strings" + "testing" + + "github.com/Sirupsen/logrus" + "github.com/getsentry/raven-go" +) + +const ( + message = "error message" + server_name = "testserver.internal" + logger_name = "test.logger" +) + +func getTestLogger() *logrus.Logger { + l := logrus.New() + l.Out = ioutil.Discard + return l +} + +func WithTestDSN(t *testing.T, tf func(string, <-chan *raven.Packet)) { + pch := make(chan *raven.Packet, 1) + s := httptest.NewServer(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) { + defer req.Body.Close() + d := json.NewDecoder(req.Body) + p := &raven.Packet{} + err := d.Decode(p) + if err != nil { + t.Fatal(err.Error()) + } + + pch <- p + })) + defer s.Close() + + fragments := strings.SplitN(s.URL, "://", 2) + dsn := fmt.Sprintf( + "%s://public:secret@%s/sentry/project-id", + fragments[0], + fragments[1], + ) + tf(dsn, pch) +} + +func TestSpecialFields(t *testing.T) { + WithTestDSN(t, func(dsn string, pch <-chan *raven.Packet) { + logger := getTestLogger() + + hook, err := NewSentryHook(dsn, []logrus.Level{ + logrus.ErrorLevel, + }) + + if err != nil { + t.Fatal(err.Error()) + } + logger.Hooks.Add(hook) + logger.WithFields(logrus.Fields{ + "server_name": server_name, + "logger": logger_name, + }).Error(message) + + packet := <-pch + if packet.Logger != logger_name { + t.Errorf("logger should have been %s, was %s", logger_name, packet.Logger) + } + + if packet.ServerName != server_name { + t.Errorf("server_name should have been %s, was %s", server_name, 
packet.ServerName) + } + }) +} + +func TestSentryHandler(t *testing.T) { + WithTestDSN(t, func(dsn string, pch <-chan *raven.Packet) { + logger := getTestLogger() + hook, err := NewSentryHook(dsn, []logrus.Level{ + logrus.ErrorLevel, + }) + if err != nil { + t.Fatal(err.Error()) + } + logger.Hooks.Add(hook) + + logger.Error(message) + packet := <-pch + if packet.Message != message { + t.Errorf("message should have been %s, was %s", message, packet.Message) + } + }) +} diff --git a/vendor/src/github.com/Sirupsen/logrus/hooks/syslog/README.md b/vendor/src/github.com/Sirupsen/logrus/hooks/syslog/README.md index cd706bc1b1..4dbb8e7290 100644 --- a/vendor/src/github.com/Sirupsen/logrus/hooks/syslog/README.md +++ b/vendor/src/github.com/Sirupsen/logrus/hooks/syslog/README.md @@ -6,7 +6,7 @@ import ( "log/syslog" "github.com/Sirupsen/logrus" - "github.com/Sirupsen/logrus/hooks/syslog" + logrus_syslog "github.com/Sirupsen/logrus/hooks/syslog" ) func main() { @@ -17,4 +17,4 @@ func main() { log.Hooks.Add(hook) } } -``` \ No newline at end of file +``` diff --git a/vendor/src/github.com/Sirupsen/logrus/hooks/syslog/syslog.go b/vendor/src/github.com/Sirupsen/logrus/hooks/syslog/syslog.go index 2a18ce6130..b6fa374628 100644 --- a/vendor/src/github.com/Sirupsen/logrus/hooks/syslog/syslog.go +++ b/vendor/src/github.com/Sirupsen/logrus/hooks/syslog/syslog.go @@ -29,18 +29,18 @@ func (hook *SyslogHook) Fire(entry *logrus.Entry) error { return err } - switch entry.Data["level"] { - case "panic": + switch entry.Level { + case logrus.PanicLevel: return hook.Writer.Crit(line) - case "fatal": + case logrus.FatalLevel: return hook.Writer.Crit(line) - case "error": + case logrus.ErrorLevel: return hook.Writer.Err(line) - case "warn": + case logrus.WarnLevel: return hook.Writer.Warning(line) - case "info": + case logrus.InfoLevel: return hook.Writer.Info(line) - case "debug": + case logrus.DebugLevel: return hook.Writer.Debug(line) default: return nil diff --git 
a/vendor/src/github.com/Sirupsen/logrus/json_formatter.go b/vendor/src/github.com/Sirupsen/logrus/json_formatter.go index 9d11b642d4..0e38a61919 100644 --- a/vendor/src/github.com/Sirupsen/logrus/json_formatter.go +++ b/vendor/src/github.com/Sirupsen/logrus/json_formatter.go @@ -9,12 +9,22 @@ import ( type JSONFormatter struct{} func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) { - prefixFieldClashes(entry) - entry.Data["time"] = entry.Time.Format(time.RFC3339) - entry.Data["msg"] = entry.Message - entry.Data["level"] = entry.Level.String() + data := make(Fields, len(entry.Data)+3) + for k, v := range entry.Data { + // Otherwise errors are ignored by `encoding/json` + // https://github.com/Sirupsen/logrus/issues/137 + if err, ok := v.(error); ok { + data[k] = err.Error() + } else { + data[k] = v + } + } + prefixFieldClashes(data) + data["time"] = entry.Time.Format(time.RFC3339) + data["msg"] = entry.Message + data["level"] = entry.Level.String() - serialized, err := json.Marshal(entry.Data) + serialized, err := json.Marshal(data) if err != nil { return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err) } diff --git a/vendor/src/github.com/Sirupsen/logrus/json_formatter_test.go b/vendor/src/github.com/Sirupsen/logrus/json_formatter_test.go new file mode 100644 index 0000000000..1d70873254 --- /dev/null +++ b/vendor/src/github.com/Sirupsen/logrus/json_formatter_test.go @@ -0,0 +1,120 @@ +package logrus + +import ( + "encoding/json" + "errors" + + "testing" +) + +func TestErrorNotLost(t *testing.T) { + formatter := &JSONFormatter{} + + b, err := formatter.Format(WithField("error", errors.New("wild walrus"))) + if err != nil { + t.Fatal("Unable to format entry: ", err) + } + + entry := make(map[string]interface{}) + err = json.Unmarshal(b, &entry) + if err != nil { + t.Fatal("Unable to unmarshal formatted entry: ", err) + } + + if entry["error"] != "wild walrus" { + t.Fatal("Error field not set") + } +} + +func 
TestErrorNotLostOnFieldNotNamedError(t *testing.T) { + formatter := &JSONFormatter{} + + b, err := formatter.Format(WithField("omg", errors.New("wild walrus"))) + if err != nil { + t.Fatal("Unable to format entry: ", err) + } + + entry := make(map[string]interface{}) + err = json.Unmarshal(b, &entry) + if err != nil { + t.Fatal("Unable to unmarshal formatted entry: ", err) + } + + if entry["omg"] != "wild walrus" { + t.Fatal("Error field not set") + } +} + +func TestFieldClashWithTime(t *testing.T) { + formatter := &JSONFormatter{} + + b, err := formatter.Format(WithField("time", "right now!")) + if err != nil { + t.Fatal("Unable to format entry: ", err) + } + + entry := make(map[string]interface{}) + err = json.Unmarshal(b, &entry) + if err != nil { + t.Fatal("Unable to unmarshal formatted entry: ", err) + } + + if entry["fields.time"] != "right now!" { + t.Fatal("fields.time not set to original time field") + } + + if entry["time"] != "0001-01-01T00:00:00Z" { + t.Fatal("time field not set to current time, was: ", entry["time"]) + } +} + +func TestFieldClashWithMsg(t *testing.T) { + formatter := &JSONFormatter{} + + b, err := formatter.Format(WithField("msg", "something")) + if err != nil { + t.Fatal("Unable to format entry: ", err) + } + + entry := make(map[string]interface{}) + err = json.Unmarshal(b, &entry) + if err != nil { + t.Fatal("Unable to unmarshal formatted entry: ", err) + } + + if entry["fields.msg"] != "something" { + t.Fatal("fields.msg not set to original msg field") + } +} + +func TestFieldClashWithLevel(t *testing.T) { + formatter := &JSONFormatter{} + + b, err := formatter.Format(WithField("level", "something")) + if err != nil { + t.Fatal("Unable to format entry: ", err) + } + + entry := make(map[string]interface{}) + err = json.Unmarshal(b, &entry) + if err != nil { + t.Fatal("Unable to unmarshal formatted entry: ", err) + } + + if entry["fields.level"] != "something" { + t.Fatal("fields.level not set to original level field") + } +} + +func 
TestJSONEntryEndsWithNewline(t *testing.T) { + formatter := &JSONFormatter{} + + b, err := formatter.Format(WithField("level", "something")) + if err != nil { + t.Fatal("Unable to format entry: ", err) + } + + if b[len(b)-1] != '\n' { + t.Fatal("Expected JSON log entry to end with a newline") + } +} diff --git a/vendor/src/github.com/Sirupsen/logrus/logger.go b/vendor/src/github.com/Sirupsen/logrus/logger.go index 7374fe365d..b392e547a7 100644 --- a/vendor/src/github.com/Sirupsen/logrus/logger.go +++ b/vendor/src/github.com/Sirupsen/logrus/logger.go @@ -38,7 +38,7 @@ type Logger struct { // Out: os.Stderr, // Formatter: new(JSONFormatter), // Hooks: make(levelHooks), -// Level: logrus.Debug, +// Level: logrus.DebugLevel, // } // // It's recommended to make this a global instance called `log`. diff --git a/vendor/src/github.com/Sirupsen/logrus/logrus_test.go b/vendor/src/github.com/Sirupsen/logrus/logrus_test.go index 15157d172d..d85dba4dcb 100644 --- a/vendor/src/github.com/Sirupsen/logrus/logrus_test.go +++ b/vendor/src/github.com/Sirupsen/logrus/logrus_test.go @@ -5,6 +5,7 @@ import ( "encoding/json" "strconv" "strings" + "sync" "testing" "github.com/stretchr/testify/assert" @@ -44,8 +45,12 @@ func LogAndAssertText(t *testing.T, log func(*Logger), assertions func(fields ma } kvArr := strings.Split(kv, "=") key := strings.TrimSpace(kvArr[0]) - val, err := strconv.Unquote(kvArr[1]) - assert.NoError(t, err) + val := kvArr[1] + if kvArr[1][0] == '"' { + var err error + val, err = strconv.Unquote(val) + assert.NoError(t, err) + } fields[key] = val } assertions(fields) @@ -204,6 +209,38 @@ func TestDefaultFieldsAreNotPrefixed(t *testing.T) { }) } +func TestDoubleLoggingDoesntPrefixPreviousFields(t *testing.T) { + + var buffer bytes.Buffer + var fields Fields + + logger := New() + logger.Out = &buffer + logger.Formatter = new(JSONFormatter) + + llog := logger.WithField("context", "eating raw fish") + + llog.Info("looks delicious") + + err := 
json.Unmarshal(buffer.Bytes(), &fields) + assert.NoError(t, err, "should have decoded first message") + assert.Equal(t, len(fields), 4, "should only have msg/time/level/context fields") + assert.Equal(t, fields["msg"], "looks delicious") + assert.Equal(t, fields["context"], "eating raw fish") + + buffer.Reset() + + llog.Warn("omg it is!") + + err = json.Unmarshal(buffer.Bytes(), &fields) + assert.NoError(t, err, "should have decoded second message") + assert.Equal(t, len(fields), 4, "should only have msg/time/level/context fields") + assert.Equal(t, fields["msg"], "omg it is!") + assert.Equal(t, fields["context"], "eating raw fish") + assert.Nil(t, fields["fields.msg"], "should not have prefixed previous `msg` entry") + +} + func TestConvertLevelToString(t *testing.T) { assert.Equal(t, "debug", DebugLevel.String()) assert.Equal(t, "info", InfoLevel.String()) @@ -245,3 +282,20 @@ func TestParseLevel(t *testing.T) { l, err = ParseLevel("invalid") assert.Equal(t, "not a valid logrus Level: \"invalid\"", err.Error()) } + +func TestGetSetLevelRace(t *testing.T) { + wg := sync.WaitGroup{} + for i := 0; i < 100; i++ { + wg.Add(1) + go func(i int) { + defer wg.Done() + if i%2 == 0 { + SetLevel(InfoLevel) + } else { + GetLevel() + } + }(i) + + } + wg.Wait() +} diff --git a/vendor/src/github.com/Sirupsen/logrus/terminal_notwindows.go b/vendor/src/github.com/Sirupsen/logrus/terminal_notwindows.go index 276447bd5c..b8bebc13ee 100644 --- a/vendor/src/github.com/Sirupsen/logrus/terminal_notwindows.go +++ b/vendor/src/github.com/Sirupsen/logrus/terminal_notwindows.go @@ -3,7 +3,7 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
-// +build linux,!appengine darwin freebsd +// +build linux darwin freebsd openbsd package logrus diff --git a/vendor/src/github.com/Sirupsen/logrus/terminal_openbsd.go b/vendor/src/github.com/Sirupsen/logrus/terminal_openbsd.go new file mode 100644 index 0000000000..d238bfa0b4 --- /dev/null +++ b/vendor/src/github.com/Sirupsen/logrus/terminal_openbsd.go @@ -0,0 +1,8 @@ + +package logrus + +import "syscall" + +const ioctlReadTermios = syscall.TIOCGETA + +type Termios syscall.Termios diff --git a/vendor/src/github.com/Sirupsen/logrus/text_formatter.go b/vendor/src/github.com/Sirupsen/logrus/text_formatter.go index fc0a4082a7..71dcb6617a 100644 --- a/vendor/src/github.com/Sirupsen/logrus/text_formatter.go +++ b/vendor/src/github.com/Sirupsen/logrus/text_formatter.go @@ -3,6 +3,7 @@ package logrus import ( "bytes" "fmt" + "regexp" "sort" "strings" "time" @@ -14,11 +15,13 @@ const ( green = 32 yellow = 33 blue = 34 + gray = 37 ) var ( baseTimestamp time.Time isTerminal bool + noQuoteNeeded *regexp.Regexp ) func init() { @@ -32,28 +35,47 @@ func miniTS() int { type TextFormatter struct { // Set to true to bypass checking for a TTY before outputting colors. - ForceColors bool + ForceColors bool + + // Force disabling colors. DisableColors bool + + // Disable timestamp logging. useful when output is redirected to logging + // system that already adds timestamps. + DisableTimestamp bool + + // Enable logging the full timestamp when a TTY is attached instead of just + // the time passed since beginning of execution. + FullTimestamp bool + + // The fields are sorted by default for a consistent output. For applications + // that log extremely frequently and don't use the JSON formatter this may not + // be desired. 
+ DisableSorting bool } func (f *TextFormatter) Format(entry *Entry) ([]byte, error) { - - var keys []string + var keys []string = make([]string, 0, len(entry.Data)) for k := range entry.Data { keys = append(keys, k) } - sort.Strings(keys) + + if !f.DisableSorting { + sort.Strings(keys) + } b := &bytes.Buffer{} - prefixFieldClashes(entry) + prefixFieldClashes(entry.Data) isColored := (f.ForceColors || isTerminal) && !f.DisableColors if isColored { - printColored(b, entry, keys) + f.printColored(b, entry, keys) } else { - f.appendKeyValue(b, "time", entry.Time.Format(time.RFC3339)) + if !f.DisableTimestamp { + f.appendKeyValue(b, "time", entry.Time.Format(time.RFC3339)) + } f.appendKeyValue(b, "level", entry.Level.String()) f.appendKeyValue(b, "msg", entry.Message) for _, key := range keys { @@ -65,9 +87,11 @@ func (f *TextFormatter) Format(entry *Entry) ([]byte, error) { return b.Bytes(), nil } -func printColored(b *bytes.Buffer, entry *Entry, keys []string) { +func (f *TextFormatter) printColored(b *bytes.Buffer, entry *Entry, keys []string) { var levelColor int switch entry.Level { + case DebugLevel: + levelColor = gray case WarnLevel: levelColor = yellow case ErrorLevel, FatalLevel, PanicLevel: @@ -78,17 +102,43 @@ func printColored(b *bytes.Buffer, entry *Entry, keys []string) { levelText := strings.ToUpper(entry.Level.String())[0:4] - fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%04d] %-44s ", levelColor, levelText, miniTS(), entry.Message) + if !f.FullTimestamp { + fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%04d] %-44s ", levelColor, levelText, miniTS(), entry.Message) + } else { + fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%s] %-44s ", levelColor, levelText, entry.Time.Format(time.RFC3339), entry.Message) + } for _, k := range keys { v := entry.Data[k] fmt.Fprintf(b, " \x1b[%dm%s\x1b[0m=%v", levelColor, k, v) } } +func needsQuoting(text string) bool { + for _, ch := range text { + if !((ch >= 'a' && ch <= 'z') || + (ch >= 'A' && ch <= 'Z') || + (ch >= '0' && ch <= '9') || + ch == '-' || 
ch == '.') { + return false + } + } + return true +} + func (f *TextFormatter) appendKeyValue(b *bytes.Buffer, key, value interface{}) { switch value.(type) { - case string, error: - fmt.Fprintf(b, "%v=%q ", key, value) + case string: + if needsQuoting(value.(string)) { + fmt.Fprintf(b, "%v=%s ", key, value) + } else { + fmt.Fprintf(b, "%v=%q ", key, value) + } + case error: + if needsQuoting(value.(error).Error()) { + fmt.Fprintf(b, "%v=%s ", key, value) + } else { + fmt.Fprintf(b, "%v=%q ", key, value) + } default: fmt.Fprintf(b, "%v=%v ", key, value) } diff --git a/vendor/src/github.com/Sirupsen/logrus/text_formatter_test.go b/vendor/src/github.com/Sirupsen/logrus/text_formatter_test.go new file mode 100644 index 0000000000..28a9499079 --- /dev/null +++ b/vendor/src/github.com/Sirupsen/logrus/text_formatter_test.go @@ -0,0 +1,37 @@ +package logrus + +import ( + "bytes" + "errors" + + "testing" +) + +func TestQuoting(t *testing.T) { + tf := &TextFormatter{DisableColors: true} + + checkQuoting := func(q bool, value interface{}) { + b, _ := tf.Format(WithField("test", value)) + idx := bytes.Index(b, ([]byte)("test=")) + cont := bytes.Contains(b[idx+5:], []byte{'"'}) + if cont != q { + if q { + t.Errorf("quoting expected for: %#v", value) + } else { + t.Errorf("quoting not expected for: %#v", value) + } + } + } + + checkQuoting(false, "abcd") + checkQuoting(false, "v1.0") + checkQuoting(false, "1234567890") + checkQuoting(true, "/foobar") + checkQuoting(true, "x y") + checkQuoting(true, "x,y") + checkQuoting(false, errors.New("invalid")) + checkQuoting(true, errors.New("invalid argument")) +} + +// TODO add tests for sorting etc., this requires a parser for the text +// formatter output. 
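A note on the quoting helper added to text_formatter.go above: despite its name, `needsQuoting` returns true when the value consists only of "safe" characters (letters, digits, `-`, `.`), i.e. when quoting is *not* needed, and `appendKeyValue` branches accordingly. The rule can be sketched in isolation as follows (`isBareword` and `formatValue` are illustrative names chosen here to avoid the inverted naming, not part of logrus):

```go
package main

import "fmt"

// isBareword mirrors the character check from the diff's needsQuoting helper:
// it reports whether the value can be emitted without quotes.
func isBareword(text string) bool {
	for _, ch := range text {
		if !((ch >= 'a' && ch <= 'z') ||
			(ch >= 'A' && ch <= 'Z') ||
			(ch >= '0' && ch <= '9') ||
			ch == '-' || ch == '.') {
			return false
		}
	}
	return true
}

// formatValue applies the same rule appendKeyValue uses: barewords are
// printed verbatim, everything else is quoted with %q.
func formatValue(key, value string) string {
	if isBareword(value) {
		return fmt.Sprintf("%s=%s", key, value)
	}
	return fmt.Sprintf("%s=%q", key, value)
}

func main() {
	fmt.Println(formatValue("version", "v1.0")) // version=v1.0
	fmt.Println(formatValue("path", "/foobar")) // path="/foobar"
	fmt.Println(formatValue("msg", "x y"))      // msg="x y"
}
```

This matches the cases exercised by the new TestQuoting: `"v1.0"` and `"1234567890"` stay bare, while `"/foobar"`, `"x y"`, and `"x,y"` get quoted.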
diff --git a/vendor/src/github.com/Sirupsen/logrus/writer.go b/vendor/src/github.com/Sirupsen/logrus/writer.go new file mode 100644 index 0000000000..90d3e01b45 --- /dev/null +++ b/vendor/src/github.com/Sirupsen/logrus/writer.go @@ -0,0 +1,31 @@ +package logrus + +import ( + "bufio" + "io" + "runtime" +) + +func (logger *Logger) Writer() (*io.PipeWriter) { + reader, writer := io.Pipe() + + go logger.writerScanner(reader) + runtime.SetFinalizer(writer, writerFinalizer) + + return writer +} + +func (logger *Logger) writerScanner(reader *io.PipeReader) { + scanner := bufio.NewScanner(reader) + for scanner.Scan() { + logger.Print(scanner.Text()) + } + if err := scanner.Err(); err != nil { + logger.Errorf("Error while reading from Writer: %s", err) + } + reader.Close() +} + +func writerFinalizer(writer *io.PipeWriter) { + writer.Close() +} diff --git a/vendor/src/github.com/docker/distribution/digest/digest.go b/vendor/src/github.com/docker/distribution/digest/digest.go new file mode 100644 index 0000000000..d640026cb8 --- /dev/null +++ b/vendor/src/github.com/docker/distribution/digest/digest.go @@ -0,0 +1,168 @@ +package digest + +import ( + "bytes" + "crypto/sha256" + "fmt" + "hash" + "io" + "io/ioutil" + "regexp" + "strings" + + "github.com/docker/docker/pkg/tarsum" +) + +const ( + // DigestTarSumV1EmptyTar is the digest for the empty tar file. + DigestTarSumV1EmptyTar = "tarsum.v1+sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" + // DigestSha256EmptyTar is the canonical sha256 digest of empty data + DigestSha256EmptyTar = "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" +) + +// Digest allows simple protection of hex formatted digest strings, prefixed +// by their algorithm. Strings of type Digest have some guarantee of being in +// the correct format and it provides quick access to the components of a +// digest string. 
+// +// The following is an example of the contents of Digest types: +// +// sha256:7173b809ca12ec5dee4506cd86be934c4596dd234ee82c0662eac04a8c2c71dc +// +// More important for this code base, this type is compatible with tarsum +// digests. For example, the following would be a valid Digest: +// +// tarsum+sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b +// +// This allows us to abstract the digest behind this type and work only in those +// terms. +type Digest string + +// NewDigest returns a Digest from alg and a hash.Hash object. +func NewDigest(alg string, h hash.Hash) Digest { + return Digest(fmt.Sprintf("%s:%x", alg, h.Sum(nil))) +} + +// NewDigestFromHex returns a Digest from alg and the hex encoded digest. +func NewDigestFromHex(alg, hex string) Digest { + return Digest(fmt.Sprintf("%s:%s", alg, hex)) +} + +// DigestRegexp matches valid digest types. +var DigestRegexp = regexp.MustCompile(`[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+`) + +// DigestRegexpAnchored matches valid digest types, anchored to the start and end of the match. +var DigestRegexpAnchored = regexp.MustCompile(`^` + DigestRegexp.String() + `$`) + +var ( + // ErrDigestInvalidFormat is returned when the digest format is invalid. + ErrDigestInvalidFormat = fmt.Errorf("invalid checksum digest format") + +	// ErrDigestUnsupported is returned when the digest algorithm is unsupported. + ErrDigestUnsupported = fmt.Errorf("unsupported digest algorithm") +) + +// ParseDigest parses s and returns the validated digest object. An error will +// be returned if the format is invalid. +func ParseDigest(s string) (Digest, error) { + d := Digest(s) + + return d, d.Validate() +} + +// FromReader returns the most valid digest for the underlying content. +func FromReader(rd io.Reader) (Digest, error) { + h := sha256.New() + + if _, err := io.Copy(h, rd); err != nil { + return "", err + } + + return NewDigest("sha256", h), nil +} + +// FromTarArchive produces a tarsum digest from reader rd. 
+func FromTarArchive(rd io.Reader) (Digest, error) { + ts, err := tarsum.NewTarSum(rd, true, tarsum.Version1) + if err != nil { + return "", err + } + + if _, err := io.Copy(ioutil.Discard, ts); err != nil { + return "", err + } + + d, err := ParseDigest(ts.Sum(nil)) + if err != nil { + return "", err + } + + return d, nil +} + +// FromBytes digests the input and returns a Digest. +func FromBytes(p []byte) (Digest, error) { + return FromReader(bytes.NewReader(p)) +} + +// Validate checks that the contents of d is a valid digest, returning an +// error if not. +func (d Digest) Validate() error { + s := string(d) + // Common case will be tarsum + _, err := ParseTarSum(s) + if err == nil { + return nil + } + + // Continue on for general parser + + if !DigestRegexpAnchored.MatchString(s) { + return ErrDigestInvalidFormat + } + + i := strings.Index(s, ":") + if i < 0 { + return ErrDigestInvalidFormat + } + + // case: "sha256:" with no hex. + if i+1 == len(s) { + return ErrDigestInvalidFormat + } + + switch s[:i] { + case "sha256", "sha384", "sha512": + break + default: + return ErrDigestUnsupported + } + + return nil +} + +// Algorithm returns the algorithm portion of the digest. This will panic if +// the underlying digest is not in a valid format. +func (d Digest) Algorithm() string { + return string(d[:d.sepIndex()]) +} + +// Hex returns the hex digest portion of the digest. This will panic if the +// underlying digest is not in a valid format. 
+func (d Digest) Hex() string { + return string(d[d.sepIndex()+1:]) +} + +func (d Digest) String() string { + return string(d) +} + +func (d Digest) sepIndex() int { + i := strings.Index(string(d), ":") + + if i < 0 { + panic("could not find ':' in digest: " + d) + } + + return i +} diff --git a/vendor/src/github.com/docker/distribution/digest/digest_test.go b/vendor/src/github.com/docker/distribution/digest/digest_test.go new file mode 100644 index 0000000000..9e9ae35669 --- /dev/null +++ b/vendor/src/github.com/docker/distribution/digest/digest_test.go @@ -0,0 +1,111 @@ +package digest + +import ( + "bytes" + "io" + "testing" +) + +func TestParseDigest(t *testing.T) { + for _, testcase := range []struct { + input string + err error + algorithm string + hex string + }{ + { + input: "tarsum+sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + algorithm: "tarsum+sha256", + hex: "e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + }, + { + input: "tarsum.dev+sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + algorithm: "tarsum.dev+sha256", + hex: "e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + }, + { + input: "tarsum.v1+sha256:220a60ecd4a3c32c282622a625a54db9ba0ff55b5ba9c29c7064a2bc358b6a3e", + algorithm: "tarsum.v1+sha256", + hex: "220a60ecd4a3c32c282622a625a54db9ba0ff55b5ba9c29c7064a2bc358b6a3e", + }, + { + input: "sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + algorithm: "sha256", + hex: "e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + }, + { + input: "sha384:d3fc7881460b7e22e3d172954463dddd7866d17597e7248453c48b3e9d26d9596bf9c4a9cf8072c9d5bad76e19af801d", + algorithm: "sha384", + hex: "d3fc7881460b7e22e3d172954463dddd7866d17597e7248453c48b3e9d26d9596bf9c4a9cf8072c9d5bad76e19af801d", + }, + { + // empty hex + input: "sha256:", + err: ErrDigestInvalidFormat, + }, + { + // just hex + input: "d41d8cd98f00b204e9800998ecf8427e", + 
err: ErrDigestInvalidFormat, + }, + { + // not hex + input: "sha256:d41d8cd98f00b204e9800m98ecf8427e", + err: ErrDigestInvalidFormat, + }, + { + input: "foo:d41d8cd98f00b204e9800998ecf8427e", + err: ErrDigestUnsupported, + }, + } { + digest, err := ParseDigest(testcase.input) + if err != testcase.err { + t.Fatalf("error differed from expected while parsing %q: %v != %v", testcase.input, err, testcase.err) + } + + if testcase.err != nil { + continue + } + + if digest.Algorithm() != testcase.algorithm { + t.Fatalf("incorrect algorithm for parsed digest: %q != %q", digest.Algorithm(), testcase.algorithm) + } + + if digest.Hex() != testcase.hex { + t.Fatalf("incorrect hex for parsed digest: %q != %q", digest.Hex(), testcase.hex) + } + + // Parse string return value and check equality + newParsed, err := ParseDigest(digest.String()) + + if err != nil { + t.Fatalf("unexpected error parsing input %q: %v", testcase.input, err) + } + + if newParsed != digest { + t.Fatalf("expected equal: %q != %q", newParsed, digest) + } + } +} + +// A few test cases used to fix behavior we expect in storage backend. + +func TestFromTarArchiveZeroLength(t *testing.T) { + checkTarsumDigest(t, "zero-length archive", bytes.NewReader([]byte{}), DigestTarSumV1EmptyTar) +} + +func TestFromTarArchiveEmptyTar(t *testing.T) { + // String of 1024 zeros is a valid, empty tar file. 
+ checkTarsumDigest(t, "1024 zero bytes", bytes.NewReader(bytes.Repeat([]byte("\x00"), 1024)), DigestTarSumV1EmptyTar) +} + +func checkTarsumDigest(t *testing.T, msg string, rd io.Reader, expected Digest) { + dgst, err := FromTarArchive(rd) + if err != nil { + t.Fatalf("unexpected error digesting %s: %v", msg, err) + } + + if dgst != expected { + t.Fatalf("unexpected digest for %s: %q != %q", msg, dgst, expected) + } +} diff --git a/vendor/src/github.com/docker/distribution/digest/digester.go b/vendor/src/github.com/docker/distribution/digest/digester.go new file mode 100644 index 0000000000..9094d662e4 --- /dev/null +++ b/vendor/src/github.com/docker/distribution/digest/digester.go @@ -0,0 +1,44 @@ +package digest + +import ( + "crypto/sha256" + "hash" +) + +// Digester calculates the digest of written data. It is functionally +// equivalent to hash.Hash but provides methods for returning the Digest type +// rather than raw bytes. +type Digester struct { + alg string + hash hash.Hash +} + +// NewDigester creates a new Digester with the given hashing algorithm and instance +// of that algo's hasher. +func NewDigester(alg string, h hash.Hash) Digester { + return Digester{ + alg: alg, + hash: h, + } +} + +// NewCanonicalDigester is a convenience function to create a new Digester with +// our default settings. +func NewCanonicalDigester() Digester { + return NewDigester("sha256", sha256.New()) +} + +// Write data to the digester. These writes cannot fail. +func (d *Digester) Write(p []byte) (n int, err error) { + return d.hash.Write(p) +} + +// Digest returns the current digest for this digester. +func (d *Digester) Digest() Digest { + return NewDigest(d.alg, d.hash) +} + +// Reset the state of the digester. 
+func (d *Digester) Reset() { + d.hash.Reset() +} diff --git a/vendor/src/github.com/docker/distribution/digest/doc.go b/vendor/src/github.com/docker/distribution/digest/doc.go new file mode 100644 index 0000000000..278c50e011 --- /dev/null +++ b/vendor/src/github.com/docker/distribution/digest/doc.go @@ -0,0 +1,52 @@ +// Package digest provides a generalized type to opaquely represent message +// digests and their operations within the registry. The Digest type is +// designed to serve as a flexible identifier in a content-addressable system. +// More importantly, it provides tools and wrappers to work with tarsums and +// hash.Hash-based digests with little effort. +// +// Basics +// +// The format of a digest is simply a string with two parts, dubbed the +// "algorithm" and the "digest", separated by a colon: +// +// : +// +// An example of a sha256 digest representation follows: +// +// sha256:7173b809ca12ec5dee4506cd86be934c4596dd234ee82c0662eac04a8c2c71dc +// +// In this case, the string "sha256" is the algorithm and the hex bytes are +// the "digest". A tarsum example will be more illustrative of the use case +// involved in the registry: +// +// tarsum+sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b +// +// For this, we consider the algorithm to be "tarsum+sha256". Prudent +// applications will favor the ParseDigest function to verify the format over +// using simple type casts. However, a normal string can be cast as a digest +// with a simple type conversion: +// +// Digest("tarsum+sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b") +// +// Because the Digest type is simply a string, once a valid Digest is +// obtained, comparisons are cheap, quick and simple to express with the +// standard equality operator. +// +// Verification +// +// The main benefit of using the Digest type is simple verification against a +// given digest. 
The Verifier interface, modeled after the stdlib hash.Hash +// interface, provides a common write sink for digest verification. After +// writing is complete, calling the Verifier.Verified method will indicate +// whether or not the stream of bytes matches the target digest. +// +// Missing Features +// +// In addition to the above, we intend to add the following features to this +// package: +// +// 1. A Digester type that supports write sink digest calculation. +// +// 2. Suspend and resume of ongoing digest calculations to support efficient digest verification in the registry. +// +package digest diff --git a/vendor/src/github.com/docker/distribution/digest/tarsum.go b/vendor/src/github.com/docker/distribution/digest/tarsum.go new file mode 100644 index 0000000000..acf878b629 --- /dev/null +++ b/vendor/src/github.com/docker/distribution/digest/tarsum.go @@ -0,0 +1,70 @@ +package digest + +import ( + "fmt" + + "regexp" +) + +// TarsumRegexp defines a regular expression to match tarsum identifiers. +var TarsumRegexp = regexp.MustCompile("tarsum(?:.[a-z0-9]+)?\\+[a-zA-Z0-9]+:[A-Fa-f0-9]+") + +// TarsumRegexpCapturing defines a regular expression to match tarsum identifiers with +// capture groups corresponding to each component. +var TarsumRegexpCapturing = regexp.MustCompile("(tarsum)(.([a-z0-9]+))?\\+([a-zA-Z0-9]+):([A-Fa-f0-9]+)") + +// TarSumInfo contains information about a parsed tarsum. +type TarSumInfo struct { + // Version contains the version of the tarsum. + Version string + + // Algorithm contains the algorithm for the final digest + Algorithm string + + // Digest contains the hex-encoded digest. + Digest string +} + +// InvalidTarSumError provides information about a TarSum that cannot be parsed +// by ParseTarSum. +type InvalidTarSumError string + +func (e InvalidTarSumError) Error() string { + return fmt.Sprintf("invalid tarsum: %q", string(e)) +} + +// ParseTarSum parses a tarsum string into its components of interest. 
For +// example, this method may receive the tarsum in the following format: +// +// tarsum.v1+sha256:220a60ecd4a3c32c282622a625a54db9ba0ff55b5ba9c29c7064a2bc358b6a3e +// +// The function will return the following: +// +// TarSumInfo{ +// Version: "v1", +// Algorithm: "sha256", +// Digest: "220a60ecd4a3c32c282622a625a54db9ba0ff55b5ba9c29c7064a2bc358b6a3e", +// } +// +func ParseTarSum(tarSum string) (tsi TarSumInfo, err error) { + components := TarsumRegexpCapturing.FindStringSubmatch(tarSum) + + if len(components) != 1+TarsumRegexpCapturing.NumSubexp() { + return TarSumInfo{}, InvalidTarSumError(tarSum) + } + + return TarSumInfo{ + Version: components[3], + Algorithm: components[4], + Digest: components[5], + }, nil +} + +// String returns the valid, string representation of the tarsum info. +func (tsi TarSumInfo) String() string { + if tsi.Version == "" { + return fmt.Sprintf("tarsum+%s:%s", tsi.Algorithm, tsi.Digest) + } + + return fmt.Sprintf("tarsum.%s+%s:%s", tsi.Version, tsi.Algorithm, tsi.Digest) +} diff --git a/vendor/src/github.com/docker/distribution/digest/tarsum_test.go b/vendor/src/github.com/docker/distribution/digest/tarsum_test.go new file mode 100644 index 0000000000..894c25ab31 --- /dev/null +++ b/vendor/src/github.com/docker/distribution/digest/tarsum_test.go @@ -0,0 +1,79 @@ +package digest + +import ( + "reflect" + "testing" +) + +func TestParseTarSumComponents(t *testing.T) { + for _, testcase := range []struct { + input string + expected TarSumInfo + err error + }{ + { + input: "tarsum.v1+sha256:220a60ecd4a3c32c282622a625a54db9ba0ff55b5ba9c29c7064a2bc358b6a3e", + expected: TarSumInfo{ + Version: "v1", + Algorithm: "sha256", + Digest: "220a60ecd4a3c32c282622a625a54db9ba0ff55b5ba9c29c7064a2bc358b6a3e", + }, + }, + { + input: "", + err: InvalidTarSumError(""), + }, + { + input: "purejunk", + err: InvalidTarSumError("purejunk"), + }, + { + input: "tarsum.v23+test:12341234123412341effefefe", + expected: TarSumInfo{ + Version: "v23", + Algorithm: 
"test", + Digest: "12341234123412341effefefe", + }, + }, + + // The following test cases are ported from docker core + { + // Version 0 tarsum + input: "tarsum+sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + expected: TarSumInfo{ + Algorithm: "sha256", + Digest: "e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + }, + }, + { + // Dev version tarsum + input: "tarsum.dev+sha256:e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + expected: TarSumInfo{ + Version: "dev", + Algorithm: "sha256", + Digest: "e58fcf7418d4390dec8e8fb69d88c06ec07039d651fedd3aa72af9972e7d046b", + }, + }, + } { + tsi, err := ParseTarSum(testcase.input) + if err != nil { + if testcase.err != nil && err == testcase.err { + continue // passes + } + + t.Fatalf("unexpected error parsing tarsum: %v", err) + } + + if testcase.err != nil { + t.Fatalf("expected error not encountered on %q: %v", testcase.input, testcase.err) + } + + if !reflect.DeepEqual(tsi, testcase.expected) { + t.Fatalf("expected tarsum info: %v != %v", tsi, testcase.expected) + } + + if testcase.input != tsi.String() { + t.Fatalf("input should equal output: %q != %q", tsi.String(), testcase.input) + } + } +} diff --git a/vendor/src/github.com/docker/distribution/digest/verifiers.go b/vendor/src/github.com/docker/distribution/digest/verifiers.go new file mode 100644 index 0000000000..11d9d7ae53 --- /dev/null +++ b/vendor/src/github.com/docker/distribution/digest/verifiers.go @@ -0,0 +1,137 @@ +package digest + +import ( + "crypto/sha256" + "crypto/sha512" + "hash" + "io" + "io/ioutil" + + "github.com/docker/docker/pkg/tarsum" +) + +// Verifier presents a general verification interface to be used with message +// digests and other byte stream verifications. Users instantiate a Verifier +// from one of the various methods, write the data under test to it then check +// the result with the Verified method. 
+type Verifier interface { + io.Writer + + // Verified will return true if the content written to Verifier matches + // the digest. + Verified() bool +} + +// NewDigestVerifier returns a verifier that compares the written bytes +// against a passed in digest. +func NewDigestVerifier(d Digest) (Verifier, error) { + if err := d.Validate(); err != nil { + return nil, err + } + + alg := d.Algorithm() + switch alg { + case "sha256", "sha384", "sha512": + return hashVerifier{ + hash: newHash(alg), + digest: d, + }, nil + default: + // Assume we have a tarsum. + version, err := tarsum.GetVersionFromTarsum(string(d)) + if err != nil { + return nil, err + } + + pr, pw := io.Pipe() + + // TODO(stevvooe): We may actually want to ban the earlier versions of + // tarsum. That decision may not be the place of the verifier. + + ts, err := tarsum.NewTarSum(pr, true, version) + if err != nil { + return nil, err + } + + // TODO(sday): Ick! A goroutine per digest verification? We'll have to + // get the tarsum library to export an io.Writer variant. + go func() { + if _, err := io.Copy(ioutil.Discard, ts); err != nil { + pr.CloseWithError(err) + } else { + pr.Close() + } + }() + + return &tarsumVerifier{ + digest: d, + ts: ts, + pr: pr, + pw: pw, + }, nil + } +} + +// NewLengthVerifier returns a verifier that returns true when the number of +// read bytes equals the expected parameter. 
+func NewLengthVerifier(expected int64) Verifier { + return &lengthVerifier{ + expected: expected, + } +} + +type lengthVerifier struct { + expected int64 // expected bytes read + len int64 // bytes read +} + +func (lv *lengthVerifier) Write(p []byte) (n int, err error) { + n = len(p) + lv.len += int64(n) + return n, err +} + +func (lv *lengthVerifier) Verified() bool { + return lv.expected == lv.len +} + +func newHash(name string) hash.Hash { + switch name { + case "sha256": + return sha256.New() + case "sha384": + return sha512.New384() + case "sha512": + return sha512.New() + default: + panic("unsupport algorithm: " + name) + } +} + +type hashVerifier struct { + digest Digest + hash hash.Hash +} + +func (hv hashVerifier) Write(p []byte) (n int, err error) { + return hv.hash.Write(p) +} + +func (hv hashVerifier) Verified() bool { + return hv.digest == NewDigest(hv.digest.Algorithm(), hv.hash) +} + +type tarsumVerifier struct { + digest Digest + ts tarsum.TarSum + pr *io.PipeReader + pw *io.PipeWriter +} + +func (tv *tarsumVerifier) Write(p []byte) (n int, err error) { + return tv.pw.Write(p) +} + +func (tv *tarsumVerifier) Verified() bool { + return tv.digest == Digest(tv.ts.Sum(nil)) +} diff --git a/vendor/src/github.com/docker/distribution/digest/verifiers_test.go b/vendor/src/github.com/docker/distribution/digest/verifiers_test.go new file mode 100644 index 0000000000..408720b5ee --- /dev/null +++ b/vendor/src/github.com/docker/distribution/digest/verifiers_test.go @@ -0,0 +1,162 @@ +package digest + +import ( + "bytes" + "crypto/rand" + "encoding/base64" + "io" + "os" + "strings" + "testing" + + "github.com/docker/distribution/testutil" +) + +func TestDigestVerifier(t *testing.T) { + p := make([]byte, 1<<20) + rand.Read(p) + digest, err := FromBytes(p) + if err != nil { + t.Fatalf("unexpected error digesting bytes: %#v", err) + } + + verifier, err := NewDigestVerifier(digest) + if err != nil { + t.Fatalf("unexpected error getting digest verifier: %s", err) + 
} + + io.Copy(verifier, bytes.NewReader(p)) + + if !verifier.Verified() { + t.Fatalf("bytes not verified") + } + + tf, tarSum, err := testutil.CreateRandomTarFile() + if err != nil { + t.Fatalf("error creating tarfile: %v", err) + } + + digest, err = FromTarArchive(tf) + if err != nil { + t.Fatalf("error digesting tarsum: %v", err) + } + + if digest.String() != tarSum { + t.Fatalf("unexpected digest: %q != %q", digest.String(), tarSum) + } + + expectedSize, _ := tf.Seek(0, os.SEEK_END) // Get tar file size + tf.Seek(0, os.SEEK_SET) // seek back + + // This is the most relevant example for the registry application. It's + // effectively a read through pipeline, where the final sink is the digest + // verifier. + verifier, err = NewDigestVerifier(digest) + if err != nil { + t.Fatalf("unexpected error getting digest verifier: %s", err) + } + + lengthVerifier := NewLengthVerifier(expectedSize) + rd := io.TeeReader(tf, lengthVerifier) + io.Copy(verifier, rd) + + if !lengthVerifier.Verified() { + t.Fatalf("verifier detected incorrect length") + } + + if !verifier.Verified() { + t.Fatalf("bytes not verified") + } +} + +// TestVerifierUnsupportedDigest ensures that unsupported digest validation is +// flowing through verifier creation. +func TestVerifierUnsupportedDigest(t *testing.T) { + unsupported := Digest("bean:0123456789abcdef") + + _, err := NewDigestVerifier(unsupported) + if err == nil { + t.Fatalf("expected error when creating verifier") + } + + if err != ErrDigestUnsupported { + t.Fatalf("incorrect error for unsupported digest: %v %p %p", err, ErrDigestUnsupported, err) + } +} + +// TestJunkNoDeadlock ensures that junk input into a digest verifier properly +// returns errors from the tarsum library. Specifically, we pass in a file +// with a "bad header" and should see the error from the io.Copy to verifier. +// This has been seen with gzipped tarfiles, mishandled by the tarsum package, +// but also on junk input, such as html. 
+func TestJunkNoDeadlock(t *testing.T) {
+	expected := Digest("tarsum.dev+sha256:62e15750aae345f6303469a94892e66365cc5e3abdf8d7cb8b329f8fb912e473")
+	junk := bytes.Repeat([]byte{'a'}, 1024)
+
+	verifier, err := NewDigestVerifier(expected)
+	if err != nil {
+		t.Fatalf("unexpected error creating verifier: %v", err)
+	}
+
+	rd := bytes.NewReader(junk)
+	if _, err := io.Copy(verifier, rd); err == nil {
+		t.Fatalf("expected error verifying junk input data")
+	}
+}
+
+// TestBadTarNoDeadlock runs a tar with a "bad" tar header through the digest
+// verifier, ensuring that the verifier returns an error properly.
+func TestBadTarNoDeadlock(t *testing.T) {
+	// TODO(stevvooe): This test is exposing a bug in tarsum where if we pass
+	// a gzipped tar file into tarsum, the library returns an error. This
+	// should actually work. When the tarsum package is fixed, this test will
+	// fail and we can remove this test or invert it.
+
+	// This tarfile was causing deadlocks in verifiers due to a mishandled copy error.
+	// This is a gzipped tar, which we typically don't see but should handle.
+	//
+	// From https://registry-1.docker.io/v2/library/ubuntu/blobs/tarsum.dev+sha256:62e15750aae345f6303469a94892e66365cc5e3abdf8d7cb8b329f8fb912e473
+	const badTar = `
+H4sIAAAJbogA/0otSdZnoDEwMDAxMDc1BdJggE6D2YZGJobGBmbGRsZAdYYGBkZGDAqmtHYYCJQW
+lyQWAZ1CqTnonhsiAAAAAP//AsV/YkEJTdMAGfFvZmA2Gv/0AAAAAAD//4LFf3F+aVFyarFeTmZx
+CbXtAOVnMxMTXPFvbGpmjhb/xobmwPinSyCO8PgHAAAA///EVU9v2z4MvedTEMihl9a5/26/YTkU
+yNKiTTDsKMt0rE0WDYmK628/ym7+bFmH2DksQACbIB/5+J7kObwiQsXc/LdYVGibLObRccw01Qv5
+19EZ7hbbZudVgWtiDFCSh4paYII4xOVxNgeHLXrYow+GXAAqgSuEQhzlTR5ZgtlsVmB+aKe8rswe
+zzsOjwtoPGoTEGplHHhMCJqxSNUPwesbEGbzOXxR34VCHndQmjfhUKhEq/FURI0FqJKFR5q9NE5Z
+qbaoBGoglAB+5TSK0sOh3c3UPkRKE25dEg8dDzzIWmqN2wG3BNY4qRL1VFFAoJJb5SXHU90n34nk
+SUS8S0AeGwqGyXdZel1nn7KLGhPO0kDeluvN48ty9Q2269ft8/PTy2b5GfKuh9/2LBIWo6oz+N8G
+uodmWLETg0mW4lMP4XYYCL4+rlawftpIO40SA+W6Yci9wRZE1MNOjmyGdhBQRy9OHpqOdOGh/wT7
+nZdOkHZ650uIK+WrVZdkgErJfnNEJysLnI5FSAj4xuiCQNpOIoNWmhyLByVHxEpLf3dkr+k9KMsV
+xV0FhiVB21hgD3V5XwSqRdOmsUYr7oNtZXTVzyTHc2/kqokBy2ihRMVRTN+78goP5Ur/aMhz+KOJ
+3h2UsK43kdwDo0Q9jfD7ie2RRur7MdpIrx1Z3X4j/Q1qCswN9r/EGCvXiUy0fI4xeSknnH/92T/+
+fgIAAP//GkWjYBSMXAAIAAD//2zZtzAAEgAA`
+	expected := Digest("tarsum.dev+sha256:62e15750aae345f6303469a94892e66365cc5e3abdf8d7cb8b329f8fb912e473")
+
+	verifier, err := NewDigestVerifier(expected)
+	if err != nil {
+		t.Fatalf("unexpected error creating verifier: %v", err)
+	}
+
+	rd := base64.NewDecoder(base64.StdEncoding, strings.NewReader(badTar))
+
+	if _, err := io.Copy(verifier, rd); err == nil {
+		t.Fatalf("expected error verifying bad tar data")
+	}
+
+	if verifier.Verified() {
+		// For now, we expect an error, since the tarsum library cannot handle
+		// compressed tars (!!!).
+		t.Fatalf("no error received after invalid tar")
+	}
+}
+
+// TODO(stevvooe): Add benchmarks to measure bytes/second throughput for
+// DigestVerifier. We should be tarsum/gzip limited for common cases but we
+// want to verify this.
+// +// The relevant benchmarks for comparison can be run with the following +// commands: +// +// go test -bench . crypto/sha1 +// go test -bench . github.com/docker/docker/pkg/tarsum +// diff --git a/vendor/src/github.com/docker/libcontainer/.drone.yml b/vendor/src/github.com/docker/libcontainer/.drone.yml deleted file mode 100755 index 80d298f218..0000000000 --- a/vendor/src/github.com/docker/libcontainer/.drone.yml +++ /dev/null @@ -1,9 +0,0 @@ -image: dockercore/libcontainer -script: -# Setup the DockerInDocker environment. - - /dind - - sed -i 's!docker/docker!docker/libcontainer!' /go/src/github.com/docker/docker/hack/make/.validate - - bash /go/src/github.com/docker/docker/hack/make/validate-dco - - bash /go/src/github.com/docker/docker/hack/make/validate-gofmt - - export GOPATH="$GOPATH:/go:$(pwd)/vendor" # Drone mucks with our GOPATH - - make direct-test diff --git a/vendor/src/github.com/docker/libcontainer/.gitignore b/vendor/src/github.com/docker/libcontainer/.gitignore new file mode 100644 index 0000000000..4c2914fc7c --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/.gitignore @@ -0,0 +1 @@ +nsinit/nsinit diff --git a/vendor/src/github.com/docker/libcontainer/Dockerfile b/vendor/src/github.com/docker/libcontainer/Dockerfile index 0771c808ea..fb34c8c95a 100644 --- a/vendor/src/github.com/docker/libcontainer/Dockerfile +++ b/vendor/src/github.com/docker/libcontainer/Dockerfile @@ -7,9 +7,9 @@ RUN go get github.com/docker/docker/pkg/term # setup a playground for us to spawn containers in RUN mkdir /busybox && \ - curl -sSL 'https://github.com/jpetazzo/docker-busybox/raw/buildroot-2014.02/rootfs.tar' | tar -xC /busybox + curl -sSL 'https://github.com/jpetazzo/docker-busybox/raw/buildroot-2014.11/rootfs.tar' | tar -xC /busybox -RUN curl -sSL https://raw.githubusercontent.com/docker/docker/master/project/dind -o /dind && \ +RUN curl -sSL https://raw.githubusercontent.com/docker/docker/master/hack/dind -o /dind && \ chmod +x /dind COPY . 
/go/src/github.com/docker/libcontainer diff --git a/vendor/src/github.com/docker/libcontainer/MAINTAINERS b/vendor/src/github.com/docker/libcontainer/MAINTAINERS index 5235131722..cea3500f2b 100644 --- a/vendor/src/github.com/docker/libcontainer/MAINTAINERS +++ b/vendor/src/github.com/docker/libcontainer/MAINTAINERS @@ -3,4 +3,5 @@ Rohit Jnagal (@rjnagal) Victor Marmol (@vmarmol) Mrunal Patel (@mrunalp) Alexandr Morozov (@LK4D4) +Daniel, Dao Quang Minh (@dqminh) update-vendor.sh: Tianon Gravi (@tianon) diff --git a/vendor/src/github.com/docker/libcontainer/Makefile b/vendor/src/github.com/docker/libcontainer/Makefile index f94171b0fc..c2c9a98d35 100644 --- a/vendor/src/github.com/docker/libcontainer/Makefile +++ b/vendor/src/github.com/docker/libcontainer/Makefile @@ -22,3 +22,10 @@ direct-build: direct-install: go install -v $(GO_PACKAGES) + +local: + go test -v + +validate: + hack/validate.sh + diff --git a/vendor/src/github.com/docker/libcontainer/NOTICE b/vendor/src/github.com/docker/libcontainer/NOTICE index ca1635f896..dc9129878c 100644 --- a/vendor/src/github.com/docker/libcontainer/NOTICE +++ b/vendor/src/github.com/docker/libcontainer/NOTICE @@ -1,5 +1,5 @@ libcontainer -Copyright 2012-2014 Docker, Inc. +Copyright 2012-2015 Docker, Inc. This product includes software developed at Docker, Inc. (http://www.docker.com). diff --git a/vendor/src/github.com/docker/libcontainer/PRINCIPLES.md b/vendor/src/github.com/docker/libcontainer/PRINCIPLES.md index 42396c0eec..0560642102 100644 --- a/vendor/src/github.com/docker/libcontainer/PRINCIPLES.md +++ b/vendor/src/github.com/docker/libcontainer/PRINCIPLES.md @@ -8,7 +8,7 @@ In the design and development of libcontainer we try to follow these principles: * Less code is better. * Fewer components are better. Do you really need to add one more class? * 50 lines of straightforward, readable code is better than 10 lines of magic that nobody can understand. -* Don't do later what you can do now. 
"//FIXME: refactor" is not acceptable in new code. +* Don't do later what you can do now. "//TODO: refactor" is not acceptable in new code. * When hesitating between two options, choose the one that is easier to reverse. * "No" is temporary; "Yes" is forever. If you're not sure about a new feature, say no. You can change your mind later. * Containers must be portable to the greatest possible number of machines. Be suspicious of any change which makes machines less interchangeable. diff --git a/vendor/src/github.com/docker/libcontainer/README.md b/vendor/src/github.com/docker/libcontainer/README.md index 37047e68c8..984f2c5238 100644 --- a/vendor/src/github.com/docker/libcontainer/README.md +++ b/vendor/src/github.com/docker/libcontainer/README.md @@ -1,48 +1,169 @@ -## libcontainer - reference implementation for containers [![Build Status](https://ci.dockerproject.com/github.com/docker/libcontainer/status.svg?branch=master)](https://ci.dockerproject.com/github.com/docker/libcontainer) +## libcontainer - reference implementation for containers [![Build Status](https://jenkins.dockerproject.com/buildStatus/icon?job=Libcontainer Master)](https://jenkins.dockerproject.com/job/Libcontainer%20Master/) -### Note on API changes: - -Please bear with us while we work on making the libcontainer API stable and something that we can support long term. We are currently discussing the API with the community, therefore, if you currently depend on libcontainer please pin your dependency at a specific tag or commit id. Please join the discussion and help shape the API. - -#### Background - -libcontainer specifies configuration options for what a container is. It provides a native Go implementation for using Linux namespaces with no external dependencies. libcontainer provides many convenience functions for working with namespaces, networking, and management. 
+Libcontainer provides a native Go implementation for creating containers
+with namespaces, cgroups, capabilities, and filesystem access controls.
+It allows you to manage the lifecycle of the container, performing additional
+operations after the container is created.
 
 #### Container
 
-A container is a self contained execution environment that shares the kernel of the host system and which is (optionally) isolated from other containers in the system.
+A container is a self-contained execution environment that shares the kernel of the
+host system and which is (optionally) isolated from other containers in the system.
 
-libcontainer may be used to execute a process in a container. If a user tries to run a new process inside an existing container, the new process is added to the processes executing in the container.
+#### Using libcontainer
+
+To create a container you first have to initialize an instance of a factory
+that will handle the creation and initialization for a container.
+
+Because containers are spawned in a two-step process, you will need to provide
+arguments to a binary that will be executed as the init process for the container.
+To use the current binary, which both spawns the containers and acts as the parent,
+you can pass `os.Args[0]`; a command called `init` is set up for this purpose.
+
+```go
+root, err := libcontainer.New("/var/lib/container", libcontainer.InitArgs(os.Args[0], "init"))
+if err != nil {
+	log.Fatal(err)
+}
+```
+
+Once you have an instance of the factory, you can create a configuration
+struct describing how the container is to be created. A sample would look
+similar to this:
+
+```go
+config := &configs.Config{
+	Rootfs: rootfs,
+	Capabilities: []string{
+		"CHOWN",
+		"DAC_OVERRIDE",
+		"FSETID",
+		"FOWNER",
+		"MKNOD",
+		"NET_RAW",
+		"SETGID",
+		"SETUID",
+		"SETFCAP",
+		"SETPCAP",
+		"NET_BIND_SERVICE",
+		"SYS_CHROOT",
+		"KILL",
+		"AUDIT_WRITE",
+	},
+	Namespaces: configs.Namespaces([]configs.Namespace{
+		{Type: configs.NEWNS},
+		{Type: configs.NEWUTS},
+		{Type: configs.NEWIPC},
+		{Type: configs.NEWPID},
+		{Type: configs.NEWNET},
+	}),
+	Cgroups: &configs.Cgroup{
+		Name:            "test-container",
+		Parent:          "system",
+		AllowAllDevices: false,
+		AllowedDevices:  configs.DefaultAllowedDevices,
+	},
+
+	Devices:  configs.DefaultAutoCreatedDevices,
+	Hostname: "testing",
+	Networks: []*configs.Network{
+		{
+			Type:    "loopback",
+			Address: "127.0.0.1/0",
+			Gateway: "localhost",
+		},
+	},
+	Rlimits: []configs.Rlimit{
+		{
+			Type: syscall.RLIMIT_NOFILE,
+			Hard: uint64(1024),
+			Soft: uint64(1024),
+		},
+	},
+}
+```
+
+Once you have the configuration populated you can create a container:
+
+```go
+container, err := root.Create("container-id", config)
+```
+
+To spawn bash as the initial process inside the container and have the
+process's pid returned in order to wait, signal, or kill the process:
+
+```go
+process := &libcontainer.Process{
+	Args:   []string{"/bin/bash"},
+	Env:    []string{"PATH=/bin"},
+	User:   "daemon",
+	Stdin:  os.Stdin,
+	Stdout: os.Stdout,
+	Stderr: os.Stderr,
+}
+
+err := container.Start(process)
+if err != nil {
+	log.Fatal(err)
+}
+
+// wait for the process to finish.
+status, err := process.Wait()
+if err != nil {
+	log.Fatal(err)
+}
+
+// destroy the container.
+container.Destroy()
+```
+
+Additional ways to interact with a running container are:
+
+```go
+// return all the pids for all processes running inside the container.
+processes, err := container.Processes()
+
+// get detailed cpu, memory, io, and network statistics for the container and
+// its processes.
+stats, err := container.Stats() -#### Root file system +// pause all processes inside the container. +container.Pause() -A container runs with a directory known as its *root file system*, or *rootfs*, mounted as the file system root. The rootfs is usually a full system tree. - - -#### Configuration - -A container is initially configured by supplying configuration data when the container is created. +// resume all paused processes. +container.Resume() +``` #### nsinit -`nsinit` is a cli application which demonstrates the use of libcontainer. It is able to spawn new containers or join existing containers, based on the current directory. +`nsinit` is a cli application which demonstrates the use of libcontainer. +It is able to spawn new containers or join existing containers. A root +filesystem must be provided for use along with a container configuration file. -To use `nsinit`, cd into a Linux rootfs and copy a `container.json` file into the directory with your specified configuration. Environment, networking, and different capabilities for the container are specified in this file. The configuration is used for each process executed inside the container. +To use `nsinit`, cd into a Linux rootfs and copy a `container.json` file into +the directory with your specified configuration. Environment, networking, +and different capabilities for the container are specified in this file. +The configuration is used for each process executed inside the container. See the `sample_configs` folder for examples of what the container configuration should look like. To execute `/bin/bash` in the current directory as a container just run the following **as root**: ```bash -nsinit exec /bin/bash +nsinit exec --tty /bin/bash ``` -If you wish to spawn another process inside the container while your current bash session is running, run the same command again to get another bash shell (or change the command). 
If the original process (PID 1) dies, all other processes spawned inside the container will be killed and the namespace will be removed. +If you wish to spawn another process inside the container while your +current bash session is running, run the same command again to +get another bash shell (or change the command). If the original +process (PID 1) dies, all other processes spawned inside the container +will be killed and the namespace will be removed. -You can identify if a process is running in a container by looking to see if `state.json` is in the root of the directory. +You can identify if a process is running in a container by +looking to see if `state.json` is in the root of the directory. -You may also specify an alternate root place where the `container.json` file is read and where the `state.json` file will be saved. +You may also specify an alternate root place where +the `container.json` file is read and where the `state.json` file will be saved. #### Future See the [roadmap](ROADMAP.md). diff --git a/vendor/src/github.com/docker/libcontainer/api_temp.go b/vendor/src/github.com/docker/libcontainer/api_temp.go deleted file mode 100644 index 5c682ee344..0000000000 --- a/vendor/src/github.com/docker/libcontainer/api_temp.go +++ /dev/null @@ -1,21 +0,0 @@ -/* -Temporary API endpoint for libcontainer while the full API is finalized (api.go). -*/ -package libcontainer - -import ( - "github.com/docker/libcontainer/cgroups/fs" - "github.com/docker/libcontainer/network" -) - -// TODO(vmarmol): Complete Stats() in final libcontainer API and move users to that. -// DEPRECATED: The below portions are only to be used during the transition to the official API. -// Returns all available stats for the given container. 
-func GetStats(container *Config, state *State) (stats *ContainerStats, err error) { - stats = &ContainerStats{} - if stats.CgroupStats, err = fs.GetStats(state.CgroupPaths); err != nil { - return stats, err - } - stats.NetworkStats, err = network.GetStats(&state.NetworkState) - return stats, err -} diff --git a/vendor/src/github.com/docker/libcontainer/apparmor/apparmor.go b/vendor/src/github.com/docker/libcontainer/apparmor/apparmor.go index fb1574dfc6..3be3294d85 100644 --- a/vendor/src/github.com/docker/libcontainer/apparmor/apparmor.go +++ b/vendor/src/github.com/docker/libcontainer/apparmor/apparmor.go @@ -24,7 +24,6 @@ func ApplyProfile(name string) error { if name == "" { return nil } - cName := C.CString(name) defer C.free(unsafe.Pointer(cName)) diff --git a/vendor/src/github.com/docker/libcontainer/capabilities_linux.go b/vendor/src/github.com/docker/libcontainer/capabilities_linux.go new file mode 100644 index 0000000000..6b8b465c56 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/capabilities_linux.go @@ -0,0 +1,91 @@ +// +build linux + +package libcontainer + +import ( + "fmt" + "os" + + "github.com/syndtr/gocapability/capability" +) + +const allCapabilityTypes = capability.CAPS | capability.BOUNDS + +var capabilityList = map[string]capability.Cap{ + "SETPCAP": capability.CAP_SETPCAP, + "SYS_MODULE": capability.CAP_SYS_MODULE, + "SYS_RAWIO": capability.CAP_SYS_RAWIO, + "SYS_PACCT": capability.CAP_SYS_PACCT, + "SYS_ADMIN": capability.CAP_SYS_ADMIN, + "SYS_NICE": capability.CAP_SYS_NICE, + "SYS_RESOURCE": capability.CAP_SYS_RESOURCE, + "SYS_TIME": capability.CAP_SYS_TIME, + "SYS_TTY_CONFIG": capability.CAP_SYS_TTY_CONFIG, + "MKNOD": capability.CAP_MKNOD, + "AUDIT_WRITE": capability.CAP_AUDIT_WRITE, + "AUDIT_CONTROL": capability.CAP_AUDIT_CONTROL, + "MAC_OVERRIDE": capability.CAP_MAC_OVERRIDE, + "MAC_ADMIN": capability.CAP_MAC_ADMIN, + "NET_ADMIN": capability.CAP_NET_ADMIN, + "SYSLOG": capability.CAP_SYSLOG, + "CHOWN": capability.CAP_CHOWN, + 
"NET_RAW": capability.CAP_NET_RAW, + "DAC_OVERRIDE": capability.CAP_DAC_OVERRIDE, + "FOWNER": capability.CAP_FOWNER, + "DAC_READ_SEARCH": capability.CAP_DAC_READ_SEARCH, + "FSETID": capability.CAP_FSETID, + "KILL": capability.CAP_KILL, + "SETGID": capability.CAP_SETGID, + "SETUID": capability.CAP_SETUID, + "LINUX_IMMUTABLE": capability.CAP_LINUX_IMMUTABLE, + "NET_BIND_SERVICE": capability.CAP_NET_BIND_SERVICE, + "NET_BROADCAST": capability.CAP_NET_BROADCAST, + "IPC_LOCK": capability.CAP_IPC_LOCK, + "IPC_OWNER": capability.CAP_IPC_OWNER, + "SYS_CHROOT": capability.CAP_SYS_CHROOT, + "SYS_PTRACE": capability.CAP_SYS_PTRACE, + "SYS_BOOT": capability.CAP_SYS_BOOT, + "LEASE": capability.CAP_LEASE, + "SETFCAP": capability.CAP_SETFCAP, + "WAKE_ALARM": capability.CAP_WAKE_ALARM, + "BLOCK_SUSPEND": capability.CAP_BLOCK_SUSPEND, + "AUDIT_READ": capability.CAP_AUDIT_READ, +} + +func newCapWhitelist(caps []string) (*whitelist, error) { + l := []capability.Cap{} + for _, c := range caps { + v, ok := capabilityList[c] + if !ok { + return nil, fmt.Errorf("unknown capability %q", c) + } + l = append(l, v) + } + pid, err := capability.NewPid(os.Getpid()) + if err != nil { + return nil, err + } + return &whitelist{ + keep: l, + pid: pid, + }, nil +} + +type whitelist struct { + pid capability.Capabilities + keep []capability.Cap +} + +// dropBoundingSet drops the capability bounding set to those specified in the whitelist. +func (w *whitelist) dropBoundingSet() error { + w.pid.Clear(capability.BOUNDS) + w.pid.Set(capability.BOUNDS, w.keep...) + return w.pid.Apply(capability.BOUNDS) +} + +// drop drops all capabilities for the current process except those specified in the whitelist. +func (w *whitelist) drop() error { + w.pid.Clear(allCapabilityTypes) + w.pid.Set(allCapabilityTypes, w.keep...) 
+ return w.pid.Apply(allCapabilityTypes) +} diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/cgroups.go b/vendor/src/github.com/docker/libcontainer/cgroups/cgroups.go index 106698d18f..df7bfb3c71 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/cgroups.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/cgroups.go @@ -3,16 +3,38 @@ package cgroups import ( "fmt" - "github.com/docker/libcontainer/devices" + "github.com/docker/libcontainer/configs" ) -type FreezerState string +type Manager interface { + // Apply cgroup configuration to the process with the specified pid + Apply(pid int) error -const ( - Undefined FreezerState = "" - Frozen FreezerState = "FROZEN" - Thawed FreezerState = "THAWED" -) + // Returns the PIDs inside the cgroup set + GetPids() ([]int, error) + + // Returns statistics for the cgroup set + GetStats() (*Stats, error) + + // Toggles the freezer cgroup according with specified state + Freeze(state configs.FreezerState) error + + // Destroys the cgroup set + Destroy() error + + // NewCgroupManager() and LoadCgroupManager() require following attributes: + // Paths map[string]string + // Cgroups *cgroups.Cgroup + // Paths maps cgroup subsystem to path at which it is mounted. + // Cgroups specifies specific cgroup settings for the various subsystems + + // Returns cgroup paths to save in a state file and to be able to + // restore the object later. + GetPaths() map[string]string + + // Set the cgroup as configured. + Set(container *configs.Config) error +} type NotFoundError struct { Subsystem string @@ -32,25 +54,6 @@ func IsNotFound(err error) bool { if err == nil { return false } - _, ok := err.(*NotFoundError) return ok } - -type Cgroup struct { - Name string `json:"name,omitempty"` - Parent string `json:"parent,omitempty"` // name of parent cgroup or slice - - AllowAllDevices bool `json:"allow_all_devices,omitempty"` // If this is true allow access to any kind of device within the container. 
If false, allow access only to devices explicitly listed in the allowed_devices list. - AllowedDevices []*devices.Device `json:"allowed_devices,omitempty"` - Memory int64 `json:"memory,omitempty"` // Memory limit (in bytes) - MemoryReservation int64 `json:"memory_reservation,omitempty"` // Memory reservation or soft_limit (in bytes) - MemorySwap int64 `json:"memory_swap,omitempty"` // Total memory usage (memory + swap); set `-1' to disable swap - CpuShares int64 `json:"cpu_shares,omitempty"` // CPU shares (relative weight vs. other containers) - CpuQuota int64 `json:"cpu_quota,omitempty"` // CPU hardcap limit (in usecs). Allowed cpu time in a given period. - CpuPeriod int64 `json:"cpu_period,omitempty"` // CPU period to be used for hardcapping (in usecs). 0 to use system default. - CpusetCpus string `json:"cpuset_cpus,omitempty"` // CPU to use - CpusetMems string `json:"cpuset_mems,omitempty"` // MEM to use - Freezer FreezerState `json:"freezer,omitempty"` // set the freeze value for the process - Slice string `json:"slice,omitempty"` // Parent slice to use for systemd -} diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/apply_raw.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/apply_raw.go index 58046b0ad7..f6c0d7d597 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/apply_raw.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/apply_raw.go @@ -1,13 +1,14 @@ package fs import ( - "fmt" "io/ioutil" "os" "path/filepath" "strconv" + "sync" "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" ) var ( @@ -24,43 +25,63 @@ var ( CgroupProcesses = "cgroup.procs" ) -// The absolute path to the root of the cgroup hierarchies. -var cgroupRoot string - -// TODO(vmarmol): Report error here, we'll probably need to wait for the new API. 
-func init() { - // we can pick any subsystem to find the root - cpuRoot, err := cgroups.FindCgroupMountpoint("cpu") - if err != nil { - return - } - cgroupRoot = filepath.Dir(cpuRoot) - - if _, err := os.Stat(cgroupRoot); err != nil { - return - } -} - type subsystem interface { // Returns the stats, as 'stats', corresponding to the cgroup under 'path'. GetStats(path string, stats *cgroups.Stats) error // Removes the cgroup represented by 'data'. Remove(*data) error // Creates and joins the cgroup represented by data. - Set(*data) error + Apply(*data) error + // Set the cgroup represented by cgroup. + Set(path string, cgroup *configs.Cgroup) error +} + +type Manager struct { + Cgroups *configs.Cgroup + Paths map[string]string +} + +// The absolute path to the root of the cgroup hierarchies. +var cgroupRootLock sync.Mutex +var cgroupRoot string + +// Gets the cgroupRoot. +func getCgroupRoot() (string, error) { + cgroupRootLock.Lock() + defer cgroupRootLock.Unlock() + + if cgroupRoot != "" { + return cgroupRoot, nil + } + + root, err := cgroups.FindCgroupMountpointDir() + if err != nil { + return "", err + } + + if _, err := os.Stat(root); err != nil { + return "", err + } + + cgroupRoot = root + return cgroupRoot, nil } type data struct { root string cgroup string - c *cgroups.Cgroup + c *configs.Cgroup pid int } -func Apply(c *cgroups.Cgroup, pid int) (map[string]string, error) { - d, err := getCgroupData(c, pid) +func (m *Manager) Apply(pid int) error { + if m.Cgroups == nil { + return nil + } + + d, err := getCgroupData(m.Cgroups, pid) if err != nil { - return nil, err + return err } paths := make(map[string]string) @@ -70,27 +91,38 @@ func Apply(c *cgroups.Cgroup, pid int) (map[string]string, error) { } }() for name, sys := range subsystems { - if err := sys.Set(d); err != nil { - return nil, err + if err := sys.Apply(d); err != nil { + return err } - // FIXME: Apply should, ideally, be reentrant or be broken up into a separate + // TODO: Apply should, ideally, 
be reentrant or be broken up into a separate // create and join phase so that the cgroup hierarchy for a container can be // created then join consists of writing the process pids to cgroup.procs p, err := d.path(name) if err != nil { - if cgroups.IsNotFound(err) { - continue - } - return nil, err + return err } + if !cgroups.PathExists(p) { + continue + } + paths[name] = p } - return paths, nil + m.Paths = paths + + return nil +} + +func (m *Manager) Destroy() error { + return cgroups.RemovePaths(m.Paths) +} + +func (m *Manager) GetPaths() map[string]string { + return m.Paths } // Symmetrical public function to update device based cgroups. Also available // in the systemd implementation. -func ApplyDevices(c *cgroups.Cgroup, pid int) error { +func ApplyDevices(c *configs.Cgroup, pid int) error { d, err := getCgroupData(c, pid) if err != nil { return err @@ -98,12 +130,12 @@ func ApplyDevices(c *cgroups.Cgroup, pid int) error { devices := subsystems["devices"] - return devices.Set(d) + return devices.Apply(d) } -func GetStats(systemPaths map[string]string) (*cgroups.Stats, error) { +func (m *Manager) GetStats() (*cgroups.Stats, error) { stats := cgroups.NewStats() - for name, path := range systemPaths { + for name, path := range m.Paths { sys, ok := subsystems[name] if !ok || !cgroups.PathExists(path) { continue @@ -116,29 +148,51 @@ func GetStats(systemPaths map[string]string) (*cgroups.Stats, error) { return stats, nil } +func (m *Manager) Set(container *configs.Config) error { + for name, path := range m.Paths { + sys, ok := subsystems[name] + if !ok || !cgroups.PathExists(path) { + continue + } + if err := sys.Set(path, container.Cgroups); err != nil { + return err + } + } + + return nil +} + // Freeze toggles the container's freezer cgroup depending on the state // provided -func Freeze(c *cgroups.Cgroup, state cgroups.FreezerState) error { - d, err := getCgroupData(c, 0) +func (m *Manager) Freeze(state configs.FreezerState) error { + d, err := 
getCgroupData(m.Cgroups, 0) if err != nil { return err } - prevState := c.Freezer - c.Freezer = state + dir, err := d.path("freezer") + if err != nil { + return err + } + if !cgroups.PathExists(dir) { + return cgroups.NewNotFoundError("freezer") + } + + prevState := m.Cgroups.Freezer + m.Cgroups.Freezer = state freezer := subsystems["freezer"] - err = freezer.Set(d) + err = freezer.Set(dir, m.Cgroups) if err != nil { - c.Freezer = prevState + m.Cgroups.Freezer = prevState return err } return nil } -func GetPids(c *cgroups.Cgroup) ([]int, error) { - d, err := getCgroupData(c, 0) +func (m *Manager) GetPids() ([]int, error) { + d, err := getCgroupData(m.Cgroups, 0) if err != nil { return nil, err } @@ -147,13 +201,17 @@ func GetPids(c *cgroups.Cgroup) ([]int, error) { if err != nil { return nil, err } + if !cgroups.PathExists(dir) { + return nil, cgroups.NewNotFoundError("devices") + } return cgroups.ReadProcsFile(dir) } -func getCgroupData(c *cgroups.Cgroup, pid int) (*data, error) { - if cgroupRoot == "" { - return nil, fmt.Errorf("failed to find the cgroup root") +func getCgroupData(c *configs.Cgroup, pid int) (*data, error) { + root, err := getCgroupRoot() + if err != nil { + return nil, err } cgroup := c.Name @@ -162,7 +220,7 @@ func getCgroupData(c *cgroups.Cgroup, pid int) (*data, error) { } return &data{ - root: cgroupRoot, + root: root, cgroup: cgroup, c: c, pid: pid, @@ -178,19 +236,15 @@ func (raw *data) parent(subsystem string) (string, error) { } func (raw *data) path(subsystem string) (string, error) { + _, err := cgroups.FindCgroupMountpoint(subsystem) + // If we didn't mount the subsystem, there is no point we make the path. + if err != nil { + return "", err + } + // If the cgroup name/path is absolute do not look relative to the cgroup of the init process. 
if filepath.IsAbs(raw.cgroup) { - path := filepath.Join(raw.root, subsystem, raw.cgroup) - - if _, err := os.Stat(path); err != nil { - if os.IsNotExist(err) { - return "", cgroups.NewNotFoundError(subsystem) - } - - return "", err - } - - return path, nil + return filepath.Join(raw.root, subsystem, raw.cgroup), nil } parent, err := raw.parent(subsystem) diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/blkio.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/blkio.go index ce824d56c2..01da5d7fc7 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/blkio.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/blkio.go @@ -9,20 +9,39 @@ import ( "strings" "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" ) type BlkioGroup struct { } -func (s *BlkioGroup) Set(d *data) error { - // we just want to join this group even though we don't set anything - if _, err := d.join("blkio"); err != nil && !cgroups.IsNotFound(err) { +func (s *BlkioGroup) Apply(d *data) error { + dir, err := d.join("blkio") + if err != nil { + if cgroups.IsNotFound(err) { + return nil + } else { + return err + } + } + + if err := s.Set(dir, d.c); err != nil { return err } return nil } +func (s *BlkioGroup) Set(path string, cgroup *configs.Cgroup) error { + if cgroup.BlkioWeight != 0 { + if err := writeFile(path, "blkio.weight", strconv.FormatInt(cgroup.BlkioWeight, 10)); err != nil { + return err + } + } + + return nil +} + func (s *BlkioGroup) Remove(d *data) error { return removePath(d.path("blkio")) } diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/blkio_test.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/blkio_test.go index 6cd38cbaba..9ef93fcff2 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/blkio_test.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/blkio_test.go @@ -1,6 +1,7 @@ package fs import ( + "strconv" "testing" "github.com/docker/libcontainer/cgroups" 
@@ -72,6 +73,35 @@ func appendBlkioStatEntry(blkioStatEntries *[]cgroups.BlkioStatEntry, major, min *blkioStatEntries = append(*blkioStatEntries, cgroups.BlkioStatEntry{Major: major, Minor: minor, Value: value, Op: op}) } +func TestBlkioSetWeight(t *testing.T) { + helper := NewCgroupTestUtil("blkio", t) + defer helper.cleanup() + + const ( + weightBefore = 100 + weightAfter = 200 + ) + + helper.writeFileContents(map[string]string{ + "blkio.weight": strconv.Itoa(weightBefore), + }) + + helper.CgroupData.c.BlkioWeight = weightAfter + blkio := &BlkioGroup{} + if err := blkio.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamUint(helper.CgroupPath, "blkio.weight") + if err != nil { + t.Fatalf("Failed to parse blkio.weight - %s", err) + } + + if value != weightAfter { + t.Fatal("Got the wrong value, set blkio.weight failed.") + } +} + func TestBlkioStats(t *testing.T) { helper := NewCgroupTestUtil("blkio", t) defer helper.cleanup() diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu.go index efac9ed16a..42386fd847 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu.go @@ -7,33 +7,48 @@ import ( "strconv" "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" ) type CpuGroup struct { } -func (s *CpuGroup) Set(d *data) error { +func (s *CpuGroup) Apply(d *data) error { // We always want to join the cpu group, to allow fair cpu scheduling // on a container basis dir, err := d.join("cpu") if err != nil { + if cgroups.IsNotFound(err) { + return nil + } else { + return err + } + } + + if err := s.Set(dir, d.c); err != nil { return err } - if d.c.CpuShares != 0 { - if err := writeFile(dir, "cpu.shares", strconv.FormatInt(d.c.CpuShares, 10)); err != nil { + + return nil +} + +func (s *CpuGroup) Set(path string, cgroup 
*configs.Cgroup) error { + if cgroup.CpuShares != 0 { + if err := writeFile(path, "cpu.shares", strconv.FormatInt(cgroup.CpuShares, 10)); err != nil { return err } } - if d.c.CpuPeriod != 0 { - if err := writeFile(dir, "cpu.cfs_period_us", strconv.FormatInt(d.c.CpuPeriod, 10)); err != nil { + if cgroup.CpuPeriod != 0 { + if err := writeFile(path, "cpu.cfs_period_us", strconv.FormatInt(cgroup.CpuPeriod, 10)); err != nil { return err } } - if d.c.CpuQuota != 0 { - if err := writeFile(dir, "cpu.cfs_quota_us", strconv.FormatInt(d.c.CpuQuota, 10)); err != nil { + if cgroup.CpuQuota != 0 { + if err := writeFile(path, "cpu.cfs_quota_us", strconv.FormatInt(cgroup.CpuQuota, 10)); err != nil { return err } } + return nil } diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu_test.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu_test.go index 2470e68956..bcf4ac4e8a 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu_test.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpu_test.go @@ -2,11 +2,81 @@ package fs import ( "fmt" + "strconv" "testing" "github.com/docker/libcontainer/cgroups" ) +func TestCpuSetShares(t *testing.T) { + helper := NewCgroupTestUtil("cpu", t) + defer helper.cleanup() + + const ( + sharesBefore = 1024 + sharesAfter = 512 + ) + + helper.writeFileContents(map[string]string{ + "cpu.shares": strconv.Itoa(sharesBefore), + }) + + helper.CgroupData.c.CpuShares = sharesAfter + cpu := &CpuGroup{} + if err := cpu.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamUint(helper.CgroupPath, "cpu.shares") + if err != nil { + t.Fatalf("Failed to parse cpu.shares - %s", err) + } + + if value != sharesAfter { + t.Fatal("Got the wrong value, set cpu.shares failed.") + } +} + +func TestCpuSetBandWidth(t *testing.T) { + helper := NewCgroupTestUtil("cpu", t) + defer helper.cleanup() + + const ( + quotaBefore = 8000 + quotaAfter = 5000 + periodBefore = 10000 + 
periodAfter = 7000 + ) + + helper.writeFileContents(map[string]string{ + "cpu.cfs_quota_us": strconv.Itoa(quotaBefore), + "cpu.cfs_period_us": strconv.Itoa(periodBefore), + }) + + helper.CgroupData.c.CpuQuota = quotaAfter + helper.CgroupData.c.CpuPeriod = periodAfter + cpu := &CpuGroup{} + if err := cpu.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + quota, err := getCgroupParamUint(helper.CgroupPath, "cpu.cfs_quota_us") + if err != nil { + t.Fatalf("Failed to parse cpu.cfs_quota_us - %s", err) + } + if quota != quotaAfter { + t.Fatal("Got the wrong value, set cpu.cfs_quota_us failed.") + } + + period, err := getCgroupParamUint(helper.CgroupPath, "cpu.cfs_period_us") + if err != nil { + t.Fatalf("Failed to parse cpu.cfs_period_us - %s", err) + } + if period != periodAfter { + t.Fatal("Got the wrong value, set cpu.cfs_period_us failed.") + } +} + func TestCpuStats(t *testing.T) { helper := NewCgroupTestUtil("cpu", t) defer helper.cleanup() diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuacct.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuacct.go index 14b55ccd4e..decee85094 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuacct.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuacct.go @@ -8,6 +8,7 @@ import ( "strings" "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" "github.com/docker/libcontainer/system" ) @@ -21,7 +22,7 @@ var clockTicks = uint64(system.GetClockTicks()) type CpuacctGroup struct { } -func (s *CpuacctGroup) Set(d *data) error { +func (s *CpuacctGroup) Apply(d *data) error { // we just want to join this group even though we don't set anything if _, err := d.join("cpuacct"); err != nil && !cgroups.IsNotFound(err) { return err @@ -30,6 +31,10 @@ func (s *CpuacctGroup) Set(d *data) error { return nil } +func (s *CpuacctGroup) Set(path string, cgroup *configs.Cgroup) error { + return nil +} + func (s *CpuacctGroup) 
Remove(d *data) error { return removePath(d.path("cpuacct")) } diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuset.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuset.go index ff67a53e87..d8465a666b 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuset.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuset.go @@ -8,17 +8,35 @@ import ( "strconv" "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" ) type CpusetGroup struct { } -func (s *CpusetGroup) Set(d *data) error { +func (s *CpusetGroup) Apply(d *data) error { dir, err := d.path("cpuset") if err != nil { return err } - return s.SetDir(dir, d.c.CpusetCpus, d.c.CpusetMems, d.pid) + + return s.ApplyDir(dir, d.c, d.pid) +} + +func (s *CpusetGroup) Set(path string, cgroup *configs.Cgroup) error { + if cgroup.CpusetCpus != "" { + if err := writeFile(path, "cpuset.cpus", cgroup.CpusetCpus); err != nil { + return err + } + } + + if cgroup.CpusetMems != "" { + if err := writeFile(path, "cpuset.mems", cgroup.CpusetMems); err != nil { + return err + } + } + + return nil } func (s *CpusetGroup) Remove(d *data) error { @@ -29,7 +47,7 @@ func (s *CpusetGroup) GetStats(path string, stats *cgroups.Stats) error { return nil } -func (s *CpusetGroup) SetDir(dir, cpus string, mems string, pid int) error { +func (s *CpusetGroup) ApplyDir(dir string, cgroup *configs.Cgroup, pid int) error { if err := s.ensureParent(dir); err != nil { return err } @@ -40,17 +58,10 @@ func (s *CpusetGroup) SetDir(dir, cpus string, mems string, pid int) error { return err } - // If we don't use --cpuset-xxx, the default value inherit from parent cgroup - // is set in s.ensureParent, otherwise, use the value we set - if cpus != "" { - if err := writeFile(dir, "cpuset.cpus", cpus); err != nil { - return err - } - } - if mems != "" { - if err := writeFile(dir, "cpuset.mems", mems); err != nil { - return err - } + // the default values inherit from parent 
cgroup are already set in + // s.ensureParent, cover these if we have our own + if err := s.Set(dir, cgroup); err != nil { + return err } return nil diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuset_test.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuset_test.go new file mode 100644 index 0000000000..7449cdca17 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/cpuset_test.go @@ -0,0 +1,63 @@ +package fs + +import ( + "testing" +) + +func TestCpusetSetCpus(t *testing.T) { + helper := NewCgroupTestUtil("cpuset", t) + defer helper.cleanup() + + const ( + cpusBefore = "0" + cpusAfter = "1-3" + ) + + helper.writeFileContents(map[string]string{ + "cpuset.cpus": cpusBefore, + }) + + helper.CgroupData.c.CpusetCpus = cpusAfter + cpuset := &CpusetGroup{} + if err := cpuset.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamString(helper.CgroupPath, "cpuset.cpus") + if err != nil { + t.Fatalf("Failed to parse cpuset.cpus - %s", err) + } + + if value != cpusAfter { + t.Fatal("Got the wrong value, set cpuset.cpus failed.") + } +} + +func TestCpusetSetMems(t *testing.T) { + helper := NewCgroupTestUtil("cpuset", t) + defer helper.cleanup() + + const ( + memsBefore = "0" + memsAfter = "1" + ) + + helper.writeFileContents(map[string]string{ + "cpuset.mems": memsBefore, + }) + + helper.CgroupData.c.CpusetMems = memsAfter + cpuset := &CpusetGroup{} + if err := cpuset.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamString(helper.CgroupPath, "cpuset.mems") + if err != nil { + t.Fatalf("Failed to parse cpuset.mems - %s", err) + } + + if value != memsAfter { + t.Fatal("Got the wrong value, set cpuset.mems failed.") + } +} diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/devices.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/devices.go index 98d5d2d7dd..fab8323e93 100644 --- 
a/vendor/src/github.com/docker/libcontainer/cgroups/fs/devices.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/devices.go @@ -1,27 +1,43 @@ package fs -import "github.com/docker/libcontainer/cgroups" +import ( + "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" +) type DevicesGroup struct { } -func (s *DevicesGroup) Set(d *data) error { +func (s *DevicesGroup) Apply(d *data) error { dir, err := d.join("devices") if err != nil { + if cgroups.IsNotFound(err) { + return nil + } else { + return err + } + } + + if err := s.Set(dir, d.c); err != nil { return err } - if !d.c.AllowAllDevices { - if err := writeFile(dir, "devices.deny", "a"); err != nil { + return nil +} + +func (s *DevicesGroup) Set(path string, cgroup *configs.Cgroup) error { + if !cgroup.AllowAllDevices { + if err := writeFile(path, "devices.deny", "a"); err != nil { return err } - for _, dev := range d.c.AllowedDevices { - if err := writeFile(dir, "devices.allow", dev.GetCgroupAllowString()); err != nil { + for _, dev := range cgroup.AllowedDevices { + if err := writeFile(path, "devices.allow", dev.CgroupString()); err != nil { return err } } } + return nil } diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/devices_test.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/devices_test.go new file mode 100644 index 0000000000..18bb127462 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/devices_test.go @@ -0,0 +1,46 @@ +package fs + +import ( + "testing" + + "github.com/docker/libcontainer/configs" +) + +var ( + allowedDevices = []*configs.Device{ + { + Path: "/dev/zero", + Type: 'c', + Major: 1, + Minor: 5, + Permissions: "rwm", + FileMode: 0666, + }, + } + allowedList = "c 1:5 rwm" +) + +func TestDevicesSetAllow(t *testing.T) { + helper := NewCgroupTestUtil("devices", t) + defer helper.cleanup() + + helper.writeFileContents(map[string]string{ + "devices.deny": "a", + }) + + helper.CgroupData.c.AllowAllDevices = 
false + helper.CgroupData.c.AllowedDevices = allowedDevices + devices := &DevicesGroup{} + if err := devices.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamString(helper.CgroupPath, "devices.allow") + if err != nil { + t.Fatalf("Failed to parse devices.allow - %s", err) + } + + if value != allowedList { + t.Fatal("Got the wrong value, set devices.allow failed.") + } +} diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/freezer.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/freezer.go index c6b677fa95..5e08e05302 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/freezer.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/freezer.go @@ -5,37 +5,46 @@ import ( "time" "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" ) type FreezerGroup struct { } -func (s *FreezerGroup) Set(d *data) error { - switch d.c.Freezer { - case cgroups.Frozen, cgroups.Thawed: - dir, err := d.path("freezer") - if err != nil { +func (s *FreezerGroup) Apply(d *data) error { + dir, err := d.join("freezer") + if err != nil { + if cgroups.IsNotFound(err) { + return nil + } else { return err } + } - if err := writeFile(dir, "freezer.state", string(d.c.Freezer)); err != nil { + if err := s.Set(dir, d.c); err != nil { + return err + } + + return nil +} + +func (s *FreezerGroup) Set(path string, cgroup *configs.Cgroup) error { + switch cgroup.Freezer { + case configs.Frozen, configs.Thawed: + if err := writeFile(path, "freezer.state", string(cgroup.Freezer)); err != nil { return err } for { - state, err := readFile(dir, "freezer.state") + state, err := readFile(path, "freezer.state") if err != nil { return err } - if strings.TrimSpace(state) == string(d.c.Freezer) { + if strings.TrimSpace(state) == string(cgroup.Freezer) { break } time.Sleep(1 * time.Millisecond) } - default: - if _, err := d.join("freezer"); err != nil && !cgroups.IsNotFound(err) { - 
return err - } } return nil diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/memory.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/memory.go index 01713fd790..68e930fdc5 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/memory.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/memory.go @@ -8,16 +8,20 @@ import ( "strconv" "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" ) type MemoryGroup struct { } -func (s *MemoryGroup) Set(d *data) error { +func (s *MemoryGroup) Apply(d *data) error { dir, err := d.join("memory") - // only return an error for memory if it was specified - if err != nil && (d.c.Memory != 0 || d.c.MemoryReservation != 0 || d.c.MemorySwap != 0) { - return err + if err != nil { + if cgroups.IsNotFound(err) { + return nil + } else { + return err + } } defer func() { if err != nil { @@ -25,31 +29,42 @@ func (s *MemoryGroup) Set(d *data) error { } }() - // Only set values if some config was specified. - if d.c.Memory != 0 || d.c.MemoryReservation != 0 || d.c.MemorySwap != 0 { - if d.c.Memory != 0 { - if err := writeFile(dir, "memory.limit_in_bytes", strconv.FormatInt(d.c.Memory, 10)); err != nil { - return err - } - } - if d.c.MemoryReservation != 0 { - if err := writeFile(dir, "memory.soft_limit_in_bytes", strconv.FormatInt(d.c.MemoryReservation, 10)); err != nil { - return err - } - } - // By default, MemorySwap is set to twice the size of RAM. - // If you want to omit MemorySwap, set it to '-1'. 
- if d.c.MemorySwap == 0 { - if err := writeFile(dir, "memory.memsw.limit_in_bytes", strconv.FormatInt(d.c.Memory*2, 10)); err != nil { - return err - } - } - if d.c.MemorySwap > 0 { - if err := writeFile(dir, "memory.memsw.limit_in_bytes", strconv.FormatInt(d.c.MemorySwap, 10)); err != nil { - return err - } + if err := s.Set(dir, d.c); err != nil { + return err + } + + return nil +} + +func (s *MemoryGroup) Set(path string, cgroup *configs.Cgroup) error { + if cgroup.Memory != 0 { + if err := writeFile(path, "memory.limit_in_bytes", strconv.FormatInt(cgroup.Memory, 10)); err != nil { + return err } } + if cgroup.MemoryReservation != 0 { + if err := writeFile(path, "memory.soft_limit_in_bytes", strconv.FormatInt(cgroup.MemoryReservation, 10)); err != nil { + return err + } + } + // By default, MemorySwap is set to twice the size of Memory. + if cgroup.MemorySwap == 0 && cgroup.Memory != 0 { + if err := writeFile(path, "memory.memsw.limit_in_bytes", strconv.FormatInt(cgroup.Memory*2, 10)); err != nil { + return err + } + } + if cgroup.MemorySwap > 0 { + if err := writeFile(path, "memory.memsw.limit_in_bytes", strconv.FormatInt(cgroup.MemorySwap, 10)); err != nil { + return err + } + } + + if cgroup.OomKillDisable { + if err := writeFile(path, "memory.oom_control", "1"); err != nil { + return err + } + } + return nil } diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/memory_test.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/memory_test.go index a21cec75c0..1e939c4e88 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/memory_test.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/memory_test.go @@ -1,6 +1,7 @@ package fs import ( + "strconv" "testing" "github.com/docker/libcontainer/cgroups" @@ -14,6 +15,103 @@ rss 1024` memoryFailcnt = "100\n" ) +func TestMemorySetMemory(t *testing.T) { + helper := NewCgroupTestUtil("memory", t) + defer helper.cleanup() + + const ( + memoryBefore = 314572800 // 300M + 
memoryAfter = 524288000 // 500M + reservationBefore = 209715200 // 200M + reservationAfter = 314572800 // 300M + ) + + helper.writeFileContents(map[string]string{ + "memory.limit_in_bytes": strconv.Itoa(memoryBefore), + "memory.soft_limit_in_bytes": strconv.Itoa(reservationBefore), + }) + + helper.CgroupData.c.Memory = memoryAfter + helper.CgroupData.c.MemoryReservation = reservationAfter + memory := &MemoryGroup{} + if err := memory.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamUint(helper.CgroupPath, "memory.limit_in_bytes") + if err != nil { + t.Fatalf("Failed to parse memory.limit_in_bytes - %s", err) + } + if value != memoryAfter { + t.Fatal("Got the wrong value, set memory.limit_in_bytes failed.") + } + + value, err = getCgroupParamUint(helper.CgroupPath, "memory.soft_limit_in_bytes") + if err != nil { + t.Fatalf("Failed to parse memory.soft_limit_in_bytes - %s", err) + } + if value != reservationAfter { + t.Fatal("Got the wrong value, set memory.soft_limit_in_bytes failed.") + } +} + +func TestMemorySetMemoryswap(t *testing.T) { + helper := NewCgroupTestUtil("memory", t) + defer helper.cleanup() + + const ( + memoryswapBefore = 314572800 // 300M + memoryswapAfter = 524288000 // 500M + ) + + helper.writeFileContents(map[string]string{ + "memory.memsw.limit_in_bytes": strconv.Itoa(memoryswapBefore), + }) + + helper.CgroupData.c.MemorySwap = memoryswapAfter + memory := &MemoryGroup{} + if err := memory.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamUint(helper.CgroupPath, "memory.memsw.limit_in_bytes") + if err != nil { + t.Fatalf("Failed to parse memory.memsw.limit_in_bytes - %s", err) + } + if value != memoryswapAfter { + t.Fatal("Got the wrong value, set memory.memsw.limit_in_bytes failed.") + } +} + +func TestMemorySetMemoryswapDefault(t *testing.T) { + helper := NewCgroupTestUtil("memory", t) + defer helper.cleanup() + + const ( + 
memoryBefore = 209715200 // 200M + memoryAfter = 314572800 // 300M + memoryswapAfter = 629145600 // 300M*2 + ) + + helper.writeFileContents(map[string]string{ + "memory.limit_in_bytes": strconv.Itoa(memoryBefore), + }) + + helper.CgroupData.c.Memory = memoryAfter + memory := &MemoryGroup{} + if err := memory.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamUint(helper.CgroupPath, "memory.memsw.limit_in_bytes") + if err != nil { + t.Fatalf("Failed to parse memory.memsw.limit_in_bytes - %s", err) + } + if value != memoryswapAfter { + t.Fatal("Got the wrong value, set memory.memsw.limit_in_bytes failed.") + } +} + func TestMemoryStats(t *testing.T) { helper := NewCgroupTestUtil("memory", t) defer helper.cleanup() @@ -132,3 +230,30 @@ func TestMemoryStatsBadMaxUsageFile(t *testing.T) { t.Fatal("Expected failure") } } + +func TestMemorySetOomControl(t *testing.T) { + helper := NewCgroupTestUtil("memory", t) + defer helper.cleanup() + + const ( + oom_kill_disable = 1 // disable oom killer, default is 0 + ) + + helper.writeFileContents(map[string]string{ + "memory.oom_control": strconv.Itoa(oom_kill_disable), + }) + + memory := &MemoryGroup{} + if err := memory.Set(helper.CgroupPath, helper.CgroupData.c); err != nil { + t.Fatal(err) + } + + value, err := getCgroupParamUint(helper.CgroupPath, "memory.oom_control") + if err != nil { + t.Fatalf("Failed to parse memory.oom_control - %s", err) + } + + if value != oom_kill_disable { + t.Fatalf("Got the wrong value, set memory.oom_control failed.") + } +} diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/perf_event.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/perf_event.go index 813274d8cb..ca65f734a1 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/perf_event.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/perf_event.go @@ -2,12 +2,13 @@ package fs import ( "github.com/docker/libcontainer/cgroups" + 
"github.com/docker/libcontainer/configs" ) type PerfEventGroup struct { } -func (s *PerfEventGroup) Set(d *data) error { +func (s *PerfEventGroup) Apply(d *data) error { // we just want to join this group even though we don't set anything if _, err := d.join("perf_event"); err != nil && !cgroups.IsNotFound(err) { return err @@ -15,6 +16,10 @@ func (s *PerfEventGroup) Set(d *data) error { return nil } +func (s *PerfEventGroup) Set(path string, cgroup *configs.Cgroup) error { + return nil +} + func (s *PerfEventGroup) Remove(d *data) error { return removePath(d.path("perf_event")) } diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/util_test.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/util_test.go index 548870a8a3..37bf515781 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/util_test.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/util_test.go @@ -6,10 +6,12 @@ Creates a mock of the cgroup filesystem for the duration of the test. 
package fs import ( - "fmt" "io/ioutil" "os" + "path/filepath" "testing" + + "github.com/docker/libcontainer/configs" ) type cgroupTestUtil struct { @@ -26,13 +28,12 @@ type cgroupTestUtil struct { // Creates a new test util for the specified subsystem func NewCgroupTestUtil(subsystem string, t *testing.T) *cgroupTestUtil { - d := &data{} - tempDir, err := ioutil.TempDir("", fmt.Sprintf("%s_cgroup_test", subsystem)) + d := &data{ + c: &configs.Cgroup{}, + } + tempDir, err := ioutil.TempDir("", "cgroup_test") if err != nil { t.Fatal(err) } d.root = tempDir - testCgroupPath, err := d.path(subsystem) - if err != nil { - t.Fatal(err) - } + testCgroupPath := filepath.Join(d.root, subsystem) diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/fs/utils.go b/vendor/src/github.com/docker/libcontainer/cgroups/fs/utils.go index f37a3a485a..c2f75c8e54 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/fs/utils.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/fs/utils.go @@ -60,3 +60,13 @@ func getCgroupParamUint(cgroupPath, cgroupFile string) (uint64, error) { return parseUint(strings.TrimSpace(string(contents)), 10, 64) } + +// Gets a string value from the specified cgroup file +func getCgroupParamString(cgroupPath, cgroupFile string) (string, error) { + contents, err := ioutil.ReadFile(filepath.Join(cgroupPath, cgroupFile)) + if err != nil { + return "", err + } + + return strings.TrimSpace(string(contents)), nil +} diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_nosystemd.go b/vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_nosystemd.go index 4b9a2f5b74..95ed4ea7eb 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_nosystemd.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_nosystemd.go @@ -6,24 +6,50 @@ import ( "fmt" "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" ) +type Manager struct { + Cgroups *configs.Cgroup + 
Paths map[string]string +} + func UseSystemd() bool { return false } -func Apply(c *cgroups.Cgroup, pid int) (map[string]string, error) { - return nil, fmt.Errorf("Systemd not supported") -} - -func GetPids(c *cgroups.Cgroup) ([]int, error) { - return nil, fmt.Errorf("Systemd not supported") -} - -func ApplyDevices(c *cgroups.Cgroup, pid int) error { +func (m *Manager) Apply(pid int) error { return fmt.Errorf("Systemd not supported") } -func Freeze(c *cgroups.Cgroup, state cgroups.FreezerState) error { +func (m *Manager) GetPids() ([]int, error) { + return nil, fmt.Errorf("Systemd not supported") +} + +func (m *Manager) Destroy() error { + return fmt.Errorf("Systemd not supported") +} + +func (m *Manager) GetPaths() map[string]string { + return nil +} + +func (m *Manager) GetStats() (*cgroups.Stats, error) { + return nil, fmt.Errorf("Systemd not supported") +} + +func (m *Manager) Set(container *configs.Config) error { + return fmt.Errorf("Systemd not supported") +} + +func (m *Manager) Freeze(state configs.FreezerState) error { + return fmt.Errorf("Systemd not supported") +} + +func ApplyDevices(c *configs.Cgroup, pid int) error { + return fmt.Errorf("Systemd not supported") +} + +func Freeze(c *configs.Cgroup, state configs.FreezerState) error { return fmt.Errorf("Systemd not supported") } diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_systemd.go b/vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_systemd.go index 41dce3117d..f4358e1a64 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_systemd.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/systemd/apply_systemd.go @@ -16,21 +16,38 @@ import ( systemd "github.com/coreos/go-systemd/dbus" "github.com/docker/libcontainer/cgroups" "github.com/docker/libcontainer/cgroups/fs" + "github.com/docker/libcontainer/configs" "github.com/godbus/dbus" ) -type systemdCgroup struct { - cgroup *cgroups.Cgroup +type Manager struct { + Cgroups 
*configs.Cgroup + Paths map[string]string } type subsystem interface { - GetStats(string, *cgroups.Stats) error + // Returns the stats, as 'stats', corresponding to the cgroup under 'path'. + GetStats(path string, stats *cgroups.Stats) error + // Set the cgroup represented by cgroup. + Set(path string, cgroup *configs.Cgroup) error +} + +var subsystems = map[string]subsystem{ + "devices": &fs.DevicesGroup{}, + "memory": &fs.MemoryGroup{}, + "cpu": &fs.CpuGroup{}, + "cpuset": &fs.CpusetGroup{}, + "cpuacct": &fs.CpuacctGroup{}, + "blkio": &fs.BlkioGroup{}, + "perf_event": &fs.PerfEventGroup{}, + "freezer": &fs.FreezerGroup{}, } var ( - connLock sync.Mutex - theConn *systemd.Conn - hasStartTransientUnit bool + connLock sync.Mutex + theConn *systemd.Conn + hasStartTransientUnit bool + hasTransientDefaultDependencies bool ) func newProp(name string, units interface{}) systemd.Property { @@ -64,6 +81,18 @@ func UseSystemd() bool { if dbusError, ok := err.(dbus.Error); ok { if dbusError.Name == "org.freedesktop.DBus.Error.UnknownMethod" { hasStartTransientUnit = false + return hasStartTransientUnit + } + } + } + + // Assume StartTransientUnit on a scope allows DefaultDependencies + hasTransientDefaultDependencies = true + ddf := newProp("DefaultDependencies", false) + if _, err := theConn.StartTransientUnit("docker-systemd-test-default-dependencies.scope", "replace", ddf); err != nil { + if dbusError, ok := err.(dbus.Error); ok { + if dbusError.Name == "org.freedesktop.DBus.Error.PropertyReadOnly" { + hasTransientDefaultDependencies = false } } } @@ -81,16 +110,14 @@ func getIfaceForUnit(unitName string) string { return "Unit" } -func Apply(c *cgroups.Cgroup, pid int) (map[string]string, error) { +func (m *Manager) Apply(pid int) error { var ( + c = m.Cgroups unitName = getUnitName(c) slice = "system.slice" properties []systemd.Property - res = &systemdCgroup{} ) - res.cgroup = c - if c.Slice != "" { slice = c.Slice } @@ -108,6 +135,11 @@ func Apply(c *cgroups.Cgroup, pid 
int) (map[string]string, error) { newProp("CPUAccounting", true), newProp("BlockIOAccounting", true)) + if hasTransientDefaultDependencies { + properties = append(properties, + newProp("DefaultDependencies", false)) + } + if c.Memory != 0 { properties = append(properties, newProp("MemoryLimit", uint64(c.Memory))) @@ -119,20 +151,29 @@ func Apply(c *cgroups.Cgroup, pid int) (map[string]string, error) { newProp("CPUShares", uint64(c.CpuShares))) } - if _, err := theConn.StartTransientUnit(unitName, "replace", properties...); err != nil { - return nil, err + if c.BlkioWeight != 0 { + properties = append(properties, + newProp("BlockIOWeight", uint64(c.BlkioWeight))) } - if !c.AllowAllDevices { - if err := joinDevices(c, pid); err != nil { - return nil, err - } + if _, err := theConn.StartTransientUnit(unitName, "replace", properties...); err != nil { + return err + } + + if err := joinDevices(c, pid); err != nil { + return err + } + + // TODO: CpuQuota and CpuPeriod not available in systemd + // we need to manually join the cpu.cfs_quota_us and cpu.cfs_period_us + if err := joinCpu(c, pid); err != nil { + return err } // -1 disables memorySwap - if c.MemorySwap >= 0 && (c.Memory != 0 || c.MemorySwap > 0) { + if c.MemorySwap >= 0 && c.Memory != 0 { if err := joinMemory(c, pid); err != nil { - return nil, err + return err } } @@ -140,11 +181,11 @@ func Apply(c *cgroups.Cgroup, pid int) (map[string]string, error) { // we need to manually join the freezer and cpuset cgroup in systemd // because it does not currently support it via the dbus api. 
if err := joinFreezer(c, pid); err != nil { - return nil, err + return err } if err := joinCpuset(c, pid); err != nil { - return nil, err + return err } paths := make(map[string]string) @@ -158,24 +199,53 @@ func Apply(c *cgroups.Cgroup, pid int) (map[string]string, error) { "perf_event", "freezer", } { - subsystemPath, err := getSubsystemPath(res.cgroup, sysname) + subsystemPath, err := getSubsystemPath(m.Cgroups, sysname) if err != nil { // Don't fail if a cgroup hierarchy was not found, just skip this subsystem if cgroups.IsNotFound(err) { continue } - return nil, err + return err } paths[sysname] = subsystemPath } - return paths, nil + + m.Paths = paths + + return nil +} + +func (m *Manager) Destroy() error { + return cgroups.RemovePaths(m.Paths) +} + +func (m *Manager) GetPaths() map[string]string { + return m.Paths } func writeFile(dir, file, data string) error { return ioutil.WriteFile(filepath.Join(dir, file), []byte(data), 0700) } -func joinFreezer(c *cgroups.Cgroup, pid int) error { +func joinCpu(c *configs.Cgroup, pid int) error { + path, err := getSubsystemPath(c, "cpu") + if err != nil { + return err + } + if c.CpuQuota != 0 { + if err = ioutil.WriteFile(filepath.Join(path, "cpu.cfs_quota_us"), []byte(strconv.FormatInt(c.CpuQuota, 10)), 0700); err != nil { + return err + } + } + if c.CpuPeriod != 0 { + if err = ioutil.WriteFile(filepath.Join(path, "cpu.cfs_period_us"), []byte(strconv.FormatInt(c.CpuPeriod, 10)), 0700); err != nil { + return err + } + } + return nil +} + +func joinFreezer(c *configs.Cgroup, pid int) error { path, err := getSubsystemPath(c, "freezer") if err != nil { return err @@ -188,7 +258,7 @@ func joinFreezer(c *cgroups.Cgroup, pid int) error { return ioutil.WriteFile(filepath.Join(path, "cgroup.procs"), []byte(strconv.Itoa(pid)), 0700) } -func getSubsystemPath(c *cgroups.Cgroup, subsystem string) (string, error) { +func getSubsystemPath(c *configs.Cgroup, subsystem string) (string, error) { mountpoint, err := 
cgroups.FindCgroupMountpoint(subsystem) if err != nil { return "", err @@ -207,8 +277,8 @@ func getSubsystemPath(c *cgroups.Cgroup, subsystem string) (string, error) { return filepath.Join(mountpoint, initPath, slice, getUnitName(c)), nil } -func Freeze(c *cgroups.Cgroup, state cgroups.FreezerState) error { - path, err := getSubsystemPath(c, "freezer") +func (m *Manager) Freeze(state configs.FreezerState) error { + path, err := getSubsystemPath(m.Cgroups, "freezer") if err != nil { return err } @@ -226,11 +296,14 @@ func Freeze(c *cgroups.Cgroup, state cgroups.FreezerState) error { } time.Sleep(1 * time.Millisecond) } + + m.Cgroups.Freezer = state + return nil } -func GetPids(c *cgroups.Cgroup) ([]int, error) { - path, err := getSubsystemPath(c, "cpu") +func (m *Manager) GetPids() ([]int, error) { + path, err := getSubsystemPath(m.Cgroups, "cpu") if err != nil { return nil, err } @@ -238,7 +311,26 @@ func GetPids(c *cgroups.Cgroup) ([]int, error) { return cgroups.ReadProcsFile(path) } -func getUnitName(c *cgroups.Cgroup) string { +func (m *Manager) GetStats() (*cgroups.Stats, error) { + stats := cgroups.NewStats() + for name, path := range m.Paths { + sys, ok := subsystems[name] + if !ok || !cgroups.PathExists(path) { + continue + } + if err := sys.GetStats(path, stats); err != nil { + return nil, err + } + } + + return stats, nil +} + +func (m *Manager) Set(container *configs.Config) error { + panic("not implemented") +} + +func getUnitName(c *configs.Cgroup) string { return fmt.Sprintf("%s-%s.scope", c.Parent, c.Name) } @@ -253,7 +345,7 @@ func getUnitName(c *cgroups.Cgroup) string { // Note: we can't use systemd to set up the initial limits, and then change the cgroup // because systemd will re-write the device settings if it needs to re-apply the cgroup context. // This happens at least for v208 when any sibling unit is started. 
-func joinDevices(c *cgroups.Cgroup, pid int) error { +func joinDevices(c *configs.Cgroup, pid int) error { path, err := getSubsystemPath(c, "devices") if err != nil { return err @@ -267,26 +359,26 @@ func joinDevices(c *cgroups.Cgroup, pid int) error { return err } - if err := writeFile(path, "devices.deny", "a"); err != nil { - return err - } - - for _, dev := range c.AllowedDevices { - if err := writeFile(path, "devices.allow", dev.GetCgroupAllowString()); err != nil { + if !c.AllowAllDevices { + if err := writeFile(path, "devices.deny", "a"); err != nil { + return err + } + } + for _, dev := range c.AllowedDevices { + if err := writeFile(path, "devices.allow", dev.CgroupString()); err != nil { return err } } - return nil } // Symmetrical public function to update device based cgroups. Also available // in the fs implementation. -func ApplyDevices(c *cgroups.Cgroup, pid int) error { +func ApplyDevices(c *configs.Cgroup, pid int) error { return joinDevices(c, pid) } -func joinMemory(c *cgroups.Cgroup, pid int) error { +func joinMemory(c *configs.Cgroup, pid int) error { memorySwap := c.MemorySwap if memorySwap == 0 { @@ -305,7 +397,7 @@ func joinMemory(c *cgroups.Cgroup, pid int) error { // systemd does not atm set up the cpuset controller, so we must manually // join it. 
Additionally that is a very finicky controller where each // level must have a full setup as the default for a new directory is "no cpus" -func joinCpuset(c *cgroups.Cgroup, pid int) error { +func joinCpuset(c *configs.Cgroup, pid int) error { path, err := getSubsystemPath(c, "cpuset") if err != nil { return err @@ -313,5 +405,5 @@ func joinCpuset(c *cgroups.Cgroup, pid int) error { s := &fs.CpusetGroup{} - return s.SetDir(path, c.CpusetCpus, c.CpusetMems, pid) + return s.ApplyDir(path, c, pid) } diff --git a/vendor/src/github.com/docker/libcontainer/cgroups/utils.go b/vendor/src/github.com/docker/libcontainer/cgroups/utils.go index a360904cce..c6c400c7d3 100644 --- a/vendor/src/github.com/docker/libcontainer/cgroups/utils.go +++ b/vendor/src/github.com/docker/libcontainer/cgroups/utils.go @@ -34,6 +34,21 @@ func FindCgroupMountpoint(subsystem string) (string, error) { return "", NewNotFoundError(subsystem) } +func FindCgroupMountpointDir() (string, error) { + mounts, err := mount.GetMounts() + if err != nil { + return "", err + } + + for _, mount := range mounts { + if mount.Fstype == "cgroup" { + return filepath.Dir(mount.Mountpoint), nil + } + } + + return "", NewNotFoundError("cgroup") +} + type Mount struct { Mountpoint string Subsystems []string diff --git a/vendor/src/github.com/docker/libcontainer/config.go b/vendor/src/github.com/docker/libcontainer/config.go deleted file mode 100644 index 643601adac..0000000000 --- a/vendor/src/github.com/docker/libcontainer/config.go +++ /dev/null @@ -1,154 +0,0 @@ -package libcontainer - -import ( - "github.com/docker/libcontainer/cgroups" - "github.com/docker/libcontainer/mount" - "github.com/docker/libcontainer/network" -) - -type MountConfig mount.MountConfig - -type Network network.Network - -type NamespaceType string - -const ( - NEWNET NamespaceType = "NEWNET" - NEWPID NamespaceType = "NEWPID" - NEWNS NamespaceType = "NEWNS" - NEWUTS NamespaceType = "NEWUTS" - NEWIPC NamespaceType = "NEWIPC" - NEWUSER 
NamespaceType = "NEWUSER" -) - -// Namespace defines configuration for each namespace. It specifies an -// alternate path that is able to be joined via setns. -type Namespace struct { - Type NamespaceType `json:"type"` - Path string `json:"path,omitempty"` -} - -type Namespaces []Namespace - -func (n *Namespaces) Remove(t NamespaceType) bool { - i := n.index(t) - if i == -1 { - return false - } - *n = append((*n)[:i], (*n)[i+1:]...) - return true -} - -func (n *Namespaces) Add(t NamespaceType, path string) { - i := n.index(t) - if i == -1 { - *n = append(*n, Namespace{Type: t, Path: path}) - return - } - (*n)[i].Path = path -} - -func (n *Namespaces) index(t NamespaceType) int { - for i, ns := range *n { - if ns.Type == t { - return i - } - } - return -1 -} - -func (n *Namespaces) Contains(t NamespaceType) bool { - return n.index(t) != -1 -} - -// Config defines configuration options for executing a process inside a contained environment. -type Config struct { - // Mount specific options. 
- MountConfig *MountConfig `json:"mount_config,omitempty"` - - // Pathname to container's root filesystem - RootFs string `json:"root_fs,omitempty"` - - // Hostname optionally sets the container's hostname if provided - Hostname string `json:"hostname,omitempty"` - - // User will set the uid and gid of the executing process running inside the container - User string `json:"user,omitempty"` - - // WorkingDir will change the processes current working directory inside the container's rootfs - WorkingDir string `json:"working_dir,omitempty"` - - // Env will populate the processes environment with the provided values - // Any values from the parent processes will be cleared before the values - // provided in Env are provided to the process - Env []string `json:"environment,omitempty"` - - // Tty when true will allocate a pty slave on the host for access by the container's process - // and ensure that it is mounted inside the container's rootfs - Tty bool `json:"tty,omitempty"` - - // Namespaces specifies the container's namespaces that it should setup when cloning the init process - // If a namespace is not provided that namespace is shared from the container's parent process - Namespaces Namespaces `json:"namespaces,omitempty"` - - // Capabilities specify the capabilities to keep when executing the process inside the container - // All capbilities not specified will be dropped from the processes capability mask - Capabilities []string `json:"capabilities,omitempty"` - - // Networks specifies the container's network setup to be created - Networks []*Network `json:"networks,omitempty"` - - // Routes can be specified to create entries in the route table as the container is started - Routes []*Route `json:"routes,omitempty"` - - // Cgroups specifies specific cgroup settings for the various subsystems that the container is - // placed into to limit the resources the container has available - Cgroups *cgroups.Cgroup `json:"cgroups,omitempty"` - - // AppArmorProfile specifies 
the profile to apply to the process running in the container and is - // change at the time the process is execed - AppArmorProfile string `json:"apparmor_profile,omitempty"` - - // ProcessLabel specifies the label to apply to the process running in the container. It is - // commonly used by selinux - ProcessLabel string `json:"process_label,omitempty"` - - // RestrictSys will remount /proc/sys, /sys, and mask over sysrq-trigger as well as /proc/irq and - // /proc/bus - RestrictSys bool `json:"restrict_sys,omitempty"` - - // Rlimits specifies the resource limits, such as max open files, to set in the container - // If Rlimits are not set, the container will inherit rlimits from the parent process - Rlimits []Rlimit `json:"rlimits,omitempty"` - - // AdditionalGroups specifies the gids that should be added to supplementary groups - // in addition to those that the user belongs to. - AdditionalGroups []int `json:"additional_groups,omitempty"` -} - -// Routes can be specified to create entries in the route table as the container is started -// -// All of destination, source, and gateway should be either IPv4 or IPv6. -// One of the three options must be present, and ommitted entries will use their -// IP family default for the route table. For IPv4 for example, setting the -// gateway to 1.2.3.4 and the interface to eth0 will set up a standard -// destination of 0.0.0.0(or *) when viewed in the route table. -type Route struct { - // Sets the destination and mask, should be a CIDR. Accepts IPv4 and IPv6 - Destination string `json:"destination,omitempty"` - - // Sets the source and mask, should be a CIDR. Accepts IPv4 and IPv6 - Source string `json:"source,omitempty"` - - // Sets the gateway. 
Accepts IPv4 and IPv6 - Gateway string `json:"gateway,omitempty"` - - // The device to set this route up for, for example: eth0 - InterfaceName string `json:"interface_name,omitempty"` -} - -type Rlimit struct { - Type int `json:"type,omitempty"` - Hard uint64 `json:"hard,omitempty"` - Soft uint64 `json:"soft,omitempty"` -} diff --git a/vendor/src/github.com/docker/libcontainer/configs/cgroup.go b/vendor/src/github.com/docker/libcontainer/configs/cgroup.go new file mode 100644 index 0000000000..8bf174c195 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/configs/cgroup.go @@ -0,0 +1,57 @@ +package configs + +type FreezerState string + +const ( + Undefined FreezerState = "" + Frozen FreezerState = "FROZEN" + Thawed FreezerState = "THAWED" +) + +type Cgroup struct { + Name string `json:"name"` + + // name of parent cgroup or slice + Parent string `json:"parent"` + + // If this is true allow access to any kind of device within the container. If false, allow access only to devices explicitly listed in the allowed_devices list. + AllowAllDevices bool `json:"allow_all_devices"` + + AllowedDevices []*Device `json:"allowed_devices"` + + // Memory limit (in bytes) + Memory int64 `json:"memory"` + + // Memory reservation or soft_limit (in bytes) + MemoryReservation int64 `json:"memory_reservation"` + + // Total memory usage (memory + swap); set `-1' to disable swap + MemorySwap int64 `json:"memory_swap"` + + // CPU shares (relative weight vs. other containers) + CpuShares int64 `json:"cpu_shares"` + + // CPU hardcap limit (in usecs). Allowed cpu time in a given period. + CpuQuota int64 `json:"cpu_quota"` + + // CPU period to be used for hardcapping (in usecs). 0 to use system default. + CpuPeriod int64 `json:"cpu_period"` + + // CPU to use + CpusetCpus string `json:"cpuset_cpus"` + + // MEM to use + CpusetMems string `json:"cpuset_mems"` + + // Specifies per cgroup weight, range is from 10 to 1000. 
+ BlkioWeight int64 `json:"blkio_weight"` + + // set the freeze value for the process + Freezer FreezerState `json:"freezer"` + + // Parent slice to use for systemd TODO: remove in favor of parent + Slice string `json:"slice"` + + // Whether to disable OOM Killer + OomKillDisable bool `json:"oom_kill_disable"` +} diff --git a/vendor/src/github.com/docker/libcontainer/configs/config.go b/vendor/src/github.com/docker/libcontainer/configs/config.go new file mode 100644 index 0000000000..b07f252b5e --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/configs/config.go @@ -0,0 +1,145 @@ +package configs + +import "fmt" + +type Rlimit struct { + Type int `json:"type"` + Hard uint64 `json:"hard"` + Soft uint64 `json:"soft"` +} + +// IDMap represents UID/GID Mappings for User Namespaces. +type IDMap struct { + ContainerID int `json:"container_id"` + HostID int `json:"host_id"` + Size int `json:"size"` +} + +// Config defines configuration options for executing a process inside a contained environment. +type Config struct { + // NoPivotRoot will use MS_MOVE and a chroot to jail the process into the container's rootfs + // This is a common option when the container is running in ramdisk + NoPivotRoot bool `json:"no_pivot_root"` + + // ParentDeathSignal specifies the signal that is sent to the container's process in the case + // that the parent process dies. + ParentDeathSignal int `json:"parent_death_signal"` + + // PivotDir allows a custom directory inside the container's root filesystem to be used as pivot, when NoPivotRoot is not set. + // When a custom PivotDir is not set, a temporary dir inside the root filesystem will be used. The pivot dir needs to be writeable. + // This is required when using read only root filesystems. In these cases, a read/writeable path can be (bind) mounted somewhere inside the root filesystem to act as pivot. + PivotDir string `json:"pivot_dir"` + + // Path to a directory containing the container's root filesystem. 

+ Rootfs string `json:"rootfs"` + + // Readonlyfs will remount the container's rootfs as readonly where only externally mounted + // bind mounts are writtable. + Readonlyfs bool `json:"readonlyfs"` + + // Mounts specify additional source and destination paths that will be mounted inside the container's + // rootfs and mount namespace if specified + Mounts []*Mount `json:"mounts"` + + // The device nodes that should be automatically created within the container upon container start. Note, make sure that the node is marked as allowed in the cgroup as well! + Devices []*Device `json:"devices"` + + MountLabel string `json:"mount_label"` + + // Hostname optionally sets the container's hostname if provided + Hostname string `json:"hostname"` + + // Namespaces specifies the container's namespaces that it should setup when cloning the init process + // If a namespace is not provided that namespace is shared from the container's parent process + Namespaces Namespaces `json:"namespaces"` + + // Capabilities specify the capabilities to keep when executing the process inside the container + // All capbilities not specified will be dropped from the processes capability mask + Capabilities []string `json:"capabilities"` + + // Networks specifies the container's network setup to be created + Networks []*Network `json:"networks"` + + // Routes can be specified to create entries in the route table as the container is started + Routes []*Route `json:"routes"` + + // Cgroups specifies specific cgroup settings for the various subsystems that the container is + // placed into to limit the resources the container has available + Cgroups *Cgroup `json:"cgroups"` + + // AppArmorProfile specifies the profile to apply to the process running in the container and is + // change at the time the process is execed + AppArmorProfile string `json:"apparmor_profile"` + + // ProcessLabel specifies the label to apply to the process running in the container. 
It is + // commonly used by selinux + ProcessLabel string `json:"process_label"` + + // Rlimits specifies the resource limits, such as max open files, to set in the container + // If Rlimits are not set, the container will inherit rlimits from the parent process + Rlimits []Rlimit `json:"rlimits"` + + // AdditionalGroups specifies the gids that should be added to supplementary groups + // in addition to those that the user belongs to. + AdditionalGroups []int `json:"additional_groups"` + + // UidMappings is an array of User ID mappings for User Namespaces + UidMappings []IDMap `json:"uid_mappings"` + + // GidMappings is an array of Group ID mappings for User Namespaces + GidMappings []IDMap `json:"gid_mappings"` + + // MaskPaths specifies paths within the container's rootfs to mask over with a bind + // mount pointing to /dev/null as to prevent reads of the file. + MaskPaths []string `json:"mask_paths"` + + // ReadonlyPaths specifies paths within the container's rootfs to remount as read-only + // so that these files prevent any writes. + ReadonlyPaths []string `json:"readonly_paths"` +} + +// Gets the root uid for the process on host which could be non-zero +// when user namespaces are enabled. +func (c Config) HostUID() (int, error) { + if c.Namespaces.Contains(NEWUSER) { + if c.UidMappings == nil { + return -1, fmt.Errorf("User namespaces enabled, but no user mappings found.") + } + id, found := c.hostIDFromMapping(0, c.UidMappings) + if !found { + return -1, fmt.Errorf("User namespaces enabled, but no root user mapping found.") + } + return id, nil + } + // Return default root uid 0 + return 0, nil +} + +// Gets the root uid for the process on host which could be non-zero +// when user namespaces are enabled. 
+func (c Config) HostGID() (int, error) { + if c.Namespaces.Contains(NEWUSER) { + if c.GidMappings == nil { + return -1, fmt.Errorf("User namespaces enabled, but no gid mappings found.") + } + id, found := c.hostIDFromMapping(0, c.GidMappings) + if !found { + return -1, fmt.Errorf("User namespaces enabled, but no root user mapping found.") + } + return id, nil + } + // Return default root uid 0 + return 0, nil +} + +// Utility function that gets a host ID for a container ID from user namespace map +// if that ID is present in the map. +func (c Config) hostIDFromMapping(containerID int, uMap []IDMap) (int, bool) { + for _, m := range uMap { + if (containerID >= m.ContainerID) && (containerID <= (m.ContainerID + m.Size - 1)) { + hostID := m.HostID + (containerID - m.ContainerID) + return hostID, true + } + } + return -1, false +} diff --git a/vendor/src/github.com/docker/libcontainer/config_test.go b/vendor/src/github.com/docker/libcontainer/configs/config_test.go similarity index 58% rename from vendor/src/github.com/docker/libcontainer/config_test.go rename to vendor/src/github.com/docker/libcontainer/configs/config_test.go index f2287fc741..765d5e50db 100644 --- a/vendor/src/github.com/docker/libcontainer/config_test.go +++ b/vendor/src/github.com/docker/libcontainer/configs/config_test.go @@ -1,12 +1,11 @@ -package libcontainer +package configs import ( "encoding/json" + "fmt" "os" "path/filepath" "testing" - - "github.com/docker/libcontainer/devices" ) // Checks whether the expected capability is specified in the capabilities. 
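The `hostIDFromMapping` helper added above resolves a container ID to a host ID through a linear range lookup. A standalone restatement of that arithmetic (the names here are illustrative, not the library's exported types):

```go
package main

import "fmt"

// idMap mirrors configs.IDMap: container IDs in
// [containerID, containerID+size) map linearly onto host IDs
// starting at hostID.
type idMap struct {
	containerID, hostID, size int
}

// hostID returns the host ID for cid, or false if no mapping covers it —
// the same logic as hostIDFromMapping in the hunk above.
func hostID(cid int, maps []idMap) (int, bool) {
	for _, m := range maps {
		if cid >= m.containerID && cid <= m.containerID+m.size-1 {
			return m.hostID + (cid - m.containerID), true
		}
	}
	return -1, false
}

func main() {
	// Container root (uid 0) mapped to host uid 1000.
	id, ok := hostID(0, []idMap{{containerID: 0, hostID: 1000, size: 1}})
	fmt.Println(id, ok) // 1000 true
}
```

`HostUID` and `HostGID` are just this lookup applied to container ID 0, returning an error when user namespaces are enabled but no mapping covers root.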
@@ -19,13 +18,13 @@ func contains(expected string, values []string) bool { return false } -func containsDevice(expected *devices.Device, values []*devices.Device) bool { +func containsDevice(expected *Device, values []*Device) bool { for _, d := range values { if d.Path == expected.Path && - d.CgroupPermissions == expected.CgroupPermissions && + d.Permissions == expected.Permissions && d.FileMode == expected.FileMode && - d.MajorNumber == expected.MajorNumber && - d.MinorNumber == expected.MinorNumber && + d.Major == expected.Major && + d.Minor == expected.Minor && d.Type == expected.Type { return true } @@ -34,7 +33,7 @@ func containsDevice(expected *devices.Device, values []*devices.Device) bool { } func loadConfig(name string) (*Config, error) { - f, err := os.Open(filepath.Join("sample_configs", name)) + f, err := os.Open(filepath.Join("../sample_configs", name)) if err != nil { return nil, err } @@ -45,6 +44,34 @@ func loadConfig(name string) (*Config, error) { return nil, err } + // Check that a config doesn't contain extra fields + var configMap, abstractMap map[string]interface{} + + if _, err := f.Seek(0, 0); err != nil { + return nil, err + } + + if err := json.NewDecoder(f).Decode(&abstractMap); err != nil { + return nil, err + } + + configData, err := json.Marshal(&container) + if err != nil { + return nil, err + } + + if err := json.Unmarshal(configData, &configMap); err != nil { + return nil, err + } + + for k := range configMap { + delete(abstractMap, k) + } + + if len(abstractMap) != 0 { + return nil, fmt.Errorf("unknown fields: %s", abstractMap) + } + return container, nil } @@ -59,11 +86,6 @@ func TestConfigJsonFormat(t *testing.T) { t.Fail() } - if !container.Tty { - t.Log("tty should be set to true") - t.Fail() - } - if !container.Namespaces.Contains(NEWNET) { t.Log("namespaces should contain NEWNET") t.Fail() @@ -101,11 +123,6 @@ func TestConfigJsonFormat(t *testing.T) { t.Fail() } - if n.VethPrefix != "veth" { - t.Logf("veth prefix should be 
veth but received %q", n.VethPrefix) - t.Fail() - } - if n.Gateway != "172.17.42.1" { t.Logf("veth gateway should be 172.17.42.1 but received %q", n.Gateway) t.Fail() @@ -119,18 +136,12 @@ func TestConfigJsonFormat(t *testing.T) { break } } - - for _, d := range devices.DefaultSimpleDevices { - if !containsDevice(d, container.MountConfig.DeviceNodes) { + for _, d := range DefaultSimpleDevices { + if !containsDevice(d, container.Devices) { t.Logf("expected device configuration for %s", d.Path) t.Fail() } } - - if !container.RestrictSys { - t.Log("expected restrict sys to be true") - t.Fail() - } } func TestApparmorProfile(t *testing.T) { @@ -154,8 +165,8 @@ func TestSelinuxLabels(t *testing.T) { if container.ProcessLabel != label { t.Fatalf("expected process label %q but received %q", label, container.ProcessLabel) } - if container.MountConfig.MountLabel != label { - t.Fatalf("expected mount label %q but received %q", label, container.MountConfig.MountLabel) + if container.MountLabel != label { + t.Fatalf("expected mount label %q but received %q", label, container.MountLabel) } } @@ -170,3 +181,69 @@ func TestRemoveNamespace(t *testing.T) { t.Fatalf("namespaces should have 0 items but reports %d", len(ns)) } } + +func TestHostUIDNoUSERNS(t *testing.T) { + config := &Config{ + Namespaces: Namespaces{}, + } + uid, err := config.HostUID() + if err != nil { + t.Fatal(err) + } + if uid != 0 { + t.Fatalf("expected uid 0 with no USERNS but received %d", uid) + } +} + +func TestHostUIDWithUSERNS(t *testing.T) { + config := &Config{ + Namespaces: Namespaces{{Type: NEWUSER}}, + UidMappings: []IDMap{ + { + ContainerID: 0, + HostID: 1000, + Size: 1, + }, + }, + } + uid, err := config.HostUID() + if err != nil { + t.Fatal(err) + } + if uid != 1000 { + t.Fatalf("expected uid 1000 with no USERNS but received %d", uid) + } +} + +func TestHostGIDNoUSERNS(t *testing.T) { + config := &Config{ + Namespaces: Namespaces{}, + } + uid, err := config.HostGID() + if err != nil { + 
t.Fatal(err) + } + if uid != 0 { + t.Fatalf("expected gid 0 with no USERNS but received %d", uid) + } +} + +func TestHostGIDWithUSERNS(t *testing.T) { + config := &Config{ + Namespaces: Namespaces{{Type: NEWUSER}}, + GidMappings: []IDMap{ + { + ContainerID: 0, + HostID: 1000, + Size: 1, + }, + }, + } + uid, err := config.HostGID() + if err != nil { + t.Fatal(err) + } + if uid != 1000 { + t.Fatalf("expected gid 1000 with no USERNS but received %d", uid) + } +} diff --git a/vendor/src/github.com/docker/libcontainer/configs/device.go b/vendor/src/github.com/docker/libcontainer/configs/device.go new file mode 100644 index 0000000000..abff26696e --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/configs/device.go @@ -0,0 +1,52 @@ +package configs + +import ( + "fmt" + "os" +) + +const ( + Wildcard = -1 +) + +type Device struct { + // Device type, block, char, etc. + Type rune `json:"type"` + + // Path to the device. + Path string `json:"path"` + + // Major is the device's major number. + Major int64 `json:"major"` + + // Minor is the device's minor number. + Minor int64 `json:"minor"` + + // Cgroup permissions format, rwm. + Permissions string `json:"permissions"` + + // FileMode permission bits for the device. + FileMode os.FileMode `json:"file_mode"` + + // Uid of the device. + Uid uint32 `json:"uid"` + + // Gid of the device. + Gid uint32 `json:"gid"` +} + +func (d *Device) CgroupString() string { + return fmt.Sprintf("%c %s:%s %s", d.Type, deviceNumberString(d.Major), deviceNumberString(d.Minor), d.Permissions) +} + +func (d *Device) Mkdev() int { + return int((d.Major << 8) | (d.Minor & 0xff) | ((d.Minor & 0xfff00) << 12)) +} + +// deviceNumberString converts the device number to a string return result. 
+func deviceNumberString(number int64) string { + if number == Wildcard { + return "*" + } + return fmt.Sprint(number) +} diff --git a/vendor/src/github.com/docker/libcontainer/configs/device_defaults.go b/vendor/src/github.com/docker/libcontainer/configs/device_defaults.go new file mode 100644 index 0000000000..70fa4af049 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/configs/device_defaults.go @@ -0,0 +1,137 @@ +package configs + +var ( + // These are devices that are to be both allowed and created. + DefaultSimpleDevices = []*Device{ + // /dev/null and zero + { + Path: "/dev/null", + Type: 'c', + Major: 1, + Minor: 3, + Permissions: "rwm", + FileMode: 0666, + }, + { + Path: "/dev/zero", + Type: 'c', + Major: 1, + Minor: 5, + Permissions: "rwm", + FileMode: 0666, + }, + + { + Path: "/dev/full", + Type: 'c', + Major: 1, + Minor: 7, + Permissions: "rwm", + FileMode: 0666, + }, + + // consoles and ttys + { + Path: "/dev/tty", + Type: 'c', + Major: 5, + Minor: 0, + Permissions: "rwm", + FileMode: 0666, + }, + + // /dev/urandom,/dev/random + { + Path: "/dev/urandom", + Type: 'c', + Major: 1, + Minor: 9, + Permissions: "rwm", + FileMode: 0666, + }, + { + Path: "/dev/random", + Type: 'c', + Major: 1, + Minor: 8, + Permissions: "rwm", + FileMode: 0666, + }, + } + DefaultAllowedDevices = append([]*Device{ + // allow mknod for any device + { + Type: 'c', + Major: Wildcard, + Minor: Wildcard, + Permissions: "m", + }, + { + Type: 'b', + Major: Wildcard, + Minor: Wildcard, + Permissions: "m", + }, + + { + Path: "/dev/console", + Type: 'c', + Major: 5, + Minor: 1, + Permissions: "rwm", + }, + { + Path: "/dev/tty0", + Type: 'c', + Major: 4, + Minor: 0, + Permissions: "rwm", + }, + { + Path: "/dev/tty1", + Type: 'c', + Major: 4, + Minor: 1, + Permissions: "rwm", + }, + // /dev/pts/ - pts namespaces are "coming soon" + { + Path: "", + Type: 'c', + Major: 136, + Minor: Wildcard, + Permissions: "rwm", + }, + { + Path: "", + Type: 'c', + Major: 5, + Minor: 2, + 
Permissions: "rwm", + }, + + // tuntap + { + Path: "", + Type: 'c', + Major: 10, + Minor: 200, + Permissions: "rwm", + }, + }, DefaultSimpleDevices...) + DefaultAutoCreatedDevices = append([]*Device{ + { + // /dev/fuse is created but not allowed. + // This is to allow java to work. Because java + // Insists on there being a /dev/fuse + // https://github.com/docker/docker/issues/514 + // https://github.com/docker/docker/issues/2393 + // + Path: "/dev/fuse", + Type: 'c', + Major: 10, + Minor: 229, + Permissions: "rwm", + }, + }, DefaultSimpleDevices...) +) diff --git a/vendor/src/github.com/docker/libcontainer/configs/mount.go b/vendor/src/github.com/docker/libcontainer/configs/mount.go new file mode 100644 index 0000000000..7b3dea3312 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/configs/mount.go @@ -0,0 +1,21 @@ +package configs + +type Mount struct { + // Source path for the mount. + Source string `json:"source"` + + // Destination path for the mount inside the container. + Destination string `json:"destination"` + + // Device the mount is for. + Device string `json:"device"` + + // Mount flags. + Flags int `json:"flags"` + + // Mount data applied to the mount. + Data string `json:"data"` + + // Relabel source if set, "z" indicates shared, "Z" indicates unshared. + Relabel string `json:"relabel"` +} diff --git a/vendor/src/github.com/docker/libcontainer/configs/namespaces.go b/vendor/src/github.com/docker/libcontainer/configs/namespaces.go new file mode 100644 index 0000000000..9078e6abfc --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/configs/namespaces.go @@ -0,0 +1,109 @@ +package configs + +import ( + "fmt" + "syscall" +) + +type NamespaceType string + +const ( + NEWNET NamespaceType = "NEWNET" + NEWPID NamespaceType = "NEWPID" + NEWNS NamespaceType = "NEWNS" + NEWUTS NamespaceType = "NEWUTS" + NEWIPC NamespaceType = "NEWIPC" + NEWUSER NamespaceType = "NEWUSER" +) + +// Namespace defines configuration for each namespace. 
It specifies an +// alternate path that is able to be joined via setns. +type Namespace struct { + Type NamespaceType `json:"type"` + Path string `json:"path"` +} + +func (n *Namespace) Syscall() int { + return namespaceInfo[n.Type] +} + +func (n *Namespace) GetPath(pid int) string { + if n.Path != "" { + return n.Path + } + return fmt.Sprintf("/proc/%d/ns/%s", pid, n.file()) +} + +func (n *Namespace) file() string { + file := "" + switch n.Type { + case NEWNET: + file = "net" + case NEWNS: + file = "mnt" + case NEWPID: + file = "pid" + case NEWIPC: + file = "ipc" + case NEWUSER: + file = "user" + case NEWUTS: + file = "uts" + } + return file +} + +type Namespaces []Namespace + +func (n *Namespaces) Remove(t NamespaceType) bool { + i := n.index(t) + if i == -1 { + return false + } + *n = append((*n)[:i], (*n)[i+1:]...) + return true +} + +func (n *Namespaces) Add(t NamespaceType, path string) { + i := n.index(t) + if i == -1 { + *n = append(*n, Namespace{Type: t, Path: path}) + return + } + (*n)[i].Path = path +} + +func (n *Namespaces) index(t NamespaceType) int { + for i, ns := range *n { + if ns.Type == t { + return i + } + } + return -1 +} + +func (n *Namespaces) Contains(t NamespaceType) bool { + return n.index(t) != -1 +} + +var namespaceInfo = map[NamespaceType]int{ + NEWNET: syscall.CLONE_NEWNET, + NEWNS: syscall.CLONE_NEWNS, + NEWUSER: syscall.CLONE_NEWUSER, + NEWIPC: syscall.CLONE_NEWIPC, + NEWUTS: syscall.CLONE_NEWUTS, + NEWPID: syscall.CLONE_NEWPID, +} + +// CloneFlags parses the container's Namespaces options to set the correct +// flags on clone, unshare. This functions returns flags only for new namespaces. 
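`CloneFlags` above ORs together one `CLONE_*` bit per namespace that has no explicit path; entries that carry a path are skipped because they are joined later via setns. A self-contained sketch of that logic, with the Linux flag values hardcoded so it builds on any platform (on Linux these match the `syscall` package constants):

```go
package main

import "fmt"

// Linux clone(2) flag values, as defined in <linux/sched.h>.
const (
	cloneNewns  = 0x00020000
	cloneNewpid = 0x20000000
	cloneNewnet = 0x40000000
)

type namespace struct {
	flag int
	path string // non-empty: join an existing namespace via setns instead
}

// cloneFlags mirrors Namespaces.CloneFlags: only path-less entries
// contribute a bit to the clone/unshare flag set.
func cloneFlags(nss []namespace) uintptr {
	var flag int
	for _, n := range nss {
		if n.path != "" {
			continue
		}
		flag |= n.flag
	}
	return uintptr(flag)
}

func main() {
	f := cloneFlags([]namespace{
		{flag: cloneNewpid},                         // new pid namespace
		{flag: cloneNewnet, path: "/proc/1/ns/net"}, // joined later, no bit
	})
	fmt.Printf("%#x\n", f) // 0x20000000
}
```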
+func (n *Namespaces) CloneFlags() uintptr { + var flag int + for _, v := range *n { + if v.Path != "" { + continue + } + flag |= namespaceInfo[v.Type] + } + return uintptr(flag) +} diff --git a/vendor/src/github.com/docker/libcontainer/configs/network.go b/vendor/src/github.com/docker/libcontainer/configs/network.go new file mode 100644 index 0000000000..9d5ed7a65f --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/configs/network.go @@ -0,0 +1,72 @@ +package configs + +// Network defines configuration for a container's networking stack +// +// The network configuration can be omited from a container causing the +// container to be setup with the host's networking stack +type Network struct { + // Type sets the networks type, commonly veth and loopback + Type string `json:"type"` + + // Name of the network interface + Name string `json:"name"` + + // The bridge to use. + Bridge string `json:"bridge"` + + // MacAddress contains the MAC address to set on the network interface + MacAddress string `json:"mac_address"` + + // Address contains the IPv4 and mask to set on the network interface + Address string `json:"address"` + + // Gateway sets the gateway address that is used as the default for the interface + Gateway string `json:"gateway"` + + // IPv6Address contains the IPv6 and mask to set on the network interface + IPv6Address string `json:"ipv6_address"` + + // IPv6Gateway sets the ipv6 gateway address that is used as the default for the interface + IPv6Gateway string `json:"ipv6_gateway"` + + // Mtu sets the mtu value for the interface and will be mirrored on both the host and + // container's interfaces if a pair is created, specifically in the case of type veth + // Note: This does not apply to loopback interfaces. 
+	Mtu int `json:"mtu"`
+
+	// TxQueueLen sets the tx_queuelen value for the interface and will be mirrored on both the host and
+	// container's interfaces if a pair is created, specifically in the case of type veth
+	// Note: This does not apply to loopback interfaces.
+	TxQueueLen int `json:"txqueuelen"`
+
+	// HostInterfaceName is a unique name of a veth pair that resides in the host interface of the
+	// container.
+	HostInterfaceName string `json:"host_interface_name"`
+
+	// HairpinMode specifies if hairpin NAT should be enabled on the virtual interface
+	// bridge port in the case of type veth
+	// Note: This is unsupported on some systems.
+	// Note: This does not apply to loopback interfaces.
+	HairpinMode bool `json:"hairpin_mode"`
+}
+
+// Routes can be specified to create entries in the route table as the container is started
+//
+// All of destination, source, and gateway should be either IPv4 or IPv6.
+// One of the three options must be present, and omitted entries will use their
+// IP family default for the route table. For IPv4, for example, setting the
+// gateway to 1.2.3.4 and the interface to eth0 will set up a standard
+// destination of 0.0.0.0 (or *) when viewed in the route table.
+type Route struct {
+	// Sets the destination and mask, should be a CIDR. Accepts IPv4 and IPv6
+	Destination string `json:"destination"`
+
+	// Sets the source and mask, should be a CIDR. Accepts IPv4 and IPv6
+	Source string `json:"source"`
+
+	// Sets the gateway.
Accepts IPv4 and IPv6
+	Gateway string `json:"gateway"`
+
+	// The device to set this route up for, for example: eth0
+	InterfaceName string `json:"interface_name"`
+}
diff --git a/vendor/src/github.com/docker/libcontainer/configs/validate/config.go b/vendor/src/github.com/docker/libcontainer/configs/validate/config.go
new file mode 100644
index 0000000000..98926dd26e
--- /dev/null
+++ b/vendor/src/github.com/docker/libcontainer/configs/validate/config.go
@@ -0,0 +1,93 @@
+package validate
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+
+	"github.com/docker/libcontainer/configs"
+)
+
+type Validator interface {
+	Validate(*configs.Config) error
+}
+
+func New() Validator {
+	return &ConfigValidator{}
+}
+
+type ConfigValidator struct {
+}
+
+func (v *ConfigValidator) Validate(config *configs.Config) error {
+	if err := v.rootfs(config); err != nil {
+		return err
+	}
+	if err := v.network(config); err != nil {
+		return err
+	}
+	if err := v.hostname(config); err != nil {
+		return err
+	}
+	if err := v.security(config); err != nil {
+		return err
+	}
+	if err := v.usernamespace(config); err != nil {
+		return err
+	}
+	return nil
+}
+
+// rootfs validates that the rootfs is an absolute path and is not a symlink
+// to the container's root filesystem.
+func (v *ConfigValidator) rootfs(config *configs.Config) error { + cleaned, err := filepath.Abs(config.Rootfs) + if err != nil { + return err + } + if cleaned, err = filepath.EvalSymlinks(cleaned); err != nil { + return err + } + if config.Rootfs != cleaned { + return fmt.Errorf("%s is not an absolute path or is a symlink", config.Rootfs) + } + return nil +} + +func (v *ConfigValidator) network(config *configs.Config) error { + if !config.Namespaces.Contains(configs.NEWNET) { + if len(config.Networks) > 0 || len(config.Routes) > 0 { + return fmt.Errorf("unable to apply network settings without a private NET namespace") + } + } + return nil +} + +func (v *ConfigValidator) hostname(config *configs.Config) error { + if config.Hostname != "" && !config.Namespaces.Contains(configs.NEWUTS) { + return fmt.Errorf("unable to set hostname without a private UTS namespace") + } + return nil +} + +func (v *ConfigValidator) security(config *configs.Config) error { + // restrict sys without mount namespace + if (len(config.MaskPaths) > 0 || len(config.ReadonlyPaths) > 0) && + !config.Namespaces.Contains(configs.NEWNS) { + return fmt.Errorf("unable to restrict sys entries without a private MNT namespace") + } + return nil +} + +func (v *ConfigValidator) usernamespace(config *configs.Config) error { + if config.Namespaces.Contains(configs.NEWUSER) { + if _, err := os.Stat("/proc/self/ns/user"); os.IsNotExist(err) { + return fmt.Errorf("USER namespaces aren't enabled in the kernel") + } + } else { + if config.UidMappings != nil || config.GidMappings != nil { + return fmt.Errorf("User namespace mappings specified, but USER namespace isn't enabled in the config") + } + } + return nil +} diff --git a/vendor/src/github.com/docker/libcontainer/console.go b/vendor/src/github.com/docker/libcontainer/console.go new file mode 100644 index 0000000000..042a2a2e48 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/console.go @@ -0,0 +1,15 @@ +package libcontainer + +import "io" + 
+// Console represents a pseudo TTY. +type Console interface { + io.ReadWriter + io.Closer + + // Path returns the filesystem path to the slave side of the pty. + Path() string + + // Fd returns the fd for the master of the pty. + Fd() uintptr +} diff --git a/vendor/src/github.com/docker/libcontainer/console/console.go b/vendor/src/github.com/docker/libcontainer/console/console.go deleted file mode 100644 index 438e670420..0000000000 --- a/vendor/src/github.com/docker/libcontainer/console/console.go +++ /dev/null @@ -1,128 +0,0 @@ -// +build linux - -package console - -import ( - "fmt" - "os" - "path/filepath" - "syscall" - "unsafe" - - "github.com/docker/libcontainer/label" -) - -// Setup initializes the proper /dev/console inside the rootfs path -func Setup(rootfs, consolePath, mountLabel string) error { - oldMask := syscall.Umask(0000) - defer syscall.Umask(oldMask) - - if err := os.Chmod(consolePath, 0600); err != nil { - return err - } - - if err := os.Chown(consolePath, 0, 0); err != nil { - return err - } - - if err := label.SetFileLabel(consolePath, mountLabel); err != nil { - return fmt.Errorf("set file label %s %s", consolePath, err) - } - - dest := filepath.Join(rootfs, "dev/console") - - f, err := os.Create(dest) - if err != nil && !os.IsExist(err) { - return fmt.Errorf("create %s %s", dest, err) - } - - if f != nil { - f.Close() - } - - if err := syscall.Mount(consolePath, dest, "bind", syscall.MS_BIND, ""); err != nil { - return fmt.Errorf("bind %s to %s %s", consolePath, dest, err) - } - - return nil -} - -func OpenAndDup(consolePath string) error { - slave, err := OpenTerminal(consolePath, syscall.O_RDWR) - if err != nil { - return fmt.Errorf("open terminal %s", err) - } - - if err := syscall.Dup2(int(slave.Fd()), 0); err != nil { - return err - } - - if err := syscall.Dup2(int(slave.Fd()), 1); err != nil { - return err - } - - return syscall.Dup2(int(slave.Fd()), 2) -} - -// Unlockpt unlocks the slave pseudoterminal device corresponding to the 
master pseudoterminal referred to by f. -// Unlockpt should be called before opening the slave side of a pseudoterminal. -func Unlockpt(f *os.File) error { - var u int32 - - return Ioctl(f.Fd(), syscall.TIOCSPTLCK, uintptr(unsafe.Pointer(&u))) -} - -// Ptsname retrieves the name of the first available pts for the given master. -func Ptsname(f *os.File) (string, error) { - var n int32 - - if err := Ioctl(f.Fd(), syscall.TIOCGPTN, uintptr(unsafe.Pointer(&n))); err != nil { - return "", err - } - - return fmt.Sprintf("/dev/pts/%d", n), nil -} - -// CreateMasterAndConsole will open /dev/ptmx on the host and retreive the -// pts name for use as the pty slave inside the container -func CreateMasterAndConsole() (*os.File, string, error) { - master, err := os.OpenFile("/dev/ptmx", syscall.O_RDWR|syscall.O_NOCTTY|syscall.O_CLOEXEC, 0) - if err != nil { - return nil, "", err - } - - console, err := Ptsname(master) - if err != nil { - return nil, "", err - } - - if err := Unlockpt(master); err != nil { - return nil, "", err - } - - return master, console, nil -} - -// OpenPtmx opens /dev/ptmx, i.e. the PTY master. -func OpenPtmx() (*os.File, error) { - // O_NOCTTY and O_CLOEXEC are not present in os package so we use the syscall's one for all. 
- return os.OpenFile("/dev/ptmx", syscall.O_RDONLY|syscall.O_NOCTTY|syscall.O_CLOEXEC, 0)
-}
-
-// OpenTerminal is a clone of os.OpenFile without the O_CLOEXEC
-// used to open the pty slave inside the container namespace
-func OpenTerminal(name string, flag int) (*os.File, error) {
-	r, e := syscall.Open(name, flag, 0)
-	if e != nil {
-		return nil, &os.PathError{Op: "open", Path: name, Err: e}
-	}
-	return os.NewFile(uintptr(r), name), nil
-}
-
-func Ioctl(fd uintptr, flag, data uintptr) error {
-	if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, flag, data); err != 0 {
-		return err
-	}
-
-	return nil
-}
diff --git a/vendor/src/github.com/docker/libcontainer/console_linux.go b/vendor/src/github.com/docker/libcontainer/console_linux.go
new file mode 100644
index 0000000000..afdc2976c4
--- /dev/null
+++ b/vendor/src/github.com/docker/libcontainer/console_linux.go
@@ -0,0 +1,147 @@
+// +build linux
+
+package libcontainer
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"syscall"
+	"unsafe"
+
+	"github.com/docker/libcontainer/label"
+)
+
+// newConsole returns an initialized console that can be used within a container by copying bytes
+// from the master side to the slave that is attached as the tty for the container's init process.
+func newConsole(uid, gid int) (Console, error) {
+	master, err := os.OpenFile("/dev/ptmx", syscall.O_RDWR|syscall.O_NOCTTY|syscall.O_CLOEXEC, 0)
+	if err != nil {
+		return nil, err
+	}
+	console, err := ptsname(master)
+	if err != nil {
+		return nil, err
+	}
+	if err := unlockpt(master); err != nil {
+		return nil, err
+	}
+	if err := os.Chmod(console, 0600); err != nil {
+		return nil, err
+	}
+	if err := os.Chown(console, uid, gid); err != nil {
+		return nil, err
+	}
+	return &linuxConsole{
+		slavePath: console,
+		master:    master,
+	}, nil
+}
+
+// newConsoleFromPath is an internal function returning an initialized console for use inside
+// a container's MNT namespace.
+func newConsoleFromPath(slavePath string) *linuxConsole {
+	return &linuxConsole{
+		slavePath: slavePath,
+	}
+}
+
+// linuxConsole is a linux pseudo TTY for use within a container.
+type linuxConsole struct {
+	master    *os.File
+	slavePath string
+}
+
+func (c *linuxConsole) Fd() uintptr {
+	return c.master.Fd()
+}
+
+func (c *linuxConsole) Path() string {
+	return c.slavePath
+}
+
+func (c *linuxConsole) Read(b []byte) (int, error) {
+	return c.master.Read(b)
+}
+
+func (c *linuxConsole) Write(b []byte) (int, error) {
+	return c.master.Write(b)
+}
+
+func (c *linuxConsole) Close() error {
+	if m := c.master; m != nil {
+		return m.Close()
+	}
+	return nil
+}
+
+// mount initializes the console inside the rootfs, mounting with the specified mount label
+// and applying the correct ownership of the console.
+func (c *linuxConsole) mount(rootfs, mountLabel string, uid, gid int) error {
+	oldMask := syscall.Umask(0000)
+	defer syscall.Umask(oldMask)
+	if err := label.SetFileLabel(c.slavePath, mountLabel); err != nil {
+		return err
+	}
+	dest := filepath.Join(rootfs, "/dev/console")
+	f, err := os.Create(dest)
+	if err != nil && !os.IsExist(err) {
+		return err
+	}
+	if f != nil {
+		f.Close()
+	}
+	return syscall.Mount(c.slavePath, dest, "bind", syscall.MS_BIND, "")
+}
+
+// dupStdio opens the slavePath for the console and dup2s the fds to the current
+// process's stdio, fd 0,1,2.
+func (c *linuxConsole) dupStdio() error {
+	slave, err := c.open(syscall.O_RDWR)
+	if err != nil {
+		return err
+	}
+	fd := int(slave.Fd())
+	for _, i := range []int{0, 1, 2} {
+		if err := syscall.Dup2(fd, i); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// open is a clone of os.OpenFile without the O_CLOEXEC used to open the pty slave.
+func (c *linuxConsole) open(flag int) (*os.File, error) { + r, e := syscall.Open(c.slavePath, flag, 0) + if e != nil { + return nil, &os.PathError{ + Op: "open", + Path: c.slavePath, + Err: e, + } + } + return os.NewFile(uintptr(r), c.slavePath), nil +} + +func ioctl(fd uintptr, flag, data uintptr) error { + if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, flag, data); err != 0 { + return err + } + return nil +} + +// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f. +// unlockpt should be called before opening the slave side of a pty. +func unlockpt(f *os.File) error { + var u int32 + return ioctl(f.Fd(), syscall.TIOCSPTLCK, uintptr(unsafe.Pointer(&u))) +} + +// ptsname retrieves the name of the first available pts for the given master. +func ptsname(f *os.File) (string, error) { + var n int32 + if err := ioctl(f.Fd(), syscall.TIOCGPTN, uintptr(unsafe.Pointer(&n))); err != nil { + return "", err + } + return fmt.Sprintf("/dev/pts/%d", n), nil +} diff --git a/vendor/src/github.com/docker/libcontainer/container.go b/vendor/src/github.com/docker/libcontainer/container.go index 307e8cbcbb..35bdfd781f 100644 --- a/vendor/src/github.com/docker/libcontainer/container.go +++ b/vendor/src/github.com/docker/libcontainer/container.go @@ -1,8 +1,53 @@ -/* -NOTE: The API is in flux and mainly not implemented. Proceed with caution until further notice. -*/ +// Libcontainer provides a native Go implementation for creating containers +// with namespaces, cgroups, capabilities, and filesystem access controls. +// It allows you to manage the lifecycle of the container performing additional operations +// after the container is created. package libcontainer +import ( + "github.com/docker/libcontainer/configs" +) + +// The status of a container. +type Status int + +const ( + // The container exists and is running. + Running Status = iota + 1 + + // The container exists, it is in the process of being paused. 
+	Pausing
+
+	// The container exists, but all its processes are paused.
+	Paused
+
+	// The container does not exist.
+	Destroyed
+)
+
+// State represents a running container's state
+type State struct {
+	// ID is the container ID.
+	ID string `json:"id"`
+
+	// InitProcessPid is the init process id in the parent namespace.
+	InitProcessPid int `json:"init_process_pid"`
+
+	// InitProcessStartTime is the init process start time.
+	InitProcessStartTime string `json:"init_process_start"`
+
+	// Path to all the cgroups setup for a container. Key is cgroup subsystem name
+	// with the value as the path.
+	CgroupPaths map[string]string `json:"cgroup_paths"`
+
+	// NamespacePaths are filepaths to the container's namespaces. Key is the namespace type
+	// with the value as the path.
+	NamespacePaths map[configs.NamespaceType]string `json:"namespace_paths"`
+
+	// Config is the container's configuration.
+	Config configs.Config `json:"config"`
+}
+
 // A libcontainer container object.
 //
 // Each container is thread-safe within the same process. Since a container can
@@ -12,67 +57,88 @@ type Container interface {
 	// Returns the ID of the container
 	ID() string
 
-	// Returns the current run state of the container.
+	// Returns the current status of the container.
 	//
-	// Errors:
+	// errors:
 	// ContainerDestroyed - Container no longer exists,
-	// SystemError - System error.
-	RunState() (*RunState, Error)
+	// Systemerror - System error.
+	Status() (Status, error)
+
+	// State returns the current container's state information.
+	//
+	// errors:
+	// Systemerror - System error.
+	State() (*State, error)
 
 	// Returns the current config of the container.
-	Config() *Config
+	Config() configs.Config
 
-	// Start a process inside the container. Returns the PID of the new process (in the caller process's namespace) and a channel that will return the exit status of the process whenever it dies.
+	// Returns the PIDs inside this container.
The PIDs are in the namespace of the calling process. // - // Errors: + // errors: + // ContainerDestroyed - Container no longer exists, + // Systemerror - System error. + // + // Some of the returned PIDs may no longer refer to processes in the Container, unless + // the Container state is PAUSED in which case every PID in the slice is valid. + Processes() ([]int, error) + + // Returns statistics for the container. + // + // errors: + // ContainerDestroyed - Container no longer exists, + // Systemerror - System error. + Stats() (*Stats, error) + + // Set cgroup resources of container as configured + // + // We can use this to change resources when containers are running. + // + // errors: + // Systemerror - System error. + Set(config configs.Config) error + + // Start a process inside the container. Returns error if process fails to + // start. You can track process lifecycle with passed Process structure. + // + // errors: // ContainerDestroyed - Container no longer exists, // ConfigInvalid - config is invalid, // ContainerPaused - Container is paused, - // SystemError - System error. - Start(config *ProcessConfig) (pid int, exitChan chan int, err Error) + // Systemerror - System error. + Start(process *Process) (err error) // Destroys the container after killing all running processes. // // Any event registrations are removed before the container is destroyed. // No error is returned if the container is already destroyed. // - // Errors: - // SystemError - System error. - Destroy() Error - - // Returns the PIDs inside this container. The PIDs are in the namespace of the calling process. - // - // Errors: - // ContainerDestroyed - Container no longer exists, - // SystemError - System error. - // - // Some of the returned PIDs may no longer refer to processes in the Container, unless - // the Container state is PAUSED in which case every PID in the slice is valid. - Processes() ([]int, Error) - - // Returns statistics for the container. 
- // - // Errors: - // ContainerDestroyed - Container no longer exists, - // SystemError - System error. - Stats() (*ContainerStats, Error) + // errors: + // Systemerror - System error. + Destroy() error // If the Container state is RUNNING or PAUSING, sets the Container state to PAUSING and pauses // the execution of any user processes. Asynchronously, when the container finished being paused the // state is changed to PAUSED. // If the Container state is PAUSED, do nothing. // - // Errors: + // errors: // ContainerDestroyed - Container no longer exists, - // SystemError - System error. - Pause() Error + // Systemerror - System error. + Pause() error // If the Container state is PAUSED, resumes the execution of any user processes in the // Container before setting the Container state to RUNNING. // If the Container state is RUNNING, do nothing. // - // Errors: + // errors: // ContainerDestroyed - Container no longer exists, - // SystemError - System error. - Resume() Error + // Systemerror - System error. + Resume() error + + // NotifyOOM returns a read-only channel signaling when the container receives an OOM notification. + // + // errors: + // Systemerror - System error. 
+ NotifyOOM() (<-chan struct{}, error) } diff --git a/vendor/src/github.com/docker/libcontainer/container_linux.go b/vendor/src/github.com/docker/libcontainer/container_linux.go new file mode 100644 index 0000000000..c44c8daccc --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/container_linux.go @@ -0,0 +1,307 @@ +// +build linux + +package libcontainer + +import ( + "encoding/json" + "fmt" + "os" + "os/exec" + "path/filepath" + "sync" + "syscall" + + log "github.com/Sirupsen/logrus" + "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" +) + +type linuxContainer struct { + id string + root string + config *configs.Config + cgroupManager cgroups.Manager + initPath string + initArgs []string + initProcess parentProcess + m sync.Mutex +} + +// ID returns the container's unique ID +func (c *linuxContainer) ID() string { + return c.id +} + +// Config returns the container's configuration +func (c *linuxContainer) Config() configs.Config { + return *c.config +} + +func (c *linuxContainer) Status() (Status, error) { + c.m.Lock() + defer c.m.Unlock() + return c.currentStatus() +} + +func (c *linuxContainer) State() (*State, error) { + c.m.Lock() + defer c.m.Unlock() + return c.currentState() +} + +func (c *linuxContainer) Processes() ([]int, error) { + pids, err := c.cgroupManager.GetPids() + if err != nil { + return nil, newSystemError(err) + } + return pids, nil +} + +func (c *linuxContainer) Stats() (*Stats, error) { + var ( + err error + stats = &Stats{} + ) + if stats.CgroupStats, err = c.cgroupManager.GetStats(); err != nil { + return stats, newSystemError(err) + } + for _, iface := range c.config.Networks { + switch iface.Type { + case "veth": + istats, err := getNetworkInterfaceStats(iface.HostInterfaceName) + if err != nil { + return stats, newSystemError(err) + } + stats.Interfaces = append(stats.Interfaces, istats) + } + } + return stats, nil +} + +func (c *linuxContainer) Set(config configs.Config) error { + 
c.m.Lock() + defer c.m.Unlock() + c.config = &config + return c.cgroupManager.Set(c.config) +} + +func (c *linuxContainer) Start(process *Process) error { + c.m.Lock() + defer c.m.Unlock() + status, err := c.currentStatus() + if err != nil { + return err + } + doInit := status == Destroyed + parent, err := c.newParentProcess(process, doInit) + if err != nil { + return newSystemError(err) + } + if err := parent.start(); err != nil { + // terminate the process to ensure that it properly is reaped. + if err := parent.terminate(); err != nil { + log.Warn(err) + } + return newSystemError(err) + } + process.ops = parent + if doInit { + + c.updateState(parent) + } + return nil +} + +func (c *linuxContainer) newParentProcess(p *Process, doInit bool) (parentProcess, error) { + parentPipe, childPipe, err := newPipe() + if err != nil { + return nil, newSystemError(err) + } + cmd, err := c.commandTemplate(p, childPipe) + if err != nil { + return nil, newSystemError(err) + } + if !doInit { + return c.newSetnsProcess(p, cmd, parentPipe, childPipe), nil + } + return c.newInitProcess(p, cmd, parentPipe, childPipe) +} + +func (c *linuxContainer) commandTemplate(p *Process, childPipe *os.File) (*exec.Cmd, error) { + cmd := &exec.Cmd{ + Path: c.initPath, + Args: c.initArgs, + } + cmd.Stdin = p.Stdin + cmd.Stdout = p.Stdout + cmd.Stderr = p.Stderr + cmd.Dir = c.config.Rootfs + if cmd.SysProcAttr == nil { + cmd.SysProcAttr = &syscall.SysProcAttr{} + } + cmd.ExtraFiles = []*os.File{childPipe} + cmd.SysProcAttr.Pdeathsig = syscall.SIGKILL + if c.config.ParentDeathSignal > 0 { + cmd.SysProcAttr.Pdeathsig = syscall.Signal(c.config.ParentDeathSignal) + } + return cmd, nil +} + +func (c *linuxContainer) newInitProcess(p *Process, cmd *exec.Cmd, parentPipe, childPipe *os.File) (*initProcess, error) { + t := "_LIBCONTAINER_INITTYPE=standard" + cloneFlags := c.config.Namespaces.CloneFlags() + if cloneFlags&syscall.CLONE_NEWUSER != 0 { + if err := c.addUidGidMappings(cmd.SysProcAttr); err != nil 
{ + // user mappings are not supported + return nil, err + } + // Default to root user when user namespaces are enabled. + if cmd.SysProcAttr.Credential == nil { + cmd.SysProcAttr.Credential = &syscall.Credential{} + } + } + cmd.Env = append(cmd.Env, t) + cmd.SysProcAttr.Cloneflags = cloneFlags + return &initProcess{ + cmd: cmd, + childPipe: childPipe, + parentPipe: parentPipe, + manager: c.cgroupManager, + config: c.newInitConfig(p), + }, nil +} + +func (c *linuxContainer) newSetnsProcess(p *Process, cmd *exec.Cmd, parentPipe, childPipe *os.File) *setnsProcess { + cmd.Env = append(cmd.Env, + fmt.Sprintf("_LIBCONTAINER_INITPID=%d", c.initProcess.pid()), + "_LIBCONTAINER_INITTYPE=setns", + ) + + if p.consolePath != "" { + cmd.Env = append(cmd.Env, "_LIBCONTAINER_CONSOLE_PATH="+p.consolePath) + } + + // TODO: set on container for process management + return &setnsProcess{ + cmd: cmd, + cgroupPaths: c.cgroupManager.GetPaths(), + childPipe: childPipe, + parentPipe: parentPipe, + config: c.newInitConfig(p), + } +} + +func (c *linuxContainer) newInitConfig(process *Process) *initConfig { + return &initConfig{ + Config: c.config, + Args: process.Args, + Env: process.Env, + User: process.User, + Cwd: process.Cwd, + Console: process.consolePath, + } +} + +func newPipe() (parent *os.File, child *os.File, err error) { + fds, err := syscall.Socketpair(syscall.AF_LOCAL, syscall.SOCK_STREAM|syscall.SOCK_CLOEXEC, 0) + if err != nil { + return nil, nil, err + } + return os.NewFile(uintptr(fds[1]), "parent"), os.NewFile(uintptr(fds[0]), "child"), nil +} + +func (c *linuxContainer) Destroy() error { + c.m.Lock() + defer c.m.Unlock() + status, err := c.currentStatus() + if err != nil { + return err + } + if status != Destroyed { + return newGenericError(fmt.Errorf("container is not destroyed"), ContainerNotStopped) + } + if !c.config.Namespaces.Contains(configs.NEWPID) { + if err := killCgroupProcesses(c.cgroupManager); err != nil { + log.Warn(err) + } + } + err = 
c.cgroupManager.Destroy() + if rerr := os.RemoveAll(c.root); err == nil { + err = rerr + } + c.initProcess = nil + return err +} + +func (c *linuxContainer) Pause() error { + c.m.Lock() + defer c.m.Unlock() + return c.cgroupManager.Freeze(configs.Frozen) +} + +func (c *linuxContainer) Resume() error { + c.m.Lock() + defer c.m.Unlock() + return c.cgroupManager.Freeze(configs.Thawed) +} + +func (c *linuxContainer) NotifyOOM() (<-chan struct{}, error) { + return notifyOnOOM(c.cgroupManager.GetPaths()) +} + +func (c *linuxContainer) updateState(process parentProcess) error { + c.initProcess = process + state, err := c.currentState() + if err != nil { + return err + } + f, err := os.Create(filepath.Join(c.root, stateFilename)) + if err != nil { + return err + } + defer f.Close() + return json.NewEncoder(f).Encode(state) +} + +func (c *linuxContainer) currentStatus() (Status, error) { + if c.initProcess == nil { + return Destroyed, nil + } + // return Running if the init process is alive + if err := syscall.Kill(c.initProcess.pid(), 0); err != nil { + if err == syscall.ESRCH { + return Destroyed, nil + } + return 0, newSystemError(err) + } + if c.config.Cgroups != nil && c.config.Cgroups.Freezer == configs.Frozen { + return Paused, nil + } + return Running, nil +} + +func (c *linuxContainer) currentState() (*State, error) { + status, err := c.currentStatus() + if err != nil { + return nil, err + } + if status == Destroyed { + return nil, newGenericError(fmt.Errorf("container destroyed"), ContainerNotExists) + } + startTime, err := c.initProcess.startTime() + if err != nil { + return nil, newSystemError(err) + } + state := &State{ + ID: c.ID(), + Config: *c.config, + InitProcessPid: c.initProcess.pid(), + InitProcessStartTime: startTime, + CgroupPaths: c.cgroupManager.GetPaths(), + NamespacePaths: make(map[configs.NamespaceType]string), + } + for _, ns := range c.config.Namespaces { + state.NamespacePaths[ns.Type] = ns.GetPath(c.initProcess.pid()) + } + return state, nil 
+} diff --git a/vendor/src/github.com/docker/libcontainer/container_linux_test.go b/vendor/src/github.com/docker/libcontainer/container_linux_test.go new file mode 100644 index 0000000000..5ee46ab140 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/container_linux_test.go @@ -0,0 +1,200 @@ +// +build linux + +package libcontainer + +import ( + "fmt" + "os" + "testing" + + "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" +) + +type mockCgroupManager struct { + pids []int + stats *cgroups.Stats + paths map[string]string +} + +func (m *mockCgroupManager) GetPids() ([]int, error) { + return m.pids, nil +} + +func (m *mockCgroupManager) GetStats() (*cgroups.Stats, error) { + return m.stats, nil +} + +func (m *mockCgroupManager) Apply(pid int) error { + return nil +} + +func (m *mockCgroupManager) Set(container *configs.Config) error { + return nil +} + +func (m *mockCgroupManager) Destroy() error { + return nil +} + +func (m *mockCgroupManager) GetPaths() map[string]string { + return m.paths +} + +func (m *mockCgroupManager) Freeze(state configs.FreezerState) error { + return nil +} + +type mockProcess struct { + _pid int + started string +} + +func (m *mockProcess) terminate() error { + return nil +} + +func (m *mockProcess) pid() int { + return m._pid +} + +func (m *mockProcess) startTime() (string, error) { + return m.started, nil +} + +func (m *mockProcess) start() error { + return nil +} + +func (m *mockProcess) wait() (*os.ProcessState, error) { + return nil, nil +} + +func (m *mockProcess) signal(_ os.Signal) error { + return nil +} + +func TestGetContainerPids(t *testing.T) { + container := &linuxContainer{ + id: "myid", + config: &configs.Config{}, + cgroupManager: &mockCgroupManager{pids: []int{1, 2, 3}}, + } + pids, err := container.Processes() + if err != nil { + t.Fatal(err) + } + for i, expected := range []int{1, 2, 3} { + if pids[i] != expected { + t.Fatalf("expected pid %d but received %d", expected, 
pids[i])
+		}
+	}
+}
+
+func TestGetContainerStats(t *testing.T) {
+	container := &linuxContainer{
+		id:            "myid",
+		config:        &configs.Config{},
+		cgroupManager: &mockCgroupManager{
+			pids: []int{1, 2, 3},
+			stats: &cgroups.Stats{
+				MemoryStats: cgroups.MemoryStats{
+					Usage: 1024,
+				},
+			},
+		},
+	}
+	stats, err := container.Stats()
+	if err != nil {
+		t.Fatal(err)
+	}
+	if stats.CgroupStats == nil {
+		t.Fatal("cgroup stats are nil")
+	}
+	if stats.CgroupStats.MemoryStats.Usage != 1024 {
+		t.Fatalf("expected memory usage 1024 but received %d", stats.CgroupStats.MemoryStats.Usage)
+	}
+}
+
+func TestGetContainerState(t *testing.T) {
+	var (
+		pid                 = os.Getpid()
+		expectedMemoryPath  = "/sys/fs/cgroup/memory/myid"
+		expectedNetworkPath = "/networks/fd"
+	)
+	container := &linuxContainer{
+		id: "myid",
+		config: &configs.Config{
+			Namespaces: []configs.Namespace{
+				{Type: configs.NEWPID},
+				{Type: configs.NEWNS},
+				{Type: configs.NEWNET, Path: expectedNetworkPath},
+				{Type: configs.NEWUTS},
+				{Type: configs.NEWIPC},
+			},
+		},
+		initProcess: &mockProcess{
+			_pid:    pid,
+			started: "010",
+		},
+		cgroupManager: &mockCgroupManager{
+			pids: []int{1, 2, 3},
+			stats: &cgroups.Stats{
+				MemoryStats: cgroups.MemoryStats{
+					Usage: 1024,
+				},
+			},
+			paths: map[string]string{
+				"memory": expectedMemoryPath,
+			},
+		},
+	}
+	state, err := container.State()
+	if err != nil {
+		t.Fatal(err)
+	}
+	if state.InitProcessPid != pid {
+		t.Fatalf("expected pid %d but received %d", pid, state.InitProcessPid)
+	}
+	if state.InitProcessStartTime != "010" {
+		t.Fatalf("expected process start time 010 but received %s", state.InitProcessStartTime)
+	}
+	paths := state.CgroupPaths
+	if paths == nil {
+		t.Fatal("cgroup paths should not be nil")
+	}
+	if memPath := paths["memory"]; memPath != expectedMemoryPath {
+		t.Fatalf("expected memory path %q but received %q", expectedMemoryPath, memPath)
+	}
+	for _, ns := range container.config.Namespaces {
+		path := state.NamespacePaths[ns.Type]
+		if path == "" {
t.Fatalf("expected non nil namespace path for %s", ns.Type) + } + if ns.Type == configs.NEWNET { + if path != expectedNetworkPath { + t.Fatalf("expected path %q but received %q", expectedNetworkPath, path) + } + } else { + file := "" + switch ns.Type { + case configs.NEWNET: + file = "net" + case configs.NEWNS: + file = "mnt" + case configs.NEWPID: + file = "pid" + case configs.NEWIPC: + file = "ipc" + case configs.NEWUSER: + file = "user" + case configs.NEWUTS: + file = "uts" + } + expected := fmt.Sprintf("/proc/%d/ns/%s", pid, file) + if expected != path { + t.Fatalf("expected path %q but received %q", expected, path) + } + } + } +} diff --git a/vendor/src/github.com/docker/libcontainer/container_nouserns_linux.go b/vendor/src/github.com/docker/libcontainer/container_nouserns_linux.go new file mode 100644 index 0000000000..3b75d593cc --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/container_nouserns_linux.go @@ -0,0 +1,13 @@ +// +build !go1.4 + +package libcontainer + +import ( + "fmt" + "syscall" +) + +// not available before go 1.4 +func (c *linuxContainer) addUidGidMappings(sys *syscall.SysProcAttr) error { + return fmt.Errorf("User namespace is not supported in golang < 1.4") +} diff --git a/vendor/src/github.com/docker/libcontainer/container_userns_linux.go b/vendor/src/github.com/docker/libcontainer/container_userns_linux.go new file mode 100644 index 0000000000..5f4cf3c9fe --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/container_userns_linux.go @@ -0,0 +1,26 @@ +// +build go1.4 + +package libcontainer + +import "syscall" + +// Converts IDMap to SysProcIDMap array and adds it to SysProcAttr. 
+func (c *linuxContainer) addUidGidMappings(sys *syscall.SysProcAttr) error { + if c.config.UidMappings != nil { + sys.UidMappings = make([]syscall.SysProcIDMap, len(c.config.UidMappings)) + for i, um := range c.config.UidMappings { + sys.UidMappings[i].ContainerID = um.ContainerID + sys.UidMappings[i].HostID = um.HostID + sys.UidMappings[i].Size = um.Size + } + } + if c.config.GidMappings != nil { + sys.GidMappings = make([]syscall.SysProcIDMap, len(c.config.GidMappings)) + for i, gm := range c.config.GidMappings { + sys.GidMappings[i].ContainerID = gm.ContainerID + sys.GidMappings[i].HostID = gm.HostID + sys.GidMappings[i].Size = gm.Size + } + } + return nil +} diff --git a/vendor/src/github.com/docker/libcontainer/devices/defaults.go b/vendor/src/github.com/docker/libcontainer/devices/defaults.go deleted file mode 100644 index e0ad0b08f8..0000000000 --- a/vendor/src/github.com/docker/libcontainer/devices/defaults.go +++ /dev/null @@ -1,159 +0,0 @@ -package devices - -var ( - // These are devices that are to be both allowed and created. 
- - DefaultSimpleDevices = []*Device{ - // /dev/null and zero - { - Path: "/dev/null", - Type: 'c', - MajorNumber: 1, - MinorNumber: 3, - CgroupPermissions: "rwm", - FileMode: 0666, - }, - { - Path: "/dev/zero", - Type: 'c', - MajorNumber: 1, - MinorNumber: 5, - CgroupPermissions: "rwm", - FileMode: 0666, - }, - - { - Path: "/dev/full", - Type: 'c', - MajorNumber: 1, - MinorNumber: 7, - CgroupPermissions: "rwm", - FileMode: 0666, - }, - - // consoles and ttys - { - Path: "/dev/tty", - Type: 'c', - MajorNumber: 5, - MinorNumber: 0, - CgroupPermissions: "rwm", - FileMode: 0666, - }, - - // /dev/urandom,/dev/random - { - Path: "/dev/urandom", - Type: 'c', - MajorNumber: 1, - MinorNumber: 9, - CgroupPermissions: "rwm", - FileMode: 0666, - }, - { - Path: "/dev/random", - Type: 'c', - MajorNumber: 1, - MinorNumber: 8, - CgroupPermissions: "rwm", - FileMode: 0666, - }, - } - - DefaultAllowedDevices = append([]*Device{ - // allow mknod for any device - { - Type: 'c', - MajorNumber: Wildcard, - MinorNumber: Wildcard, - CgroupPermissions: "m", - }, - { - Type: 'b', - MajorNumber: Wildcard, - MinorNumber: Wildcard, - CgroupPermissions: "m", - }, - - { - Path: "/dev/console", - Type: 'c', - MajorNumber: 5, - MinorNumber: 1, - CgroupPermissions: "rwm", - }, - { - Path: "/dev/tty0", - Type: 'c', - MajorNumber: 4, - MinorNumber: 0, - CgroupPermissions: "rwm", - }, - { - Path: "/dev/tty1", - Type: 'c', - MajorNumber: 4, - MinorNumber: 1, - CgroupPermissions: "rwm", - }, - // /dev/pts/ - pts namespaces are "coming soon" - { - Path: "", - Type: 'c', - MajorNumber: 136, - MinorNumber: Wildcard, - CgroupPermissions: "rwm", - }, - { - Path: "", - Type: 'c', - MajorNumber: 5, - MinorNumber: 2, - CgroupPermissions: "rwm", - }, - - // tuntap - { - Path: "", - Type: 'c', - MajorNumber: 10, - MinorNumber: 200, - CgroupPermissions: "rwm", - }, - - /*// fuse - { - Path: "", - Type: 'c', - MajorNumber: 10, - MinorNumber: 229, - CgroupPermissions: "rwm", - }, - - // rtc - { - Path: "", - Type: 
'c', - MajorNumber: 254, - MinorNumber: 0, - CgroupPermissions: "rwm", - }, - */ - }, DefaultSimpleDevices...) - - DefaultAutoCreatedDevices = append([]*Device{ - { - // /dev/fuse is created but not allowed. - // This is to allow java to work. Because java - // Insists on there being a /dev/fuse - // https://github.com/docker/docker/issues/514 - // https://github.com/docker/docker/issues/2393 - // - Path: "/dev/fuse", - Type: 'c', - MajorNumber: 10, - MinorNumber: 229, - CgroupPermissions: "rwm", - }, - }, DefaultSimpleDevices...) -) diff --git a/vendor/src/github.com/docker/libcontainer/devices/devices.go b/vendor/src/github.com/docker/libcontainer/devices/devices.go index 8e86d95292..537f71aff1 100644 --- a/vendor/src/github.com/docker/libcontainer/devices/devices.go +++ b/vendor/src/github.com/docker/libcontainer/devices/devices.go @@ -7,14 +7,12 @@ import ( "os" "path/filepath" "syscall" -) -const ( - Wildcard = -1 + "github.com/docker/libcontainer/configs" ) var ( - ErrNotADeviceNode = errors.New("not a device node") + ErrNotADevice = errors.New("not a device node") ) // Testing dependencies @@ -23,45 +21,20 @@ var ( ioutilReadDir = ioutil.ReadDir ) -type Device struct { - Type rune `json:"type,omitempty"` - Path string `json:"path,omitempty"` // It is fine if this is an empty string in the case that you are using Wildcards - MajorNumber int64 `json:"major_number,omitempty"` // Use the wildcard constant for wildcards. - MinorNumber int64 `json:"minor_number,omitempty"` // Use the wildcard constant for wildcards. 
- CgroupPermissions string `json:"cgroup_permissions,omitempty"` // Typically just "rwm" - FileMode os.FileMode `json:"file_mode,omitempty"` // The permission bits of the file's mode - Uid uint32 `json:"uid,omitempty"` - Gid uint32 `json:"gid,omitempty"` -} - -func GetDeviceNumberString(deviceNumber int64) string { - if deviceNumber == Wildcard { - return "*" - } else { - return fmt.Sprintf("%d", deviceNumber) - } -} - -func (device *Device) GetCgroupAllowString() string { - return fmt.Sprintf("%c %s:%s %s", device.Type, GetDeviceNumberString(device.MajorNumber), GetDeviceNumberString(device.MinorNumber), device.CgroupPermissions) -} - // Given the path to a device and its cgroup permissions (which cannot be easily queried), look up the information about a linux device and return that information as a Device struct. -func GetDevice(path, cgroupPermissions string) (*Device, error) { +func DeviceFromPath(path, permissions string) (*configs.Device, error) { fileInfo, err := osLstat(path) if err != nil { return nil, err } - var ( devType rune mode = fileInfo.Mode() fileModePermissionBits = os.FileMode.Perm(mode) ) - switch { case mode&os.ModeDevice == 0: - return nil, ErrNotADeviceNode + return nil, ErrNotADevice case mode&os.ModeCharDevice != 0: fileModePermissionBits |= syscall.S_IFCHR devType = 'c' @@ -69,36 +42,33 @@ func GetDevice(path, cgroupPermissions string) (*Device, error) { fileModePermissionBits |= syscall.S_IFBLK devType = 'b' } - stat_t, ok := fileInfo.Sys().(*syscall.Stat_t) if !ok { return nil, fmt.Errorf("cannot determine the device number for device %s", path) } devNumber := int(stat_t.Rdev) - - return &Device{ - Type: devType, - Path: path, - MajorNumber: Major(devNumber), - MinorNumber: Minor(devNumber), - CgroupPermissions: cgroupPermissions, - FileMode: fileModePermissionBits, - Uid: stat_t.Uid, - Gid: stat_t.Gid, + return &configs.Device{ + Type: devType, + Path: path, + Major: Major(devNumber), + Minor: Minor(devNumber), + Permissions:
permissions, + FileMode: fileModePermissionBits, + Uid: stat_t.Uid, + Gid: stat_t.Gid, }, nil } -func GetHostDeviceNodes() ([]*Device, error) { - return getDeviceNodes("/dev") +func HostDevices() ([]*configs.Device, error) { + return getDevices("/dev") } -func getDeviceNodes(path string) ([]*Device, error) { +func getDevices(path string) ([]*configs.Device, error) { files, err := ioutilReadDir(path) if err != nil { return nil, err } - - out := []*Device{} + out := []*configs.Device{} for _, f := range files { switch { case f.IsDir(): @@ -106,7 +76,7 @@ func getDeviceNodes(path string) ([]*Device, error) { case "pts", "shm", "fd", "mqueue": continue default: - sub, err := getDeviceNodes(filepath.Join(path, f.Name())) + sub, err := getDevices(filepath.Join(path, f.Name())) if err != nil { return nil, err } @@ -117,16 +87,14 @@ func getDeviceNodes(path string) ([]*Device, error) { case f.Name() == "console": continue } - - device, err := GetDevice(filepath.Join(path, f.Name()), "rwm") + device, err := DeviceFromPath(filepath.Join(path, f.Name()), "rwm") if err != nil { - if err == ErrNotADeviceNode { + if err == ErrNotADevice { continue } return nil, err } out = append(out, device) } - return out, nil } diff --git a/vendor/src/github.com/docker/libcontainer/devices/devices_test.go b/vendor/src/github.com/docker/libcontainer/devices/devices_test.go index fec4002237..9e52fc4e25 100644 --- a/vendor/src/github.com/docker/libcontainer/devices/devices_test.go +++ b/vendor/src/github.com/docker/libcontainer/devices/devices_test.go @@ -6,7 +6,7 @@ import ( "testing" ) -func TestGetDeviceLstatFailure(t *testing.T) { +func TestDeviceFromPathLstatFailure(t *testing.T) { testError := errors.New("test error") // Override os.Lstat to inject error. 
@@ -14,13 +14,13 @@ func TestGetDeviceLstatFailure(t *testing.T) { return nil, testError } - _, err := GetDevice("", "") + _, err := DeviceFromPath("", "") if err != testError { t.Fatalf("Unexpected error %v, expected %v", err, testError) } } -func TestGetHostDeviceNodesIoutilReadDirFailure(t *testing.T) { +func TestHostDevicesIoutilReadDirFailure(t *testing.T) { testError := errors.New("test error") // Override ioutil.ReadDir to inject error. @@ -28,13 +28,13 @@ func TestGetHostDeviceNodesIoutilReadDirFailure(t *testing.T) { return nil, testError } - _, err := GetHostDeviceNodes() + _, err := HostDevices() if err != testError { t.Fatalf("Unexpected error %v, expected %v", err, testError) } } -func TestGetHostDeviceNodesIoutilReadDirDeepFailure(t *testing.T) { +func TestHostDevicesIoutilReadDirDeepFailure(t *testing.T) { testError := errors.New("test error") called := false @@ -54,7 +54,7 @@ func TestGetHostDeviceNodesIoutilReadDirDeepFailure(t *testing.T) { return []os.FileInfo{fi}, nil } - _, err := GetHostDeviceNodes() + _, err := HostDevices() if err != testError { t.Fatalf("Unexpected error %v, expected %v", err, testError) } diff --git a/vendor/src/github.com/docker/libcontainer/devices/number.go b/vendor/src/github.com/docker/libcontainer/devices/number.go index 3aae380bb1..9e8feb83b0 100644 --- a/vendor/src/github.com/docker/libcontainer/devices/number.go +++ b/vendor/src/github.com/docker/libcontainer/devices/number.go @@ -20,7 +20,3 @@ func Major(devNumber int) int64 { func Minor(devNumber int) int64 { return int64((devNumber & 0xff) | ((devNumber >> 12) & 0xfff00)) } - -func Mkdev(majorNumber int64, minorNumber int64) int { - return int((majorNumber << 8) | (minorNumber & 0xff) | ((minorNumber & 0xfff00) << 12)) -} diff --git a/vendor/src/github.com/docker/libcontainer/docs/man/nsinit.1.md b/vendor/src/github.com/docker/libcontainer/docs/man/nsinit.1.md new file mode 100644 index 0000000000..898dba2306 --- /dev/null +++ 
b/vendor/src/github.com/docker/libcontainer/docs/man/nsinit.1.md @@ -0,0 +1,38 @@ +% nsinit User Manual +% docker/libcontainer +% JAN 2015 + +NAME: + nsinit - A low-level utility for managing containers. + It is used to spawn new containers or join existing containers. + +USAGE: + nsinit [global options] command [command options] [arguments...] + +VERSION: + 0.1 + +COMMANDS: + config display the container configuration + exec execute a new command inside a container + init runs the init process inside the namespace + oom display oom notifications for a container + pause pause the container's processes + stats display statistics for the container + unpause unpause the container's processes + help, h shows a list of commands or help for one command + +EXAMPLES: + +Get the ID of an already running docker container. +`sudo docker ps` will return the list of all the running containers. + +Take the ID (e.g. 4addb0b2d307) and go to its config directory +`/var/lib/docker/execdriver/native/4addb0b2d307`, where you can run the nsinit +command-line utility. + +For example, `nsinit exec /bin/bash` starts a shell inside the already running container. + +# HISTORY +Jan 2015, Originally compiled by Shishir Mahajan (shishir dot mahajan at redhat dot com) +based on nsinit source material and internal work. diff --git a/vendor/src/github.com/docker/libcontainer/error.go b/vendor/src/github.com/docker/libcontainer/error.go index 5ff56d80ba..6c266620e7 100644 --- a/vendor/src/github.com/docker/libcontainer/error.go +++ b/vendor/src/github.com/docker/libcontainer/error.go @@ -1,5 +1,7 @@ package libcontainer +import "io" + // API error code type.
type ErrorCode int @@ -8,29 +10,52 @@ const ( // Factory errors IdInUse ErrorCode = iota InvalidIdFormat - // TODO: add Load errors // Container errors - ContainerDestroyed + ContainerNotExists ContainerPaused + ContainerNotStopped + ContainerNotRunning + + // Process errors + ProcessNotExecuted // Common errors ConfigInvalid SystemError ) +func (c ErrorCode) String() string { + switch c { + case IdInUse: + return "Id already in use" + case InvalidIdFormat: + return "Invalid format" + case ContainerPaused: + return "Container paused" + case ConfigInvalid: + return "Invalid configuration" + case SystemError: + return "System error" + case ContainerNotExists: + return "Container does not exist" + case ContainerNotStopped: + return "Container is not stopped" + case ContainerNotRunning: + return "Container is not running" + default: + return "Unknown error" + } +} + // API Error type. type Error interface { error - // Returns the stack trace, if any, which identifies the - // point at which the error occurred. - Stack() []byte - // Returns a verbose string including the error message // and a representation of the stack trace suitable for // printing. - Detail() string + Detail(w io.Writer) error // Returns the error code for this error. 
Code() ErrorCode diff --git a/vendor/src/github.com/docker/libcontainer/error_test.go b/vendor/src/github.com/docker/libcontainer/error_test.go new file mode 100644 index 0000000000..4bf4c9f5d4 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/error_test.go @@ -0,0 +1,20 @@ +package libcontainer + +import "testing" + +func TestErrorCode(t *testing.T) { + codes := map[ErrorCode]string{ + IdInUse: "Id already in use", + InvalidIdFormat: "Invalid format", + ContainerPaused: "Container paused", + ConfigInvalid: "Invalid configuration", + SystemError: "System error", + ContainerNotExists: "Container does not exist", + } + + for code, expected := range codes { + if actual := code.String(); actual != expected { + t.Fatalf("expected string %q but received %q", expected, actual) + } + } +} diff --git a/vendor/src/github.com/docker/libcontainer/factory.go b/vendor/src/github.com/docker/libcontainer/factory.go index e37773b2bd..0c9fa63a32 100644 --- a/vendor/src/github.com/docker/libcontainer/factory.go +++ b/vendor/src/github.com/docker/libcontainer/factory.go @@ -1,7 +1,10 @@ package libcontainer -type Factory interface { +import ( + "github.com/docker/libcontainer/configs" +) +type Factory interface { // Creates a new container with the given id and starts the initial process inside it. // id must be a string containing only letters, digits and underscores and must contain // between 1 and 1024 characters, inclusive. @@ -11,22 +14,34 @@ type Factory interface { // // Returns the new container with a running process. // - // Errors: + // errors: // IdInUse - id is already in use by a container // InvalidIdFormat - id has incorrect format // ConfigInvalid - config is invalid - // SystemError - System error + // Systemerror - System error // // On error, any partially created container parts are cleaned up (the operation is atomic). 
- Create(id string, config *Config) (Container, Error) + Create(id string, config *configs.Config) (Container, error) - // Load takes an ID for an existing container and reconstructs the container - // from the state. + // Load takes an ID for an existing container and returns the container information + // from the state. This presents a read-only view of the container. // - // Errors: + // errors: // Path does not exist // Container is stopped // System error - // TODO: fix description - Load(id string) (Container, Error) + Load(id string) (Container, error) + + // StartInitialization is an internal API to libcontainer used during the rexec of the + // container. pipefd is the fd to the child end of the pipe used to synchronize the + // parent and child process, providing state and configuration to the child process and + // returning any errors during the init of the container. + // + // Errors: + // pipe connection error + // system error + StartInitialization(pipefd uintptr) error + + // Type returns info string about factory type (e.g. lxc, libcontainer...)
+ Type() string } diff --git a/vendor/src/github.com/docker/libcontainer/factory_linux.go b/vendor/src/github.com/docker/libcontainer/factory_linux.go new file mode 100644 index 0000000000..a2d3bec780 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/factory_linux.go @@ -0,0 +1,282 @@ +// +build linux + +package libcontainer + +import ( + "encoding/json" + "fmt" + "io/ioutil" + "os" + "os/exec" + "path/filepath" + "regexp" + "syscall" + + "github.com/docker/docker/pkg/mount" + "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/cgroups/fs" + "github.com/docker/libcontainer/cgroups/systemd" + "github.com/docker/libcontainer/configs" + "github.com/docker/libcontainer/configs/validate" +) + +const ( + stateFilename = "state.json" +) + +var ( + idRegex = regexp.MustCompile(`^[\w_]+$`) + maxIdLen = 1024 +) + +// InitArgs returns an options func to configure a LinuxFactory with the +// provided init arguments. +func InitArgs(args ...string) func(*LinuxFactory) error { + return func(l *LinuxFactory) error { + name := args[0] + if filepath.Base(name) == name { + if lp, err := exec.LookPath(name); err == nil { + name = lp + } + } + l.InitPath = name + l.InitArgs = append([]string{name}, args[1:]...) + return nil + } +} + +// InitPath returns an options func to configure a LinuxFactory with the +// provided absolute path to the init binary and arguments. +func InitPath(path string, args ...string) func(*LinuxFactory) error { + return func(l *LinuxFactory) error { + l.InitPath = path + l.InitArgs = args + return nil + } +} + +// SystemdCgroups is an options func to configure a LinuxFactory to return +// containers that use systemd to create and manage cgroups.
+func SystemdCgroups(l *LinuxFactory) error { + l.NewCgroupsManager = func(config *configs.Cgroup, paths map[string]string) cgroups.Manager { + return &systemd.Manager{ + Cgroups: config, + Paths: paths, + } + } + return nil +} + +// Cgroupfs is an options func to configure a LinuxFactory to return +// containers that use the native cgroups filesystem implementation to +// create and manage cgroups. +func Cgroupfs(l *LinuxFactory) error { + l.NewCgroupsManager = func(config *configs.Cgroup, paths map[string]string) cgroups.Manager { + return &fs.Manager{ + Cgroups: config, + Paths: paths, + } + } + return nil +} + +// TmpfsRoot is an option func to mount LinuxFactory.Root to tmpfs. +func TmpfsRoot(l *LinuxFactory) error { + mounted, err := mount.Mounted(l.Root) + if err != nil { + return err + } + if !mounted { + if err := syscall.Mount("tmpfs", l.Root, "tmpfs", 0, ""); err != nil { + return err + } + } + return nil +} + +// New returns a Linux-based container factory rooted in the given directory and +// configures the factory with the provided option funcs. +func New(root string, options ...func(*LinuxFactory) error) (Factory, error) { + if root != "" { + if err := os.MkdirAll(root, 0700); err != nil { + return nil, newGenericError(err, SystemError) + } + } + l := &LinuxFactory{ + Root: root, + Validator: validate.New(), + } + InitArgs(os.Args[0], "init")(l) + Cgroupfs(l) + for _, opt := range options { + if err := opt(l); err != nil { + return nil, err + } + } + return l, nil +} + +// LinuxFactory implements the default factory interface for linux based systems. +type LinuxFactory struct { + // Root directory for the factory to store state. + Root string + + // InitPath is the absolute path to the init binary. + InitPath string + + // InitArgs are arguments for calling the init responsibilities for spawning + // a container. + InitArgs []string + + // Validator provides validation to container configurations.
+ Validator validate.Validator + + // NewCgroupsManager returns an initialized cgroups manager for a single container. + NewCgroupsManager func(config *configs.Cgroup, paths map[string]string) cgroups.Manager +} + +func (l *LinuxFactory) Create(id string, config *configs.Config) (Container, error) { + if l.Root == "" { + return nil, newGenericError(fmt.Errorf("invalid root"), ConfigInvalid) + } + if err := l.validateID(id); err != nil { + return nil, err + } + if err := l.Validator.Validate(config); err != nil { + return nil, newGenericError(err, ConfigInvalid) + } + containerRoot := filepath.Join(l.Root, id) + if _, err := os.Stat(containerRoot); err == nil { + return nil, newGenericError(fmt.Errorf("Container with id exists: %v", id), IdInUse) + } else if !os.IsNotExist(err) { + return nil, newGenericError(err, SystemError) + } + if err := os.MkdirAll(containerRoot, 0700); err != nil { + return nil, newGenericError(err, SystemError) + } + return &linuxContainer{ + id: id, + root: containerRoot, + config: config, + initPath: l.InitPath, + initArgs: l.InitArgs, + cgroupManager: l.NewCgroupsManager(config.Cgroups, nil), + }, nil +} + +func (l *LinuxFactory) Load(id string) (Container, error) { + if l.Root == "" { + return nil, newGenericError(fmt.Errorf("invalid root"), ConfigInvalid) + } + containerRoot := filepath.Join(l.Root, id) + state, err := l.loadState(containerRoot) + if err != nil { + return nil, err + } + r := &restoredProcess{ + processPid: state.InitProcessPid, + processStartTime: state.InitProcessStartTime, + } + return &linuxContainer{ + initProcess: r, + id: id, + config: &state.Config, + initPath: l.InitPath, + initArgs: l.InitArgs, + cgroupManager: l.NewCgroupsManager(state.Config.Cgroups, state.CgroupPaths), + root: containerRoot, + }, nil +} + +func (l *LinuxFactory) Type() string { + return "libcontainer" +} + +// StartInitialization loads a container by opening the pipe fd from the parent to read the configuration and state +// This is a low 
level implementation detail of the reexec and should not be consumed externally +func (l *LinuxFactory) StartInitialization(pipefd uintptr) (err error) { + var ( + pipe = os.NewFile(uintptr(pipefd), "pipe") + it = initType(os.Getenv("_LIBCONTAINER_INITTYPE")) + ) + // clear the current process's environment to clean any libcontainer + // specific env vars. + os.Clearenv() + defer func() { + // if we have an error during the initialization of the container's init then send it back to the + // parent process in the form of an initError. + if err != nil { + // ensure that any data sent from the parent is consumed so it doesn't + // receive ECONNRESET when the child writes to the pipe. + ioutil.ReadAll(pipe) + if err := json.NewEncoder(pipe).Encode(newSystemError(err)); err != nil { + panic(err) + } + } + // ensure that this pipe is always closed + pipe.Close() + }() + i, err := newContainerInit(it, pipe) + if err != nil { + return err + } + return i.Init() +} + +func (l *LinuxFactory) loadState(root string) (*State, error) { + f, err := os.Open(filepath.Join(root, stateFilename)) + if err != nil { + if os.IsNotExist(err) { + return nil, newGenericError(err, ContainerNotExists) + } + return nil, newGenericError(err, SystemError) + } + defer f.Close() + var state *State + if err := json.NewDecoder(f).Decode(&state); err != nil { + return nil, newGenericError(err, SystemError) + } + return state, nil +} + +func (l *LinuxFactory) validateID(id string) error { + if !idRegex.MatchString(id) { + return newGenericError(fmt.Errorf("Invalid id format: %v", id), InvalidIdFormat) + } + if len(id) > maxIdLen { + return newGenericError(fmt.Errorf("Invalid id format: %v", id), InvalidIdFormat) + } + return nil +} + +// restoredProcess represents a process where the calling process may or may not be +// the parent process. This process is created when a factory loads a container from +// a persisted state. 
+type restoredProcess struct { + processPid int + processStartTime string +} + +func (p *restoredProcess) start() error { + return newGenericError(fmt.Errorf("restored process cannot be started"), SystemError) +} + +func (p *restoredProcess) pid() int { + return p.processPid +} + +func (p *restoredProcess) terminate() error { + return newGenericError(fmt.Errorf("restored process cannot be terminated"), SystemError) +} + +func (p *restoredProcess) wait() (*os.ProcessState, error) { + return nil, newGenericError(fmt.Errorf("restored process cannot be waited on"), SystemError) +} + +func (p *restoredProcess) startTime() (string, error) { + return p.processStartTime, nil +} + +func (p *restoredProcess) signal(s os.Signal) error { + return newGenericError(fmt.Errorf("restored process cannot be signaled"), SystemError) +} diff --git a/vendor/src/github.com/docker/libcontainer/factory_linux_test.go b/vendor/src/github.com/docker/libcontainer/factory_linux_test.go new file mode 100644 index 0000000000..00e3973943 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/factory_linux_test.go @@ -0,0 +1,179 @@ +// +build linux + +package libcontainer + +import ( + "encoding/json" + "io/ioutil" + "os" + "path/filepath" + "testing" + + "github.com/docker/docker/pkg/mount" + "github.com/docker/libcontainer/configs" +) + +func newTestRoot() (string, error) { + dir, err := ioutil.TempDir("", "libcontainer") + if err != nil { + return "", err + } + return dir, nil +} + +func TestFactoryNew(t *testing.T) { + root, rerr := newTestRoot() + if rerr != nil { + t.Fatal(rerr) + } + defer os.RemoveAll(root) + factory, err := New(root, Cgroupfs) + if err != nil { + t.Fatal(err) + } + if factory == nil { + t.Fatal("factory should not be nil") + } + lfactory, ok := factory.(*LinuxFactory) + if !ok { + t.Fatal("expected linux factory returned on linux based systems") + } + if lfactory.Root != root { + t.Fatalf("expected factory root to be %q but received %q", root, lfactory.Root) + } + + 
if factory.Type() != "libcontainer" { + t.Fatalf("unexpected factory type: %q, expected %q", factory.Type(), "libcontainer") + } +} + +func TestFactoryNewTmpfs(t *testing.T) { + root, rerr := newTestRoot() + if rerr != nil { + t.Fatal(rerr) + } + defer os.RemoveAll(root) + factory, err := New(root, Cgroupfs, TmpfsRoot) + if err != nil { + t.Fatal(err) + } + if factory == nil { + t.Fatal("factory should not be nil") + } + lfactory, ok := factory.(*LinuxFactory) + if !ok { + t.Fatal("expected linux factory returned on linux based systems") + } + if lfactory.Root != root { + t.Fatalf("expected factory root to be %q but received %q", root, lfactory.Root) + } + + if factory.Type() != "libcontainer" { + t.Fatalf("unexpected factory type: %q, expected %q", factory.Type(), "libcontainer") + } + mounted, err := mount.Mounted(lfactory.Root) + if err != nil { + t.Fatal(err) + } + if !mounted { + t.Fatalf("Factory Root is not mounted") + } + mounts, err := mount.GetMounts() + if err != nil { + t.Fatal(err) + } + var found bool + for _, m := range mounts { + if m.Mountpoint == lfactory.Root { + if m.Fstype != "tmpfs" { + t.Fatalf("Fstype of root: %s, expected %s", m.Fstype, "tmpfs") + } + if m.Source != "tmpfs" { + t.Fatalf("Source of root: %s, expected %s", m.Source, "tmpfs") + } + found = true + } + } + if !found { + t.Fatalf("Factory Root is not listed in mounts list") + } +} + +func TestFactoryLoadNotExists(t *testing.T) { + root, rerr := newTestRoot() + if rerr != nil { + t.Fatal(rerr) + } + defer os.RemoveAll(root) + factory, err := New(root, Cgroupfs) + if err != nil { + t.Fatal(err) + } + _, err = factory.Load("nocontainer") + if err == nil { + t.Fatal("expected error loading non-existing container") + } + lerr, ok := err.(Error) + if !ok { + t.Fatal("expected libcontainer error type") + } + if lerr.Code() != ContainerNotExists { + t.Fatalf("expected error code %s but received %s", ContainerNotExists, lerr.Code()) + } +} + +func TestFactoryLoadContainer(t
*testing.T) { + root, err := newTestRoot() + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(root) + // setup default container config and state for mocking + var ( + id = "1" + expectedConfig = &configs.Config{ + Rootfs: "/mycontainer/root", + } + expectedState = &State{ + InitProcessPid: 1024, + Config: *expectedConfig, + } + ) + if err := os.Mkdir(filepath.Join(root, id), 0700); err != nil { + t.Fatal(err) + } + if err := marshal(filepath.Join(root, id, stateFilename), expectedState); err != nil { + t.Fatal(err) + } + factory, err := New(root, Cgroupfs) + if err != nil { + t.Fatal(err) + } + container, err := factory.Load(id) + if err != nil { + t.Fatal(err) + } + if container.ID() != id { + t.Fatalf("expected container id %q but received %q", id, container.ID()) + } + config := container.Config() + if config.Rootfs != expectedConfig.Rootfs { + t.Fatalf("expected rootfs %q but received %q", expectedConfig.Rootfs, config.Rootfs) + } + lcontainer, ok := container.(*linuxContainer) + if !ok { + t.Fatal("expected linux container on linux based systems") + } + if lcontainer.initProcess.pid() != expectedState.InitProcessPid { + t.Fatalf("expected init pid %d but received %d", expectedState.InitProcessPid, lcontainer.initProcess.pid()) + } +} + +func marshal(path string, v interface{}) error { + f, err := os.Create(path) + if err != nil { + return err + } + defer f.Close() + return json.NewEncoder(f).Encode(v) +} diff --git a/vendor/src/github.com/docker/libcontainer/generic_error.go b/vendor/src/github.com/docker/libcontainer/generic_error.go new file mode 100644 index 0000000000..ff4d7248da --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/generic_error.go @@ -0,0 +1,74 @@ +package libcontainer + +import ( + "fmt" + "io" + "text/template" + "time" + + "github.com/docker/libcontainer/stacktrace" +) + +var errorTemplate = template.Must(template.New("error").Parse(`Timestamp: {{.Timestamp}} +Code: {{.ECode}} +{{if .Message }} +Message: {{.Message}} 
+{{end}} +Frames:{{range $i, $frame := .Stack.Frames}} +--- +{{$i}}: {{$frame.Function}} +Package: {{$frame.Package}} +File: {{$frame.File}}@{{$frame.Line}}{{end}} +`)) + +func newGenericError(err error, c ErrorCode) Error { + if le, ok := err.(Error); ok { + return le + } + gerr := &genericError{ + Timestamp: time.Now(), + Err: err, + ECode: c, + Stack: stacktrace.Capture(1), + } + if err != nil { + gerr.Message = err.Error() + } + return gerr +} + +func newSystemError(err error) Error { + if le, ok := err.(Error); ok { + return le + } + gerr := &genericError{ + Timestamp: time.Now(), + Err: err, + ECode: SystemError, + Stack: stacktrace.Capture(1), + } + if err != nil { + gerr.Message = err.Error() + } + return gerr +} + +type genericError struct { + Timestamp time.Time + ECode ErrorCode + Err error `json:"-"` + Message string + Stack stacktrace.Stacktrace +} + +func (e *genericError) Error() string { + return fmt.Sprintf("[%d] %s: %s", e.ECode, e.ECode, e.Message) +} + +func (e *genericError) Code() ErrorCode { + return e.ECode +} + +func (e *genericError) Detail(w io.Writer) error { + return errorTemplate.Execute(w, e) +} diff --git a/vendor/src/github.com/docker/libcontainer/generic_error_test.go b/vendor/src/github.com/docker/libcontainer/generic_error_test.go new file mode 100644 index 0000000000..292d2a36bd --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/generic_error_test.go @@ -0,0 +1,14 @@ +package libcontainer + +import ( + "fmt" + "io/ioutil" + "testing" +) + +func TestErrorDetail(t *testing.T) { + err := newGenericError(fmt.Errorf("test error"), SystemError) + if derr := err.Detail(ioutil.Discard); derr != nil { + t.Fatal(derr) + } +} diff --git a/vendor/src/github.com/docker/libcontainer/hack/validate.sh b/vendor/src/github.com/docker/libcontainer/hack/validate.sh new file mode 100755 index 0000000000..70635377b9 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/hack/validate.sh @@ -0,0 +1,15 @@ +#!/usr/bin/env bash +set -e 
+ +# This script runs all validations + +validate() { + sed -i 's!docker/docker!docker/libcontainer!' /go/src/github.com/docker/docker/hack/make/.validate + bash /go/src/github.com/docker/docker/hack/make/validate-dco + bash /go/src/github.com/docker/docker/hack/make/validate-gofmt + go get golang.org/x/tools/cmd/vet + go vet github.com/docker/libcontainer/... +} + +# run validations +validate diff --git a/vendor/src/github.com/docker/libcontainer/init_linux.go b/vendor/src/github.com/docker/libcontainer/init_linux.go new file mode 100644 index 0000000000..1c5f6a87ee --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/init_linux.go @@ -0,0 +1,253 @@ +// +build linux + +package libcontainer + +import ( + "encoding/json" + "fmt" + "os" + "strings" + "syscall" + + log "github.com/Sirupsen/logrus" + "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/configs" + "github.com/docker/libcontainer/netlink" + "github.com/docker/libcontainer/system" + "github.com/docker/libcontainer/user" + "github.com/docker/libcontainer/utils" +) + +type initType string + +const ( + initSetns initType = "setns" + initStandard initType = "standard" +) + +type pid struct { + Pid int `json:"pid"` +} + +// network is an internal struct used to set up container networks. +type network struct { + configs.Network + + // TempVethPeerName is a unique temporary veth peer name that was placed into + // the container's namespace.
+ TempVethPeerName string `json:"temp_veth_peer_name"` +} + +// initConfig is used for transferring parameters from Exec() to Init() +type initConfig struct { + Args []string `json:"args"` + Env []string `json:"env"` + Cwd string `json:"cwd"` + User string `json:"user"` + Config *configs.Config `json:"config"` + Console string `json:"console"` + Networks []*network `json:"network"` +} + +type initer interface { + Init() error +} + +func newContainerInit(t initType, pipe *os.File) (initer, error) { + var config *initConfig + if err := json.NewDecoder(pipe).Decode(&config); err != nil { + return nil, err + } + if err := populateProcessEnvironment(config.Env); err != nil { + return nil, err + } + switch t { + case initSetns: + return &linuxSetnsInit{ + config: config, + }, nil + case initStandard: + return &linuxStandardInit{ + config: config, + }, nil + } + return nil, fmt.Errorf("unknown init type %q", t) +} + +// populateProcessEnvironment loads the provided environment variables into the +// current process's environment. 
+func populateProcessEnvironment(env []string) error { + for _, pair := range env { + p := strings.SplitN(pair, "=", 2) + if len(p) < 2 { + return fmt.Errorf("invalid environment '%v'", pair) + } + if err := os.Setenv(p[0], p[1]); err != nil { + return err + } + } + return nil +} + +// finalizeNamespace drops the caps, sets the correct user +// and working dir, and closes any leaked file descriptors +// before execing the command inside the namespace +func finalizeNamespace(config *initConfig) error { + // Ensure that all non-standard fds we may have accidentally + // inherited are marked close-on-exec so they stay out of the + // container + if err := utils.CloseExecFrom(3); err != nil { + return err + } + w, err := newCapWhitelist(config.Config.Capabilities) + if err != nil { + return err + } + // drop capabilities in bounding set before changing user + if err := w.dropBoundingSet(); err != nil { + return err + } + // preserve existing capabilities while we change users + if err := system.SetKeepCaps(); err != nil { + return err + } + if err := setupUser(config); err != nil { + return err + } + if err := system.ClearKeepCaps(); err != nil { + return err + } + // drop all other capabilities + if err := w.drop(); err != nil { + return err + } + if config.Cwd != "" { + if err := syscall.Chdir(config.Cwd); err != nil { + return err + } + } + return nil +} + +// joinExistingNamespaces gets all the namespace paths specified for the container and +// does a setns on the namespace fd so that the current process joins the namespace. 
+func joinExistingNamespaces(namespaces []configs.Namespace) error { + for _, ns := range namespaces { + if ns.Path != "" { + f, err := os.OpenFile(ns.Path, os.O_RDONLY, 0) + if err != nil { + return err + } + err = system.Setns(f.Fd(), uintptr(ns.Syscall())) + f.Close() + if err != nil { + return err + } + } + } + return nil +} + +// setupUser changes the groups, gid, and uid for the user inside the container +func setupUser(config *initConfig) error { + // Set up defaults. + defaultExecUser := user.ExecUser{ + Uid: syscall.Getuid(), + Gid: syscall.Getgid(), + Home: "/", + } + passwdPath, err := user.GetPasswdPath() + if err != nil { + return err + } + groupPath, err := user.GetGroupPath() + if err != nil { + return err + } + execUser, err := user.GetExecUserPath(config.User, &defaultExecUser, passwdPath, groupPath) + if err != nil { + return err + } + suppGroups := append(execUser.Sgids, config.Config.AdditionalGroups...) + if err := syscall.Setgroups(suppGroups); err != nil { + return err + } + if err := system.Setgid(execUser.Gid); err != nil { + return err + } + if err := system.Setuid(execUser.Uid); err != nil { + return err + } + // if we didn't get HOME already, set it based on the user's HOME + if envHome := os.Getenv("HOME"); envHome == "" { + if err := os.Setenv("HOME", execUser.Home); err != nil { + return err + } + } + return nil +} + +// setupNetwork sets up and initializes any network interface inside the container. 
+func setupNetwork(config *initConfig) error { + for _, config := range config.Networks { + strategy, err := getStrategy(config.Type) + if err != nil { + return err + } + if err := strategy.initialize(config); err != nil { + return err + } + } + return nil +} + +func setupRoute(config *configs.Config) error { + for _, config := range config.Routes { + if err := netlink.AddRoute(config.Destination, config.Source, config.Gateway, config.InterfaceName); err != nil { + return err + } + } + return nil +} + +func setupRlimits(config *configs.Config) error { + for _, rlimit := range config.Rlimits { + l := &syscall.Rlimit{Max: rlimit.Hard, Cur: rlimit.Soft} + if err := syscall.Setrlimit(rlimit.Type, l); err != nil { + return fmt.Errorf("error setting rlimit type %v: %v", rlimit.Type, err) + } + } + return nil +} + +// killCgroupProcesses freezes then iterates over all the processes inside the +// manager's cgroups sending a SIGKILL to each process then waiting for them to +// exit. +func killCgroupProcesses(m cgroups.Manager) error { + var procs []*os.Process + if err := m.Freeze(configs.Frozen); err != nil { + log.Warn(err) + } + pids, err := m.GetPids() + if err != nil { + m.Freeze(configs.Thawed) + return err + } + for _, pid := range pids { + if p, err := os.FindProcess(pid); err == nil { + procs = append(procs, p) + if err := p.Kill(); err != nil { + log.Warn(err) + } + } + } + if err := m.Freeze(configs.Thawed); err != nil { + log.Warn(err) + } + for _, p := range procs { + if _, err := p.Wait(); err != nil { + log.Warn(err) + } + } + return nil +} diff --git a/vendor/src/github.com/docker/libcontainer/integration/exec_test.go b/vendor/src/github.com/docker/libcontainer/integration/exec_test.go index fb1d1d7a10..b68cb739c3 100644 --- a/vendor/src/github.com/docker/libcontainer/integration/exec_test.go +++ b/vendor/src/github.com/docker/libcontainer/integration/exec_test.go @@ -1,34 +1,50 @@ package integration import ( + "bytes" + "io/ioutil" "os" "strings" 
"testing" "github.com/docker/libcontainer" + "github.com/docker/libcontainer/configs" ) func TestExecPS(t *testing.T) { + testExecPS(t, false) +} + +func TestUsernsExecPS(t *testing.T) { + if _, err := os.Stat("/proc/self/ns/user"); os.IsNotExist(err) { + t.Skip("userns is unsupported") + } + testExecPS(t, true) +} + +func testExecPS(t *testing.T, userns bool) { if testing.Short() { return } - - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } defer remove(rootfs) - config := newTemplateConfig(rootfs) - buffers, exitCode, err := runContainer(config, "", "ps") - if err != nil { - t.Fatal(err) + if userns { + config.UidMappings = []configs.IDMap{{0, 0, 1000}} + config.GidMappings = []configs.IDMap{{0, 0, 1000}} + config.Namespaces = append(config.Namespaces, configs.Namespace{Type: configs.NEWUSER}) } + buffers, exitCode, err := runContainer(config, "", "ps") + if err != nil { + t.Fatalf("%s: %s", buffers, err) + } if exitCode != 0 { t.Fatalf("exit code not 0. 
code %d stderr %q", exitCode, buffers.Stderr) } - lines := strings.Split(buffers.Stdout.String(), "\n") if len(lines) < 2 { t.Fatalf("more than one process running for output %q", buffers.Stdout.String()) @@ -45,7 +61,7 @@ func TestIPCPrivate(t *testing.T) { return } - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } @@ -76,7 +92,7 @@ func TestIPCHost(t *testing.T) { return } - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } @@ -88,7 +104,7 @@ func TestIPCHost(t *testing.T) { } config := newTemplateConfig(rootfs) - config.Namespaces.Remove(libcontainer.NEWIPC) + config.Namespaces.Remove(configs.NEWIPC) buffers, exitCode, err := runContainer(config, "", "readlink", "/proc/self/ns/ipc") if err != nil { t.Fatal(err) @@ -108,7 +124,7 @@ func TestIPCJoinPath(t *testing.T) { return } - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } @@ -120,7 +136,7 @@ func TestIPCJoinPath(t *testing.T) { } config := newTemplateConfig(rootfs) - config.Namespaces.Add(libcontainer.NEWIPC, "/proc/1/ns/ipc") + config.Namespaces.Add(configs.NEWIPC, "/proc/1/ns/ipc") buffers, exitCode, err := runContainer(config, "", "readlink", "/proc/self/ns/ipc") if err != nil { @@ -141,14 +157,14 @@ func TestIPCBadPath(t *testing.T) { return } - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } defer remove(rootfs) config := newTemplateConfig(rootfs) - config.Namespaces.Add(libcontainer.NEWIPC, "/proc/1/ns/ipcc") + config.Namespaces.Add(configs.NEWIPC, "/proc/1/ns/ipcc") _, _, err = runContainer(config, "", "true") if err == nil { @@ -161,7 +177,7 @@ func TestRlimit(t *testing.T) { return } - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } @@ -172,38 +188,289 @@ func TestRlimit(t *testing.T) { if err != nil { t.Fatal(err) } - if limit := strings.TrimSpace(out.Stdout.String()); limit != "1024" { - t.Fatalf("expected 
rlimit to be 1024, got %s", limit) + if limit := strings.TrimSpace(out.Stdout.String()); limit != "1025" { + t.Fatalf("expected rlimit to be 1025, got %s", limit) } } -func TestPIDNSPrivate(t *testing.T) { +func newTestRoot() (string, error) { + dir, err := ioutil.TempDir("", "libcontainer") + if err != nil { + return "", err + } + if err := os.MkdirAll(dir, 0700); err != nil { + return "", err + } + return dir, nil +} + +func waitProcess(p *libcontainer.Process, t *testing.T) { + status, err := p.Wait() + if err != nil { + t.Fatal(err) + } + if !status.Success() { + t.Fatal(status) + } +} + +func TestEnter(t *testing.T) { if testing.Short() { return } + root, err := newTestRoot() + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(root) - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } defer remove(rootfs) - l, err := os.Readlink("/proc/1/ns/pid") - if err != nil { - t.Fatal(err) - } - config := newTemplateConfig(rootfs) - buffers, exitCode, err := runContainer(config, "", "readlink", "/proc/self/ns/pid") + + factory, err := libcontainer.New(root, libcontainer.Cgroupfs) if err != nil { t.Fatal(err) } - if exitCode != 0 { - t.Fatalf("exit code not 0. 
code %d stderr %q", exitCode, buffers.Stderr) + container, err := factory.Create("test", config) + if err != nil { + t.Fatal(err) + } + defer container.Destroy() + + // Execute a first process in the container + stdinR, stdinW, err := os.Pipe() + if err != nil { + t.Fatal(err) } - if actual := strings.Trim(buffers.Stdout.String(), "\n"); actual == l { - t.Fatalf("pid link should be private to the container but equals host %q %q", actual, l) + var stdout, stdout2 bytes.Buffer + + pconfig := libcontainer.Process{ + Args: []string{"sh", "-c", "cat && readlink /proc/self/ns/pid"}, + Env: standardEnvironment, + Stdin: stdinR, + Stdout: &stdout, + } + err = container.Start(&pconfig) + stdinR.Close() + defer stdinW.Close() + if err != nil { + t.Fatal(err) + } + pid, err := pconfig.Pid() + if err != nil { + t.Fatal(err) + } + + // Execute another process in the container + stdinR2, stdinW2, err := os.Pipe() + if err != nil { + t.Fatal(err) + } + pconfig2 := libcontainer.Process{ + Env: standardEnvironment, + } + pconfig2.Args = []string{"sh", "-c", "cat && readlink /proc/self/ns/pid"} + pconfig2.Stdin = stdinR2 + pconfig2.Stdout = &stdout2 + + err = container.Start(&pconfig2) + stdinR2.Close() + defer stdinW2.Close() + if err != nil { + t.Fatal(err) + } + + pid2, err := pconfig2.Pid() + if err != nil { + t.Fatal(err) + } + + processes, err := container.Processes() + if err != nil { + t.Fatal(err) + } + + n := 0 + for i := range processes { + if processes[i] == pid || processes[i] == pid2 { + n++ + } + } + if n != 2 { + t.Fatal("unexpected number of processes", processes, pid, pid2) + } + + // Wait processes + stdinW2.Close() + waitProcess(&pconfig2, t) + + stdinW.Close() + waitProcess(&pconfig, t) + + // Check that both processes live in the same pidns + pidns := string(stdout.Bytes()) + if err != nil { + t.Fatal(err) + } + + pidns2 := string(stdout2.Bytes()) + if err != nil { + t.Fatal(err) + } + + if pidns != pidns2 { + t.Fatal("The second process isn't in the required 
pid namespace", pidns, pidns2) + } +} + +func TestProcessEnv(t *testing.T) { + if testing.Short() { + return + } + root, err := newTestRoot() + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(root) + + rootfs, err := newRootfs() + if err != nil { + t.Fatal(err) + } + defer remove(rootfs) + + config := newTemplateConfig(rootfs) + + factory, err := libcontainer.New(root, libcontainer.Cgroupfs) + if err != nil { + t.Fatal(err) + } + + container, err := factory.Create("test", config) + if err != nil { + t.Fatal(err) + } + defer container.Destroy() + + var stdout bytes.Buffer + pconfig := libcontainer.Process{ + Args: []string{"sh", "-c", "env"}, + Env: []string{ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", + "HOSTNAME=integration", + "TERM=xterm", + "FOO=BAR", + }, + Stdin: nil, + Stdout: &stdout, + } + err = container.Start(&pconfig) + if err != nil { + t.Fatal(err) + } + + // Wait for process + waitProcess(&pconfig, t) + + outputEnv := string(stdout.Bytes()) + if err != nil { + t.Fatal(err) + } + + // Check that the environment has the key/value pair we added + if !strings.Contains(outputEnv, "FOO=BAR") { + t.Fatal("Environment doesn't have the expected FOO=BAR key/value pair: ", outputEnv) + } + + // Make sure that HOME is set + if !strings.Contains(outputEnv, "HOME=/root") { + t.Fatal("Environment doesn't have HOME set: ", outputEnv) + } +} + +func TestFreeze(t *testing.T) { + if testing.Short() { + return + } + root, err := newTestRoot() + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(root) + + rootfs, err := newRootfs() + if err != nil { + t.Fatal(err) + } + defer remove(rootfs) + + config := newTemplateConfig(rootfs) + + factory, err := libcontainer.New(root, libcontainer.Cgroupfs) + if err != nil { + t.Fatal(err) + } + + container, err := factory.Create("test", config) + if err != nil { + t.Fatal(err) + } + defer container.Destroy() + + stdinR, stdinW, err := os.Pipe() + if err != nil { + t.Fatal(err) + } + + pconfig 
:= libcontainer.Process{ + Args: []string{"cat"}, + Env: standardEnvironment, + Stdin: stdinR, + } + err = container.Start(&pconfig) + stdinR.Close() + defer stdinW.Close() + if err != nil { + t.Fatal(err) + } + + pid, err := pconfig.Pid() + if err != nil { + t.Fatal(err) + } + + process, err := os.FindProcess(pid) + if err != nil { + t.Fatal(err) + } + + if err := container.Pause(); err != nil { + t.Fatal(err) + } + state, err := container.Status() + if err != nil { + t.Fatal(err) + } + if err := container.Resume(); err != nil { + t.Fatal(err) + } + if state != libcontainer.Paused { + t.Fatal("Unexpected state: ", state) + } + + stdinW.Close() + s, err := process.Wait() + if err != nil { + t.Fatal(err) + } + if !s.Success() { + t.Fatal(s.String()) } } diff --git a/vendor/src/github.com/docker/libcontainer/integration/execin_test.go b/vendor/src/github.com/docker/libcontainer/integration/execin_test.go index 86d9c5c260..252e6e415e 100644 --- a/vendor/src/github.com/docker/libcontainer/integration/execin_test.go +++ b/vendor/src/github.com/docker/libcontainer/integration/execin_test.go @@ -1,62 +1,70 @@ package integration import ( + "bytes" + "io" "os" - "os/exec" "strings" - "sync" "testing" + "time" "github.com/docker/libcontainer" - "github.com/docker/libcontainer/namespaces" ) func TestExecIn(t *testing.T) { if testing.Short() { return } - - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } defer remove(rootfs) - config := newTemplateConfig(rootfs) - if err := writeConfig(config); err != nil { - t.Fatalf("failed to write config %s", err) - } - - containerCmd, statePath, containerErr := startLongRunningContainer(config) - defer func() { - // kill the container - if containerCmd.Process != nil { - containerCmd.Process.Kill() - } - if err := <-containerErr; err != nil { - t.Fatal(err) - } - }() - - // start the exec process - state, err := libcontainer.GetState(statePath) + container, err := newContainer(config) if err != nil { 
- t.Fatalf("failed to get state %s", err) + t.Fatal(err) } - buffers := newStdBuffers() - execErr := make(chan error, 1) - go func() { - _, err := namespaces.ExecIn(config, state, []string{"ps"}, - os.Args[0], "exec", buffers.Stdin, buffers.Stdout, buffers.Stderr, - "", nil) - execErr <- err - }() - if err := <-execErr; err != nil { - t.Fatalf("exec finished with error %s", err) + defer container.Destroy() + + // Execute a first process in the container + stdinR, stdinW, err := os.Pipe() + if err != nil { + t.Fatal(err) + } + process := &libcontainer.Process{ + Args: []string{"cat"}, + Env: standardEnvironment, + Stdin: stdinR, + } + err = container.Start(process) + stdinR.Close() + defer stdinW.Close() + if err != nil { + t.Fatal(err) } + buffers := newStdBuffers() + ps := &libcontainer.Process{ + Args: []string{"ps"}, + Env: standardEnvironment, + Stdin: buffers.Stdin, + Stdout: buffers.Stdout, + Stderr: buffers.Stderr, + } + err = container.Start(ps) + if err != nil { + t.Fatal(err) + } + if _, err := ps.Wait(); err != nil { + t.Fatal(err) + } + stdinW.Close() + if _, err := process.Wait(); err != nil { + t.Log(err) + } out := buffers.Stdout.String() - if !strings.Contains(out, "sleep 10") || !strings.Contains(out, "ps") { + if !strings.Contains(out, "cat") || !strings.Contains(out, "ps") { t.Fatalf("unexpected running process, output %q", out) } } @@ -65,76 +73,244 @@ func TestExecInRlimit(t *testing.T) { if testing.Short() { return } - - rootfs, err := newRootFs() + rootfs, err := newRootfs() if err != nil { t.Fatal(err) } defer remove(rootfs) - config := newTemplateConfig(rootfs) - if err := writeConfig(config); err != nil { - t.Fatalf("failed to write config %s", err) - } - - containerCmd, statePath, containerErr := startLongRunningContainer(config) - defer func() { - // kill the container - if containerCmd.Process != nil { - containerCmd.Process.Kill() - } - if err := <-containerErr; err != nil { - t.Fatal(err) - } - }() - - // start the exec process - 
state, err := libcontainer.GetState(statePath) + container, err := newContainer(config) if err != nil { - t.Fatalf("failed to get state %s", err) + t.Fatal(err) } + defer container.Destroy() + + stdinR, stdinW, err := os.Pipe() + if err != nil { + t.Fatal(err) + } + process := &libcontainer.Process{ + Args: []string{"cat"}, + Env: standardEnvironment, + Stdin: stdinR, + } + err = container.Start(process) + stdinR.Close() + defer stdinW.Close() + if err != nil { + t.Fatal(err) + } + buffers := newStdBuffers() - execErr := make(chan error, 1) - go func() { - _, err := namespaces.ExecIn(config, state, []string{"/bin/sh", "-c", "ulimit -n"}, - os.Args[0], "exec", buffers.Stdin, buffers.Stdout, buffers.Stderr, - "", nil) - execErr <- err - }() - if err := <-execErr; err != nil { - t.Fatalf("exec finished with error %s", err) + ps := &libcontainer.Process{ + Args: []string{"/bin/sh", "-c", "ulimit -n"}, + Env: standardEnvironment, + Stdin: buffers.Stdin, + Stdout: buffers.Stdout, + Stderr: buffers.Stderr, + } + err = container.Start(ps) + if err != nil { + t.Fatal(err) + } + if _, err := ps.Wait(); err != nil { + t.Fatal(err) + } + stdinW.Close() + if _, err := process.Wait(); err != nil { + t.Log(err) } - out := buffers.Stdout.String() - if limit := strings.TrimSpace(out); limit != "1024" { - t.Fatalf("expected rlimit to be 1024, got %s", limit) + if limit := strings.TrimSpace(out); limit != "1025" { + t.Fatalf("expected rlimit to be 1025, got %s", limit) } } -// start a long-running container so we have time to inspect execin processes -func startLongRunningContainer(config *libcontainer.Config) (*exec.Cmd, string, chan error) { - containerErr := make(chan error, 1) - containerCmd := &exec.Cmd{} - var statePath string - - createCmd := func(container *libcontainer.Config, console, dataPath, init string, - pipe *os.File, args []string) *exec.Cmd { - containerCmd = namespaces.DefaultCreateCommand(container, console, dataPath, init, pipe, args) - statePath = dataPath - 
return containerCmd +func TestExecInError(t *testing.T) { + if testing.Short() { + return } + rootfs, err := newRootfs() + if err != nil { + t.Fatal(err) + } + defer remove(rootfs) + config := newTemplateConfig(rootfs) + container, err := newContainer(config) + if err != nil { + t.Fatal(err) + } + defer container.Destroy() - var containerStart sync.WaitGroup - containerStart.Add(1) - go func() { - buffers := newStdBuffers() - _, err := namespaces.Exec(config, - buffers.Stdin, buffers.Stdout, buffers.Stderr, - "", config.RootFs, []string{"sleep", "10"}, - createCmd, containerStart.Done) - containerErr <- err + // Execute a first process in the container + stdinR, stdinW, err := os.Pipe() + if err != nil { + t.Fatal(err) + } + process := &libcontainer.Process{ + Args: []string{"cat"}, + Env: standardEnvironment, + Stdin: stdinR, + } + err = container.Start(process) + stdinR.Close() + defer func() { + stdinW.Close() + if _, err := process.Wait(); err != nil { + t.Log(err) + } }() - containerStart.Wait() + if err != nil { + t.Fatal(err) + } - return containerCmd, statePath, containerErr + unexistent := &libcontainer.Process{ + Args: []string{"unexistent"}, + Env: standardEnvironment, + } + err = container.Start(unexistent) + if err == nil { + t.Fatal("Should be an error") + } + if !strings.Contains(err.Error(), "executable file not found") { + t.Fatalf("Should be error about not found executable, got %s", err) + } +} + +func TestExecInTTY(t *testing.T) { + if testing.Short() { + return + } + rootfs, err := newRootfs() + if err != nil { + t.Fatal(err) + } + defer remove(rootfs) + config := newTemplateConfig(rootfs) + container, err := newContainer(config) + if err != nil { + t.Fatal(err) + } + defer container.Destroy() + + // Execute a first process in the container + stdinR, stdinW, err := os.Pipe() + if err != nil { + t.Fatal(err) + } + process := &libcontainer.Process{ + Args: []string{"cat"}, + Env: standardEnvironment, + Stdin: stdinR, + } + err = 
container.Start(process) + stdinR.Close() + defer stdinW.Close() + if err != nil { + t.Fatal(err) + } + + var stdout bytes.Buffer + ps := &libcontainer.Process{ + Args: []string{"ps"}, + Env: standardEnvironment, + } + console, err := ps.NewConsole(0) + copy := make(chan struct{}) + go func() { + io.Copy(&stdout, console) + close(copy) + }() + if err != nil { + t.Fatal(err) + } + err = container.Start(ps) + if err != nil { + t.Fatal(err) + } + select { + case <-time.After(5 * time.Second): + t.Fatal("Waiting for copy timed out") + case <-copy: + } + if _, err := ps.Wait(); err != nil { + t.Fatal(err) + } + stdinW.Close() + if _, err := process.Wait(); err != nil { + t.Log(err) + } + out := stdout.String() + if !strings.Contains(out, "cat") || !strings.Contains(string(out), "ps") { + t.Fatalf("unexpected running process, output %q", out) + } +} + +func TestExecInEnvironment(t *testing.T) { + if testing.Short() { + return + } + rootfs, err := newRootfs() + if err != nil { + t.Fatal(err) + } + defer remove(rootfs) + config := newTemplateConfig(rootfs) + container, err := newContainer(config) + if err != nil { + t.Fatal(err) + } + defer container.Destroy() + + // Execute a first process in the container + stdinR, stdinW, err := os.Pipe() + if err != nil { + t.Fatal(err) + } + process := &libcontainer.Process{ + Args: []string{"cat"}, + Env: standardEnvironment, + Stdin: stdinR, + } + err = container.Start(process) + stdinR.Close() + defer stdinW.Close() + if err != nil { + t.Fatal(err) + } + + buffers := newStdBuffers() + process2 := &libcontainer.Process{ + Args: []string{"env"}, + Env: []string{ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", + "DEBUG=true", + "DEBUG=false", + "ENV=test", + }, + Stdin: buffers.Stdin, + Stdout: buffers.Stdout, + Stderr: buffers.Stderr, + } + err = container.Start(process2) + if err != nil { + t.Fatal(err) + } + if _, err := process2.Wait(); err != nil { + out := buffers.Stdout.String() + t.Fatal(err, out) + } + 
stdinW.Close() + if _, err := process.Wait(); err != nil { + t.Log(err) + } + out := buffers.Stdout.String() + // check execin's process environment + if !strings.Contains(out, "DEBUG=false") || + !strings.Contains(out, "ENV=test") || + !strings.Contains(out, "HOME=/root") || + !strings.Contains(out, "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin") || + strings.Contains(out, "DEBUG=true") { + t.Fatalf("unexpected running process, output %q", out) + } } diff --git a/vendor/src/github.com/docker/libcontainer/integration/init_test.go b/vendor/src/github.com/docker/libcontainer/integration/init_test.go index 3106a5fb1e..f11834de34 100644 --- a/vendor/src/github.com/docker/libcontainer/integration/init_test.go +++ b/vendor/src/github.com/docker/libcontainer/integration/init_test.go @@ -1,76 +1,27 @@ package integration import ( - "encoding/json" "log" "os" "runtime" "github.com/docker/libcontainer" - "github.com/docker/libcontainer/namespaces" - _ "github.com/docker/libcontainer/namespaces/nsenter" + _ "github.com/docker/libcontainer/nsenter" ) // init runs the libcontainer initialization code because of the busybox style needs // to work around the go runtime and the issues with forking func init() { - if len(os.Args) < 2 { + if len(os.Args) < 2 || os.Args[1] != "init" { return } - // handle init - if len(os.Args) >= 2 && os.Args[1] == "init" { - runtime.LockOSThread() - - container, err := loadConfig() - if err != nil { - log.Fatal(err) - } - - rootfs, err := os.Getwd() - if err != nil { - log.Fatal(err) - } - - if err := namespaces.Init(container, rootfs, "", os.NewFile(3, "pipe"), os.Args[3:]); err != nil { - log.Fatalf("unable to initialize for container: %s", err) - } - os.Exit(1) + runtime.GOMAXPROCS(1) + runtime.LockOSThread() + factory, err := libcontainer.New("") + if err != nil { + log.Fatalf("unable to initialize for container: %s", err) } - - // handle execin - if len(os.Args) >= 2 && os.Args[0] == "nsenter-exec" { - 
runtime.LockOSThread() - - // User args are passed after '--' in the command line. - userArgs := findUserArgs() - - config, err := loadConfigFromFd() - if err != nil { - log.Fatalf("docker-exec: unable to receive config from sync pipe: %s", err) - } - - if err := namespaces.FinalizeSetns(config, userArgs); err != nil { - log.Fatalf("docker-exec: failed to exec: %s", err) - } - os.Exit(1) + if err := factory.StartInitialization(3); err != nil { + log.Fatal(err) } } - -func findUserArgs() []string { - for i, a := range os.Args { - if a == "--" { - return os.Args[i+1:] - } - } - return []string{} -} - -// loadConfigFromFd loads a container's config from the sync pipe that is provided by -// fd 3 when running a process -func loadConfigFromFd() (*libcontainer.Config, error) { - var config *libcontainer.Config - if err := json.NewDecoder(os.NewFile(3, "child")).Decode(&config); err != nil { - return nil, err - } - return config, nil -} diff --git a/vendor/src/github.com/docker/libcontainer/integration/template_test.go b/vendor/src/github.com/docker/libcontainer/integration/template_test.go index 98846eb199..cb991b4170 100644 --- a/vendor/src/github.com/docker/libcontainer/integration/template_test.go +++ b/vendor/src/github.com/docker/libcontainer/integration/template_test.go @@ -3,19 +3,25 @@ package integration import ( "syscall" - "github.com/docker/libcontainer" - "github.com/docker/libcontainer/cgroups" - "github.com/docker/libcontainer/devices" + "github.com/docker/libcontainer/configs" ) +var standardEnvironment = []string{ + "HOME=/root", + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", + "HOSTNAME=integration", + "TERM=xterm", +} + +const defaultMountFlags = syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV + // newTemplateConfig returns a base template for running a container // // it uses a network strategy of just setting a loopback interface // and the default setup for devices -func newTemplateConfig(rootfs string) 
*libcontainer.Config { - return &libcontainer.Config{ - RootFs: rootfs, - Tty: false, +func newTemplateConfig(rootfs string) *configs.Config { + return &configs.Config{ + Rootfs: rootfs, Capabilities: []string{ "CHOWN", "DAC_OVERRIDE", @@ -32,41 +38,80 @@ func newTemplateConfig(rootfs string) *libcontainer.Config { "KILL", "AUDIT_WRITE", }, - Namespaces: libcontainer.Namespaces([]libcontainer.Namespace{ - {Type: libcontainer.NEWNS}, - {Type: libcontainer.NEWUTS}, - {Type: libcontainer.NEWIPC}, - {Type: libcontainer.NEWPID}, - {Type: libcontainer.NEWNET}, + Namespaces: configs.Namespaces([]configs.Namespace{ + {Type: configs.NEWNS}, + {Type: configs.NEWUTS}, + {Type: configs.NEWIPC}, + {Type: configs.NEWPID}, + {Type: configs.NEWNET}, }), - Cgroups: &cgroups.Cgroup{ + Cgroups: &configs.Cgroup{ + Name: "test", Parent: "integration", AllowAllDevices: false, - AllowedDevices: devices.DefaultAllowedDevices, + AllowedDevices: configs.DefaultAllowedDevices, }, - - MountConfig: &libcontainer.MountConfig{ - DeviceNodes: devices.DefaultAutoCreatedDevices, + MaskPaths: []string{ + "/proc/kcore", }, + ReadonlyPaths: []string{ + "/proc/sys", "/proc/sysrq-trigger", "/proc/irq", "/proc/bus", + }, + Devices: configs.DefaultAutoCreatedDevices, Hostname: "integration", - Env: []string{ - "HOME=/root", - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", - "HOSTNAME=integration", - "TERM=xterm", + Mounts: []*configs.Mount{ + { + Source: "proc", + Destination: "/proc", + Device: "proc", + Flags: defaultMountFlags, + }, + { + Source: "tmpfs", + Destination: "/dev", + Device: "tmpfs", + Flags: syscall.MS_NOSUID | syscall.MS_STRICTATIME, + Data: "mode=755", + }, + { + Source: "devpts", + Destination: "/dev/pts", + Device: "devpts", + Flags: syscall.MS_NOSUID | syscall.MS_NOEXEC, + Data: "newinstance,ptmxmode=0666,mode=0620,gid=5", + }, + { + Device: "tmpfs", + Source: "shm", + Destination: "/dev/shm", + Data: "mode=1777,size=65536k", + Flags: defaultMountFlags, + }, + { 
+ Source: "mqueue", + Destination: "/dev/mqueue", + Device: "mqueue", + Flags: defaultMountFlags, + }, + { + Source: "sysfs", + Destination: "/sys", + Device: "sysfs", + Flags: defaultMountFlags | syscall.MS_RDONLY, + }, }, - Networks: []*libcontainer.Network{ + Networks: []*configs.Network{ { Type: "loopback", Address: "127.0.0.1/0", Gateway: "localhost", }, }, - Rlimits: []libcontainer.Rlimit{ + Rlimits: []configs.Rlimit{ { Type: syscall.RLIMIT_NOFILE, - Hard: uint64(1024), - Soft: uint64(1024), + Hard: uint64(1025), + Soft: uint64(1025), }, }, } diff --git a/vendor/src/github.com/docker/libcontainer/integration/utils_test.go b/vendor/src/github.com/docker/libcontainer/integration/utils_test.go index 6393fb9982..c444eecfc5 100644 --- a/vendor/src/github.com/docker/libcontainer/integration/utils_test.go +++ b/vendor/src/github.com/docker/libcontainer/integration/utils_test.go @@ -2,15 +2,15 @@ package integration import ( "bytes" - "encoding/json" "fmt" "io/ioutil" "os" "os/exec" - "path/filepath" + "strings" + "syscall" "github.com/docker/libcontainer" - "github.com/docker/libcontainer/namespaces" + "github.com/docker/libcontainer/configs" ) func newStdBuffers() *stdBuffers { @@ -27,31 +27,19 @@ type stdBuffers struct { Stderr *bytes.Buffer } -func writeConfig(config *libcontainer.Config) error { - f, err := os.OpenFile(filepath.Join(config.RootFs, "container.json"), os.O_CREATE|os.O_RDWR|os.O_TRUNC, 0700) - if err != nil { - return err +func (b *stdBuffers) String() string { + s := []string{} + if b.Stderr != nil { + s = append(s, b.Stderr.String()) } - defer f.Close() - return json.NewEncoder(f).Encode(config) + if b.Stdout != nil { + s = append(s, b.Stdout.String()) + } + return strings.Join(s, "|") } -func loadConfig() (*libcontainer.Config, error) { - f, err := os.Open(filepath.Join(os.Getenv("data_path"), "container.json")) - if err != nil { - return nil, err - } - defer f.Close() - - var container *libcontainer.Config - if err := 
json.NewDecoder(f).Decode(&container); err != nil { - return nil, err - } - return container, nil -} - -// newRootFs creates a new tmp directory and copies the busybox root filesystem -func newRootFs() (string, error) { +// newRootfs creates a new tmp directory and copies the busybox root filesystem +func newRootfs() (string, error) { dir, err := ioutil.TempDir("", "") if err != nil { return "", err @@ -79,17 +67,51 @@ func copyBusybox(dest string) error { return nil } +func newContainer(config *configs.Config) (libcontainer.Container, error) { + factory, err := libcontainer.New(".", + libcontainer.InitArgs(os.Args[0], "init", "--"), + libcontainer.Cgroupfs, + ) + if err != nil { + return nil, err + } + return factory.Create("testCT", config) +} + // runContainer runs the container with the specific config and arguments // // buffers are returned containing the STDOUT and STDERR output for the run // along with the exit code and any go error -func runContainer(config *libcontainer.Config, console string, args ...string) (buffers *stdBuffers, exitCode int, err error) { - if err := writeConfig(config); err != nil { +func runContainer(config *configs.Config, console string, args ...string) (buffers *stdBuffers, exitCode int, err error) { + container, err := newContainer(config) + if err != nil { return nil, -1, err } - + defer container.Destroy() buffers = newStdBuffers() - exitCode, err = namespaces.Exec(config, buffers.Stdin, buffers.Stdout, buffers.Stderr, - console, config.RootFs, args, namespaces.DefaultCreateCommand, nil) + process := &libcontainer.Process{ + Args: args, + Env: standardEnvironment, + Stdin: buffers.Stdin, + Stdout: buffers.Stdout, + Stderr: buffers.Stderr, + } + + err = container.Start(process) + if err != nil { + return nil, -1, err + } + ps, err := process.Wait() + if err != nil { + return nil, -1, err + } + status := ps.Sys().(syscall.WaitStatus) + if status.Exited() { + exitCode = status.ExitStatus() + } else if status.Signaled() { + 
exitCode = -int(status.Signal()) + } else { + return nil, -1, err + } return } diff --git a/vendor/src/github.com/docker/libcontainer/mount/init.go b/vendor/src/github.com/docker/libcontainer/mount/init.go deleted file mode 100644 index a2c3d52026..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/init.go +++ /dev/null @@ -1,209 +0,0 @@ -// +build linux - -package mount - -import ( - "fmt" - "os" - "path/filepath" - "syscall" - - "github.com/docker/libcontainer/label" - "github.com/docker/libcontainer/mount/nodes" -) - -// default mount point flags -const defaultMountFlags = syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV - -type mount struct { - source string - path string - device string - flags int - data string -} - -// InitializeMountNamespace sets up the devices, mount points, and filesystems for use inside a -// new mount namespace. -func InitializeMountNamespace(rootfs, console string, sysReadonly bool, mountConfig *MountConfig) error { - var ( - err error - flag = syscall.MS_PRIVATE - ) - - if mountConfig.NoPivotRoot { - flag = syscall.MS_SLAVE - } - - if err := syscall.Mount("", "/", "", uintptr(flag|syscall.MS_REC), ""); err != nil { - return fmt.Errorf("mounting / with flags %X %s", (flag | syscall.MS_REC), err) - } - - if err := syscall.Mount(rootfs, rootfs, "bind", syscall.MS_BIND|syscall.MS_REC, ""); err != nil { - return fmt.Errorf("mouting %s as bind %s", rootfs, err) - } - - if err := mountSystem(rootfs, sysReadonly, mountConfig); err != nil { - return fmt.Errorf("mount system %s", err) - } - - // apply any user specified mounts within the new mount namespace - for _, m := range mountConfig.Mounts { - if err := m.Mount(rootfs, mountConfig.MountLabel); err != nil { - return err - } - } - - if err := nodes.CreateDeviceNodes(rootfs, mountConfig.DeviceNodes); err != nil { - return fmt.Errorf("create device nodes %s", err) - } - - if err := SetupPtmx(rootfs, console, mountConfig.MountLabel); err != nil { - return err - } - - // 
stdin, stdout and stderr could be pointing to /dev/null from parent namespace. - // Re-open them inside this namespace. - if err := reOpenDevNull(rootfs); err != nil { - return fmt.Errorf("Failed to reopen /dev/null %s", err) - } - - if err := setupDevSymlinks(rootfs); err != nil { - return fmt.Errorf("dev symlinks %s", err) - } - - if err := syscall.Chdir(rootfs); err != nil { - return fmt.Errorf("chdir into %s %s", rootfs, err) - } - - if mountConfig.NoPivotRoot { - err = MsMoveRoot(rootfs) - } else { - err = PivotRoot(rootfs) - } - - if err != nil { - return err - } - - if mountConfig.ReadonlyFs { - if err := SetReadonly(); err != nil { - return fmt.Errorf("set readonly %s", err) - } - } - - syscall.Umask(0022) - - return nil -} - -// mountSystem sets up linux specific system mounts like mqueue, sys, proc, shm, and devpts -// inside the mount namespace -func mountSystem(rootfs string, sysReadonly bool, mountConfig *MountConfig) error { - for _, m := range newSystemMounts(rootfs, mountConfig.MountLabel, sysReadonly) { - if err := os.MkdirAll(m.path, 0755); err != nil && !os.IsExist(err) { - return fmt.Errorf("mkdirall %s %s", m.path, err) - } - if err := syscall.Mount(m.source, m.path, m.device, uintptr(m.flags), m.data); err != nil { - return fmt.Errorf("mounting %s into %s %s", m.source, m.path, err) - } - } - return nil -} - -func createIfNotExists(path string, isDir bool) error { - if _, err := os.Stat(path); err != nil { - if os.IsNotExist(err) { - if isDir { - if err := os.MkdirAll(path, 0755); err != nil { - return err - } - } else { - if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { - return err - } - f, err := os.OpenFile(path, os.O_CREATE, 0755) - if err != nil { - return err - } - f.Close() - } - } - } - return nil -} - -func setupDevSymlinks(rootfs string) error { - var links = [][2]string{ - {"/proc/self/fd", "/dev/fd"}, - {"/proc/self/fd/0", "/dev/stdin"}, - {"/proc/self/fd/1", "/dev/stdout"}, - {"/proc/self/fd/2", "/dev/stderr"}, - } - 
- // kcore support can be toggled with CONFIG_PROC_KCORE; only create a symlink - // in /dev if it exists in /proc. - if _, err := os.Stat("/proc/kcore"); err == nil { - links = append(links, [2]string{"/proc/kcore", "/dev/kcore"}) - } - - for _, link := range links { - var ( - src = link[0] - dst = filepath.Join(rootfs, link[1]) - ) - - if err := os.Symlink(src, dst); err != nil && !os.IsExist(err) { - return fmt.Errorf("symlink %s %s %s", src, dst, err) - } - } - - return nil -} - -// TODO: this is crappy right now and should be cleaned up with a better way of handling system and -// standard bind mounts allowing them to be more dynamic -func newSystemMounts(rootfs, mountLabel string, sysReadonly bool) []mount { - systemMounts := []mount{ - {source: "proc", path: filepath.Join(rootfs, "proc"), device: "proc", flags: defaultMountFlags}, - {source: "tmpfs", path: filepath.Join(rootfs, "dev"), device: "tmpfs", flags: syscall.MS_NOSUID | syscall.MS_STRICTATIME, data: label.FormatMountLabel("mode=755", mountLabel)}, - {source: "shm", path: filepath.Join(rootfs, "dev", "shm"), device: "tmpfs", flags: defaultMountFlags, data: label.FormatMountLabel("mode=1777,size=65536k", mountLabel)}, - {source: "mqueue", path: filepath.Join(rootfs, "dev", "mqueue"), device: "mqueue", flags: defaultMountFlags}, - {source: "devpts", path: filepath.Join(rootfs, "dev", "pts"), device: "devpts", flags: syscall.MS_NOSUID | syscall.MS_NOEXEC, data: label.FormatMountLabel("newinstance,ptmxmode=0666,mode=620,gid=5", mountLabel)}, - } - - sysMountFlags := defaultMountFlags - if sysReadonly { - sysMountFlags |= syscall.MS_RDONLY - } - - systemMounts = append(systemMounts, mount{source: "sysfs", path: filepath.Join(rootfs, "sys"), device: "sysfs", flags: sysMountFlags}) - - return systemMounts -} - -// Is stdin, stdout or stderr were to be pointing to '/dev/null', -// this method will make them point to '/dev/null' from within this namespace. 
-func reOpenDevNull(rootfs string) error { - var stat, devNullStat syscall.Stat_t - file, err := os.Open(filepath.Join(rootfs, "/dev/null")) - if err != nil { - return fmt.Errorf("Failed to open /dev/null - %s", err) - } - defer file.Close() - if err = syscall.Fstat(int(file.Fd()), &devNullStat); err != nil { - return fmt.Errorf("Failed to stat /dev/null - %s", err) - } - for fd := 0; fd < 3; fd++ { - if err = syscall.Fstat(fd, &stat); err != nil { - return fmt.Errorf("Failed to stat fd %d - %s", fd, err) - } - if stat.Rdev == devNullStat.Rdev { - // Close and re-open the fd. - if err = syscall.Dup2(int(file.Fd()), fd); err != nil { - return fmt.Errorf("Failed to dup fd %d to fd %d - %s", file.Fd(), fd, err) - } - } - } - return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/mount.go b/vendor/src/github.com/docker/libcontainer/mount/mount.go deleted file mode 100644 index c1b424214f..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/mount.go +++ /dev/null @@ -1,109 +0,0 @@ -package mount - -import ( - "fmt" - "os" - "path/filepath" - "syscall" - - "github.com/docker/docker/pkg/symlink" - "github.com/docker/libcontainer/label" -) - -type Mount struct { - Type string `json:"type,omitempty"` - Source string `json:"source,omitempty"` // Source path, in the host namespace - Destination string `json:"destination,omitempty"` // Destination path, in the container - Writable bool `json:"writable,omitempty"` - Relabel string `json:"relabel,omitempty"` // Relabel source if set, "z" indicates shared, "Z" indicates unshared - Private bool `json:"private,omitempty"` - Slave bool `json:"slave,omitempty"` -} - -func (m *Mount) Mount(rootfs, mountLabel string) error { - switch m.Type { - case "bind": - return m.bindMount(rootfs, mountLabel) - case "tmpfs": - return m.tmpfsMount(rootfs, mountLabel) - default: - return fmt.Errorf("unsupported mount type %s for %s", m.Type, m.Destination) - } -} - -func (m *Mount) bindMount(rootfs, mountLabel 
string) error { - var ( - flags = syscall.MS_BIND | syscall.MS_REC - dest = filepath.Join(rootfs, m.Destination) - ) - - if !m.Writable { - flags = flags | syscall.MS_RDONLY - } - - if m.Slave { - flags = flags | syscall.MS_SLAVE - } - - stat, err := os.Stat(m.Source) - if err != nil { - return err - } - - // FIXME: (crosbymichael) This does not belong here and should be done a layer above - dest, err = symlink.FollowSymlinkInScope(dest, rootfs) - if err != nil { - return err - } - - if err := createIfNotExists(dest, stat.IsDir()); err != nil { - return fmt.Errorf("creating new bind mount target %s", err) - } - - if err := syscall.Mount(m.Source, dest, "bind", uintptr(flags), ""); err != nil { - return fmt.Errorf("mounting %s into %s %s", m.Source, dest, err) - } - - if !m.Writable { - if err := syscall.Mount(m.Source, dest, "bind", uintptr(flags|syscall.MS_REMOUNT), ""); err != nil { - return fmt.Errorf("remounting %s into %s %s", m.Source, dest, err) - } - } - - if m.Relabel != "" { - if err := label.Relabel(m.Source, mountLabel, m.Relabel); err != nil { - return fmt.Errorf("relabeling %s to %s %s", m.Source, mountLabel, err) - } - } - - if m.Private { - if err := syscall.Mount("", dest, "none", uintptr(syscall.MS_PRIVATE), ""); err != nil { - return fmt.Errorf("mounting %s private %s", dest, err) - } - } - - return nil -} - -func (m *Mount) tmpfsMount(rootfs, mountLabel string) error { - var ( - err error - l = label.FormatMountLabel("", mountLabel) - dest = filepath.Join(rootfs, m.Destination) - ) - - // FIXME: (crosbymichael) This does not belong here and should be done a layer above - if dest, err = symlink.FollowSymlinkInScope(dest, rootfs); err != nil { - return err - } - - if err := createIfNotExists(dest, true); err != nil { - return fmt.Errorf("creating new tmpfs mount target %s", err) - } - - if err := syscall.Mount("tmpfs", dest, "tmpfs", uintptr(defaultMountFlags), l); err != nil { - return fmt.Errorf("%s mounting %s in tmpfs", err, dest) - } - - 
return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/mount_config.go b/vendor/src/github.com/docker/libcontainer/mount/mount_config.go deleted file mode 100644 index eef9b8ce4d..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/mount_config.go +++ /dev/null @@ -1,28 +0,0 @@ -package mount - -import ( - "errors" - - "github.com/docker/libcontainer/devices" -) - -var ErrUnsupported = errors.New("Unsupported method") - -type MountConfig struct { - // NoPivotRoot will use MS_MOVE and a chroot to jail the process into the container's rootfs - // This is a common option when the container is running in ramdisk - NoPivotRoot bool `json:"no_pivot_root,omitempty"` - - // ReadonlyFs will remount the container's rootfs as readonly where only externally mounted - // bind mounts are writtable - ReadonlyFs bool `json:"readonly_fs,omitempty"` - - // Mounts specify additional source and destination paths that will be mounted inside the container's - // rootfs and mount namespace if specified - Mounts []*Mount `json:"mounts,omitempty"` - - // The device nodes that should be automatically created within the container upon container start. Note, make sure that the node is marked as allowed in the cgroup as well! - DeviceNodes []*devices.Device `json:"device_nodes,omitempty"` - - MountLabel string `json:"mount_label,omitempty"` -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/msmoveroot.go b/vendor/src/github.com/docker/libcontainer/mount/msmoveroot.go deleted file mode 100644 index 94afd3a99c..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/msmoveroot.go +++ /dev/null @@ -1,20 +0,0 @@ -// +build linux - -package mount - -import ( - "fmt" - "syscall" -) - -func MsMoveRoot(rootfs string) error { - if err := syscall.Mount(rootfs, "/", "", syscall.MS_MOVE, ""); err != nil { - return fmt.Errorf("mount move %s into / %s", rootfs, err) - } - - if err := syscall.Chroot("."); err != nil { - return fmt.Errorf("chroot . 
%s", err) - } - - return syscall.Chdir("/") -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/nodes/nodes.go b/vendor/src/github.com/docker/libcontainer/mount/nodes/nodes.go deleted file mode 100644 index 322c0c0ee2..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/nodes/nodes.go +++ /dev/null @@ -1,57 +0,0 @@ -// +build linux - -package nodes - -import ( - "fmt" - "os" - "path/filepath" - "syscall" - - "github.com/docker/libcontainer/devices" -) - -// Create the device nodes in the container. -func CreateDeviceNodes(rootfs string, nodesToCreate []*devices.Device) error { - oldMask := syscall.Umask(0000) - defer syscall.Umask(oldMask) - - for _, node := range nodesToCreate { - if err := CreateDeviceNode(rootfs, node); err != nil { - return err - } - } - return nil -} - -// Creates the device node in the rootfs of the container. -func CreateDeviceNode(rootfs string, node *devices.Device) error { - var ( - dest = filepath.Join(rootfs, node.Path) - parent = filepath.Dir(dest) - ) - - if err := os.MkdirAll(parent, 0755); err != nil { - return err - } - - fileMode := node.FileMode - switch node.Type { - case 'c': - fileMode |= syscall.S_IFCHR - case 'b': - fileMode |= syscall.S_IFBLK - default: - return fmt.Errorf("%c is not a valid device type for device %s", node.Type, node.Path) - } - - if err := syscall.Mknod(dest, uint32(fileMode), devices.Mkdev(node.MajorNumber, node.MinorNumber)); err != nil && !os.IsExist(err) { - return fmt.Errorf("mknod %s %s", node.Path, err) - } - - if err := syscall.Chown(dest, int(node.Uid), int(node.Gid)); err != nil { - return fmt.Errorf("chown %s to %d:%d", node.Path, node.Uid, node.Gid) - } - - return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/nodes/nodes_unsupported.go b/vendor/src/github.com/docker/libcontainer/mount/nodes/nodes_unsupported.go deleted file mode 100644 index 83660715d4..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/nodes/nodes_unsupported.go +++ 
/dev/null @@ -1,13 +0,0 @@ -// +build !linux - -package nodes - -import ( - "errors" - - "github.com/docker/libcontainer/devices" -) - -func CreateDeviceNodes(rootfs string, nodesToCreate []*devices.Device) error { - return errors.New("Unsupported method") -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/pivotroot.go b/vendor/src/github.com/docker/libcontainer/mount/pivotroot.go deleted file mode 100644 index a88ed4a84c..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/pivotroot.go +++ /dev/null @@ -1,34 +0,0 @@ -// +build linux - -package mount - -import ( - "fmt" - "io/ioutil" - "os" - "path/filepath" - "syscall" -) - -func PivotRoot(rootfs string) error { - pivotDir, err := ioutil.TempDir(rootfs, ".pivot_root") - if err != nil { - return fmt.Errorf("can't create pivot_root dir %s, error %v", pivotDir, err) - } - - if err := syscall.PivotRoot(rootfs, pivotDir); err != nil { - return fmt.Errorf("pivot_root %s", err) - } - - if err := syscall.Chdir("/"); err != nil { - return fmt.Errorf("chdir / %s", err) - } - - // path to pivot dir now changed, update - pivotDir = filepath.Join("/", filepath.Base(pivotDir)) - if err := syscall.Unmount(pivotDir, syscall.MNT_DETACH); err != nil { - return fmt.Errorf("unmount pivot_root dir %s", err) - } - - return os.Remove(pivotDir) -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/ptmx.go b/vendor/src/github.com/docker/libcontainer/mount/ptmx.go deleted file mode 100644 index c316481adf..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/ptmx.go +++ /dev/null @@ -1,30 +0,0 @@ -// +build linux - -package mount - -import ( - "fmt" - "os" - "path/filepath" - - "github.com/docker/libcontainer/console" -) - -func SetupPtmx(rootfs, consolePath, mountLabel string) error { - ptmx := filepath.Join(rootfs, "dev/ptmx") - if err := os.Remove(ptmx); err != nil && !os.IsNotExist(err) { - return err - } - - if err := os.Symlink("pts/ptmx", ptmx); err != nil { - return 
fmt.Errorf("symlink dev ptmx %s", err) - } - - if consolePath != "" { - if err := console.Setup(rootfs, consolePath, mountLabel); err != nil { - return err - } - } - - return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/readonly.go b/vendor/src/github.com/docker/libcontainer/mount/readonly.go deleted file mode 100644 index 9b4a6f704c..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/readonly.go +++ /dev/null @@ -1,11 +0,0 @@ -// +build linux - -package mount - -import ( - "syscall" -) - -func SetReadonly() error { - return syscall.Mount("/", "/", "bind", syscall.MS_BIND|syscall.MS_REMOUNT|syscall.MS_RDONLY|syscall.MS_REC, "") -} diff --git a/vendor/src/github.com/docker/libcontainer/mount/remount.go b/vendor/src/github.com/docker/libcontainer/mount/remount.go deleted file mode 100644 index 99a01209d1..0000000000 --- a/vendor/src/github.com/docker/libcontainer/mount/remount.go +++ /dev/null @@ -1,31 +0,0 @@ -// +build linux - -package mount - -import "syscall" - -func RemountProc() error { - if err := syscall.Unmount("/proc", syscall.MNT_DETACH); err != nil { - return err - } - - if err := syscall.Mount("proc", "/proc", "proc", uintptr(defaultMountFlags), ""); err != nil { - return err - } - - return nil -} - -func RemountSys() error { - if err := syscall.Unmount("/sys", syscall.MNT_DETACH); err != nil { - if err != syscall.EINVAL { - return err - } - } else { - if err := syscall.Mount("sysfs", "/sys", "sysfs", uintptr(defaultMountFlags), ""); err != nil { - return err - } - } - - return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/create.go b/vendor/src/github.com/docker/libcontainer/namespaces/create.go deleted file mode 100644 index b6418b6e9f..0000000000 --- a/vendor/src/github.com/docker/libcontainer/namespaces/create.go +++ /dev/null @@ -1,10 +0,0 @@ -package namespaces - -import ( - "os" - "os/exec" - - "github.com/docker/libcontainer" -) - -type CreateCommand func(container 
*libcontainer.Config, console, dataPath, init string, childPipe *os.File, args []string) *exec.Cmd diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/exec.go b/vendor/src/github.com/docker/libcontainer/namespaces/exec.go deleted file mode 100644 index ff00396979..0000000000 --- a/vendor/src/github.com/docker/libcontainer/namespaces/exec.go +++ /dev/null @@ -1,229 +0,0 @@ -// +build linux - -package namespaces - -import ( - "encoding/json" - "io" - "os" - "os/exec" - "syscall" - - "github.com/docker/libcontainer" - "github.com/docker/libcontainer/cgroups" - "github.com/docker/libcontainer/cgroups/fs" - "github.com/docker/libcontainer/cgroups/systemd" - "github.com/docker/libcontainer/network" - "github.com/docker/libcontainer/system" -) - -const ( - EXIT_SIGNAL_OFFSET = 128 -) - -// TODO(vishh): This is part of the libcontainer API and it does much more than just namespaces related work. -// Move this to libcontainer package. -// Exec performs setup outside of a namespace so that a container can be -// executed. Exec is a high level function for working with container namespaces. 
-func Exec(container *libcontainer.Config, stdin io.Reader, stdout, stderr io.Writer, console, dataPath string, args []string, createCommand CreateCommand, startCallback func()) (int, error) { - var err error - - // create a pipe so that we can syncronize with the namespaced process and - // pass the state and configuration to the child process - parent, child, err := newInitPipe() - if err != nil { - return -1, err - } - defer parent.Close() - - command := createCommand(container, console, dataPath, os.Args[0], child, args) - // Note: these are only used in non-tty mode - // if there is a tty for the container it will be opened within the namespace and the - // fds will be duped to stdin, stdiout, and stderr - command.Stdin = stdin - command.Stdout = stdout - command.Stderr = stderr - - if err := command.Start(); err != nil { - child.Close() - return -1, err - } - child.Close() - - wait := func() (*os.ProcessState, error) { - ps, err := command.Process.Wait() - // we should kill all processes in cgroup when init is died if we use - // host PID namespace - if !container.Namespaces.Contains(libcontainer.NEWPID) { - killAllPids(container) - } - return ps, err - } - - terminate := func(terr error) (int, error) { - // TODO: log the errors for kill and wait - command.Process.Kill() - wait() - return -1, terr - } - - started, err := system.GetProcessStartTime(command.Process.Pid) - if err != nil { - return terminate(err) - } - - // Do this before syncing with child so that no children - // can escape the cgroup - cgroupPaths, err := SetupCgroups(container, command.Process.Pid) - if err != nil { - return terminate(err) - } - defer cgroups.RemovePaths(cgroupPaths) - - var networkState network.NetworkState - if err := InitializeNetworking(container, command.Process.Pid, &networkState); err != nil { - return terminate(err) - } - // send the state to the container's init process then shutdown writes for the parent - if err := json.NewEncoder(parent).Encode(networkState); err 
!= nil { - return terminate(err) - } - // shutdown writes for the parent side of the pipe - if err := syscall.Shutdown(int(parent.Fd()), syscall.SHUT_WR); err != nil { - return terminate(err) - } - - state := &libcontainer.State{ - InitPid: command.Process.Pid, - InitStartTime: started, - NetworkState: networkState, - CgroupPaths: cgroupPaths, - } - - if err := libcontainer.SaveState(dataPath, state); err != nil { - return terminate(err) - } - defer libcontainer.DeleteState(dataPath) - - // wait for the child process to fully complete and receive an error message - // if one was encoutered - var ierr *initError - if err := json.NewDecoder(parent).Decode(&ierr); err != nil && err != io.EOF { - return terminate(err) - } - if ierr != nil { - return terminate(ierr) - } - - if startCallback != nil { - startCallback() - } - - ps, err := wait() - if err != nil { - if _, ok := err.(*exec.ExitError); !ok { - return -1, err - } - } - // waiting for pipe flushing - command.Wait() - - waitStatus := ps.Sys().(syscall.WaitStatus) - if waitStatus.Signaled() { - return EXIT_SIGNAL_OFFSET + int(waitStatus.Signal()), nil - } - return waitStatus.ExitStatus(), nil -} - -// killAllPids itterates over all of the container's processes -// sending a SIGKILL to each process. 
-func killAllPids(container *libcontainer.Config) error { - var ( - procs []*os.Process - freeze = fs.Freeze - getPids = fs.GetPids - ) - if systemd.UseSystemd() { - freeze = systemd.Freeze - getPids = systemd.GetPids - } - freeze(container.Cgroups, cgroups.Frozen) - pids, err := getPids(container.Cgroups) - if err != nil { - return err - } - for _, pid := range pids { - // TODO: log err without aborting if we are unable to find - // a single PID - if p, err := os.FindProcess(pid); err == nil { - procs = append(procs, p) - p.Kill() - } - } - freeze(container.Cgroups, cgroups.Thawed) - for _, p := range procs { - p.Wait() - } - return err -} - -// DefaultCreateCommand will return an exec.Cmd with the Cloneflags set to the proper namespaces -// defined on the container's configuration and use the current binary as the init with the -// args provided -// -// console: the /dev/console to setup inside the container -// init: the program executed inside the namespaces -// root: the path to the container json file and information -// pipe: sync pipe to synchronize the parent and child processes -// args: the arguments to pass to the container to run as the user's program -func DefaultCreateCommand(container *libcontainer.Config, console, dataPath, init string, pipe *os.File, args []string) *exec.Cmd { - // get our binary name from arg0 so we can always reexec ourself - env := []string{ - "console=" + console, - "pipe=3", - "data_path=" + dataPath, - } - - command := exec.Command(init, append([]string{"init", "--"}, args...)...) - // make sure the process is executed inside the context of the rootfs - command.Dir = container.RootFs - command.Env = append(os.Environ(), env...) 
- - if command.SysProcAttr == nil { - command.SysProcAttr = &syscall.SysProcAttr{} - } - command.SysProcAttr.Cloneflags = uintptr(GetNamespaceFlags(container.Namespaces)) - - command.SysProcAttr.Pdeathsig = syscall.SIGKILL - command.ExtraFiles = []*os.File{pipe} - - return command -} - -// SetupCgroups applies the cgroup restrictions to the process running in the container based -// on the container's configuration -func SetupCgroups(container *libcontainer.Config, nspid int) (map[string]string, error) { - if container.Cgroups != nil { - c := container.Cgroups - if systemd.UseSystemd() { - return systemd.Apply(c, nspid) - } - return fs.Apply(c, nspid) - } - return map[string]string{}, nil -} - -// InitializeNetworking creates the container's network stack outside of the namespace and moves -// interfaces into the container's net namespaces if necessary -func InitializeNetworking(container *libcontainer.Config, nspid int, networkState *network.NetworkState) error { - for _, config := range container.Networks { - strategy, err := network.GetStrategy(config.Type) - if err != nil { - return err - } - if err := strategy.Create((*network.Network)(config), nspid, networkState); err != nil { - return err - } - } - return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/execin.go b/vendor/src/github.com/docker/libcontainer/namespaces/execin.go deleted file mode 100644 index ddff5c3a9f..0000000000 --- a/vendor/src/github.com/docker/libcontainer/namespaces/execin.go +++ /dev/null @@ -1,132 +0,0 @@ -// +build linux - -package namespaces - -import ( - "encoding/json" - "fmt" - "io" - "os" - "os/exec" - "path/filepath" - "strconv" - "syscall" - - "github.com/docker/libcontainer" - "github.com/docker/libcontainer/apparmor" - "github.com/docker/libcontainer/cgroups" - "github.com/docker/libcontainer/label" - "github.com/docker/libcontainer/system" -) - -// ExecIn reexec's the initPath with the argv 0 rewrite to "nsenter" so that it is able to run the -// 
setns code in a single threaded environment joining the existing containers' namespaces. -func ExecIn(container *libcontainer.Config, state *libcontainer.State, userArgs []string, initPath, action string, - stdin io.Reader, stdout, stderr io.Writer, console string, startCallback func(*exec.Cmd)) (int, error) { - - args := []string{fmt.Sprintf("nsenter-%s", action), "--nspid", strconv.Itoa(state.InitPid)} - - if console != "" { - args = append(args, "--console", console) - } - - cmd := &exec.Cmd{ - Path: initPath, - Args: append(args, append([]string{"--"}, userArgs...)...), - } - - if filepath.Base(initPath) == initPath { - if lp, err := exec.LookPath(initPath); err == nil { - cmd.Path = lp - } - } - - parent, child, err := newInitPipe() - if err != nil { - return -1, err - } - defer parent.Close() - - // Note: these are only used in non-tty mode - // if there is a tty for the container it will be opened within the namespace and the - // fds will be duped to stdin, stdiout, and stderr - cmd.Stdin = stdin - cmd.Stdout = stdout - cmd.Stderr = stderr - cmd.ExtraFiles = []*os.File{child} - - if err := cmd.Start(); err != nil { - child.Close() - return -1, err - } - child.Close() - - terminate := func(terr error) (int, error) { - // TODO: log the errors for kill and wait - cmd.Process.Kill() - cmd.Wait() - return -1, terr - } - - // Enter cgroups. - if err := EnterCgroups(state, cmd.Process.Pid); err != nil { - return terminate(err) - } - - // finish cgroups' setup, unblock the child process. 
- if _, err := parent.WriteString("1"); err != nil { - return terminate(err) - } - - if err := json.NewEncoder(parent).Encode(container); err != nil { - return terminate(err) - } - - if startCallback != nil { - startCallback(cmd) - } - - if err := cmd.Wait(); err != nil { - if _, ok := err.(*exec.ExitError); !ok { - return -1, err - } - } - return cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus(), nil -} - -// Finalize expects that the setns calls have been setup and that is has joined an -// existing namespace -func FinalizeSetns(container *libcontainer.Config, args []string) error { - // clear the current processes env and replace it with the environment defined on the container - if err := LoadContainerEnvironment(container); err != nil { - return err - } - - if err := setupRlimits(container); err != nil { - return fmt.Errorf("setup rlimits %s", err) - } - - if err := FinalizeNamespace(container); err != nil { - return err - } - - if err := apparmor.ApplyProfile(container.AppArmorProfile); err != nil { - return fmt.Errorf("set apparmor profile %s: %s", container.AppArmorProfile, err) - } - - if container.ProcessLabel != "" { - if err := label.SetProcessLabel(container.ProcessLabel); err != nil { - return err - } - } - - if err := system.Execv(args[0], args[0:], os.Environ()); err != nil { - return err - } - - panic("unreachable") -} - -func EnterCgroups(state *libcontainer.State, pid int) error { - return cgroups.EnterPid(state.CgroupPaths, pid) -} diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/init.go b/vendor/src/github.com/docker/libcontainer/namespaces/init.go deleted file mode 100644 index 0a4ff19620..0000000000 --- a/vendor/src/github.com/docker/libcontainer/namespaces/init.go +++ /dev/null @@ -1,331 +0,0 @@ -// +build linux - -package namespaces - -import ( - "encoding/json" - "fmt" - "io/ioutil" - "os" - "strings" - "syscall" - - "github.com/docker/libcontainer" - "github.com/docker/libcontainer/apparmor" - 
"github.com/docker/libcontainer/console" - "github.com/docker/libcontainer/label" - "github.com/docker/libcontainer/mount" - "github.com/docker/libcontainer/netlink" - "github.com/docker/libcontainer/network" - "github.com/docker/libcontainer/security/capabilities" - "github.com/docker/libcontainer/security/restrict" - "github.com/docker/libcontainer/system" - "github.com/docker/libcontainer/user" - "github.com/docker/libcontainer/utils" -) - -// TODO(vishh): This is part of the libcontainer API and it does much more than just namespaces related work. -// Move this to libcontainer package. -// Init is the init process that first runs inside a new namespace to setup mounts, users, networking, -// and other options required for the new container. -// The caller of Init function has to ensure that the go runtime is locked to an OS thread -// (using runtime.LockOSThread) else system calls like setns called within Init may not work as intended. -func Init(container *libcontainer.Config, uncleanRootfs, consolePath string, pipe *os.File, args []string) (err error) { - defer func() { - // if we have an error during the initialization of the container's init then send it back to the - // parent process in the form of an initError. - if err != nil { - // ensure that any data sent from the parent is consumed so it doesn't - // receive ECONNRESET when the child writes to the pipe. 
- ioutil.ReadAll(pipe) - if err := json.NewEncoder(pipe).Encode(initError{ - Message: err.Error(), - }); err != nil { - panic(err) - } - } - // ensure that this pipe is always closed - pipe.Close() - }() - - rootfs, err := utils.ResolveRootfs(uncleanRootfs) - if err != nil { - return err - } - - // clear the current processes env and replace it with the environment - // defined on the container - if err := LoadContainerEnvironment(container); err != nil { - return err - } - - // We always read this as it is a way to sync with the parent as well - var networkState *network.NetworkState - if err := json.NewDecoder(pipe).Decode(&networkState); err != nil { - return err - } - // join any namespaces via a path to the namespace fd if provided - if err := joinExistingNamespaces(container.Namespaces); err != nil { - return err - } - if consolePath != "" { - if err := console.OpenAndDup(consolePath); err != nil { - return err - } - } - if _, err := syscall.Setsid(); err != nil { - return fmt.Errorf("setsid %s", err) - } - if consolePath != "" { - if err := system.Setctty(); err != nil { - return fmt.Errorf("setctty %s", err) - } - } - - if err := setupNetwork(container, networkState); err != nil { - return fmt.Errorf("setup networking %s", err) - } - if err := setupRoute(container); err != nil { - return fmt.Errorf("setup route %s", err) - } - - if err := setupRlimits(container); err != nil { - return fmt.Errorf("setup rlimits %s", err) - } - - label.Init() - - if err := mount.InitializeMountNamespace(rootfs, - consolePath, - container.RestrictSys, - (*mount.MountConfig)(container.MountConfig)); err != nil { - return fmt.Errorf("setup mount namespace %s", err) - } - - if container.Hostname != "" { - if err := syscall.Sethostname([]byte(container.Hostname)); err != nil { - return fmt.Errorf("unable to sethostname %q: %s", container.Hostname, err) - } - } - - if err := apparmor.ApplyProfile(container.AppArmorProfile); err != nil { - return fmt.Errorf("set apparmor profile %s: 
%s", container.AppArmorProfile, err) - } - - if err := label.SetProcessLabel(container.ProcessLabel); err != nil { - return fmt.Errorf("set process label %s", err) - } - - // TODO: (crosbymichael) make this configurable at the Config level - if container.RestrictSys { - if err := restrict.Restrict("proc/sys", "proc/sysrq-trigger", "proc/irq", "proc/bus"); err != nil { - return err - } - } - - pdeathSignal, err := system.GetParentDeathSignal() - if err != nil { - return fmt.Errorf("get parent death signal %s", err) - } - - if err := FinalizeNamespace(container); err != nil { - return fmt.Errorf("finalize namespace %s", err) - } - - // FinalizeNamespace can change user/group which clears the parent death - // signal, so we restore it here. - if err := RestoreParentDeathSignal(pdeathSignal); err != nil { - return fmt.Errorf("restore parent death signal %s", err) - } - - return system.Execv(args[0], args[0:], os.Environ()) -} - -// RestoreParentDeathSignal sets the parent death signal to old. -func RestoreParentDeathSignal(old int) error { - if old == 0 { - return nil - } - - current, err := system.GetParentDeathSignal() - if err != nil { - return fmt.Errorf("get parent death signal %s", err) - } - - if old == current { - return nil - } - - if err := system.ParentDeathSignal(uintptr(old)); err != nil { - return fmt.Errorf("set parent death signal %s", err) - } - - // Signal self if parent is already dead. Does nothing if running in a new - // PID namespace, as Getppid will always return 0. - if syscall.Getppid() == 1 { - return syscall.Kill(syscall.Getpid(), syscall.SIGKILL) - } - - return nil -} - -// SetupUser changes the groups, gid, and uid for the user inside the container -func SetupUser(container *libcontainer.Config) error { - // Set up defaults. 
- defaultExecUser := user.ExecUser{ - Uid: syscall.Getuid(), - Gid: syscall.Getgid(), - Home: "/", - } - - passwdPath, err := user.GetPasswdPath() - if err != nil { - return err - } - - groupPath, err := user.GetGroupPath() - if err != nil { - return err - } - - execUser, err := user.GetExecUserPath(container.User, &defaultExecUser, passwdPath, groupPath) - if err != nil { - return fmt.Errorf("get supplementary groups %s", err) - } - - suppGroups := append(execUser.Sgids, container.AdditionalGroups...) - - if err := syscall.Setgroups(suppGroups); err != nil { - return fmt.Errorf("setgroups %s", err) - } - - if err := system.Setgid(execUser.Gid); err != nil { - return fmt.Errorf("setgid %s", err) - } - - if err := system.Setuid(execUser.Uid); err != nil { - return fmt.Errorf("setuid %s", err) - } - - // if we didn't get HOME already, set it based on the user's HOME - if envHome := os.Getenv("HOME"); envHome == "" { - if err := os.Setenv("HOME", execUser.Home); err != nil { - return fmt.Errorf("set HOME %s", err) - } - } - - return nil -} - -// setupVethNetwork uses the Network config if it is not nil to initialize -// the new veth interface inside the container for use by changing the name to eth0 -// setting the MTU and IP address along with the default gateway -func setupNetwork(container *libcontainer.Config, networkState *network.NetworkState) error { - for _, config := range container.Networks { - strategy, err := network.GetStrategy(config.Type) - if err != nil { - return err - } - - err1 := strategy.Initialize((*network.Network)(config), networkState) - if err1 != nil { - return err1 - } - } - return nil -} - -func setupRoute(container *libcontainer.Config) error { - for _, config := range container.Routes { - if err := netlink.AddRoute(config.Destination, config.Source, config.Gateway, config.InterfaceName); err != nil { - return err - } - } - return nil -} - -func setupRlimits(container *libcontainer.Config) error { - for _, rlimit := range 
container.Rlimits { - l := &syscall.Rlimit{Max: rlimit.Hard, Cur: rlimit.Soft} - if err := syscall.Setrlimit(rlimit.Type, l); err != nil { - return fmt.Errorf("error setting rlimit type %v: %v", rlimit.Type, err) - } - } - return nil -} - -// FinalizeNamespace drops the caps, sets the correct user -// and working dir, and closes any leaky file descriptors -// before execing the command inside the namespace -func FinalizeNamespace(container *libcontainer.Config) error { - // Ensure that all non-standard fds we may have accidentally - // inherited are marked close-on-exec so they stay out of the - // container - if err := utils.CloseExecFrom(3); err != nil { - return fmt.Errorf("close open file descriptors %s", err) - } - - // drop capabilities in bounding set before changing user - if err := capabilities.DropBoundingSet(container.Capabilities); err != nil { - return fmt.Errorf("drop bounding set %s", err) - } - - // preserve existing capabilities while we change users - if err := system.SetKeepCaps(); err != nil { - return fmt.Errorf("set keep caps %s", err) - } - - if err := SetupUser(container); err != nil { - return fmt.Errorf("setup user %s", err) - } - - if err := system.ClearKeepCaps(); err != nil { - return fmt.Errorf("clear keep caps %s", err) - } - - // drop all other capabilities - if err := capabilities.DropCapabilities(container.Capabilities); err != nil { - return fmt.Errorf("drop capabilities %s", err) - } - - if container.WorkingDir != "" { - if err := syscall.Chdir(container.WorkingDir); err != nil { - return fmt.Errorf("chdir to %s %s", container.WorkingDir, err) - } - } - - return nil -} - -func LoadContainerEnvironment(container *libcontainer.Config) error { - os.Clearenv() - for _, pair := range container.Env { - p := strings.SplitN(pair, "=", 2) - if len(p) < 2 { - return fmt.Errorf("invalid environment '%v'", pair) - } - if err := os.Setenv(p[0], p[1]); err != nil { - return err - } - } - return nil -} - -// joinExistingNamespaces gets all the 
namespace paths specified for the container and -// does a setns on the namespace fd so that the current process joins the namespace. -func joinExistingNamespaces(namespaces []libcontainer.Namespace) error { - for _, ns := range namespaces { - if ns.Path != "" { - f, err := os.OpenFile(ns.Path, os.O_RDONLY, 0) - if err != nil { - return err - } - err = system.Setns(f.Fd(), uintptr(namespaceInfo[ns.Type])) - f.Close() - if err != nil { - return err - } - } - } - return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter.c b/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter.c deleted file mode 100644 index 4ab21774fb..0000000000 --- a/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter.c +++ /dev/null @@ -1,245 +0,0 @@ -// +build cgo -// -// formated with indent -linux nsenter.c - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#define pr_perror(fmt, ...) fprintf(stderr, "nsenter: " fmt ": %m\n", ##__VA_ARGS__) - -static const kBufSize = 256; -static const char *kNsEnter = "nsenter"; - -void get_args(int *argc, char ***argv) -{ - // Read argv - int fd = open("/proc/self/cmdline", O_RDONLY); - if (fd < 0) { - pr_perror("Unable to open /proc/self/cmdline"); - exit(1); - } - // Read the whole commandline. - ssize_t contents_size = 0; - ssize_t contents_offset = 0; - char *contents = NULL; - ssize_t bytes_read = 0; - do { - contents_size += kBufSize; - contents = (char *)realloc(contents, contents_size); - bytes_read = - read(fd, contents + contents_offset, - contents_size - contents_offset); - if (bytes_read < 0) { - pr_perror("Unable to read from /proc/self/cmdline"); - exit(1); - } - contents_offset += bytes_read; - } - while (bytes_read > 0); - close(fd); - - // Parse the commandline into an argv. /proc/self/cmdline has \0 delimited args. 
- ssize_t i; - *argc = 0; - for (i = 0; i < contents_offset; i++) { - if (contents[i] == '\0') { - (*argc)++; - } - } - *argv = (char **)malloc(sizeof(char *) * ((*argc) + 1)); - int idx; - for (idx = 0; idx < (*argc); idx++) { - (*argv)[idx] = contents; - contents += strlen(contents) + 1; - } - (*argv)[*argc] = NULL; -} - -// Use raw setns syscall for versions of glibc that don't include it (namely glibc-2.12) -#if __GLIBC__ == 2 && __GLIBC_MINOR__ < 14 -#define _GNU_SOURCE -#include -#include "syscall.h" -#ifdef SYS_setns -int setns(int fd, int nstype) -{ - return syscall(SYS_setns, fd, nstype); -} -#endif -#endif - -void print_usage() -{ - fprintf(stderr, - "nsenter --nspid <pid> --console <console> -- cmd1 arg1 arg2...\n"); -} - -void nsenter() -{ - int argc, c; - char **argv; - get_args(&argc, &argv); - - // check argv 0 to ensure that we are supposed to setns - // we use strncmp to test for a value of "nsenter" but also allow alternate implementations - // after the setns code path to continue to use the argv 0 to determine actions to be run - // resulting in the ability to specify "nsenter-mknod", "nsenter-exec", etc... 
- if (strncmp(argv[0], kNsEnter, strlen(kNsEnter)) != 0) { - return; - } -#ifdef PR_SET_CHILD_SUBREAPER - if (prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) == -1) { - pr_perror("Failed to set child subreaper"); - exit(1); - } -#endif - - static const struct option longopts[] = { - {"nspid", required_argument, NULL, 'n'}, - {"console", required_argument, NULL, 't'}, - {NULL, 0, NULL, 0} - }; - - pid_t init_pid = -1; - char *init_pid_str = NULL; - char *console = NULL; - while ((c = getopt_long_only(argc, argv, "n:c:", longopts, NULL)) != -1) { - switch (c) { - case 'n': - init_pid_str = optarg; - break; - case 't': - console = optarg; - break; - } - } - - if (init_pid_str == NULL) { - print_usage(); - exit(1); - } - - init_pid = strtol(init_pid_str, NULL, 10); - if ((init_pid == 0 && errno == EINVAL) || errno == ERANGE) { - pr_perror("Failed to parse PID from \"%s\" with output \"%d\"", - init_pid_str, init_pid); - print_usage(); - exit(1); - } - - argc -= 3; - argv += 3; - - if (setsid() == -1) { - pr_perror("setsid failed"); - exit(1); - } - // before we setns we need to dup the console - int consolefd = -1; - if (console != NULL) { - consolefd = open(console, O_RDWR); - if (consolefd < 0) { - pr_perror("Failed to open console %s", console); - exit(1); - } - } - // blocking until the parent placed the process inside correct cgroups. - unsigned char s; - if (read(3, &s, 1) != 1 || s != '1') { - pr_perror("failed to receive synchronization data from parent"); - exit(1); - } - // Setns on all supported namespaces. 
- char ns_dir[PATH_MAX]; - memset(ns_dir, 0, PATH_MAX); - snprintf(ns_dir, PATH_MAX - 1, "/proc/%d/ns/", init_pid); - - int ns_dir_fd; - ns_dir_fd = open(ns_dir, O_RDONLY | O_DIRECTORY); - if (ns_dir_fd < 0) { - pr_perror("Unable to open %s", ns_dir); - exit(1); - } - - char *namespaces[] = { "ipc", "uts", "net", "pid", "mnt" }; - const int num = sizeof(namespaces) / sizeof(char *); - int i; - for (i = 0; i < num; i++) { - // A zombie process has links on namespaces, but they can't be opened - struct stat st; - if (fstatat(ns_dir_fd, namespaces[i], &st, AT_SYMLINK_NOFOLLOW) - == -1) { - if (errno == ENOENT) - continue; - pr_perror("Failed to stat ns file %s for ns %s", - ns_dir, namespaces[i]); - exit(1); - } - - int fd = openat(ns_dir_fd, namespaces[i], O_RDONLY); - if (fd == -1) { - pr_perror("Failed to open ns file %s for ns %s", - ns_dir, namespaces[i]); - exit(1); - } - // Set the namespace. - if (setns(fd, 0) == -1) { - pr_perror("Failed to setns for %s", namespaces[i]); - exit(1); - } - close(fd); - } - close(ns_dir_fd); - - // We must fork to actually enter the PID namespace. - int child = fork(); - if (child == -1) { - pr_perror("Unable to fork a process"); - exit(1); - } - if (child == 0) { - if (consolefd != -1) { - if (dup2(consolefd, STDIN_FILENO) != 0) { - pr_perror("Failed to dup 0"); - exit(1); - } - if (dup2(consolefd, STDOUT_FILENO) != STDOUT_FILENO) { - pr_perror("Failed to dup 1"); - exit(1); - } - if (dup2(consolefd, STDERR_FILENO) != STDERR_FILENO) { - pr_perror("Failed to dup 2\n"); - exit(1); - } - } - // Finish executing, let the Go runtime take over. - return; - } else { - // Parent, wait for the child. - int status = 0; - if (waitpid(child, &status, 0) == -1) { - pr_perror("nsenter: Failed to waitpid with error"); - exit(1); - } - // Forward the child's exit code or re-send its death signal. 
- if (WIFEXITED(status)) { - exit(WEXITSTATUS(status)); - } else if (WIFSIGNALED(status)) { - kill(getpid(), WTERMSIG(status)); - } - - exit(1); - } - - return; -} diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter.go b/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter.go deleted file mode 100644 index 7d21e8e59f..0000000000 --- a/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter.go +++ /dev/null @@ -1,10 +0,0 @@ -// +build linux - -package nsenter - -/* -__attribute__((constructor)) init() { - nsenter(); -} -*/ -import "C" diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/utils.go b/vendor/src/github.com/docker/libcontainer/namespaces/utils.go deleted file mode 100644 index de71a379f8..0000000000 --- a/vendor/src/github.com/docker/libcontainer/namespaces/utils.go +++ /dev/null @@ -1,45 +0,0 @@ -// +build linux - -package namespaces - -import ( - "os" - "syscall" - - "github.com/docker/libcontainer" -) - -type initError struct { - Message string `json:"message,omitempty"` -} - -func (i initError) Error() string { - return i.Message -} - -var namespaceInfo = map[libcontainer.NamespaceType]int{ - libcontainer.NEWNET: syscall.CLONE_NEWNET, - libcontainer.NEWNS: syscall.CLONE_NEWNS, - libcontainer.NEWUSER: syscall.CLONE_NEWUSER, - libcontainer.NEWIPC: syscall.CLONE_NEWIPC, - libcontainer.NEWUTS: syscall.CLONE_NEWUTS, - libcontainer.NEWPID: syscall.CLONE_NEWPID, -} - -// New returns a newly initialized Pipe for communication between processes -func newInitPipe() (parent *os.File, child *os.File, err error) { - fds, err := syscall.Socketpair(syscall.AF_LOCAL, syscall.SOCK_STREAM|syscall.SOCK_CLOEXEC, 0) - if err != nil { - return nil, nil, err - } - return os.NewFile(uintptr(fds[1]), "parent"), os.NewFile(uintptr(fds[0]), "child"), nil -} - -// GetNamespaceFlags parses the container's Namespaces options to set the correct -// flags on clone, unshare, and setns -func 
GetNamespaceFlags(namespaces libcontainer.Namespaces) (flag int) { - for _, v := range namespaces { - flag |= namespaceInfo[v.Type] - } - return flag -} diff --git a/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go b/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go index 3cc3cc94f7..3ecb81fb78 100644 --- a/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go +++ b/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go @@ -7,7 +7,6 @@ import ( "math/rand" "net" "os" - "path/filepath" "sync/atomic" "syscall" "unsafe" @@ -23,6 +22,7 @@ const ( IFLA_VLAN_ID = 1 IFLA_NET_NS_FD = 28 IFLA_ADDRESS = 1 + IFLA_BRPORT_MODE = 4 SIOC_BRADDBR = 0x89a0 SIOC_BRDELBR = 0x89a1 SIOC_BRADDIF = 0x89a2 @@ -1253,25 +1253,33 @@ func SetMacAddress(name, addr string) error { } func SetHairpinMode(iface *net.Interface, enabled bool) error { - sysPath := filepath.Join("/sys/class/net", iface.Name, "brport/hairpin_mode") - - sysFile, err := os.OpenFile(sysPath, os.O_WRONLY, 0) + s, err := getNetlinkSocket() if err != nil { return err } - defer sysFile.Close() + defer s.Close() + req := newNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK) - var writeVal []byte + msg := newIfInfomsg(syscall.AF_BRIDGE) + msg.Type = syscall.RTM_SETLINK + msg.Flags = syscall.NLM_F_REQUEST + msg.Index = int32(iface.Index) + msg.Change = DEFAULT_CHANGE + req.AddData(msg) + + mode := []byte{0} if enabled { - writeVal = []byte("1") - } else { - writeVal = []byte("0") + mode[0] = byte(1) } - if _, err := sysFile.Write(writeVal); err != nil { + + br := newRtAttr(syscall.IFLA_PROTINFO|syscall.NLA_F_NESTED, nil) + newRtAttrChild(br, IFLA_BRPORT_MODE, mode) + req.AddData(br) + if err := s.Send(req); err != nil { return err } - return nil + return s.HandleAck(req.Seq) } func ChangeName(iface *net.Interface, newName string) error { diff --git a/vendor/src/github.com/docker/libcontainer/network/loopback.go 
b/vendor/src/github.com/docker/libcontainer/network/loopback.go deleted file mode 100644 index 1667b4d82a..0000000000 --- a/vendor/src/github.com/docker/libcontainer/network/loopback.go +++ /dev/null @@ -1,23 +0,0 @@ -// +build linux - -package network - -import ( - "fmt" -) - -// Loopback is a network strategy that provides a basic loopback device -type Loopback struct { -} - -func (l *Loopback) Create(n *Network, nspid int, networkState *NetworkState) error { - return nil -} - -func (l *Loopback) Initialize(config *Network, networkState *NetworkState) error { - // Do not set the MTU on the loopback interface - use the default. - if err := InterfaceUp("lo"); err != nil { - return fmt.Errorf("lo up %s", err) - } - return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/network/network.go b/vendor/src/github.com/docker/libcontainer/network/network.go deleted file mode 100644 index 40b25b135b..0000000000 --- a/vendor/src/github.com/docker/libcontainer/network/network.go +++ /dev/null @@ -1,117 +0,0 @@ -// +build linux - -package network - -import ( - "net" - - "github.com/docker/libcontainer/netlink" -) - -func InterfaceUp(name string) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - return netlink.NetworkLinkUp(iface) -} - -func InterfaceDown(name string) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - return netlink.NetworkLinkDown(iface) -} - -func ChangeInterfaceName(old, newName string) error { - iface, err := net.InterfaceByName(old) - if err != nil { - return err - } - return netlink.NetworkChangeName(iface, newName) -} - -func CreateVethPair(name1, name2 string, txQueueLen int) error { - return netlink.NetworkCreateVethPair(name1, name2, txQueueLen) -} - -func SetInterfaceInNamespacePid(name string, nsPid int) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - return netlink.NetworkSetNsPid(iface, nsPid) -} - -func 
SetInterfaceInNamespaceFd(name string, fd uintptr) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - return netlink.NetworkSetNsFd(iface, int(fd)) -} - -func SetInterfaceMaster(name, master string) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - masterIface, err := net.InterfaceByName(master) - if err != nil { - return err - } - return netlink.AddToBridge(iface, masterIface) -} - -func SetDefaultGateway(ip, ifaceName string) error { - return netlink.AddDefaultGw(ip, ifaceName) -} - -func SetInterfaceMac(name string, macaddr string) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - return netlink.NetworkSetMacAddress(iface, macaddr) -} - -func SetInterfaceIp(name string, rawIp string) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - ip, ipNet, err := net.ParseCIDR(rawIp) - if err != nil { - return err - } - return netlink.NetworkLinkAddIp(iface, ip, ipNet) -} - -func DeleteInterfaceIp(name string, rawIp string) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - ip, ipNet, err := net.ParseCIDR(rawIp) - if err != nil { - return err - } - return netlink.NetworkLinkDelIp(iface, ip, ipNet) -} - -func SetMtu(name string, mtu int) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - return netlink.NetworkSetMTU(iface, mtu) -} - -func SetHairpinMode(name string, enabled bool) error { - iface, err := net.InterfaceByName(name) - if err != nil { - return err - } - return netlink.SetHairpinMode(iface, enabled) -} diff --git a/vendor/src/github.com/docker/libcontainer/network/stats.go b/vendor/src/github.com/docker/libcontainer/network/stats.go deleted file mode 100644 index e2156c74da..0000000000 --- a/vendor/src/github.com/docker/libcontainer/network/stats.go +++ /dev/null @@ -1,74 +0,0 @@ -package network - -import ( - "io/ioutil" - "path/filepath" - 
"strconv" - "strings" -) - -type NetworkStats struct { - RxBytes uint64 `json:"rx_bytes"` - RxPackets uint64 `json:"rx_packets"` - RxErrors uint64 `json:"rx_errors"` - RxDropped uint64 `json:"rx_dropped"` - TxBytes uint64 `json:"tx_bytes"` - TxPackets uint64 `json:"tx_packets"` - TxErrors uint64 `json:"tx_errors"` - TxDropped uint64 `json:"tx_dropped"` -} - -// Returns the network statistics for the network interfaces represented by the NetworkRuntimeInfo. -func GetStats(networkState *NetworkState) (*NetworkStats, error) { - // This can happen if the network runtime information is missing - possible if the container was created by an old version of libcontainer. - if networkState.VethHost == "" { - return &NetworkStats{}, nil - } - - out := &NetworkStats{} - - type netStatsPair struct { - // Where to write the output. - Out *uint64 - - // The network stats file to read. - File string - } - - // Ingress for host veth is from the container. Hence tx_bytes stat on the host veth is actually number of bytes received by the container. 
- netStats := []netStatsPair{ - {Out: &out.RxBytes, File: "tx_bytes"}, - {Out: &out.RxPackets, File: "tx_packets"}, - {Out: &out.RxErrors, File: "tx_errors"}, - {Out: &out.RxDropped, File: "tx_dropped"}, - - {Out: &out.TxBytes, File: "rx_bytes"}, - {Out: &out.TxPackets, File: "rx_packets"}, - {Out: &out.TxErrors, File: "rx_errors"}, - {Out: &out.TxDropped, File: "rx_dropped"}, - } - for _, netStat := range netStats { - data, err := readSysfsNetworkStats(networkState.VethHost, netStat.File) - if err != nil { - return nil, err - } - *(netStat.Out) = data - } - - return out, nil -} - -// Reads the specified statistics available under /sys/class/net/<interface>/statistics -func readSysfsNetworkStats(ethInterface, statsFile string) (uint64, error) { - fullPath := filepath.Join("/sys/class/net", ethInterface, "statistics", statsFile) - data, err := ioutil.ReadFile(fullPath) - if err != nil { - return 0, err - } - value, err := strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64) - if err != nil { - return 0, err - } - - return value, err -} diff --git a/vendor/src/github.com/docker/libcontainer/network/strategy.go b/vendor/src/github.com/docker/libcontainer/network/strategy.go deleted file mode 100644 index 019fe62f41..0000000000 --- a/vendor/src/github.com/docker/libcontainer/network/strategy.go +++ /dev/null @@ -1,34 +0,0 @@ -// +build linux - -package network - -import ( - "errors" -) - -var ( - ErrNotValidStrategyType = errors.New("not a valid network strategy type") -) - -var strategies = map[string]NetworkStrategy{ - "veth": &Veth{}, - "loopback": &Loopback{}, -} - -// NetworkStrategy represents a specific network configuration for -// a container's networking stack -type NetworkStrategy interface { - Create(*Network, int, *NetworkState) error - Initialize(*Network, *NetworkState) error -} - -// GetStrategy returns the specific network strategy for the -// provided type. If no strategy is registered for the type an -// ErrNotValidStrategyType is returned. 
-func GetStrategy(tpe string) (NetworkStrategy, error) { - s, exists := strategies[tpe] - if !exists { - return nil, ErrNotValidStrategyType - } - return s, nil -} diff --git a/vendor/src/github.com/docker/libcontainer/network/types.go b/vendor/src/github.com/docker/libcontainer/network/types.go deleted file mode 100644 index dcf00420f3..0000000000 --- a/vendor/src/github.com/docker/libcontainer/network/types.go +++ /dev/null @@ -1,50 +0,0 @@ -package network - -// Network defines configuration for a container's networking stack -// -// The network configuration can be omitted from a container causing the -// container to be set up with the host's networking stack -type Network struct { - // Type sets the networks type, commonly veth and loopback - Type string `json:"type,omitempty"` - - // The bridge to use. - Bridge string `json:"bridge,omitempty"` - - // Prefix for the veth interfaces. - VethPrefix string `json:"veth_prefix,omitempty"` - - // MacAddress contains the MAC address to set on the network interface - MacAddress string `json:"mac_address,omitempty"` - - // Address contains the IPv4 and mask to set on the network interface - Address string `json:"address,omitempty"` - - // IPv6Address contains the IPv6 and mask to set on the network interface - IPv6Address string `json:"ipv6_address,omitempty"` - - // Gateway sets the gateway address that is used as the default for the interface - Gateway string `json:"gateway,omitempty"` - - // IPv6Gateway sets the ipv6 gateway address that is used as the default for the interface - IPv6Gateway string `json:"ipv6_gateway,omitempty"` - - // Mtu sets the mtu value for the interface and will be mirrored on both the host and - // container's interfaces if a pair is created, specifically in the case of type veth - // Note: This does not apply to loopback interfaces. 
- Mtu int `json:"mtu,omitempty"` - - // TxQueueLen sets the tx_queuelen value for the interface and will be mirrored on both the host and - // container's interfaces if a pair is created, specifically in the case of type veth - // Note: This does not apply to loopback interfaces. - TxQueueLen int `json:"txqueuelen,omitempty"` -} - -// Struct describing the network specific runtime state that will be maintained by libcontainer for all running containers -// Do not depend on it outside of libcontainer. -type NetworkState struct { - // The name of the veth interface on the Host. - VethHost string `json:"veth_host,omitempty"` - // The name of the veth interface created inside the container for the child. - VethChild string `json:"veth_child,omitempty"` -} diff --git a/vendor/src/github.com/docker/libcontainer/network/veth.go b/vendor/src/github.com/docker/libcontainer/network/veth.go deleted file mode 100644 index 3d7dc8729e..0000000000 --- a/vendor/src/github.com/docker/libcontainer/network/veth.go +++ /dev/null @@ -1,122 +0,0 @@ -// +build linux - -package network - -import ( - "fmt" - - "github.com/docker/libcontainer/netlink" - "github.com/docker/libcontainer/utils" -) - -// Veth is a network strategy that uses a bridge and creates -// a veth pair, one that stays outside on the host and the other -// is placed inside the container's namespace -type Veth struct { -} - -const defaultDevice = "eth0" - -func (v *Veth) Create(n *Network, nspid int, networkState *NetworkState) error { - var ( - bridge = n.Bridge - prefix = n.VethPrefix - txQueueLen = n.TxQueueLen - ) - if bridge == "" { - return fmt.Errorf("bridge is not specified") - } - if prefix == "" { - return fmt.Errorf("veth prefix is not specified") - } - name1, name2, err := createVethPair(prefix, txQueueLen) - if err != nil { - return err - } - if err := SetInterfaceMaster(name1, bridge); err != nil { - return err - } - if err := SetMtu(name1, n.Mtu); err != nil { - return err - } - if err := 
InterfaceUp(name1); err != nil { - return err - } - if err := SetInterfaceInNamespacePid(name2, nspid); err != nil { - return err - } - networkState.VethHost = name1 - networkState.VethChild = name2 - - return nil -} - -func (v *Veth) Initialize(config *Network, networkState *NetworkState) error { - var vethChild = networkState.VethChild - if vethChild == "" { - return fmt.Errorf("vethChild is not specified") - } - if err := InterfaceDown(vethChild); err != nil { - return fmt.Errorf("interface down %s %s", vethChild, err) - } - if err := ChangeInterfaceName(vethChild, defaultDevice); err != nil { - return fmt.Errorf("change %s to %s %s", vethChild, defaultDevice, err) - } - if config.MacAddress != "" { - if err := SetInterfaceMac(defaultDevice, config.MacAddress); err != nil { - return fmt.Errorf("set %s mac %s", defaultDevice, err) - } - } - if err := SetInterfaceIp(defaultDevice, config.Address); err != nil { - return fmt.Errorf("set %s ip %s", defaultDevice, err) - } - if config.IPv6Address != "" { - if err := SetInterfaceIp(defaultDevice, config.IPv6Address); err != nil { - return fmt.Errorf("set %s ipv6 %s", defaultDevice, err) - } - } - - if err := SetMtu(defaultDevice, config.Mtu); err != nil { - return fmt.Errorf("set %s mtu to %d %s", defaultDevice, config.Mtu, err) - } - if err := InterfaceUp(defaultDevice); err != nil { - return fmt.Errorf("%s up %s", defaultDevice, err) - } - if config.Gateway != "" { - if err := SetDefaultGateway(config.Gateway, defaultDevice); err != nil { - return fmt.Errorf("set gateway to %s on device %s failed with %s", config.Gateway, defaultDevice, err) - } - } - if config.IPv6Gateway != "" { - if err := SetDefaultGateway(config.IPv6Gateway, defaultDevice); err != nil { - return fmt.Errorf("set gateway for ipv6 to %s on device %s failed with %s", config.IPv6Gateway, defaultDevice, err) - } - } - return nil -} - -// createVethPair will automatically generage two random names for -// the veth pair and ensure that they have been 
created -func createVethPair(prefix string, txQueueLen int) (name1 string, name2 string, err error) { - for i := 0; i < 10; i++ { - if name1, err = utils.GenerateRandomName(prefix, 7); err != nil { - return - } - - if name2, err = utils.GenerateRandomName(prefix, 7); err != nil { - return - } - - if err = CreateVethPair(name1, name2, txQueueLen); err != nil { - if err == netlink.ErrInterfaceExists { - continue - } - - return - } - - break - } - - return -} diff --git a/vendor/src/github.com/docker/libcontainer/network/veth_test.go b/vendor/src/github.com/docker/libcontainer/network/veth_test.go deleted file mode 100644 index b92b284eb0..0000000000 --- a/vendor/src/github.com/docker/libcontainer/network/veth_test.go +++ /dev/null @@ -1,53 +0,0 @@ -// +build linux - -package network - -import ( - "testing" - - "github.com/docker/libcontainer/netlink" -) - -func TestGenerateVethNames(t *testing.T) { - if testing.Short() { - return - } - - prefix := "veth" - - name1, name2, err := createVethPair(prefix, 0) - if err != nil { - t.Fatal(err) - } - - if name1 == "" { - t.Fatal("name1 should not be empty") - } - - if name2 == "" { - t.Fatal("name2 should not be empty") - } -} - -func TestCreateDuplicateVethPair(t *testing.T) { - if testing.Short() { - return - } - - prefix := "veth" - - name1, name2, err := createVethPair(prefix, 0) - if err != nil { - t.Fatal(err) - } - - // retry to create the name interfaces and make sure that we get the correct error - err = CreateVethPair(name1, name2, 0) - if err == nil { - t.Fatal("expected error to not be nil with duplicate interface") - } - - if err != netlink.ErrInterfaceExists { - t.Fatalf("expected error to be ErrInterfaceExists but received %q", err) - } -} diff --git a/vendor/src/github.com/docker/libcontainer/network_linux.go b/vendor/src/github.com/docker/libcontainer/network_linux.go new file mode 100644 index 0000000000..46c606a2bb --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/network_linux.go @@ -0,0 
+1,213 @@ +// +build linux + +package libcontainer + +import ( + "fmt" + "io/ioutil" + "net" + "path/filepath" + "strconv" + "strings" + + "github.com/docker/libcontainer/netlink" + "github.com/docker/libcontainer/utils" +) + +var strategies = map[string]networkStrategy{ + "veth": &veth{}, + "loopback": &loopback{}, +} + +// networkStrategy represents a specific network configuration for +// a container's networking stack +type networkStrategy interface { + create(*network, int) error + initialize(*network) error +} + +// getStrategy returns the specific network strategy for the +// provided type. +func getStrategy(tpe string) (networkStrategy, error) { + s, exists := strategies[tpe] + if !exists { + return nil, fmt.Errorf("unknown strategy type %q", tpe) + } + return s, nil +} + +// Returns the network statistics for the network interfaces represented by the NetworkRuntimeInfo. +func getNetworkInterfaceStats(interfaceName string) (*NetworkInterface, error) { + out := &NetworkInterface{Name: interfaceName} + // This can happen if the network runtime information is missing - possible if the + // container was created by an old version of libcontainer. + if interfaceName == "" { + return out, nil + } + type netStatsPair struct { + // Where to write the output. + Out *uint64 + // The network stats file to read. + File string + } + // Ingress for host veth is from the container. Hence tx_bytes stat on the host veth is actually number of bytes received by the container. 
+ netStats := []netStatsPair{ + {Out: &out.RxBytes, File: "tx_bytes"}, + {Out: &out.RxPackets, File: "tx_packets"}, + {Out: &out.RxErrors, File: "tx_errors"}, + {Out: &out.RxDropped, File: "tx_dropped"}, + + {Out: &out.TxBytes, File: "rx_bytes"}, + {Out: &out.TxPackets, File: "rx_packets"}, + {Out: &out.TxErrors, File: "rx_errors"}, + {Out: &out.TxDropped, File: "rx_dropped"}, + } + for _, netStat := range netStats { + data, err := readSysfsNetworkStats(interfaceName, netStat.File) + if err != nil { + return nil, err + } + *(netStat.Out) = data + } + return out, nil +} + +// Reads the specified statistics available under /sys/class/net/<interface>/statistics +func readSysfsNetworkStats(ethInterface, statsFile string) (uint64, error) { + data, err := ioutil.ReadFile(filepath.Join("/sys/class/net", ethInterface, "statistics", statsFile)) + if err != nil { + return 0, err + } + return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64) +} + +// loopback is a network strategy that provides a basic loopback device +type loopback struct { +} + +func (l *loopback) create(n *network, nspid int) error { + return nil +} + +func (l *loopback) initialize(config *network) error { + iface, err := net.InterfaceByName("lo") + if err != nil { + return err + } + return netlink.NetworkLinkUp(iface) +} + +// veth is a network strategy that uses a bridge and creates +// a veth pair, one that is attached to the bridge on the host and the other +// is placed inside the container's namespace +type veth struct { +} + +func (v *veth) create(n *network, nspid int) (err error) { + tmpName, err := v.generateTempPeerName() + if err != nil { + return err + } + n.TempVethPeerName = tmpName + defer func() { + if err != nil { + netlink.NetworkLinkDel(n.HostInterfaceName) + netlink.NetworkLinkDel(n.TempVethPeerName) + } + }() + if n.Bridge == "" { + return fmt.Errorf("bridge is not specified") + } + bridge, err := net.InterfaceByName(n.Bridge) + if err != nil { + return err + } + if err :=
netlink.NetworkCreateVethPair(n.HostInterfaceName, n.TempVethPeerName, n.TxQueueLen); err != nil { + return err + } + host, err := net.InterfaceByName(n.HostInterfaceName) + if err != nil { + return err + } + if err := netlink.AddToBridge(host, bridge); err != nil { + return err + } + if err := netlink.NetworkSetMTU(host, n.Mtu); err != nil { + return err + } + if n.HairpinMode { + if err := netlink.SetHairpinMode(host, true); err != nil { + return err + } + } + if err := netlink.NetworkLinkUp(host); err != nil { + return err + } + child, err := net.InterfaceByName(n.TempVethPeerName) + if err != nil { + return err + } + return netlink.NetworkSetNsPid(child, nspid) +} + +func (v *veth) generateTempPeerName() (string, error) { + return utils.GenerateRandomName("veth", 7) +} + +func (v *veth) initialize(config *network) error { + peer := config.TempVethPeerName + if peer == "" { + return fmt.Errorf("peer is not specified") + } + child, err := net.InterfaceByName(peer) + if err != nil { + return err + } + if err := netlink.NetworkLinkDown(child); err != nil { + return err + } + if err := netlink.NetworkChangeName(child, config.Name); err != nil { + return err + } + // get the interface again after we changed the name as the index also changes. 
+ if child, err = net.InterfaceByName(config.Name); err != nil { + return err + } + if config.MacAddress != "" { + if err := netlink.NetworkSetMacAddress(child, config.MacAddress); err != nil { + return err + } + } + ip, ipNet, err := net.ParseCIDR(config.Address) + if err != nil { + return err + } + if err := netlink.NetworkLinkAddIp(child, ip, ipNet); err != nil { + return err + } + if config.IPv6Address != "" { + if ip, ipNet, err = net.ParseCIDR(config.IPv6Address); err != nil { + return err + } + if err := netlink.NetworkLinkAddIp(child, ip, ipNet); err != nil { + return err + } + } + if err := netlink.NetworkSetMTU(child, config.Mtu); err != nil { + return err + } + if err := netlink.NetworkLinkUp(child); err != nil { + return err + } + if config.Gateway != "" { + if err := netlink.AddDefaultGw(config.Gateway, config.Name); err != nil { + return err + } + } + if config.IPv6Gateway != "" { + if err := netlink.AddDefaultGw(config.IPv6Gateway, config.Name); err != nil { + return err + } + } + return nil +} diff --git a/vendor/src/github.com/docker/libcontainer/notify_linux.go b/vendor/src/github.com/docker/libcontainer/notify_linux.go index a4923273a3..cf81e24d44 100644 --- a/vendor/src/github.com/docker/libcontainer/notify_linux.go +++ b/vendor/src/github.com/docker/libcontainer/notify_linux.go @@ -12,11 +12,11 @@ import ( const oomCgroupName = "memory" -// NotifyOnOOM returns channel on which you can expect event about OOM, +// notifyOnOOM returns channel on which you can expect event about OOM, // if process died without OOM this channel will be closed. // s is current *libcontainer.State for container. 
-func NotifyOnOOM(s *State) (<-chan struct{}, error) { - dir := s.CgroupPaths[oomCgroupName] +func notifyOnOOM(paths map[string]string) (<-chan struct{}, error) { + dir := paths[oomCgroupName] if dir == "" { return nil, fmt.Errorf("There is no path for %q in state", oomCgroupName) } @@ -26,6 +26,7 @@ func NotifyOnOOM(s *State) (<-chan struct{}, error) { } fd, _, syserr := syscall.RawSyscall(syscall.SYS_EVENTFD2, 0, syscall.FD_CLOEXEC, 0) if syserr != 0 { + oomControl.Close() return nil, syserr } diff --git a/vendor/src/github.com/docker/libcontainer/notify_linux_test.go b/vendor/src/github.com/docker/libcontainer/notify_linux_test.go index 5d1d54576b..09bdf64432 100644 --- a/vendor/src/github.com/docker/libcontainer/notify_linux_test.go +++ b/vendor/src/github.com/docker/libcontainer/notify_linux_test.go @@ -27,12 +27,10 @@ func TestNotifyOnOOM(t *testing.T) { t.Fatal(err) } var eventFd, oomControlFd int - st := &State{ - CgroupPaths: map[string]string{ - "memory": memoryPath, - }, + paths := map[string]string{ + "memory": memoryPath, } - ooms, err := NotifyOnOOM(st) + ooms, err := notifyOnOOM(paths) if err != nil { t.Fatal("expected no error, got:", err) } diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/README.md b/vendor/src/github.com/docker/libcontainer/nsenter/README.md similarity index 100% rename from vendor/src/github.com/docker/libcontainer/namespaces/nsenter/README.md rename to vendor/src/github.com/docker/libcontainer/nsenter/README.md diff --git a/vendor/src/github.com/docker/libcontainer/nsenter/nsenter.go b/vendor/src/github.com/docker/libcontainer/nsenter/nsenter.go new file mode 100644 index 0000000000..07f4d63e43 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/nsenter/nsenter.go @@ -0,0 +1,12 @@ +// +build linux,!gccgo + +package nsenter + +/* +#cgo CFLAGS: -Wall +extern void nsexec(); +void __attribute__((constructor)) init(void) { + nsexec(); +} +*/ +import "C" diff --git 
a/vendor/src/github.com/docker/libcontainer/nsenter/nsenter_gccgo.go b/vendor/src/github.com/docker/libcontainer/nsenter/nsenter_gccgo.go new file mode 100644 index 0000000000..63c7a3ec22 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/nsenter/nsenter_gccgo.go @@ -0,0 +1,25 @@ +// +build linux,gccgo + +package nsenter + +/* +#cgo CFLAGS: -Wall +extern void nsexec(); +void __attribute__((constructor)) init(void) { + nsexec(); +} +*/ +import "C" + +// AlwaysFalse is here to stay false +// (and be exported so the compiler doesn't optimize out its reference) +var AlwaysFalse bool + +func init() { + if AlwaysFalse { + // by referencing this C init() in a noop test, it will ensure the compiler + // links in the C function. + // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65134 + C.init() + } +} diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter_test.go b/vendor/src/github.com/docker/libcontainer/nsenter/nsenter_test.go similarity index 60% rename from vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter_test.go rename to vendor/src/github.com/docker/libcontainer/nsenter/nsenter_test.go index 14870c457e..34e1f52118 100644 --- a/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter_test.go +++ b/vendor/src/github.com/docker/libcontainer/nsenter/nsenter_test.go @@ -1,17 +1,20 @@ package nsenter import ( + "encoding/json" "fmt" "os" "os/exec" - "os/signal" "strings" - "syscall" "testing" ) +type pid struct { + Pid int `json:"Pid"` +} + func TestNsenterAlivePid(t *testing.T) { - args := []string{"nsenter-exec", "--nspid", fmt.Sprintf("%d", os.Getpid())} + args := []string{"nsenter-exec"} r, w, err := os.Pipe() if err != nil { t.Fatalf("failed to create pipe %v", err) @@ -20,30 +23,39 @@ func TestNsenterAlivePid(t *testing.T) { cmd := &exec.Cmd{ Path: os.Args[0], Args: args, - ExtraFiles: []*os.File{r}, + ExtraFiles: []*os.File{w}, + Env: []string{fmt.Sprintf("_LIBCONTAINER_INITPID=%d", os.Getpid())}, 
} if err := cmd.Start(); err != nil { t.Fatalf("nsenter failed to start %v", err) } - r.Close() + w.Close() - // unblock the child process - if _, err := w.WriteString("1"); err != nil { - t.Fatalf("parent failed to write synchronization data %v", err) + decoder := json.NewDecoder(r) + var pid *pid + + if err := decoder.Decode(&pid); err != nil { + t.Fatalf("%v", err) } if err := cmd.Wait(); err != nil { t.Fatalf("nsenter exits with a non-zero exit status") } + p, err := os.FindProcess(pid.Pid) + if err != nil { + t.Fatalf("%v", err) + } + p.Wait() } func TestNsenterInvalidPid(t *testing.T) { - args := []string{"nsenter-exec", "--nspid", "-1"} + args := []string{"nsenter-exec"} cmd := &exec.Cmd{ Path: os.Args[0], Args: args, + Env: []string{"_LIBCONTAINER_INITPID=-1"}, } err := cmd.Run() @@ -53,21 +65,16 @@ func TestNsenterInvalidPid(t *testing.T) { } func TestNsenterDeadPid(t *testing.T) { - - c := make(chan os.Signal) - signal.Notify(c, syscall.SIGCHLD) dead_cmd := exec.Command("true") - if err := dead_cmd.Start(); err != nil { + if err := dead_cmd.Run(); err != nil { t.Fatal(err) } - defer dead_cmd.Wait() - <-c // dead_cmd is zombie - - args := []string{"nsenter-exec", "--nspid", fmt.Sprintf("%d", dead_cmd.Process.Pid)} + args := []string{"nsenter-exec"} cmd := &exec.Cmd{ Path: os.Args[0], Args: args, + Env: []string{fmt.Sprintf("_LIBCONTAINER_INITPID=%d", dead_cmd.Process.Pid)}, } err := cmd.Run() diff --git a/vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter_unsupported.go b/vendor/src/github.com/docker/libcontainer/nsenter/nsenter_unsupported.go similarity index 100% rename from vendor/src/github.com/docker/libcontainer/namespaces/nsenter/nsenter_unsupported.go rename to vendor/src/github.com/docker/libcontainer/nsenter/nsenter_unsupported.go diff --git a/vendor/src/github.com/docker/libcontainer/nsenter/nsexec.c b/vendor/src/github.com/docker/libcontainer/nsenter/nsexec.c new file mode 100644 index 0000000000..e7658f3856 --- /dev/null +++ 
b/vendor/src/github.com/docker/libcontainer/nsenter/nsexec.c @@ -0,0 +1,168 @@ +#define _GNU_SOURCE +#include <sched.h> +#include <setjmp.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> + +#include <errno.h> +#include <fcntl.h> +#include <limits.h> +#include <linux/limits.h> +#include <signal.h> +#include <sys/ioctl.h> +#include <sys/stat.h> +#include <sys/syscall.h> +#include <sys/types.h> +#include <termios.h> +#include <unistd.h> + +/* All arguments should be above stack, because it grows down */ +struct clone_arg { + /* + * Reserve some space for clone() to locate arguments + * and retcode in this place + */ + char stack[4096] __attribute__ ((aligned(8))); + char stack_ptr[0]; + jmp_buf *env; +}; + +#define pr_perror(fmt, ...) fprintf(stderr, "nsenter: " fmt ": %m\n", ##__VA_ARGS__) + +static int child_func(void *_arg) +{ + struct clone_arg *arg = (struct clone_arg *)_arg; + longjmp(*arg->env, 1); +} + +// Use raw setns syscall for versions of glibc that don't include it (namely glibc-2.12) +#if __GLIBC__ == 2 && __GLIBC_MINOR__ < 14 +#define _GNU_SOURCE +#include "syscall.h" +#ifdef SYS_setns +int setns(int fd, int nstype) +{ + return syscall(SYS_setns, fd, nstype); +} +#endif +#endif + +static int clone_parent(jmp_buf * env) __attribute__ ((noinline)); +static int clone_parent(jmp_buf * env) +{ + struct clone_arg ca; + int child; + + ca.env = env; + child = clone(child_func, ca.stack_ptr, CLONE_PARENT | SIGCHLD, &ca); + + return child; +} + +void nsexec() +{ + char *namespaces[] = { "ipc", "uts", "net", "pid", "mnt" }; + const int num = sizeof(namespaces) / sizeof(char *); + jmp_buf env; + char buf[PATH_MAX], *val; + int i, tfd, child, len, consolefd = -1; + pid_t pid; + char *console; + + val = getenv("_LIBCONTAINER_INITPID"); + if (val == NULL) + return; + + pid = atoi(val); + snprintf(buf, sizeof(buf), "%d", pid); + if (strcmp(val, buf)) { + pr_perror("Unable to parse _LIBCONTAINER_INITPID"); + exit(1); + } + + console = getenv("_LIBCONTAINER_CONSOLE_PATH"); + if (console != NULL) { + consolefd = open(console, O_RDWR); + if (consolefd < 0) { + pr_perror("Failed to open console %s", console); + exit(1); + } + } + + /* Check that
the specified process exists */ + snprintf(buf, PATH_MAX - 1, "/proc/%d/ns", pid); + tfd = open(buf, O_DIRECTORY | O_RDONLY); + if (tfd == -1) { + pr_perror("Failed to open \"%s\"", buf); + exit(1); + } + + for (i = 0; i < num; i++) { + struct stat st; + int fd; + + /* Symlinks on all namespaces exist for dead processes, but they can't be opened */ + if (fstatat(tfd, namespaces[i], &st, AT_SYMLINK_NOFOLLOW) == -1) { + // Ignore nonexistent namespaces. + if (errno == ENOENT) + continue; + } + + fd = openat(tfd, namespaces[i], O_RDONLY); + if (fd == -1) { + pr_perror("Failed to open ns file %s for ns %s", buf, + namespaces[i]); + exit(1); + } + // Set the namespace. + if (setns(fd, 0) == -1) { + pr_perror("Failed to setns for %s", namespaces[i]); + exit(1); + } + close(fd); + } + + if (setjmp(env) == 1) { + if (setsid() == -1) { + pr_perror("setsid failed"); + exit(1); + } + if (consolefd != -1) { + if (ioctl(consolefd, TIOCSCTTY, 0) == -1) { + pr_perror("ioctl TIOCSCTTY failed"); + exit(1); + } + if (dup2(consolefd, STDIN_FILENO) != STDIN_FILENO) { + pr_perror("Failed to dup 0"); + exit(1); + } + if (dup2(consolefd, STDOUT_FILENO) != STDOUT_FILENO) { + pr_perror("Failed to dup 1"); + exit(1); + } + if (dup2(consolefd, STDERR_FILENO) != STDERR_FILENO) { + pr_perror("Failed to dup 2"); + exit(1); + } + } + // Finish executing, let the Go runtime take over. + return; + } + + child = clone_parent(&env); + if (child < 0) { + pr_perror("Unable to fork"); + exit(1); + } + + len = snprintf(buf, sizeof(buf), "{ \"pid\" : %d }\n", child); + + if (write(3, buf, len) != len) { + pr_perror("Unable to send a child pid"); + kill(child, SIGKILL); + exit(1); + } + + exit(0); +} diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/Makefile b/vendor/src/github.com/docker/libcontainer/nsinit/Makefile new file mode 100644 index 0000000000..57adf154d8 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/nsinit/Makefile @@ -0,0 +1,2 @@ +all: + go build -o nsinit . 
diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/config.go b/vendor/src/github.com/docker/libcontainer/nsinit/config.go index 74c7b3c09f..e50bb3c11d 100644 --- a/vendor/src/github.com/docker/libcontainer/nsinit/config.go +++ b/vendor/src/github.com/docker/libcontainer/nsinit/config.go @@ -1,29 +1,286 @@ package main import ( + "bytes" "encoding/json" - "fmt" - "log" + "io" + "math" + "os" + "path/filepath" + "strings" + "syscall" + "github.com/Sirupsen/logrus" "github.com/codegangsta/cli" + "github.com/docker/libcontainer/configs" + "github.com/docker/libcontainer/utils" ) +const defaultMountFlags = syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV + +var createFlags = []cli.Flag{ + cli.IntFlag{Name: "parent-death-signal", Usage: "set the signal that will be delivered to the process in case the parent dies"}, + cli.BoolFlag{Name: "read-only", Usage: "set the container's rootfs as read-only"}, + cli.StringSliceFlag{Name: "bind", Value: &cli.StringSlice{}, Usage: "add bind mounts to the container"}, + cli.StringSliceFlag{Name: "tmpfs", Value: &cli.StringSlice{}, Usage: "add tmpfs mounts to the container"}, + cli.IntFlag{Name: "cpushares", Usage: "set the cpushares for the container"}, + cli.IntFlag{Name: "memory-limit", Usage: "set the memory limit for the container"}, + cli.IntFlag{Name: "memory-swap", Usage: "set the memory swap limit for the container"}, + cli.StringFlag{Name: "cpuset-cpus", Usage: "set the cpuset cpus"}, + cli.StringFlag{Name: "cpuset-mems", Usage: "set the cpuset mems"}, + cli.StringFlag{Name: "apparmor-profile", Usage: "set the apparmor profile"}, + cli.StringFlag{Name: "process-label", Usage: "set the process label"}, + cli.StringFlag{Name: "mount-label", Usage: "set the mount label"}, + cli.StringFlag{Name: "rootfs", Usage: "set the rootfs"}, + cli.IntFlag{Name: "userns-root-uid", Usage: "set the user namespace root uid"}, + cli.StringFlag{Name: "hostname", Value: "nsinit", Usage: "hostname value for the container"}, + 
cli.StringFlag{Name: "net", Value: "", Usage: "network namespace"}, + cli.StringFlag{Name: "ipc", Value: "", Usage: "ipc namespace"}, + cli.StringFlag{Name: "pid", Value: "", Usage: "pid namespace"}, + cli.StringFlag{Name: "uts", Value: "", Usage: "uts namespace"}, + cli.StringFlag{Name: "mnt", Value: "", Usage: "mount namespace"}, + cli.StringFlag{Name: "veth-bridge", Usage: "veth bridge"}, + cli.StringFlag{Name: "veth-address", Usage: "veth ip address"}, + cli.StringFlag{Name: "veth-gateway", Usage: "veth gateway address"}, + cli.IntFlag{Name: "veth-mtu", Usage: "veth mtu"}, +} + var configCommand = cli.Command{ - Name: "config", - Usage: "display the container configuration", - Action: configAction, + Name: "config", + Usage: "generate a standard configuration file for a container", + Flags: append([]cli.Flag{ + cli.StringFlag{Name: "file,f", Value: "stdout", Usage: "write the configuration to the specified file"}, + }, createFlags...), + Action: func(context *cli.Context) { + template := getTemplate() + modify(template, context) + data, err := json.MarshalIndent(template, "", "\t") + if err != nil { + fatal(err) + } + var f *os.File + filePath := context.String("file") + switch filePath { + case "stdout", "": + f = os.Stdout + default: + if f, err = os.Create(filePath); err != nil { + fatal(err) + } + defer f.Close() + } + if _, err := io.Copy(f, bytes.NewBuffer(data)); err != nil { + fatal(err) + } + }, } -func configAction(context *cli.Context) { - container, err := loadConfig() - if err != nil { - log.Fatal(err) +func modify(config *configs.Config, context *cli.Context) { + config.ParentDeathSignal = context.Int("parent-death-signal") + config.Readonlyfs = context.Bool("read-only") + config.Cgroups.CpusetCpus = context.String("cpuset-cpus") + config.Cgroups.CpusetMems = context.String("cpuset-mems") + config.Cgroups.CpuShares = int64(context.Int("cpushares")) + config.Cgroups.Memory = int64(context.Int("memory-limit")) + config.Cgroups.MemorySwap = 
int64(context.Int("memory-swap")) + config.AppArmorProfile = context.String("apparmor-profile") + config.ProcessLabel = context.String("process-label") + config.MountLabel = context.String("mount-label") + + rootfs := context.String("rootfs") + if rootfs != "" { + config.Rootfs = rootfs } - data, err := json.MarshalIndent(container, "", "\t") - if err != nil { - log.Fatal(err) + userns_uid := context.Int("userns-root-uid") + if userns_uid != 0 { + config.Namespaces.Add(configs.NEWUSER, "") + config.UidMappings = []configs.IDMap{ + {ContainerID: 0, HostID: userns_uid, Size: 1}, + {ContainerID: 1, HostID: 1, Size: userns_uid - 1}, + {ContainerID: userns_uid + 1, HostID: userns_uid + 1, Size: math.MaxInt32 - userns_uid}, + } + config.GidMappings = []configs.IDMap{ + {ContainerID: 0, HostID: userns_uid, Size: 1}, + {ContainerID: 1, HostID: 1, Size: userns_uid - 1}, + {ContainerID: userns_uid + 1, HostID: userns_uid + 1, Size: math.MaxInt32 - userns_uid}, + } + for _, node := range config.Devices { + node.Uid = uint32(userns_uid) + node.Gid = uint32(userns_uid) + } + } + for _, rawBind := range context.StringSlice("bind") { + mount := &configs.Mount{ + Device: "bind", + Flags: syscall.MS_BIND | syscall.MS_REC, + } + parts := strings.SplitN(rawBind, ":", 3) + switch len(parts) { + default: + logrus.Fatalf("invalid bind mount %s", rawBind) + case 2: + mount.Source, mount.Destination = parts[0], parts[1] + case 3: + mount.Source, mount.Destination = parts[0], parts[1] + switch parts[2] { + case "ro": + mount.Flags |= syscall.MS_RDONLY + case "rw": + default: + logrus.Fatalf("invalid bind mount mode %s", parts[2]) + } + } + config.Mounts = append(config.Mounts, mount) + } + for _, tmpfs := range context.StringSlice("tmpfs") { + config.Mounts = append(config.Mounts, &configs.Mount{ + Device: "tmpfs", + Destination: tmpfs, + Flags: syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV, + }) + } + for flag, value := range map[string]configs.NamespaceType{ + "net": 
configs.NEWNET, + "mnt": configs.NEWNS, + "pid": configs.NEWPID, + "ipc": configs.NEWIPC, + "uts": configs.NEWUTS, + } { + switch v := context.String(flag); v { + case "host": + config.Namespaces.Remove(value) + case "", "private": + if !config.Namespaces.Contains(value) { + config.Namespaces.Add(value, "") + } + if flag == "net" { + config.Networks = []*configs.Network{ + { + Type: "loopback", + Address: "127.0.0.1/0", + Gateway: "localhost", + }, + } + } + if flag == "uts" { + config.Hostname = context.String("hostname") + } + default: + config.Namespaces.Remove(value) + config.Namespaces.Add(value, v) + } + } + if bridge := context.String("veth-bridge"); bridge != "" { + hostName, err := utils.GenerateRandomName("veth", 7) + if err != nil { + logrus.Fatal(err) + } + network := &configs.Network{ + Type: "veth", + Name: "eth0", + Bridge: bridge, + Address: context.String("veth-address"), + Gateway: context.String("veth-gateway"), + Mtu: context.Int("veth-mtu"), + HostInterfaceName: hostName, + } + config.Networks = append(config.Networks, network) } - - fmt.Printf("%s", data) +} + +func getTemplate() *configs.Config { + cwd, err := os.Getwd() + if err != nil { + panic(err) + } + return &configs.Config{ + Rootfs: cwd, + ParentDeathSignal: int(syscall.SIGKILL), + Capabilities: []string{ + "CHOWN", + "DAC_OVERRIDE", + "FSETID", + "FOWNER", + "MKNOD", + "NET_RAW", + "SETGID", + "SETUID", + "SETFCAP", + "SETPCAP", + "NET_BIND_SERVICE", + "SYS_CHROOT", + "KILL", + "AUDIT_WRITE", + }, + Namespaces: configs.Namespaces([]configs.Namespace{ + {Type: configs.NEWNS}, + {Type: configs.NEWUTS}, + {Type: configs.NEWIPC}, + {Type: configs.NEWPID}, + {Type: configs.NEWNET}, + }), + Cgroups: &configs.Cgroup{ + Name: filepath.Base(cwd), + Parent: "nsinit", + AllowAllDevices: false, + AllowedDevices: configs.DefaultAllowedDevices, + }, + Devices: configs.DefaultAutoCreatedDevices, + MaskPaths: []string{ + "/proc/kcore", + }, + ReadonlyPaths: []string{ + "/proc/sys", 
"/proc/sysrq-trigger", "/proc/irq", "/proc/bus", + }, + Mounts: []*configs.Mount{ + { + Source: "proc", + Destination: "/proc", + Device: "proc", + Flags: defaultMountFlags, + }, + { + Source: "tmpfs", + Destination: "/dev", + Device: "tmpfs", + Flags: syscall.MS_NOSUID | syscall.MS_STRICTATIME, + Data: "mode=755", + }, + { + Source: "devpts", + Destination: "/dev/pts", + Device: "devpts", + Flags: syscall.MS_NOSUID | syscall.MS_NOEXEC, + Data: "newinstance,ptmxmode=0666,mode=0620,gid=5", + }, + { + Device: "tmpfs", + Source: "shm", + Destination: "/dev/shm", + Data: "mode=1777,size=65536k", + Flags: defaultMountFlags, + }, + { + Source: "mqueue", + Destination: "/dev/mqueue", + Device: "mqueue", + Flags: defaultMountFlags, + }, + { + Source: "sysfs", + Destination: "/sys", + Device: "sysfs", + Flags: defaultMountFlags | syscall.MS_RDONLY, + }, + }, + Rlimits: []configs.Rlimit{ + { + Type: syscall.RLIMIT_NOFILE, + Hard: 1024, + Soft: 1024, + }, + }, + } + } diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/exec.go b/vendor/src/github.com/docker/libcontainer/nsinit/exec.go index 6fc553b8f9..9d302aa31e 100644 --- a/vendor/src/github.com/docker/libcontainer/nsinit/exec.go +++ b/vendor/src/github.com/docker/libcontainer/nsinit/exec.go @@ -1,208 +1,116 @@ package main import ( - "fmt" - "io" - "log" "os" "os/exec" "os/signal" "syscall" - "text/tabwriter" "github.com/codegangsta/cli" - "github.com/docker/docker/pkg/term" "github.com/docker/libcontainer" - consolepkg "github.com/docker/libcontainer/console" - "github.com/docker/libcontainer/namespaces" + "github.com/docker/libcontainer/utils" ) +var standardEnvironment = &cli.StringSlice{ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", + "HOSTNAME=nsinit", + "TERM=xterm", +} + var execCommand = cli.Command{ Name: "exec", Usage: "execute a new command inside a container", Action: execAction, - Flags: []cli.Flag{ - cli.BoolFlag{Name: "list", Usage: "list all registered exec functions"}, - 
cli.StringFlag{Name: "func", Value: "exec", Usage: "function name to exec inside a container"}, - }, + Flags: append([]cli.Flag{ + cli.BoolFlag{Name: "tty,t", Usage: "allocate a TTY to the container"}, + cli.StringFlag{Name: "id", Value: "nsinit", Usage: "specify the ID for a container"}, + cli.StringFlag{Name: "config", Value: "", Usage: "path to the configuration file"}, + cli.StringFlag{Name: "user,u", Value: "root", Usage: "set the user, uid, and/or gid for the process"}, + cli.StringFlag{Name: "cwd", Value: "", Usage: "set the current working dir"}, + cli.StringSliceFlag{Name: "env", Value: standardEnvironment, Usage: "set environment variables for the process"}, + }, createFlags...), } func execAction(context *cli.Context) { - if context.Bool("list") { - w := tabwriter.NewWriter(os.Stdout, 10, 1, 3, ' ', 0) - fmt.Fprint(w, "NAME\tUSAGE\n") - - for k, f := range argvs { - fmt.Fprintf(w, "%s\t%s\n", k, f.Usage) - } - - w.Flush() - - return - } - - var exitCode int - - container, err := loadConfig() + factory, err := loadFactory(context) if err != nil { - log.Fatal(err) + fatal(err) } - - state, err := libcontainer.GetState(dataPath) - if err != nil && !os.IsNotExist(err) { - log.Fatalf("unable to read state.json: %s", err) - } - - if state != nil { - exitCode, err = startInExistingContainer(container, state, context.String("func"), context) - } else { - exitCode, err = startContainer(container, dataPath, []string(context.Args())) - } - + config, err := loadConfig(context) if err != nil { - log.Fatalf("failed to exec: %s", err) + fatal(err) + } + created := false + container, err := factory.Load(context.String("id")) + if err != nil { + created = true + if container, err = factory.Create(context.String("id"), config); err != nil { + fatal(err) + } + } + process := &libcontainer.Process{ + Args: context.Args(), + Env: context.StringSlice("env"), + User: context.String("user"), + Cwd: context.String("cwd"), + Stdin: os.Stdin, + Stdout: os.Stdout, + Stderr: 
os.Stderr, + } + rootuid, err := config.HostUID() + if err != nil { + fatal(err) + } + tty, err := newTty(context, process, rootuid) + if err != nil { + fatal(err) + } + if err := tty.attach(process); err != nil { + fatal(err) + } + go handleSignals(process, tty) + err = container.Start(process) + if err != nil { + tty.Close() + if created { + container.Destroy() + } + fatal(err) } - os.Exit(exitCode) -} - -// the process for execing a new process inside an existing container is that we have to exec ourself -// with the nsenter argument so that the C code can setns an the namespaces that we require. Then that -// code path will drop us into the path that we can do the final setup of the namespace and exec the users -// application. -func startInExistingContainer(config *libcontainer.Config, state *libcontainer.State, action string, context *cli.Context) (int, error) { - var ( - master *os.File - console string - err error - - sigc = make(chan os.Signal, 10) - - stdin = os.Stdin - stdout = os.Stdout - stderr = os.Stderr - ) - signal.Notify(sigc) - - if config.Tty { - stdin = nil - stdout = nil - stderr = nil - - master, console, err = consolepkg.CreateMasterAndConsole() - if err != nil { - return -1, err - } - - go io.Copy(master, os.Stdin) - go io.Copy(os.Stdout, master) - - state, err := term.SetRawTerminal(os.Stdin.Fd()) - if err != nil { - return -1, err - } - - defer term.RestoreTerminal(os.Stdin.Fd(), state) - } - - startCallback := func(cmd *exec.Cmd) { - go func() { - resizeTty(master) - - for sig := range sigc { - switch sig { - case syscall.SIGWINCH: - resizeTty(master) - default: - cmd.Process.Signal(sig) - } + status, err := process.Wait() + if err != nil { + exitError, ok := err.(*exec.ExitError) + if ok { + status = exitError.ProcessState + } else { + tty.Close() + if created { + container.Destroy() } - }() + fatal(err) + } } - - return namespaces.ExecIn(config, state, context.Args(), os.Args[0], action, stdin, stdout, stderr, console, startCallback) + 
if created { + if err := container.Destroy(); err != nil { + tty.Close() + fatal(err) + } + } + tty.Close() + os.Exit(utils.ExitStatus(status.Sys().(syscall.WaitStatus))) } -// startContainer starts the container. Returns the exit status or -1 and an -// error. -// -// Signals sent to the current process will be forwarded to container. -func startContainer(container *libcontainer.Config, dataPath string, args []string) (int, error) { - var ( - cmd *exec.Cmd - sigc = make(chan os.Signal, 10) - ) - +func handleSignals(container *libcontainer.Process, tty *tty) { + sigc := make(chan os.Signal, 10) signal.Notify(sigc) - - createCommand := func(container *libcontainer.Config, console, dataPath, init string, pipe *os.File, args []string) *exec.Cmd { - cmd = namespaces.DefaultCreateCommand(container, console, dataPath, init, pipe, args) - if logPath != "" { - cmd.Env = append(cmd.Env, fmt.Sprintf("log=%s", logPath)) + tty.resize() + for sig := range sigc { + switch sig { + case syscall.SIGWINCH: + tty.resize() + default: + container.Signal(sig) } - return cmd - } - - var ( - master *os.File - console string - err error - - stdin = os.Stdin - stdout = os.Stdout - stderr = os.Stderr - ) - - if container.Tty { - stdin = nil - stdout = nil - stderr = nil - - master, console, err = consolepkg.CreateMasterAndConsole() - if err != nil { - return -1, err - } - - go io.Copy(master, os.Stdin) - go io.Copy(os.Stdout, master) - - state, err := term.SetRawTerminal(os.Stdin.Fd()) - if err != nil { - return -1, err - } - - defer term.RestoreTerminal(os.Stdin.Fd(), state) - } - - startCallback := func() { - go func() { - resizeTty(master) - - for sig := range sigc { - switch sig { - case syscall.SIGWINCH: - resizeTty(master) - default: - cmd.Process.Signal(sig) - } - } - }() - } - - return namespaces.Exec(container, stdin, stdout, stderr, console, dataPath, args, createCommand, startCallback) -} - -func resizeTty(master *os.File) { - if master == nil { - return - } - - ws, err := 
term.GetWinsize(os.Stdin.Fd()) - if err != nil { - return - } - - if err := term.SetWinsize(master.Fd(), ws); err != nil { - return } } diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/init.go b/vendor/src/github.com/docker/libcontainer/nsinit/init.go index 6df9b1d894..7b2cf1935d 100644 --- a/vendor/src/github.com/docker/libcontainer/nsinit/init.go +++ b/vendor/src/github.com/docker/libcontainer/nsinit/init.go @@ -1,47 +1,28 @@ package main import ( - "log" - "os" "runtime" - "strconv" + log "github.com/Sirupsen/logrus" "github.com/codegangsta/cli" - "github.com/docker/libcontainer/namespaces" + "github.com/docker/libcontainer" + _ "github.com/docker/libcontainer/nsenter" ) -var ( - dataPath = os.Getenv("data_path") - console = os.Getenv("console") - rawPipeFd = os.Getenv("pipe") - - initCommand = cli.Command{ - Name: "init", - Usage: "runs the init process inside the namespace", - Action: initAction, - } -) - -func initAction(context *cli.Context) { - runtime.LockOSThread() - - container, err := loadConfig() - if err != nil { - log.Fatal(err) - } - - rootfs, err := os.Getwd() - if err != nil { - log.Fatal(err) - } - - pipeFd, err := strconv.Atoi(rawPipeFd) - if err != nil { - log.Fatal(err) - } - - pipe := os.NewFile(uintptr(pipeFd), "pipe") - if err := namespaces.Init(container, rootfs, console, pipe, []string(context.Args())); err != nil { - log.Fatalf("unable to initialize for container: %s", err) - } +var initCommand = cli.Command{ + Name: "init", + Usage: "runs the init process inside the namespace", + Action: func(context *cli.Context) { + log.SetLevel(log.DebugLevel) + runtime.GOMAXPROCS(1) + runtime.LockOSThread() + factory, err := libcontainer.New("") + if err != nil { + fatal(err) + } + if err := factory.StartInitialization(3); err != nil { + fatal(err) + } + panic("This line should never be executed") + }, } diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/main.go b/vendor/src/github.com/docker/libcontainer/nsinit/main.go
index 53625ca82c..922d74ccbb 100644 --- a/vendor/src/github.com/docker/libcontainer/nsinit/main.go +++ b/vendor/src/github.com/docker/libcontainer/nsinit/main.go @@ -1,57 +1,22 @@ package main import ( - "log" "os" - "strings" + log "github.com/Sirupsen/logrus" "github.com/codegangsta/cli" ) -var ( - logPath = os.Getenv("log") - argvs = make(map[string]*rFunc) -) - -func init() { - argvs["exec"] = &rFunc{ - Usage: "execute a process inside an existing container", - Action: nsenterExec, - } - - argvs["mknod"] = &rFunc{ - Usage: "mknod a device inside an existing container", - Action: nsenterMknod, - } - - argvs["ip"] = &rFunc{ - Usage: "display the container's network interfaces", - Action: nsenterIp, - } -} - func main() { - // we need to check our argv 0 for any registred functions to run instead of the - // normal cli code path - f, exists := argvs[strings.TrimPrefix(os.Args[0], "nsenter-")] - if exists { - runFunc(f) - - return - } - app := cli.NewApp() - app.Name = "nsinit" - app.Version = "0.1" + app.Version = "2" app.Author = "libcontainer maintainers" app.Flags = []cli.Flag{ - cli.StringFlag{Name: "nspid"}, - cli.StringFlag{Name: "console"}, + cli.StringFlag{Name: "root", Value: ".", Usage: "root directory for containers"}, + cli.StringFlag{Name: "log-file", Value: "", Usage: "set the log file to output logs to"}, + cli.BoolFlag{Name: "debug", Usage: "enable debug output in the logs"}, } - - app.Before = preload - app.Commands = []cli.Command{ configCommand, execCommand, @@ -60,8 +25,21 @@ func main() { pauseCommand, statsCommand, unpauseCommand, + stateCommand, + } + app.Before = func(context *cli.Context) error { + if context.GlobalBool("debug") { + log.SetLevel(log.DebugLevel) + } + if path := context.GlobalString("log-file"); path != "" { + f, err := os.Create(path) + if err != nil { + return err + } + log.SetOutput(f) + } + return nil } - if err := app.Run(os.Args); err != nil { log.Fatal(err) } diff --git 
a/vendor/src/github.com/docker/libcontainer/nsinit/nsenter.go b/vendor/src/github.com/docker/libcontainer/nsinit/nsenter.go deleted file mode 100644 index 8dc149f4fb..0000000000 --- a/vendor/src/github.com/docker/libcontainer/nsinit/nsenter.go +++ /dev/null @@ -1,84 +0,0 @@ -package main - -import ( - "fmt" - "log" - "net" - "os" - "strconv" - "strings" - "text/tabwriter" - - "github.com/docker/libcontainer" - "github.com/docker/libcontainer/devices" - "github.com/docker/libcontainer/mount/nodes" - "github.com/docker/libcontainer/namespaces" - _ "github.com/docker/libcontainer/namespaces/nsenter" -) - -// nsenterExec exec's a process inside an existing container -func nsenterExec(config *libcontainer.Config, args []string) { - if err := namespaces.FinalizeSetns(config, args); err != nil { - log.Fatalf("failed to nsenter: %s", err) - } -} - -// nsenterMknod runs mknod inside an existing container -// -// mknod -func nsenterMknod(config *libcontainer.Config, args []string) { - if len(args) != 4 { - log.Fatalf("expected mknod to have 4 arguments not %d", len(args)) - } - - t := rune(args[1][0]) - - major, err := strconv.Atoi(args[2]) - if err != nil { - log.Fatal(err) - } - - minor, err := strconv.Atoi(args[3]) - if err != nil { - log.Fatal(err) - } - - n := &devices.Device{ - Path: args[0], - Type: t, - MajorNumber: int64(major), - MinorNumber: int64(minor), - } - - if err := nodes.CreateDeviceNode("/", n); err != nil { - log.Fatal(err) - } -} - -// nsenterIp displays the network interfaces inside a container's net namespace -func nsenterIp(config *libcontainer.Config, args []string) { - interfaces, err := net.Interfaces() - if err != nil { - log.Fatal(err) - } - - w := tabwriter.NewWriter(os.Stdout, 10, 1, 3, ' ', 0) - fmt.Fprint(w, "NAME\tMTU\tMAC\tFLAG\tADDRS\n") - - for _, iface := range interfaces { - addrs, err := iface.Addrs() - if err != nil { - log.Fatal(err) - } - - o := []string{} - - for _, a := range addrs { - o = append(o, a.String()) - } - - 
fmt.Fprintf(w, "%s\t%d\t%s\t%s\t%s\n", iface.Name, iface.MTU, iface.HardwareAddr, iface.Flags, strings.Join(o, ",")) - } - - w.Flush() -} diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/oom.go b/vendor/src/github.com/docker/libcontainer/nsinit/oom.go index 106abeb268..a59b753336 100644 --- a/vendor/src/github.com/docker/libcontainer/nsinit/oom.go +++ b/vendor/src/github.com/docker/libcontainer/nsinit/oom.go @@ -4,25 +4,27 @@ import ( "log" "github.com/codegangsta/cli" - "github.com/docker/libcontainer" ) var oomCommand = cli.Command{ - Name: "oom", - Usage: "display oom notifications for a container", - Action: oomAction, -} - -func oomAction(context *cli.Context) { - state, err := libcontainer.GetState(dataPath) - if err != nil { - log.Fatal(err) - } - n, err := libcontainer.NotifyOnOOM(state) - if err != nil { - log.Fatal(err) - } - for _ = range n { - log.Printf("OOM notification received") - } + Name: "oom", + Usage: "display oom notifications for a container", + Flags: []cli.Flag{ + cli.StringFlag{Name: "id", Value: "nsinit", Usage: "specify the ID for a container"}, + }, + Action: func(context *cli.Context) { + container, err := getContainer(context) + if err != nil { + log.Fatal(err) + } + n, err := container.NotifyOOM() + if err != nil { + log.Fatal(err) + } + for x := range n { + // hack to calm down go1.4 gofmt + _ = x + log.Printf("OOM notification received") + } + }, } diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/pause.go b/vendor/src/github.com/docker/libcontainer/nsinit/pause.go index ada24250c1..89af0b6f73 100644 --- a/vendor/src/github.com/docker/libcontainer/nsinit/pause.go +++ b/vendor/src/github.com/docker/libcontainer/nsinit/pause.go @@ -4,46 +4,38 @@ import ( "log" "github.com/codegangsta/cli" - "github.com/docker/libcontainer/cgroups" - "github.com/docker/libcontainer/cgroups/fs" - "github.com/docker/libcontainer/cgroups/systemd" ) var pauseCommand = cli.Command{ - Name: "pause", - Usage: "pause the container's 
processes", - Action: pauseAction, + Name: "pause", + Usage: "pause the container's processes", + Flags: []cli.Flag{ + cli.StringFlag{Name: "id", Value: "nsinit", Usage: "specify the ID for a container"}, + }, + Action: func(context *cli.Context) { + container, err := getContainer(context) + if err != nil { + log.Fatal(err) + } + if err = container.Pause(); err != nil { + log.Fatal(err) + } + }, } var unpauseCommand = cli.Command{ - Name: "unpause", - Usage: "unpause the container's processes", - Action: unpauseAction, -} - -func pauseAction(context *cli.Context) { - if err := toggle(cgroups.Frozen); err != nil { - log.Fatal(err) - } -} - -func unpauseAction(context *cli.Context) { - if err := toggle(cgroups.Thawed); err != nil { - log.Fatal(err) - } -} - -func toggle(state cgroups.FreezerState) error { - container, err := loadConfig() - if err != nil { - return err - } - - if systemd.UseSystemd() { - err = systemd.Freeze(container.Cgroups, state) - } else { - err = fs.Freeze(container.Cgroups, state) - } - - return err + Name: "unpause", + Usage: "unpause the container's processes", + Flags: []cli.Flag{ + cli.StringFlag{Name: "id", Value: "nsinit", Usage: "specify the ID for a container"}, + }, + Action: func(context *cli.Context) { + container, err := getContainer(context) + if err != nil { + log.Fatal(err) + } + if err = container.Resume(); err != nil { + log.Fatal(err) + } + }, } diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/state.go b/vendor/src/github.com/docker/libcontainer/nsinit/state.go new file mode 100644 index 0000000000..46981bb799 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/nsinit/state.go @@ -0,0 +1,31 @@ +package main + +import ( + "encoding/json" + "fmt" + + "github.com/codegangsta/cli" +) + +var stateCommand = cli.Command{ + Name: "state", + Usage: "get the container's current state", + Flags: []cli.Flag{ + cli.StringFlag{Name: "id", Value: "nsinit", Usage: "specify the ID for a container"}, + }, + Action: 
func(context *cli.Context) { + container, err := getContainer(context) + if err != nil { + fatal(err) + } + state, err := container.State() + if err != nil { + fatal(err) + } + data, err := json.MarshalIndent(state, "", "\t") + if err != nil { + fatal(err) + } + fmt.Printf("%s", data) + }, +} diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/stats.go b/vendor/src/github.com/docker/libcontainer/nsinit/stats.go index 612b4a4bae..49087fa236 100644 --- a/vendor/src/github.com/docker/libcontainer/nsinit/stats.go +++ b/vendor/src/github.com/docker/libcontainer/nsinit/stats.go @@ -3,37 +3,29 @@ package main import ( "encoding/json" "fmt" - "log" "github.com/codegangsta/cli" - "github.com/docker/libcontainer" ) var statsCommand = cli.Command{ - Name: "stats", - Usage: "display statistics for the container", - Action: statsAction, -} - -func statsAction(context *cli.Context) { - container, err := loadConfig() - if err != nil { - log.Fatal(err) - } - - state, err := libcontainer.GetState(dataPath) - if err != nil { - log.Fatal(err) - } - - stats, err := libcontainer.GetStats(container, state) - if err != nil { - log.Fatal(err) - } - data, err := json.MarshalIndent(stats, "", "\t") - if err != nil { - log.Fatal(err) - } - - fmt.Printf("%s", data) + Name: "stats", + Usage: "display statistics for the container", + Flags: []cli.Flag{ + cli.StringFlag{Name: "id", Value: "nsinit", Usage: "specify the ID for a container"}, + }, + Action: func(context *cli.Context) { + container, err := getContainer(context) + if err != nil { + fatal(err) + } + stats, err := container.Stats() + if err != nil { + fatal(err) + } + data, err := json.MarshalIndent(stats, "", "\t") + if err != nil { + fatal(err) + } + fmt.Printf("%s", data) + }, } diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/tty.go b/vendor/src/github.com/docker/libcontainer/nsinit/tty.go new file mode 100644 index 0000000000..668939745a --- /dev/null +++ 
b/vendor/src/github.com/docker/libcontainer/nsinit/tty.go @@ -0,0 +1,65 @@ +package main + +import ( + "io" + "os" + + "github.com/codegangsta/cli" + "github.com/docker/docker/pkg/term" + "github.com/docker/libcontainer" +) + +func newTty(context *cli.Context, p *libcontainer.Process, rootuid int) (*tty, error) { + if context.Bool("tty") { + console, err := p.NewConsole(rootuid) + if err != nil { + return nil, err + } + return &tty{ + console: console, + }, nil + } + return &tty{}, nil +} + +type tty struct { + console libcontainer.Console + state *term.State +} + +func (t *tty) Close() error { + if t.console != nil { + t.console.Close() + } + if t.state != nil { + term.RestoreTerminal(os.Stdin.Fd(), t.state) + } + return nil +} + +func (t *tty) attach(process *libcontainer.Process) error { + if t.console != nil { + go io.Copy(t.console, os.Stdin) + go io.Copy(os.Stdout, t.console) + state, err := term.SetRawTerminal(os.Stdin.Fd()) + if err != nil { + return err + } + t.state = state + process.Stderr = nil + process.Stdout = nil + process.Stdin = nil + } + return nil +} + +func (t *tty) resize() error { + if t.console == nil { + return nil + } + ws, err := term.GetWinsize(os.Stdin.Fd()) + if err != nil { + return err + } + return term.SetWinsize(t.console.Fd(), ws) +} diff --git a/vendor/src/github.com/docker/libcontainer/nsinit/utils.go b/vendor/src/github.com/docker/libcontainer/nsinit/utils.go index 6a8aafbf17..4deca76640 100644 --- a/vendor/src/github.com/docker/libcontainer/nsinit/utils.go +++ b/vendor/src/github.com/docker/libcontainer/nsinit/utils.go @@ -2,89 +2,58 @@ package main import ( "encoding/json" - "log" + "fmt" "os" - "path/filepath" "github.com/codegangsta/cli" "github.com/docker/libcontainer" + "github.com/docker/libcontainer/configs" ) -// rFunc is a function registration for calling after an execin -type rFunc struct { - Usage string - Action func(*libcontainer.Config, []string) -} - -func loadConfig() (*libcontainer.Config, error) { - f, err 
:= os.Open(filepath.Join(dataPath, "container.json")) - if err != nil { - return nil, err - } - defer f.Close() - - var container *libcontainer.Config - if err := json.NewDecoder(f).Decode(&container); err != nil { - return nil, err - } - - return container, nil -} - -func openLog(name string) error { - f, err := os.OpenFile(name, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0755) - if err != nil { - return err - } - - log.SetOutput(f) - - return nil -} - -func findUserArgs() []string { - i := 0 - for _, a := range os.Args { - i++ - - if a == "--" { - break +func loadConfig(context *cli.Context) (*configs.Config, error) { + if path := context.String("config"); path != "" { + f, err := os.Open(path) + if err != nil { + return nil, err } + defer f.Close() + var config *configs.Config + if err := json.NewDecoder(f).Decode(&config); err != nil { + return nil, err + } + return config, nil } - - return os.Args[i:] -} - -// loadConfigFromFd loads a container's config from the sync pipe that is provided by -// fd 3 when running a process -func loadConfigFromFd() (*libcontainer.Config, error) { - pipe := os.NewFile(3, "pipe") - defer pipe.Close() - - var config *libcontainer.Config - if err := json.NewDecoder(pipe).Decode(&config); err != nil { - return nil, err - } + config := getTemplate() + modify(config, context) return config, nil } -func preload(context *cli.Context) error { - if logPath != "" { - if err := openLog(logPath); err != nil { - return err - } - } - - return nil +func loadFactory(context *cli.Context) (libcontainer.Factory, error) { + return libcontainer.New(context.GlobalString("root"), libcontainer.Cgroupfs) } -func runFunc(f *rFunc) { - userArgs := findUserArgs() - - config, err := loadConfigFromFd() +func getContainer(context *cli.Context) (libcontainer.Container, error) { + factory, err := loadFactory(context) if err != nil { - log.Fatalf("unable to receive config from sync pipe: %s", err) + return nil, err } - - f.Action(config, userArgs) + container, err := 
factory.Load(context.String("id")) + if err != nil { + return nil, err + } + return container, nil +} + +func fatal(err error) { + if lerr, ok := err.(libcontainer.Error); ok { + lerr.Detail(os.Stderr) + os.Exit(1) + } + fmt.Fprintln(os.Stderr, err) + os.Exit(1) +} + +func fatalf(t string, v ...interface{}) { + fmt.Fprintf(os.Stderr, t, v...) + os.Exit(1) } diff --git a/vendor/src/github.com/docker/libcontainer/process.go b/vendor/src/github.com/docker/libcontainer/process.go index 489666a587..12f90daf79 100644 --- a/vendor/src/github.com/docker/libcontainer/process.go +++ b/vendor/src/github.com/docker/libcontainer/process.go @@ -1,27 +1,82 @@ package libcontainer -import "io" +import ( + "fmt" + "io" + "math" + "os" +) -// Configuration for a process to be run inside a container. -type ProcessConfig struct { +type processOperations interface { + wait() (*os.ProcessState, error) + signal(sig os.Signal) error + pid() int +} + +// Process specifies the configuration and IO for a process inside +// a container. +type Process struct { // The command to be run followed by any arguments. Args []string - // Map of environment variables to their values. + // Env specifies the environment variables for the process. Env []string + // User will set the uid and gid of the executing process running inside the container + // local to the container's user and group configuration. + User string + + // Cwd will change the process's current working directory inside the container's rootfs. + Cwd string + // Stdin is a pointer to a reader which provides the standard input stream. + Stdin io.Reader + // Stdout is a pointer to a writer which receives the standard output stream. + Stdout io.Writer + // Stderr is a pointer to a writer which receives the standard error stream. - // - // If a reader or writer is nil, the input stream is assumed to be empty and the output is - // discarded. - // - // The readers and writers, if supplied, are closed when the process terminates.
Their Close - methods should be idempotent. - // - // Stdout and Stderr may refer to the same writer in which case the output is interspersed. - Stdin io.ReadCloser - Stdout io.WriteCloser - Stderr io.WriteCloser + Stderr io.Writer + + // consolePath is the path to the console allocated to the container. + consolePath string + + ops processOperations +} + +// Wait waits for the process to exit. +// Wait releases any resources associated with the Process +func (p Process) Wait() (*os.ProcessState, error) { + if p.ops == nil { + return nil, newGenericError(fmt.Errorf("invalid process"), ProcessNotExecuted) + } + return p.ops.wait() +} + +// Pid returns the process ID +func (p Process) Pid() (int, error) { + // math.MinInt32 is returned here, because it's an invalid value + // for the kill() system call. + if p.ops == nil { + return math.MinInt32, newGenericError(fmt.Errorf("invalid process"), ProcessNotExecuted) + } + return p.ops.pid(), nil +} + +// Signal sends a signal to the Process. +func (p Process) Signal(sig os.Signal) error { + if p.ops == nil { + return newGenericError(fmt.Errorf("invalid process"), ProcessNotExecuted) + } + return p.ops.signal(sig) +} + +// NewConsole creates a new console for the process and returns it +func (p *Process) NewConsole(rootuid int) (Console, error) { + console, err := newConsole(rootuid, rootuid) + if err != nil { + return nil, err + } + p.consolePath = console.Path() + return console, nil } diff --git a/vendor/src/github.com/docker/libcontainer/process_linux.go b/vendor/src/github.com/docker/libcontainer/process_linux.go new file mode 100644 index 0000000000..5aab5a7f55 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/process_linux.go @@ -0,0 +1,240 @@ +// +build linux + +package libcontainer + +import ( + "encoding/json" + "io" + "os" + "os/exec" + "syscall" + + "github.com/docker/libcontainer/cgroups" + "github.com/docker/libcontainer/system" +) + +type parentProcess interface { + // pid returns the pid for the 
running process. + pid() int + + // start starts the process execution. + start() error + + // send a SIGKILL to the process and wait for the exit. + terminate() error + + // wait waits on the process returning the process state. + wait() (*os.ProcessState, error) + + // startTime returns the process start time. + startTime() (string, error) + + signal(os.Signal) error +} + +type setnsProcess struct { + cmd *exec.Cmd + parentPipe *os.File + childPipe *os.File + cgroupPaths map[string]string + config *initConfig +} + +func (p *setnsProcess) startTime() (string, error) { + return system.GetProcessStartTime(p.pid()) +} + +func (p *setnsProcess) signal(s os.Signal) error { + return p.cmd.Process.Signal(s) +} + +func (p *setnsProcess) start() (err error) { + defer p.parentPipe.Close() + if err = p.execSetns(); err != nil { + return newSystemError(err) + } + if len(p.cgroupPaths) > 0 { + if err := cgroups.EnterPid(p.cgroupPaths, p.cmd.Process.Pid); err != nil { + return newSystemError(err) + } + } + if err := json.NewEncoder(p.parentPipe).Encode(p.config); err != nil { + return newSystemError(err) + } + if err := syscall.Shutdown(int(p.parentPipe.Fd()), syscall.SHUT_WR); err != nil { + return newSystemError(err) + } + // wait for the child process to fully complete and receive an error message + // if one was encountered + var ierr *genericError + if err := json.NewDecoder(p.parentPipe).Decode(&ierr); err != nil && err != io.EOF { + return newSystemError(err) + } + if ierr != nil { + return newSystemError(ierr) + } + + return nil +} + +// execSetns runs the process that executes C code to perform the setns calls +// because setns support requires the C process to fork off a child and perform the setns +// before the go runtime boots, we wait on the process to die and receive the child's pid +// over the provided pipe. 
+func (p *setnsProcess) execSetns() error { + err := p.cmd.Start() + p.childPipe.Close() + if err != nil { + return newSystemError(err) + } + status, err := p.cmd.Process.Wait() + if err != nil { + p.cmd.Wait() + return newSystemError(err) + } + if !status.Success() { + p.cmd.Wait() + return newSystemError(&exec.ExitError{ProcessState: status}) + } + var pid *pid + if err := json.NewDecoder(p.parentPipe).Decode(&pid); err != nil { + p.cmd.Wait() + return newSystemError(err) + } + + process, err := os.FindProcess(pid.Pid) + if err != nil { + return err + } + + p.cmd.Process = process + return nil +} + +// terminate sends a SIGKILL to the forked process for the setns routine then waits to +// avoid the process becoming a zombie. +func (p *setnsProcess) terminate() error { + err := p.cmd.Process.Kill() + if _, werr := p.wait(); err == nil { + err = werr + } + return err +} + +func (p *setnsProcess) wait() (*os.ProcessState, error) { + err := p.cmd.Wait() + if err != nil { + return p.cmd.ProcessState, err + } + + return p.cmd.ProcessState, nil +} + +func (p *setnsProcess) pid() int { + return p.cmd.Process.Pid +} + +type initProcess struct { + cmd *exec.Cmd + parentPipe *os.File + childPipe *os.File + config *initConfig + manager cgroups.Manager +} + +func (p *initProcess) pid() int { + return p.cmd.Process.Pid +} + +func (p *initProcess) start() error { + defer p.parentPipe.Close() + err := p.cmd.Start() + p.childPipe.Close() + if err != nil { + return newSystemError(err) + } + // Do this before syncing with child so that no children + // can escape the cgroup + if err := p.manager.Apply(p.pid()); err != nil { + return newSystemError(err) + } + defer func() { + if err != nil { + // TODO: should not be the responsibility to call here + p.manager.Destroy() + } + }() + if err := p.createNetworkInterfaces(); err != nil { + return newSystemError(err) + } + if err := p.sendConfig(); err != nil { + return newSystemError(err) + } + // wait for the child process to fully 
complete and receive an error message + // if one was encountered + var ierr *genericError + if err := json.NewDecoder(p.parentPipe).Decode(&ierr); err != nil && err != io.EOF { + return newSystemError(err) + } + if ierr != nil { + return newSystemError(ierr) + } + return nil +} + +func (p *initProcess) wait() (*os.ProcessState, error) { + err := p.cmd.Wait() + if err != nil { + return p.cmd.ProcessState, err + } + // we should kill all processes in the cgroup when init dies if we use the host PID namespace + if p.cmd.SysProcAttr.Cloneflags&syscall.CLONE_NEWPID == 0 { + killCgroupProcesses(p.manager) + } + return p.cmd.ProcessState, nil +} + +func (p *initProcess) terminate() error { + if p.cmd.Process == nil { + return nil + } + err := p.cmd.Process.Kill() + if _, werr := p.wait(); err == nil { + err = werr + } + return err +} + +func (p *initProcess) startTime() (string, error) { + return system.GetProcessStartTime(p.pid()) +} + +func (p *initProcess) sendConfig() error { + // send the state to the container's init process then shutdown writes for the parent + if err := json.NewEncoder(p.parentPipe).Encode(p.config); err != nil { + return err + } + // shutdown writes for the parent side of the pipe + return syscall.Shutdown(int(p.parentPipe.Fd()), syscall.SHUT_WR) +} + +func (p *initProcess) createNetworkInterfaces() error { + for _, config := range p.config.Config.Networks { + strategy, err := getStrategy(config.Type) + if err != nil { + return err + } + n := &network{ + Network: *config, + } + if err := strategy.create(n, p.pid()); err != nil { + return err + } + p.config.Networks = append(p.config.Networks, n) + } + return nil +} + +func (p *initProcess) signal(s os.Signal) error { + return p.cmd.Process.Signal(s) +} diff --git a/vendor/src/github.com/docker/libcontainer/rootfs_linux.go b/vendor/src/github.com/docker/libcontainer/rootfs_linux.go new file mode 100644 index 0000000000..6caa07a0c5 --- /dev/null +++ 
b/vendor/src/github.com/docker/libcontainer/rootfs_linux.go @@ -0,0 +1,356 @@ +// +build linux + +package libcontainer + +import ( + "fmt" + "io/ioutil" + "os" + "path/filepath" + "strings" + "syscall" + "time" + + "github.com/docker/libcontainer/configs" + "github.com/docker/libcontainer/label" +) + +const defaultMountFlags = syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV + +// setupRootfs sets up the devices, mount points, and filesystems for use inside a +// new mount namespace. +func setupRootfs(config *configs.Config, console *linuxConsole) (err error) { + if err := prepareRoot(config); err != nil { + return newSystemError(err) + } + for _, m := range config.Mounts { + if err := mountToRootfs(m, config.Rootfs, config.MountLabel); err != nil { + return newSystemError(err) + } + } + if err := createDevices(config); err != nil { + return newSystemError(err) + } + if err := setupPtmx(config, console); err != nil { + return newSystemError(err) + } + // stdin, stdout and stderr could be pointing to /dev/null from parent namespace. + // re-open them inside this namespace. 
+ if err := reOpenDevNull(config.Rootfs); err != nil { + return newSystemError(err) + } + if err := setupDevSymlinks(config.Rootfs); err != nil { + return newSystemError(err) + } + if err := syscall.Chdir(config.Rootfs); err != nil { + return newSystemError(err) + } + if config.NoPivotRoot { + err = msMoveRoot(config.Rootfs) + } else { + err = pivotRoot(config.Rootfs, config.PivotDir) + } + if err != nil { + return newSystemError(err) + } + if config.Readonlyfs { + if err := setReadonly(); err != nil { + return newSystemError(err) + } + } + syscall.Umask(0022) + return nil +} + +func mountToRootfs(m *configs.Mount, rootfs, mountLabel string) error { + var ( + dest = m.Destination + data = label.FormatMountLabel(m.Data, mountLabel) + ) + if !strings.HasPrefix(dest, rootfs) { + dest = filepath.Join(rootfs, dest) + } + + switch m.Device { + case "proc", "mqueue", "sysfs": + if err := os.MkdirAll(dest, 0755); err != nil && !os.IsExist(err) { + return err + } + return syscall.Mount(m.Source, dest, m.Device, uintptr(m.Flags), "") + case "tmpfs": + stat, err := os.Stat(dest) + if err != nil { + if err := os.MkdirAll(dest, 0755); err != nil && !os.IsExist(err) { + return err + } + } + if err := syscall.Mount(m.Source, dest, m.Device, uintptr(m.Flags), data); err != nil { + return err + } + if stat != nil { + if err = os.Chmod(dest, stat.Mode()); err != nil { + return err + } + } + return nil + case "devpts": + if err := os.MkdirAll(dest, 0755); err != nil && !os.IsExist(err) { + return err + } + return syscall.Mount(m.Source, dest, m.Device, uintptr(m.Flags), data) + case "bind": + stat, err := os.Stat(m.Source) + if err != nil { + // error out if the source of a bind mount does not exist as we will be + // unable to bind anything to it. 
+ return err + } + if err := createIfNotExists(dest, stat.IsDir()); err != nil { + return err + } + if err := syscall.Mount(m.Source, dest, m.Device, uintptr(m.Flags), data); err != nil { + return err + } + if m.Flags&syscall.MS_RDONLY != 0 { + if err := syscall.Mount(m.Source, dest, m.Device, uintptr(m.Flags|syscall.MS_REMOUNT), ""); err != nil { + return err + } + } + if m.Relabel != "" { + if err := label.Relabel(m.Source, mountLabel, m.Relabel); err != nil { + return err + } + } + if m.Flags&syscall.MS_PRIVATE != 0 { + if err := syscall.Mount("", dest, "none", uintptr(syscall.MS_PRIVATE), ""); err != nil { + return err + } + } + default: + return fmt.Errorf("unknown mount device %q to %q", m.Device, m.Destination) + } + return nil +} + +func setupDevSymlinks(rootfs string) error { + var links = [][2]string{ + {"/proc/self/fd", "/dev/fd"}, + {"/proc/self/fd/0", "/dev/stdin"}, + {"/proc/self/fd/1", "/dev/stdout"}, + {"/proc/self/fd/2", "/dev/stderr"}, + } + // kcore support can be toggled with CONFIG_PROC_KCORE; only create a symlink + // in /dev if it exists in /proc. + if _, err := os.Stat("/proc/kcore"); err == nil { + links = append(links, [2]string{"/proc/kcore", "/dev/kcore"}) + } + for _, link := range links { + var ( + src = link[0] + dst = filepath.Join(rootfs, link[1]) + ) + if err := os.Symlink(src, dst); err != nil && !os.IsExist(err) { + return fmt.Errorf("symlink %s %s %s", src, dst, err) + } + } + return nil +} + +// If stdin, stdout or stderr are pointing to '/dev/null' in the global mount namespace, +// this method will make them point to '/dev/null' in this namespace. 
+func reOpenDevNull(rootfs string) error { + var stat, devNullStat syscall.Stat_t + file, err := os.Open(filepath.Join(rootfs, "/dev/null")) + if err != nil { + return fmt.Errorf("Failed to open /dev/null - %s", err) + } + defer file.Close() + if err := syscall.Fstat(int(file.Fd()), &devNullStat); err != nil { + return err + } + for fd := 0; fd < 3; fd++ { + if err := syscall.Fstat(fd, &stat); err != nil { + return err + } + if stat.Rdev == devNullStat.Rdev { + // Close and re-open the fd. + if err := syscall.Dup2(int(file.Fd()), fd); err != nil { + return err + } + } + } + return nil +} + +// Create the device nodes in the container. +func createDevices(config *configs.Config) error { + oldMask := syscall.Umask(0000) + for _, node := range config.Devices { + if err := createDeviceNode(config.Rootfs, node); err != nil { + syscall.Umask(oldMask) + return err + } + } + syscall.Umask(oldMask) + return nil +} + +// Creates the device node in the rootfs of the container. +func createDeviceNode(rootfs string, node *configs.Device) error { + dest := filepath.Join(rootfs, node.Path) + if err := os.MkdirAll(filepath.Dir(dest), 0755); err != nil { + return err + } + if err := mknodDevice(dest, node); err != nil { + if os.IsExist(err) { + return nil + } + if err != syscall.EPERM { + return err + } + // containers running in a user namespace are not allowed to mknod + // devices so we can just bind mount it from the host. 
+ f, err := os.Create(dest) + if err != nil && !os.IsExist(err) { + return err + } + if f != nil { + f.Close() + } + return syscall.Mount(node.Path, dest, "bind", syscall.MS_BIND, "") + } + return nil +} + +func mknodDevice(dest string, node *configs.Device) error { + fileMode := node.FileMode + switch node.Type { + case 'c': + fileMode |= syscall.S_IFCHR + case 'b': + fileMode |= syscall.S_IFBLK + default: + return fmt.Errorf("%c is not a valid device type for device %s", node.Type, node.Path) + } + if err := syscall.Mknod(dest, uint32(fileMode), node.Mkdev()); err != nil { + return err + } + return syscall.Chown(dest, int(node.Uid), int(node.Gid)) +} + +func prepareRoot(config *configs.Config) error { + flag := syscall.MS_PRIVATE | syscall.MS_REC + if config.NoPivotRoot { + flag = syscall.MS_SLAVE | syscall.MS_REC + } + if err := syscall.Mount("", "/", "", uintptr(flag), ""); err != nil { + return err + } + return syscall.Mount(config.Rootfs, config.Rootfs, "bind", syscall.MS_BIND|syscall.MS_REC, "") +} + +func setReadonly() error { + return syscall.Mount("/", "/", "bind", syscall.MS_BIND|syscall.MS_REMOUNT|syscall.MS_RDONLY|syscall.MS_REC, "") +} + +func setupPtmx(config *configs.Config, console *linuxConsole) error { + ptmx := filepath.Join(config.Rootfs, "dev/ptmx") + if err := os.Remove(ptmx); err != nil && !os.IsNotExist(err) { + return err + } + if err := os.Symlink("pts/ptmx", ptmx); err != nil { + return fmt.Errorf("symlink dev ptmx %s", err) + } + if console != nil { + return console.mount(config.Rootfs, config.MountLabel, 0, 0) + } + return nil +} + +func pivotRoot(rootfs, pivotBaseDir string) error { + if pivotBaseDir == "" { + pivotBaseDir = "/" + } + tmpDir := filepath.Join(rootfs, pivotBaseDir) + if err := os.MkdirAll(tmpDir, 0755); err != nil { + return fmt.Errorf("can't create tmp dir %s, error %v", tmpDir, err) + } + pivotDir, err := ioutil.TempDir(tmpDir, ".pivot_root") + if err != nil { + return fmt.Errorf("can't create pivot_root dir %s, error 
%v", pivotDir, err) + } + if err := syscall.PivotRoot(rootfs, pivotDir); err != nil { + return fmt.Errorf("pivot_root %s", err) + } + if err := syscall.Chdir("/"); err != nil { + return fmt.Errorf("chdir / %s", err) + } + // path to pivot dir now changed, update + pivotDir = filepath.Join(pivotBaseDir, filepath.Base(pivotDir)) + if err := syscall.Unmount(pivotDir, syscall.MNT_DETACH); err != nil { + return fmt.Errorf("unmount pivot_root dir %s", err) + } + return os.Remove(pivotDir) +} + +func msMoveRoot(rootfs string) error { + if err := syscall.Mount(rootfs, "/", "", syscall.MS_MOVE, ""); err != nil { + return err + } + if err := syscall.Chroot("."); err != nil { + return err + } + return syscall.Chdir("/") +} + +// createIfNotExists creates a file or a directory only if it does not already exist. +func createIfNotExists(path string, isDir bool) error { + if _, err := os.Stat(path); err != nil { + if os.IsNotExist(err) { + if isDir { + return os.MkdirAll(path, 0755) + } + if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { + return err + } + f, err := os.OpenFile(path, os.O_CREATE, 0755) + if err != nil { + return err + } + f.Close() + } + } + return nil +} + +// remountReadonly will bind over the top of an existing path and ensure that it is read-only. 
+func remountReadonly(path string) error { + for i := 0; i < 5; i++ { + if err := syscall.Mount("", path, "", syscall.MS_REMOUNT|syscall.MS_RDONLY, ""); err != nil && !os.IsNotExist(err) { + switch err { + case syscall.EINVAL: + // Probably not a mountpoint, use bind-mount + if err := syscall.Mount(path, path, "", syscall.MS_BIND, ""); err != nil { + return err + } + return syscall.Mount(path, path, "", syscall.MS_BIND|syscall.MS_REMOUNT|syscall.MS_RDONLY|syscall.MS_REC|defaultMountFlags, "") + case syscall.EBUSY: + time.Sleep(100 * time.Millisecond) + continue + default: + return err + } + } + return nil + } + return fmt.Errorf("unable to mount %s as readonly max retries reached", path) +} + +// maskFile bind mounts /dev/null over the top of the specified path inside a container +// to avoid security issues from processes reading information from non-namespace aware mounts ( proc/kcore ). +func maskFile(path string) error { + if err := syscall.Mount("/dev/null", path, "", syscall.MS_BIND, ""); err != nil && !os.IsNotExist(err) { + return err + } + return nil +} diff --git a/vendor/src/github.com/docker/libcontainer/sample_configs/apparmor.json b/vendor/src/github.com/docker/libcontainer/sample_configs/apparmor.json index 96f73cb794..843c2c61ea 100644 --- a/vendor/src/github.com/docker/libcontainer/sample_configs/apparmor.json +++ b/vendor/src/github.com/docker/libcontainer/sample_configs/apparmor.json @@ -1,196 +1,340 @@ { - "capabilities": [ - "CHOWN", - "DAC_OVERRIDE", - "FOWNER", - "MKNOD", - "NET_RAW", - "SETGID", - "SETUID", - "SETFCAP", - "SETPCAP", - "NET_BIND_SERVICE", - "SYS_CHROOT", - "KILL" - ], - "cgroups": { - "allowed_devices": [ - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 98 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 1, - "path": "/dev/console", - "type": 99 - }, - { - 
"cgroup_permissions": "rwm", - "major_number": 4, - "path": "/dev/tty0", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "minor_number": 1, - "path": "/dev/tty1", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 136, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 2, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 10, - "minor_number": 200, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ], - "name": "docker-koye", - "parent": "docker" - }, - "restrict_sys": true, - "apparmor_profile": "docker-default", - "mount_config": { - "device_nodes": [ - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, 
- { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ] - }, - "environment": [ - "HOME=/", - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", - "HOSTNAME=koye", - "TERM=xterm" - ], - "hostname": "koye", - "namespaces": [ - {"type":"NEWIPC"}, - {"type": "NEWNET"}, - {"type": "NEWNS"}, - {"type": "NEWPID"}, - {"type": "NEWUTS"} - ], - "networks": [ - { - "address": "127.0.0.1/0", - "gateway": "localhost", - "mtu": 1500, - "type": "loopback" - } - ], - "tty": true, - "user": "daemon" + "no_pivot_root": false, + "parent_death_signal": 0, + "pivot_dir": "", + "rootfs": "/rootfs/jessie", + "readonlyfs": false, + "mounts": [ + { + "source": "shm", + "destination": "/dev/shm", + "device": "tmpfs", + "flags": 14, + "data": "mode=1777,size=65536k", + "relabel": "" + }, + { + "source": "mqueue", + "destination": "/dev/mqueue", + "device": "mqueue", + "flags": 14, + "data": "", + "relabel": "" + }, + { + "source": "sysfs", + "destination": "/sys", + "device": "sysfs", + "flags": 15, + "data": "", + "relabel": "" + } + ], + "devices": [ + { + "type": 99, + "path": "/dev/fuse", + "major": 10, + "minor": 229, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + 
"type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "mount_label": "", + "hostname": "nsinit", + "namespaces": [ + { + "type": "NEWNS", + "path": "" + }, + { + "type": "NEWUTS", + "path": "" + }, + { + "type": "NEWIPC", + "path": "" + }, + { + "type": "NEWPID", + "path": "" + }, + { + "type": "NEWNET", + "path": "" + } + ], + "capabilities": [ + "CHOWN", + "DAC_OVERRIDE", + "FSETID", + "FOWNER", + "MKNOD", + "NET_RAW", + "SETGID", + "SETUID", + "SETFCAP", + "SETPCAP", + "NET_BIND_SERVICE", + "SYS_CHROOT", + "KILL", + "AUDIT_WRITE" + ], + "networks": [ + { + "type": "loopback", + "name": "", + "bridge": "", + "mac_address": "", + "address": "127.0.0.1/0", + "gateway": "localhost", + "ipv6_address": "", + "ipv6_gateway": "", + "mtu": 0, + "txqueuelen": 0, + "host_interface_name": "" + } + ], + "routes": null, + "cgroups": { + "name": "libcontainer", + "parent": "nsinit", + "allow_all_devices": false, + "allowed_devices": [ + { + "type": 99, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 98, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/console", + "major": 5, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty0", + "major": 4, + "minor": 0, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty1", + "major": 4, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 136, + "minor": -1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 
99, + "path": "", + "major": 5, + "minor": 2, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 10, + "minor": 200, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "memory": 0, + "memory_reservation": 0, + "memory_swap": 0, + "cpu_shares": 0, + "cpu_quota": 0, + "cpu_period": 0, + "cpuset_cpus": "", + "cpuset_mems": "", + "blkio_weight": 0, + "freezer": "", + "slice": "" + }, + "apparmor_profile": "docker-default", + "process_label": "", + "rlimits": [ + { + "type": 7, + "hard": 1024, + "soft": 1024 + } + ], + "additional_groups": null, + "uid_mappings": null, + "gid_mappings": null, + "mask_paths": [ + "/proc/kcore" + ], + "readonly_paths": [ + "/proc/sys", + "/proc/sysrq-trigger", + "/proc/irq", + "/proc/bus" + ] } diff --git a/vendor/src/github.com/docker/libcontainer/sample_configs/attach_to_bridge.json b/vendor/src/github.com/docker/libcontainer/sample_configs/attach_to_bridge.json index e5c03a7ef4..11335b25fe 100644 --- a/vendor/src/github.com/docker/libcontainer/sample_configs/attach_to_bridge.json +++ 
b/vendor/src/github.com/docker/libcontainer/sample_configs/attach_to_bridge.json @@ -1,202 +1,353 @@ { - "capabilities": [ - "CHOWN", - "DAC_OVERRIDE", - "FOWNER", - "MKNOD", - "NET_RAW", - "SETGID", - "SETUID", - "SETFCAP", - "SETPCAP", - "NET_BIND_SERVICE", - "SYS_CHROOT", - "KILL" - ], - "cgroups": { - "allowed_devices": [ - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 98 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 1, - "path": "/dev/console", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "path": "/dev/tty0", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "minor_number": 1, - "path": "/dev/tty1", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 136, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 2, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 10, - "minor_number": 200, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ], - "name": "docker-koye", - "parent": 
"docker" - }, - "restrict_sys": true, - "mount_config": { - "device_nodes": [ - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ] - }, - "environment": [ - "HOME=/", - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", - "HOSTNAME=koye", - "TERM=xterm" - ], - "hostname": "koye", - "namespaces": [ - {"type": "NEWIPC"}, - {"type": "NEWNET"}, - {"type": "NEWNS"}, - {"type": "NEWPID"}, - {"type": "NEWUTS"} - ], - "networks": [ + "no_pivot_root": false, + "parent_death_signal": 0, + "pivot_dir": "", + "rootfs": "/rootfs/jessie", + "readonlyfs": false, + "mounts": [ + { + "source": "shm", + "destination": "/dev/shm", + "device": "tmpfs", + "flags": 14, + "data": "mode=1777,size=65536k", + "relabel": "" + }, + { + "source": "mqueue", + "destination": "/dev/mqueue", + "device": "mqueue", + "flags": 14, + "data": "", + "relabel": "" + }, + { + "source": "sysfs", + "destination": "/sys", + "device": "sysfs", + "flags": 15, + "data": "", + "relabel": "" + } + ], + "devices": [ + { + "type": 99, + "path": "/dev/fuse", + "major": 10, + "minor": 229, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + 
"file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "mount_label": "", + "hostname": "koye", + "namespaces": [ + { + "type": "NEWNS", + "path": "" + }, + { + "type": "NEWUTS", + "path": "" + }, + { + "type": "NEWIPC", + "path": "" + }, + { + "type": "NEWPID", + "path": "" + }, + { + "type": "NEWNET", + "path": "" + } + ], + "capabilities": [ + "CHOWN", + "DAC_OVERRIDE", + "FSETID", + "FOWNER", + "MKNOD", + "NET_RAW", + "SETGID", + "SETUID", + "SETFCAP", + "SETPCAP", + "NET_BIND_SERVICE", + "SYS_CHROOT", + "KILL", + "AUDIT_WRITE" + ], + "networks": [ + { + "type": "loopback", + "name": "", + "bridge": "", + "mac_address": "", + "address": "127.0.0.1/0", + "gateway": "localhost", + "ipv6_address": "", + "ipv6_gateway": "", + "mtu": 0, + "txqueuelen": 0, + "host_interface_name": "" + }, { - "address": "127.0.0.1/0", - "gateway": "localhost", - "mtu": 1500, - "type": "loopback" - }, - { - "address": "172.17.0.101/16", - "bridge": "docker0", - "veth_prefix": "veth", - "gateway": "172.17.42.1", - "mtu": 1500, - "type": "veth" - } - ], - "tty": true + "type": "veth", + "name": "eth0", + "bridge": "docker0", + "mac_address": "", + "address": "172.17.0.101/16", + "gateway": "172.17.42.1", + "ipv6_address": "", + "ipv6_gateway": "", + "mtu": 1500, + "txqueuelen": 0, + 
"host_interface_name": "vethnsinit" + } + ], + "routes": null, + "cgroups": { + "name": "libcontainer", + "parent": "nsinit", + "allow_all_devices": false, + "allowed_devices": [ + { + "type": 99, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 98, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/console", + "major": 5, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty0", + "major": 4, + "minor": 0, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty1", + "major": 4, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 136, + "minor": -1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 5, + "minor": 2, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 10, + "minor": 200, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": 
"/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "memory": 0, + "memory_reservation": 0, + "memory_swap": 0, + "cpu_shares": 0, + "cpu_quota": 0, + "cpu_period": 0, + "cpuset_cpus": "", + "cpuset_mems": "", + "blkio_weight": 0, + "freezer": "", + "slice": "" + }, + "apparmor_profile": "", + "process_label": "", + "rlimits": [ + { + "type": 7, + "hard": 1024, + "soft": 1024 + } + ], + "additional_groups": null, + "uid_mappings": null, + "gid_mappings": null, + "mask_paths": [ + "/proc/kcore" + ], + "readonly_paths": [ + "/proc/sys", + "/proc/sysrq-trigger", + "/proc/irq", + "/proc/bus" + ] } diff --git a/vendor/src/github.com/docker/libcontainer/sample_configs/host-pid.json b/vendor/src/github.com/docker/libcontainer/sample_configs/host-pid.json index f47af930e6..bf46150443 100644 --- a/vendor/src/github.com/docker/libcontainer/sample_configs/host-pid.json +++ b/vendor/src/github.com/docker/libcontainer/sample_configs/host-pid.json @@ -1,200 +1,336 @@ { - "capabilities": [ - "CHOWN", - "DAC_OVERRIDE", - "FOWNER", - "MKNOD", - "NET_RAW", - "SETGID", - "SETUID", - "SETFCAP", - "SETPCAP", - "NET_BIND_SERVICE", - "SYS_CHROOT", - "KILL" - ], - "cgroups": { - "allowed_devices": [ - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 98 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 1, - "path": "/dev/console", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "path": "/dev/tty0", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "minor_number": 1, - "path": "/dev/tty1", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 136, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 2, - "type": 99 - }, - { - 
"cgroup_permissions": "rwm", - "major_number": 10, - "minor_number": 200, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ], - "name": "docker-koye", - "parent": "docker" - }, - "restrict_sys": true, - "mount_config": { - "device_nodes": [ - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ], - "mounts": [ - { - "type": "tmpfs", - "destination": "/tmp" - } - ] - }, - "environment": [ - "HOME=/", - 
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", - "HOSTNAME=koye", - "TERM=xterm" - ], - "hostname": "koye", - "namespaces": [ - {"type": "NEWIPC"}, - {"type": "NEWNET"}, - {"type": "NEWNS"}, - {"type": "NEWUTS"} - ], - "networks": [ + "no_pivot_root": false, + "parent_death_signal": 0, + "pivot_dir": "", + "rootfs": "/rootfs/jessie", + "readonlyfs": false, + "mounts": [ + { + "source": "shm", + "destination": "/dev/shm", + "device": "tmpfs", + "flags": 14, + "data": "mode=1777,size=65536k", + "relabel": "" + }, + { + "source": "mqueue", + "destination": "/dev/mqueue", + "device": "mqueue", + "flags": 14, + "data": "", + "relabel": "" + }, + { + "source": "sysfs", + "destination": "/sys", + "device": "sysfs", + "flags": 15, + "data": "", + "relabel": "" + } + ], + "devices": [ + { + "type": 99, + "path": "/dev/fuse", + "major": 10, + "minor": 229, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "mount_label": "", + "hostname": "nsinit", + "namespaces": [ + { + "type": "NEWNS", + "path": "" + }, + { + "type": "NEWUTS", + "path": "" + }, + { + "type": "NEWIPC", + "path": "" + }, { - 
"address": "127.0.0.1/0", - "gateway": "localhost", - "mtu": 1500, - "type": "loopback" - } - ], - "tty": true, - "user": "daemon" + "type": "NEWNET", + "path": "" + } + ], + "capabilities": [ + "CHOWN", + "DAC_OVERRIDE", + "FSETID", + "FOWNER", + "MKNOD", + "NET_RAW", + "SETGID", + "SETUID", + "SETFCAP", + "SETPCAP", + "NET_BIND_SERVICE", + "SYS_CHROOT", + "KILL", + "AUDIT_WRITE" + ], + "networks": [ + { + "type": "loopback", + "name": "", + "bridge": "", + "mac_address": "", + "address": "127.0.0.1/0", + "gateway": "localhost", + "ipv6_address": "", + "ipv6_gateway": "", + "mtu": 0, + "txqueuelen": 0, + "host_interface_name": "" + } + ], + "routes": null, + "cgroups": { + "name": "libcontainer", + "parent": "nsinit", + "allow_all_devices": false, + "allowed_devices": [ + { + "type": 99, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 98, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/console", + "major": 5, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty0", + "major": 4, + "minor": 0, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty1", + "major": 4, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 136, + "minor": -1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 5, + "minor": 2, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 10, + "minor": 200, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, 
+ { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "memory": 0, + "memory_reservation": 0, + "memory_swap": 0, + "cpu_shares": 0, + "cpu_quota": 0, + "cpu_period": 0, + "cpuset_cpus": "", + "cpuset_mems": "", + "blkio_weight": 0, + "freezer": "", + "slice": "" + }, + "apparmor_profile": "", + "process_label": "", + "rlimits": [ + { + "type": 7, + "hard": 1024, + "soft": 1024 + } + ], + "additional_groups": null, + "uid_mappings": null, + "gid_mappings": null, + "mask_paths": [ + "/proc/kcore" + ], + "readonly_paths": [ + "/proc/sys", + "/proc/sysrq-trigger", + "/proc/irq", + "/proc/bus" + ] } diff --git a/vendor/src/github.com/docker/libcontainer/sample_configs/minimal.json b/vendor/src/github.com/docker/libcontainer/sample_configs/minimal.json index 01de467468..c2f7410956 100644 --- a/vendor/src/github.com/docker/libcontainer/sample_configs/minimal.json +++ b/vendor/src/github.com/docker/libcontainer/sample_configs/minimal.json @@ -1,201 +1,340 @@ { - "capabilities": [ - "CHOWN", - "DAC_OVERRIDE", - "FOWNER", - "MKNOD", - "NET_RAW", - "SETGID", - "SETUID", - "SETFCAP", - "SETPCAP", - "NET_BIND_SERVICE", - "SYS_CHROOT", - "KILL" - ], - "cgroups": { - "allowed_devices": [ - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "m", - "major_number": 
-1, - "minor_number": -1, - "type": 98 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 1, - "path": "/dev/console", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "path": "/dev/tty0", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "minor_number": 1, - "path": "/dev/tty1", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 136, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 2, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 10, - "minor_number": 200, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ], - "name": "docker-koye", - "parent": "docker" - }, - "restrict_sys": true, - "mount_config": { - "device_nodes": [ - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - 
"type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ], - "mounts": [ - { - "type": "tmpfs", - "destination": "/tmp" - } - ] - }, - "environment": [ - "HOME=/", - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", - "HOSTNAME=koye", - "TERM=xterm" - ], - "hostname": "koye", - "namespaces": [ - {"type": "NEWIPC"}, - {"type": "NEWNET"}, - {"type": "NEWNS"}, - {"type": "NEWPID"}, - {"type": "NEWUTS"} - ], - "networks": [ - { - "address": "127.0.0.1/0", - "gateway": "localhost", - "mtu": 1500, - "type": "loopback" - } - ], - "tty": true, - "user": "daemon" + "no_pivot_root": false, + "parent_death_signal": 0, + "pivot_dir": "", + "rootfs": "/home/michael/development/gocode/src/github.com/docker/libcontainer", + "readonlyfs": false, + "mounts": [ + { + "source": "shm", + "destination": "/dev/shm", + "device": "tmpfs", + "flags": 14, + "data": "mode=1777,size=65536k", + "relabel": "" + }, + { + "source": "mqueue", + "destination": "/dev/mqueue", + "device": "mqueue", + "flags": 14, + "data": "", + "relabel": "" + }, + { + "source": "sysfs", + "destination": "/sys", + "device": "sysfs", + "flags": 15, + "data": "", + "relabel": "" + } + ], + "devices": [ + { + "type": 99, + "path": "/dev/fuse", + "major": 10, + "minor": 229, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": 
"/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "mount_label": "", + "hostname": "nsinit", + "namespaces": [ + { + "type": "NEWNS", + "path": "" + }, + { + "type": "NEWUTS", + "path": "" + }, + { + "type": "NEWIPC", + "path": "" + }, + { + "type": "NEWPID", + "path": "" + }, + { + "type": "NEWNET", + "path": "" + } + ], + "capabilities": [ + "CHOWN", + "DAC_OVERRIDE", + "FSETID", + "FOWNER", + "MKNOD", + "NET_RAW", + "SETGID", + "SETUID", + "SETFCAP", + "SETPCAP", + "NET_BIND_SERVICE", + "SYS_CHROOT", + "KILL", + "AUDIT_WRITE" + ], + "networks": [ + { + "type": "loopback", + "name": "", + "bridge": "", + "mac_address": "", + "address": "127.0.0.1/0", + "gateway": "localhost", + "ipv6_address": "", + "ipv6_gateway": "", + "mtu": 0, + "txqueuelen": 0, + "host_interface_name": "" + } + ], + "routes": null, + "cgroups": { + "name": "libcontainer", + "parent": "nsinit", + "allow_all_devices": false, + "allowed_devices": [ + { + "type": 99, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 98, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/console", + "major": 5, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty0", + "major": 4, + "minor": 0, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": 
"/dev/tty1", + "major": 4, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 136, + "minor": -1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 5, + "minor": 2, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 10, + "minor": 200, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "memory": 0, + "memory_reservation": 0, + "memory_swap": 0, + "cpu_shares": 0, + "cpu_quota": 0, + "cpu_period": 0, + "cpuset_cpus": "", + "cpuset_mems": "", + "blkio_weight": 0, + "freezer": "", + "slice": "" + }, + "apparmor_profile": "", + "process_label": "", + "rlimits": [ + { + "type": 7, + "hard": 1024, + "soft": 1024 + } + ], + "additional_groups": null, + "uid_mappings": null, + "gid_mappings": null, + "mask_paths": [ + "/proc/kcore" + ], + "readonly_paths": [ + "/proc/sys", + "/proc/sysrq-trigger", + "/proc/irq", + "/proc/bus" + ] } diff --git 
a/vendor/src/github.com/docker/libcontainer/sample_configs/route_source_address_selection.json b/vendor/src/github.com/docker/libcontainer/sample_configs/route_source_address_selection.json deleted file mode 100644 index 9c62045a4b..0000000000 --- a/vendor/src/github.com/docker/libcontainer/sample_configs/route_source_address_selection.json +++ /dev/null @@ -1,209 +0,0 @@ -{ - "capabilities": [ - "CHOWN", - "DAC_OVERRIDE", - "FOWNER", - "MKNOD", - "NET_RAW", - "SETGID", - "SETUID", - "SETFCAP", - "SETPCAP", - "NET_BIND_SERVICE", - "SYS_CHROOT", - "KILL" - ], - "cgroups": { - "allowed_devices": [ - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 98 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 1, - "path": "/dev/console", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "path": "/dev/tty0", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "minor_number": 1, - "path": "/dev/tty1", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 136, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 2, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 10, - "minor_number": 200, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 
438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ], - "name": "docker-koye", - "parent": "docker" - }, - "restrict_sys": true, - "mount_config": { - "device_nodes": [ - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ] - }, - "environment": [ - "HOME=/", - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", - "HOSTNAME=koye", - "TERM=xterm" - ], - "hostname": "koye", - "namespaces": [ - {"type": "NEWIPC"}, - {"type": "NEWNET"}, - {"type": "NEWNS"}, - {"type": "NEWPID"}, - {"type": "NEWUTS"} - ], - "networks": [ - { - "address": "127.0.0.1/0", - "gateway": "localhost", - "mtu": 1500, - "type": "loopback" - }, - { - "address": "172.17.0.101/16", - "bridge": "docker0", - "veth_prefix": "veth", - "mtu": 1500, - "type": "veth" - } - ], - "routes": [ - { - "destination": "0.0.0.0/0", - "source": "172.17.0.101", - "gateway": "172.17.42.1", - "interface_name": "eth0" - } - ], - "tty": true -} diff --git a/vendor/src/github.com/docker/libcontainer/sample_configs/selinux.json 
b/vendor/src/github.com/docker/libcontainer/sample_configs/selinux.json index 15556488a2..dddfdf1440 100644 --- a/vendor/src/github.com/docker/libcontainer/sample_configs/selinux.json +++ b/vendor/src/github.com/docker/libcontainer/sample_configs/selinux.json @@ -1,197 +1,340 @@ { - "capabilities": [ - "CHOWN", - "DAC_OVERRIDE", - "FOWNER", - "MKNOD", - "NET_RAW", - "SETGID", - "SETUID", - "SETFCAP", - "SETPCAP", - "NET_BIND_SERVICE", - "SYS_CHROOT", - "KILL" - ], - "cgroups": { - "allowed_devices": [ - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "m", - "major_number": -1, - "minor_number": -1, - "type": 98 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 1, - "path": "/dev/console", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "path": "/dev/tty0", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 4, - "minor_number": 1, - "path": "/dev/tty1", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 136, - "minor_number": -1, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 5, - "minor_number": 2, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "major_number": 10, - "minor_number": 200, - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { 
- "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ], - "name": "docker-koye", - "parent": "docker" - }, - "restrict_sys": true, - "process_label": "system_u:system_r:svirt_lxc_net_t:s0:c164,c475", - "mount_config": { - "mount_label": "system_u:system_r:svirt_lxc_net_t:s0:c164,c475", - "device_nodes": [ - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 3, - "path": "/dev/null", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 5, - "path": "/dev/zero", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 7, - "path": "/dev/full", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 5, - "path": "/dev/tty", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 9, - "path": "/dev/urandom", - "type": 99 - }, - { - "cgroup_permissions": "rwm", - "file_mode": 438, - "major_number": 1, - "minor_number": 8, - "path": "/dev/random", - "type": 99 - } - ] - }, - "environment": [ - "HOME=/", - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", - "HOSTNAME=koye", - "TERM=xterm" - ], - "hostname": "koye", - "namespaces": [ - {"type": "NEWIPC"}, - {"type": "NEWNET"}, - {"type": "NEWNS"}, - {"type": "NEWPID"}, - {"type": "NEWUTS"} - ], - "networks": [ - { - "address": "127.0.0.1/0", - "gateway": "localhost", - "mtu": 1500, - "type": "loopback" - } - ], - "tty": true, - "user": "daemon" + "no_pivot_root": false, + "parent_death_signal": 0, + "pivot_dir": "", + "rootfs": "/rootfs/jessie", + "readonlyfs": false, + "mounts": [ + { + "source": "shm", + "destination": "/dev/shm", + "device": "tmpfs", + "flags": 14, + "data": "mode=1777,size=65536k", + "relabel": "" + }, + { + "source": "mqueue", + "destination": "/dev/mqueue", 
+ "device": "mqueue", + "flags": 14, + "data": "", + "relabel": "" + }, + { + "source": "sysfs", + "destination": "/sys", + "device": "sysfs", + "flags": 15, + "data": "", + "relabel": "" + } + ], + "devices": [ + { + "type": 99, + "path": "/dev/fuse", + "major": 10, + "minor": 229, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "mount_label": "system_u:system_r:svirt_lxc_net_t:s0:c164,c475", + "hostname": "nsinit", + "namespaces": [ + { + "type": "NEWNS", + "path": "" + }, + { + "type": "NEWUTS", + "path": "" + }, + { + "type": "NEWIPC", + "path": "" + }, + { + "type": "NEWPID", + "path": "" + }, + { + "type": "NEWNET", + "path": "" + } + ], + "capabilities": [ + "CHOWN", + "DAC_OVERRIDE", + "FSETID", + "FOWNER", + "MKNOD", + "NET_RAW", + "SETGID", + "SETUID", + "SETFCAP", + "SETPCAP", + "NET_BIND_SERVICE", + "SYS_CHROOT", + "KILL", + "AUDIT_WRITE" + ], + "networks": [ + { + "type": "loopback", + "name": "", + "bridge": "", + "mac_address": "", + "address": "127.0.0.1/0", + "gateway": "localhost", + "ipv6_address": "", + "ipv6_gateway": "", + "mtu": 0, + "txqueuelen": 0, + "host_interface_name": "" + } 
+ ], + "routes": null, + "cgroups": { + "name": "libcontainer", + "parent": "nsinit", + "allow_all_devices": false, + "allowed_devices": [ + { + "type": 99, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 98, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/console", + "major": 5, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty0", + "major": 4, + "minor": 0, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty1", + "major": 4, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 136, + "minor": -1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 5, + "minor": 2, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 10, + "minor": 200, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + 
"permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "memory": 0, + "memory_reservation": 0, + "memory_swap": 0, + "cpu_shares": 0, + "cpu_quota": 0, + "cpu_period": 0, + "cpuset_cpus": "", + "cpuset_mems": "", + "blkio_weight": 0, + "freezer": "", + "slice": "" + }, + "apparmor_profile": "", + "process_label": "system_u:system_r:svirt_lxc_net_t:s0:c164,c475", + "rlimits": [ + { + "type": 7, + "hard": 1024, + "soft": 1024 + } + ], + "additional_groups": null, + "uid_mappings": null, + "gid_mappings": null, + "mask_paths": [ + "/proc/kcore" + ], + "readonly_paths": [ + "/proc/sys", + "/proc/sysrq-trigger", + "/proc/irq", + "/proc/bus" + ] } diff --git a/vendor/src/github.com/docker/libcontainer/sample_configs/userns.json b/vendor/src/github.com/docker/libcontainer/sample_configs/userns.json new file mode 100644 index 0000000000..2b1fb90b4a --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/sample_configs/userns.json @@ -0,0 +1,376 @@ +{ + "no_pivot_root": false, + "parent_death_signal": 0, + "pivot_dir": "", + "rootfs": "/rootfs/jessie", + "readonlyfs": false, + "mounts": [ + { + "source": "shm", + "destination": "/dev/shm", + "device": "tmpfs", + "flags": 14, + "data": "mode=1777,size=65536k", + "relabel": "" + }, + { + "source": "mqueue", + "destination": "/dev/mqueue", + "device": "mqueue", + "flags": 14, + "data": "", + "relabel": "" + }, + { + "source": "sysfs", + "destination": "/sys", + "device": "sysfs", + "flags": 15, + "data": "", + "relabel": "" + } + ], + "devices": [ + { + "type": 99, + "path": "/dev/fuse", + "major": 10, + "minor": 229, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": 
"/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "mount_label": "", + "hostname": "nsinit", + "namespaces": [ + { + "type": "NEWNS", + "path": "" + }, + { + "type": "NEWUTS", + "path": "" + }, + { + "type": "NEWIPC", + "path": "" + }, + { + "type": "NEWPID", + "path": "" + }, + { + "type": "NEWNET", + "path": "" + }, + { + "type": "NEWUSER", + "path": "" + } + ], + "capabilities": [ + "CHOWN", + "DAC_OVERRIDE", + "FSETID", + "FOWNER", + "MKNOD", + "NET_RAW", + "SETGID", + "SETUID", + "SETFCAP", + "SETPCAP", + "NET_BIND_SERVICE", + "SYS_CHROOT", + "KILL", + "AUDIT_WRITE" + ], + "networks": [ + { + "type": "loopback", + "name": "", + "bridge": "", + "mac_address": "", + "address": "127.0.0.1/0", + "gateway": "localhost", + "ipv6_address": "", + "ipv6_gateway": "", + "mtu": 0, + "txqueuelen": 0, + "host_interface_name": "" + } + ], + "routes": null, + "cgroups": { + "name": "libcontainer", + "parent": "nsinit", + "allow_all_devices": false, + "allowed_devices": [ + { + "type": 99, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 98, + "path": "", + "major": -1, + "minor": -1, + "permissions": "m", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/console", + "major": 5, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty0", + "major": 4, + "minor": 0, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + 
"gid": 0 + }, + { + "type": 99, + "path": "/dev/tty1", + "major": 4, + "minor": 1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 136, + "minor": -1, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 5, + "minor": 2, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "", + "major": 10, + "minor": 200, + "permissions": "rwm", + "file_mode": 0, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/null", + "major": 1, + "minor": 3, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/zero", + "major": 1, + "minor": 5, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/full", + "major": 1, + "minor": 7, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/tty", + "major": 5, + "minor": 0, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/urandom", + "major": 1, + "minor": 9, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + }, + { + "type": 99, + "path": "/dev/random", + "major": 1, + "minor": 8, + "permissions": "rwm", + "file_mode": 438, + "uid": 0, + "gid": 0 + } + ], + "memory": 0, + "memory_reservation": 0, + "memory_swap": 0, + "cpu_shares": 0, + "cpu_quota": 0, + "cpu_period": 0, + "cpuset_cpus": "", + "cpuset_mems": "", + "blkio_weight": 0, + "freezer": "", + "slice": "" + }, + "apparmor_profile": "", + "process_label": "", + "rlimits": [ + { + "type": 7, + "hard": 1024, + "soft": 1024 + } + ], + "additional_groups": null, + "uid_mappings": [ + { + "container_id": 0, + "host_id": 1000, + "size": 1 + }, + { + "container_id": 1, + "host_id": 1, + "size": 999 + }, + { + "container_id": 1001, + "host_id": 1001, + "size": 2147482647 + } + ], + 
"gid_mappings": [ + { + "container_id": 0, + "host_id": 1000, + "size": 1 + }, + { + "container_id": 1, + "host_id": 1, + "size": 999 + }, + { + "container_id": 1001, + "host_id": 1001, + "size": 2147482647 + } + ], + "mask_paths": [ + "/proc/kcore" + ], + "readonly_paths": [ + "/proc/sys", + "/proc/sysrq-trigger", + "/proc/irq", + "/proc/bus" + ] +} diff --git a/vendor/src/github.com/docker/libcontainer/security/capabilities/capabilities.go b/vendor/src/github.com/docker/libcontainer/security/capabilities/capabilities.go deleted file mode 100644 index 7aef5fa67f..0000000000 --- a/vendor/src/github.com/docker/libcontainer/security/capabilities/capabilities.go +++ /dev/null @@ -1,56 +0,0 @@ -package capabilities - -import ( - "os" - - "github.com/syndtr/gocapability/capability" -) - -const allCapabilityTypes = capability.CAPS | capability.BOUNDS - -// DropBoundingSet drops the capability bounding set to those specified in the -// container configuration. -func DropBoundingSet(capabilities []string) error { - c, err := capability.NewPid(os.Getpid()) - if err != nil { - return err - } - - keep := getEnabledCapabilities(capabilities) - c.Clear(capability.BOUNDS) - c.Set(capability.BOUNDS, keep...) - - if err := c.Apply(capability.BOUNDS); err != nil { - return err - } - - return nil -} - -// DropCapabilities drops all capabilities for the current process except those specified in the container configuration. -func DropCapabilities(capList []string) error { - c, err := capability.NewPid(os.Getpid()) - if err != nil { - return err - } - - keep := getEnabledCapabilities(capList) - c.Clear(allCapabilityTypes) - c.Set(allCapabilityTypes, keep...) - - if err := c.Apply(allCapabilityTypes); err != nil { - return err - } - return nil -} - -// getEnabledCapabilities returns the capabilities that should not be dropped by the container. 
-func getEnabledCapabilities(capList []string) []capability.Cap { - keep := []capability.Cap{} - for _, capability := range capList { - if c := GetCapability(capability); c != nil { - keep = append(keep, c.Value) - } - } - return keep -} diff --git a/vendor/src/github.com/docker/libcontainer/security/capabilities/types.go b/vendor/src/github.com/docker/libcontainer/security/capabilities/types.go deleted file mode 100644 index a960b804c6..0000000000 --- a/vendor/src/github.com/docker/libcontainer/security/capabilities/types.go +++ /dev/null @@ -1,88 +0,0 @@ -package capabilities - -import "github.com/syndtr/gocapability/capability" - -type ( - CapabilityMapping struct { - Key string `json:"key,omitempty"` - Value capability.Cap `json:"value,omitempty"` - } - Capabilities []*CapabilityMapping -) - -func (c *CapabilityMapping) String() string { - return c.Key -} - -func GetCapability(key string) *CapabilityMapping { - for _, capp := range capabilityList { - if capp.Key == key { - cpy := *capp - return &cpy - } - } - return nil -} - -func GetAllCapabilities() []string { - output := make([]string, len(capabilityList)) - for i, capability := range capabilityList { - output[i] = capability.String() - } - return output -} - -// Contains returns true if the specified Capability is -// in the slice -func (c Capabilities) contains(capp string) bool { - return c.get(capp) != nil -} - -func (c Capabilities) get(capp string) *CapabilityMapping { - for _, cap := range c { - if cap.Key == capp { - return cap - } - } - return nil -} - -var capabilityList = Capabilities{ - {Key: "SETPCAP", Value: capability.CAP_SETPCAP}, - {Key: "SYS_MODULE", Value: capability.CAP_SYS_MODULE}, - {Key: "SYS_RAWIO", Value: capability.CAP_SYS_RAWIO}, - {Key: "SYS_PACCT", Value: capability.CAP_SYS_PACCT}, - {Key: "SYS_ADMIN", Value: capability.CAP_SYS_ADMIN}, - {Key: "SYS_NICE", Value: capability.CAP_SYS_NICE}, - {Key: "SYS_RESOURCE", Value: capability.CAP_SYS_RESOURCE}, - {Key: "SYS_TIME", Value: 
capability.CAP_SYS_TIME}, - {Key: "SYS_TTY_CONFIG", Value: capability.CAP_SYS_TTY_CONFIG}, - {Key: "MKNOD", Value: capability.CAP_MKNOD}, - {Key: "AUDIT_WRITE", Value: capability.CAP_AUDIT_WRITE}, - {Key: "AUDIT_CONTROL", Value: capability.CAP_AUDIT_CONTROL}, - {Key: "MAC_OVERRIDE", Value: capability.CAP_MAC_OVERRIDE}, - {Key: "MAC_ADMIN", Value: capability.CAP_MAC_ADMIN}, - {Key: "NET_ADMIN", Value: capability.CAP_NET_ADMIN}, - {Key: "SYSLOG", Value: capability.CAP_SYSLOG}, - {Key: "CHOWN", Value: capability.CAP_CHOWN}, - {Key: "NET_RAW", Value: capability.CAP_NET_RAW}, - {Key: "DAC_OVERRIDE", Value: capability.CAP_DAC_OVERRIDE}, - {Key: "FOWNER", Value: capability.CAP_FOWNER}, - {Key: "DAC_READ_SEARCH", Value: capability.CAP_DAC_READ_SEARCH}, - {Key: "FSETID", Value: capability.CAP_FSETID}, - {Key: "KILL", Value: capability.CAP_KILL}, - {Key: "SETGID", Value: capability.CAP_SETGID}, - {Key: "SETUID", Value: capability.CAP_SETUID}, - {Key: "LINUX_IMMUTABLE", Value: capability.CAP_LINUX_IMMUTABLE}, - {Key: "NET_BIND_SERVICE", Value: capability.CAP_NET_BIND_SERVICE}, - {Key: "NET_BROADCAST", Value: capability.CAP_NET_BROADCAST}, - {Key: "IPC_LOCK", Value: capability.CAP_IPC_LOCK}, - {Key: "IPC_OWNER", Value: capability.CAP_IPC_OWNER}, - {Key: "SYS_CHROOT", Value: capability.CAP_SYS_CHROOT}, - {Key: "SYS_PTRACE", Value: capability.CAP_SYS_PTRACE}, - {Key: "SYS_BOOT", Value: capability.CAP_SYS_BOOT}, - {Key: "LEASE", Value: capability.CAP_LEASE}, - {Key: "SETFCAP", Value: capability.CAP_SETFCAP}, - {Key: "WAKE_ALARM", Value: capability.CAP_WAKE_ALARM}, - {Key: "BLOCK_SUSPEND", Value: capability.CAP_BLOCK_SUSPEND}, -} diff --git a/vendor/src/github.com/docker/libcontainer/security/capabilities/types_test.go b/vendor/src/github.com/docker/libcontainer/security/capabilities/types_test.go deleted file mode 100644 index 06e8a2b01c..0000000000 --- a/vendor/src/github.com/docker/libcontainer/security/capabilities/types_test.go +++ /dev/null @@ -1,19 +0,0 @@ -package 
capabilities - -import ( - "testing" -) - -func TestCapabilitiesContains(t *testing.T) { - caps := Capabilities{ - GetCapability("MKNOD"), - GetCapability("SETPCAP"), - } - - if caps.contains("SYS_ADMIN") { - t.Fatal("capabilities should not contain SYS_ADMIN") - } - if !caps.contains("MKNOD") { - t.Fatal("capabilities should contain MKNOD but does not") - } -} diff --git a/vendor/src/github.com/docker/libcontainer/security/restrict/restrict.go b/vendor/src/github.com/docker/libcontainer/security/restrict/restrict.go deleted file mode 100644 index dd765b1f1b..0000000000 --- a/vendor/src/github.com/docker/libcontainer/security/restrict/restrict.go +++ /dev/null @@ -1,53 +0,0 @@ -// +build linux - -package restrict - -import ( - "fmt" - "os" - "syscall" - "time" -) - -const defaultMountFlags = syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV - -func mountReadonly(path string) error { - for i := 0; i < 5; i++ { - if err := syscall.Mount("", path, "", syscall.MS_REMOUNT|syscall.MS_RDONLY, ""); err != nil && !os.IsNotExist(err) { - switch err { - case syscall.EINVAL: - // Probably not a mountpoint, use bind-mount - if err := syscall.Mount(path, path, "", syscall.MS_BIND, ""); err != nil { - return err - } - - return syscall.Mount(path, path, "", syscall.MS_BIND|syscall.MS_REMOUNT|syscall.MS_RDONLY|syscall.MS_REC|defaultMountFlags, "") - case syscall.EBUSY: - time.Sleep(100 * time.Millisecond) - continue - default: - return err - } - } - - return nil - } - - return fmt.Errorf("unable to mount %s as readonly max retries reached", path) -} - -// This has to be called while the container still has CAP_SYS_ADMIN (to be able to perform mounts). -// However, afterwards, CAP_SYS_ADMIN should be dropped (otherwise the user will be able to revert those changes). 
-func Restrict(mounts ...string) error { - for _, dest := range mounts { - if err := mountReadonly(dest); err != nil { - return fmt.Errorf("unable to remount %s readonly: %s", dest, err) - } - } - - if err := syscall.Mount("/dev/null", "/proc/kcore", "", syscall.MS_BIND, ""); err != nil && !os.IsNotExist(err) { - return fmt.Errorf("unable to bind-mount /dev/null over /proc/kcore: %s", err) - } - - return nil -} diff --git a/vendor/src/github.com/docker/libcontainer/security/restrict/unsupported.go b/vendor/src/github.com/docker/libcontainer/security/restrict/unsupported.go deleted file mode 100644 index 464e8d498d..0000000000 --- a/vendor/src/github.com/docker/libcontainer/security/restrict/unsupported.go +++ /dev/null @@ -1,9 +0,0 @@ -// +build !linux - -package restrict - -import "fmt" - -func Restrict() error { - return fmt.Errorf("not supported") -} diff --git a/vendor/src/github.com/docker/libcontainer/selinux/selinux.go b/vendor/src/github.com/docker/libcontainer/selinux/selinux.go index e5bd820980..28bc405afc 100644 --- a/vendor/src/github.com/docker/libcontainer/selinux/selinux.go +++ b/vendor/src/github.com/docker/libcontainer/selinux/selinux.go @@ -37,8 +37,8 @@ var ( spaceRegex = regexp.MustCompile(`^([^=]+) (.*)$`) mcsList = make(map[string]bool) selinuxfs = "unknown" - selinuxEnabled = false - selinuxEnabledChecked = false + selinuxEnabled = false // Stores whether selinux is currently enabled + selinuxEnabledChecked = false // Stores whether selinux enablement has been checked or established yet ) type SELinuxContext map[string]string @@ -48,6 +48,11 @@ func SetDisabled() { selinuxEnabled, selinuxEnabledChecked = false, true } +// getSelinuxMountPoint returns the path to the mountpoint of an selinuxfs +// filesystem or an empty string if no mountpoint is found. Selinuxfs is +// a proc-like pseudo-filesystem that exposes the selinux policy API to +// processes. 
The existence of an selinuxfs mount is used to determine +whether selinux is currently enabled or not. func getSelinuxMountPoint() string { if selinuxfs != "unknown" { return selinuxfs @@ -74,6 +79,7 @@ func getSelinuxMountPoint() string { return selinuxfs } +// SelinuxEnabled returns whether selinux is currently enabled. func SelinuxEnabled() bool { if selinuxEnabledChecked { return selinuxEnabled @@ -145,13 +151,19 @@ func readCon(name string) (string, error) { return val, err } +// Setfilecon sets the SELinux label for this path or returns an error. func Setfilecon(path string, scon string) error { return system.Lsetxattr(path, xattrNameSelinux, []byte(scon), 0) } -// Return the SELinux label for this path +// Getfilecon returns the SELinux label for this path or returns an error. func Getfilecon(path string) (string, error) { con, err := system.Lgetxattr(path, xattrNameSelinux) + + // Trim the NUL byte at the end of the byte buffer, if present. + if len(con) > 0 && con[len(con)-1] == '\x00' { + con = con[:len(con)-1] + } return string(con), err } @@ -163,11 +175,12 @@ func Getfscreatecon() (string, error) { return readCon(fmt.Sprintf("/proc/self/task/%d/attr/fscreate", syscall.Gettid())) } -// Return the SELinux label of the current process thread. +// Getcon returns the SELinux label of the current process thread, or an error. func Getcon() (string, error) { return readCon(fmt.Sprintf("/proc/self/task/%d/attr/current", syscall.Gettid())) } +// Getpidcon returns the SELinux label of the given pid, or an error.
func Getpidcon(pid int) (string, error) { return readCon(fmt.Sprintf("/proc/%d/attr/current", pid)) } diff --git a/vendor/src/github.com/docker/libcontainer/setns_init_linux.go b/vendor/src/github.com/docker/libcontainer/setns_init_linux.go new file mode 100644 index 0000000000..f77219d27a --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/setns_init_linux.go @@ -0,0 +1,35 @@ +// +build linux + +package libcontainer + +import ( + "os" + + "github.com/docker/libcontainer/apparmor" + "github.com/docker/libcontainer/label" + "github.com/docker/libcontainer/system" +) + +// linuxSetnsInit performs the container's initialization for running a new process +// inside an existing container. +type linuxSetnsInit struct { + config *initConfig +} + +func (l *linuxSetnsInit) Init() error { + if err := setupRlimits(l.config.Config); err != nil { + return err + } + if err := finalizeNamespace(l.config); err != nil { + return err + } + if err := apparmor.ApplyProfile(l.config.Config.AppArmorProfile); err != nil { + return err + } + if l.config.Config.ProcessLabel != "" { + if err := label.SetProcessLabel(l.config.Config.ProcessLabel); err != nil { + return err + } + } + return system.Execv(l.config.Args[0], l.config.Args[0:], os.Environ()) +} diff --git a/vendor/src/github.com/docker/libcontainer/stacktrace/capture.go b/vendor/src/github.com/docker/libcontainer/stacktrace/capture.go new file mode 100644 index 0000000000..15b3482ccc --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/stacktrace/capture.go @@ -0,0 +1,23 @@ +package stacktrace + +import "runtime" + +// Capture captures a stacktrace for the current calling go program +// +// userSkip is the number of frames to skip +func Capture(userSkip int) Stacktrace { + var ( + skip = userSkip + 1 // add one for our own function + frames []Frame + ) + for i := skip; ; i++ { + pc, file, line, ok := runtime.Caller(i) + if !ok { + break + } + frames = append(frames, NewFrame(pc, file, line)) + } + return 
Stacktrace{ + Frames: frames, + } +} diff --git a/vendor/src/github.com/docker/libcontainer/stacktrace/capture_test.go b/vendor/src/github.com/docker/libcontainer/stacktrace/capture_test.go new file mode 100644 index 0000000000..3f435d51a6 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/stacktrace/capture_test.go @@ -0,0 +1,27 @@ +package stacktrace + +import "testing" + +func captureFunc() Stacktrace { + return Capture(0) +} + +func TestCaptureTestFunc(t *testing.T) { + stack := captureFunc() + + if len(stack.Frames) == 0 { + t.Fatal("expected stack frames to be returned") + } + + // the first frame is the caller + frame := stack.Frames[0] + if expected := "captureFunc"; frame.Function != expected { + t.Fatalf("expected function %q but received %q", expected, frame.Function) + } + if expected := "github.com/docker/libcontainer/stacktrace"; frame.Package != expected { + t.Fatalf("expected package %q but received %q", expected, frame.Package) + } + if expected := "capture_test.go"; frame.File != expected { + t.Fatalf("expected file %q but received %q", expected, frame.File) + } +} diff --git a/vendor/src/github.com/docker/libcontainer/stacktrace/frame.go b/vendor/src/github.com/docker/libcontainer/stacktrace/frame.go new file mode 100644 index 0000000000..5edea1b751 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/stacktrace/frame.go @@ -0,0 +1,35 @@ +package stacktrace + +import ( + "path/filepath" + "runtime" + "strings" +) + +// NewFrame returns a new stack frame for the provided information +func NewFrame(pc uintptr, file string, line int) Frame { + fn := runtime.FuncForPC(pc) + pack, name := parseFunctionName(fn.Name()) + return Frame{ + Line: line, + File: filepath.Base(file), + Package: pack, + Function: name, + } +} + +func parseFunctionName(name string) (string, string) { + i := strings.LastIndex(name, ".") + if i == -1 { + return "", name + } + return name[:i], name[i+1:] +} + +// Frame contains all the information for a stack 
frame within a go program +type Frame struct { + File string + Function string + Package string + Line int +} diff --git a/vendor/src/github.com/docker/libcontainer/stacktrace/frame_test.go b/vendor/src/github.com/docker/libcontainer/stacktrace/frame_test.go new file mode 100644 index 0000000000..ae95ec4847 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/stacktrace/frame_test.go @@ -0,0 +1,20 @@ +package stacktrace + +import "testing" + +func TestParsePackageName(t *testing.T) { + var ( + name = "github.com/docker/libcontainer/stacktrace.captureFunc" + expectedPackage = "github.com/docker/libcontainer/stacktrace" + expectedFunction = "captureFunc" + ) + + pack, funcName := parseFunctionName(name) + if pack != expectedPackage { + t.Fatalf("expected package %q but received %q", expectedPackage, pack) + } + + if funcName != expectedFunction { + t.Fatalf("expected function %q but received %q", expectedFunction, funcName) + } +} diff --git a/vendor/src/github.com/docker/libcontainer/stacktrace/stacktrace.go b/vendor/src/github.com/docker/libcontainer/stacktrace/stacktrace.go new file mode 100644 index 0000000000..5e8b58d2d2 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/stacktrace/stacktrace.go @@ -0,0 +1,5 @@ +package stacktrace + +type Stacktrace struct { + Frames []Frame +} diff --git a/vendor/src/github.com/docker/libcontainer/standard_init_linux.go b/vendor/src/github.com/docker/libcontainer/standard_init_linux.go new file mode 100644 index 0000000000..29619d3cdc --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/standard_init_linux.go @@ -0,0 +1,94 @@ +// +build linux + +package libcontainer + +import ( + "os" + "syscall" + + "github.com/docker/libcontainer/apparmor" + "github.com/docker/libcontainer/configs" + "github.com/docker/libcontainer/label" + "github.com/docker/libcontainer/system" +) + +type linuxStandardInit struct { + config *initConfig +} + +func (l *linuxStandardInit) Init() error { + // join any namespaces via 
a path to the namespace fd if provided + if err := joinExistingNamespaces(l.config.Config.Namespaces); err != nil { + return err + } + var console *linuxConsole + if l.config.Console != "" { + console = newConsoleFromPath(l.config.Console) + if err := console.dupStdio(); err != nil { + return err + } + } + if _, err := syscall.Setsid(); err != nil { + return err + } + if console != nil { + if err := system.Setctty(); err != nil { + return err + } + } + if err := setupNetwork(l.config); err != nil { + return err + } + if err := setupRoute(l.config.Config); err != nil { + return err + } + if err := setupRlimits(l.config.Config); err != nil { + return err + } + label.Init() + // InitializeMountNamespace() can be executed only for a new mount namespace + if l.config.Config.Namespaces.Contains(configs.NEWNS) { + if err := setupRootfs(l.config.Config, console); err != nil { + return err + } + } + if hostname := l.config.Config.Hostname; hostname != "" { + if err := syscall.Sethostname([]byte(hostname)); err != nil { + return err + } + } + if err := apparmor.ApplyProfile(l.config.Config.AppArmorProfile); err != nil { + return err + } + if err := label.SetProcessLabel(l.config.Config.ProcessLabel); err != nil { + return err + } + for _, path := range l.config.Config.ReadonlyPaths { + if err := remountReadonly(path); err != nil { + return err + } + } + for _, path := range l.config.Config.MaskPaths { + if err := maskFile(path); err != nil { + return err + } + } + pdeath, err := system.GetParentDeathSignal() + if err != nil { + return err + } + if err := finalizeNamespace(l.config); err != nil { + return err + } + // finalizeNamespace can change user/group which clears the parent death + // signal, so we restore it here. + if err := pdeath.Restore(); err != nil { + return err + } + // Signal self if parent is already dead. Does nothing if running in a new + // PID namespace, as Getppid will always return 0. 
+ if syscall.Getppid() == 1 { + return syscall.Kill(syscall.Getpid(), syscall.SIGKILL) + } + return system.Execv(l.config.Args[0], l.config.Args[0:], os.Environ()) +} diff --git a/vendor/src/github.com/docker/libcontainer/state.go b/vendor/src/github.com/docker/libcontainer/state.go deleted file mode 100644 index 208b4c6276..0000000000 --- a/vendor/src/github.com/docker/libcontainer/state.go +++ /dev/null @@ -1,77 +0,0 @@ -package libcontainer - -import ( - "encoding/json" - "os" - "path/filepath" - - "github.com/docker/libcontainer/network" -) - -// State represents a running container's state -type State struct { - // InitPid is the init process id in the parent namespace - InitPid int `json:"init_pid,omitempty"` - - // InitStartTime is the init process start time - InitStartTime string `json:"init_start_time,omitempty"` - - // Network runtime state. - NetworkState network.NetworkState `json:"network_state,omitempty"` - - // Path to all the cgroups setup for a container. Key is cgroup subsystem name. - CgroupPaths map[string]string `json:"cgroup_paths,omitempty"` -} - -// The running state of the container. -type RunState int - -const ( - // The name of the runtime state file - stateFile = "state.json" - - // The container exists and is running. - Running RunState = iota - - // The container exists, it is in the process of being paused. - Pausing - - // The container exists, but all its processes are paused. - Paused - - // The container does not exist. 
- Destroyed -) - -// SaveState writes the container's runtime state to a state.json file -// in the specified path -func SaveState(basePath string, state *State) error { - f, err := os.Create(filepath.Join(basePath, stateFile)) - if err != nil { - return err - } - defer f.Close() - - return json.NewEncoder(f).Encode(state) -} - -// GetState reads the state.json file for a running container -func GetState(basePath string) (*State, error) { - f, err := os.Open(filepath.Join(basePath, stateFile)) - if err != nil { - return nil, err - } - defer f.Close() - - var state *State - if err := json.NewDecoder(f).Decode(&state); err != nil { - return nil, err - } - - return state, nil -} - -// DeleteState deletes the state.json file -func DeleteState(basePath string) error { - return os.Remove(filepath.Join(basePath, stateFile)) -} diff --git a/vendor/src/github.com/docker/libcontainer/stats.go b/vendor/src/github.com/docker/libcontainer/stats.go new file mode 100644 index 0000000000..ba72a6fde9 --- /dev/null +++ b/vendor/src/github.com/docker/libcontainer/stats.go @@ -0,0 +1,22 @@ +package libcontainer + +import "github.com/docker/libcontainer/cgroups" + +type Stats struct { + Interfaces []*NetworkInterface + CgroupStats *cgroups.Stats +} + +type NetworkInterface struct { + // Name is the name of the network interface. 
+ Name string + + RxBytes uint64 + RxPackets uint64 + RxErrors uint64 + RxDropped uint64 + TxBytes uint64 + TxPackets uint64 + TxErrors uint64 + TxDropped uint64 +} diff --git a/vendor/src/github.com/docker/libcontainer/system/linux.go b/vendor/src/github.com/docker/libcontainer/system/linux.go index c07ef1532d..2cc3ef803a 100644 --- a/vendor/src/github.com/docker/libcontainer/system/linux.go +++ b/vendor/src/github.com/docker/libcontainer/system/linux.go @@ -8,6 +8,26 @@ import ( "unsafe" ) +type ParentDeathSignal int + +func (p ParentDeathSignal) Restore() error { + if p == 0 { + return nil + } + current, err := GetParentDeathSignal() + if err != nil { + return err + } + if p == current { + return nil + } + return p.Set() +} + +func (p ParentDeathSignal) Set() error { + return SetParentDeathSignal(uintptr(p)) +} + func Execv(cmd string, args []string, env []string) error { name, err := exec.LookPath(cmd) if err != nil { @@ -17,23 +37,20 @@ func Execv(cmd string, args []string, env []string) error { return syscall.Exec(name, args, env) } -func ParentDeathSignal(sig uintptr) error { +func SetParentDeathSignal(sig uintptr) error { if _, _, err := syscall.RawSyscall(syscall.SYS_PRCTL, syscall.PR_SET_PDEATHSIG, sig, 0); err != 0 { return err } return nil } -func GetParentDeathSignal() (int, error) { +func GetParentDeathSignal() (ParentDeathSignal, error) { var sig int - _, _, err := syscall.RawSyscall(syscall.SYS_PRCTL, syscall.PR_GET_PDEATHSIG, uintptr(unsafe.Pointer(&sig)), 0) - if err != 0 { return -1, err } - - return sig, nil + return ParentDeathSignal(sig), nil } func SetKeepCaps() error { diff --git a/vendor/src/github.com/docker/libcontainer/types.go b/vendor/src/github.com/docker/libcontainer/types.go deleted file mode 100644 index c341137ec8..0000000000 --- a/vendor/src/github.com/docker/libcontainer/types.go +++ /dev/null @@ -1,11 +0,0 @@ -package libcontainer - -import ( - "github.com/docker/libcontainer/cgroups" - "github.com/docker/libcontainer/network" 
-) - -type ContainerStats struct { - NetworkStats *network.NetworkStats `json:"network_stats,omitempty"` - CgroupStats *cgroups.Stats `json:"cgroup_stats,omitempty"` -} diff --git a/vendor/src/github.com/docker/libcontainer/update-vendor.sh b/vendor/src/github.com/docker/libcontainer/update-vendor.sh index df66a0a8d5..12077256e8 100755 --- a/vendor/src/github.com/docker/libcontainer/update-vendor.sh +++ b/vendor/src/github.com/docker/libcontainer/update-vendor.sh @@ -42,7 +42,8 @@ clone() { # the following lines are in sorted order, FYI clone git github.com/codegangsta/cli 1.1.0 clone git github.com/coreos/go-systemd v2 -clone git github.com/godbus/dbus v1 -clone git github.com/syndtr/gocapability 3c85049eae +clone git github.com/godbus/dbus v2 +clone git github.com/Sirupsen/logrus v0.6.6 +clone git github.com/syndtr/gocapability e55e583369 # intentionally not vendoring Docker itself... that'd be a circle :) diff --git a/vendor/src/github.com/docker/libcontainer/utils/utils.go b/vendor/src/github.com/docker/libcontainer/utils/utils.go index 76184ce00b..094bce5300 100644 --- a/vendor/src/github.com/docker/libcontainer/utils/utils.go +++ b/vendor/src/github.com/docker/libcontainer/utils/utils.go @@ -10,6 +10,10 @@ import ( "syscall" ) +const ( + exitSignalOffset = 128 +) + // GenerateRandomName returns a new name joined with a prefix. This size // specified is used to truncate the randomly generated value func GenerateRandomName(prefix string, size int) (string, error) { @@ -53,3 +57,12 @@ func CloseExecFrom(minFd int) error { } return nil } + +// ExitStatus returns the correct exit status for a process based on whether it +// was signaled or exited cleanly. 
+func ExitStatus(status syscall.WaitStatus) int { + if status.Signaled() { + return exitSignalOffset + int(status.Signal()) + } + return status.ExitStatus() +} diff --git a/vendor/src/github.com/godbus/dbus/CONTRIBUTING.md b/vendor/src/github.com/godbus/dbus/CONTRIBUTING.md new file mode 100644 index 0000000000..c88f9b2bdd --- /dev/null +++ b/vendor/src/github.com/godbus/dbus/CONTRIBUTING.md @@ -0,0 +1,50 @@ +# How to Contribute + +## Getting Started + +- Fork the repository on GitHub +- Read the [README](README.markdown) for build and test instructions +- Play with the project, submit bugs, submit patches! + +## Contribution Flow + +This is a rough outline of what a contributor's workflow looks like: + +- Create a topic branch from where you want to base your work (usually master). +- Make commits of logical units. +- Make sure your commit messages are in the proper format (see below). +- Push your changes to a topic branch in your fork of the repository. +- Make sure the tests pass, and add any new tests as appropriate. +- Submit a pull request to the original repository. + +Thanks for your contributions! + +### Format of the Commit Message + +We follow a rough convention for commit messages that is designed to answer two +questions: what changed and why. The subject line should feature the what and +the body of the commit should describe the why. + +``` +scripts: add the test-cluster command + +this uses tmux to setup a test cluster that you can easily kill and +start for debugging. + +Fixes #38 +``` + +The format can be described more formally as follows: + +``` +: + + + +