Merge branch 'master' into 257-container_real_running_state-fix

This commit is contained in:
shin- 2013-04-02 06:37:50 -07:00
commit 8edf0ca7f3
22 changed files with 650 additions and 278 deletions

View file

@ -7,6 +7,7 @@ Caleb Spare <cespare@gmail.com>
Charles Hooper <charles.hooper@dotcloud.com>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com>
Daniel Robinson <gottagetmac@gmail.com>
Dominik Honnef <dominik@honnef.co>
Don Spaulding <donspauldingii@gmail.com>
ezbercih <cem.ezberci@gmail.com>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>

CONTRIBUTING.md (new file, 93 lines)
View file

@ -0,0 +1,93 @@
# Contributing to Docker
Want to hack on Docker? Awesome! There are instructions to get you
started on the website: http://docker.io/gettingstarted.html
They are probably not perfect, please let us know if anything feels
wrong or incomplete.
## Contribution guidelines
### Pull requests are always welcome
We are always thrilled to receive pull requests, and do our best to
process them as fast as possible. Not sure if that typo is worth a pull
request? Do it! We will appreciate it.
If your pull request is not accepted on the first try, don't be
discouraged! If there's a problem with the implementation, hopefully you
received feedback on what to improve.
We're trying very hard to keep Docker lean and focused. We don't want it
to do everything for everybody. This means that we might decide against
incorporating a new feature. However, there might be a way to implement
that feature *on top of* docker.
### Discuss your design on the mailing list
We recommend discussing your plans [on the mailing
list](https://groups.google.com/forum/?fromgroups#!forum/docker-club)
before starting to code - especially for more ambitious contributions.
This gives other contributors a chance to point you in the right
direction, give feedback on your design, and maybe point out if someone
else is working on the same thing.
### Create issues...
Any significant improvement should be documented as [a github
issue](https://github.com/dotcloud/docker/issues) before anybody
starts working on it.
### ...but check for existing issues first!
Please take a moment to check that an issue doesn't already exist
documenting your bug report or improvement proposal. If it does, it
never hurts to add a quick "+1" or "I have this problem too". This will
help prioritize the most common problems and requests.
### Conventions
Fork the repo and make changes on your fork in a feature branch:
- If it's a bugfix branch, name it XXX-something where XXX is the number of the
issue
- If it's a feature branch, create an enhancement issue to announce your
intentions, and name it XXX-something where XXX is the number of the issue.
Submit unit tests for your changes. Go has a great test framework built in; use
it! Take a look at existing tests for inspiration. Run the full test suite on
your branch before submitting a pull request.
Make sure you include relevant updates or additions to documentation when
creating or modifying features.
Write clean code. Universally formatted code promotes ease of writing, reading,
and maintenance. Always run `go fmt` before committing your changes. Most
editors have plugins that do this automatically, and there's also a git
pre-commit hook:
```
curl -o .git/hooks/pre-commit https://raw.github.com/edsrzf/gofmt-git-hook/master/fmt-check && chmod +x .git/hooks/pre-commit
```
Pull request descriptions should be as clear as possible and include a
reference to all the issues that they address.
Code review comments may be added to your pull request. Discuss, then make the
suggested modifications and push additional commits to your feature branch. Be
sure to post a comment after pushing. The new commits will show up in the pull
request automatically, but the reviewers will not be notified unless you
comment.
Before the pull request is merged, make sure that you squash your commits into
logical units of work using `git rebase -i` and `git push -f`. After every
commit the test suite should be passing. Include documentation changes in the
same commit so that a revert would remove all traces of the feature or fix.
Commits that fix or close an issue should include a reference like `Closes #XXX`
or `Fixes #XXX`, which will automatically close the issue when merged.
Add your name to the AUTHORS file, but make sure the list is sorted and your
name and email address match your git configuration. The AUTHORS file is
regenerated occasionally from the git commit history, so a mismatch may result
in your changes being overwritten.
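
The "Submit unit tests for your changes" guideline above is concrete in Go because tests live next to the code they cover. As a purely illustrative sketch (not part of this commit's CONTRIBUTING.md), a minimal test in the docker package could exercise the TruncateId helper added elsewhere in this diff; the test name and id value are made up:

```
package docker

import "testing"

// Illustrative only: long ids should be shortened to a 12-character prefix.
func TestTruncateIdLength(t *testing.T) {
	id := "4d5257f1c6d1d2e1b6e2a4f9d12ab34cd56ef789" // hypothetical image id
	if got := TruncateId(id); len(got) != 12 {
		t.Fatalf("expected a 12-character shorthand, got %q (%d chars)", got, len(got))
	}
}
```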

View file

@ -41,3 +41,6 @@ endif
test: all
@(cd $(DOCKER_DIR); sudo -E go test $(GO_OPTIONS))
fmt:
@gofmt -s -l -w .

View file

@ -192,11 +192,10 @@ echo "Daemon received: $(docker logs $JOB)"
Contributing to Docker
======================
Want to hack on Docker? Awesome! There are instructions to get you started on the website: http://docker.io/documentation/contributing/contributing.html
Want to hack on Docker? Awesome! There are instructions to get you started on the website: http://docs.docker.io/en/latest/contributing/contributing/
They are probably not perfect, please let us know if anything feels wrong or incomplete.
### Pull requests are always welcome
Note
----
@ -206,26 +205,6 @@ Please find it under docs/sources/ and read more about it https://github.com/dot
Please feel free to fix / update the documentation and send us pull requests. More tutorials are also welcome.
### Discuss your design on the mailing list
We recommend discussing your plans [on the mailing list](https://groups.google.com/forum/?fromgroups#!forum/docker-club) before starting to code - especially for more ambitious contributions. This gives other contributors a chance to point
you in the right direction, give feedback on your design, and maybe point out if someone else is working on the same thing.
### Create issues...
Any significant improvement should be documented as [a github issue](https://github.com/dotcloud/docker/issues) before anybody starts working on it.
### ...but check for existing issues first!
Please take a moment to check that an issue doesn't already exist documenting your bug report or improvement proposal.
If it does, it never hurts to add a quick "+1" or "I have this problem too". This will help prioritize the most common problems and requests.
### Write tests
Golang has a great testing suite built in: use it! Take a look at existing tests for inspiration.
Setting up a dev environment
----------------------------

View file

@ -1,6 +1,7 @@
package auth
import (
"bytes"
"encoding/base64"
"encoding/json"
"errors"
@ -111,7 +112,7 @@ func Login(authConfig *AuthConfig) (string, error) {
return "", errors.New(errMsg)
}
b := strings.NewReader(string(jsonBody))
b := bytes.NewReader(jsonBody)
req1, err := http.Post(REGISTRY_SERVER+"/v1/users", "application/json; charset=utf-8", b)
if err != nil {
errMsg = fmt.Sprintf("Server Error: %s", err)
@ -130,6 +131,7 @@ func Login(authConfig *AuthConfig) (string, error) {
status = "Account Created\n"
storeConfig = true
} else if reqStatusCode == 400 {
// FIXME: This should be 'exists', not 'exist'. Need to change on the server first.
if string(reqBody) == "Username or email already exist" {
client := &http.Client{}
req, err := http.NewRequest("GET", REGISTRY_SERVER+"/v1/users", nil)
@ -151,11 +153,11 @@ func Login(authConfig *AuthConfig) (string, error) {
return "", errors.New(status)
}
} else {
status = fmt.Sprintf("Registration: %s", string(reqBody))
status = fmt.Sprintf("Registration: %s", reqBody)
return "", errors.New(status)
}
} else {
status = fmt.Sprintf("[%s] : %s", reqStatusCode, string(reqBody))
status = fmt.Sprintf("[%s] : %s", reqStatusCode, reqBody)
return "", errors.New(status)
}
if storeConfig {
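
The change from strings.NewReader(string(jsonBody)) to bytes.NewReader(jsonBody) above avoids an unnecessary []byte-to-string copy while keeping the same io.Reader behaviour. A standalone sketch of the pattern (the URL and payload are placeholders, not the real registry API):

```
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
)

func main() {
	// Marshal a payload and POST it without converting the []byte to a string first.
	body, _ := json.Marshal(map[string]string{"username": "joe", "email": "joe@example.com"})
	resp, err := http.Post("https://registry.example.com/v1/users",
		"application/json; charset=utf-8", bytes.NewReader(body))
	if err != nil {
		return
	}
	defer resp.Body.Close()
}
```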

View file

@ -28,7 +28,7 @@ func (srv *Server) Name() string {
// FIXME: Stop violating DRY by repeating usage here and in Subcmd declarations
func (srv *Server) Help() string {
help := "Usage: docker COMMAND [arg...]\n\nA self-sufficient runtime for linux containers.\n\nCommands:\n"
for _, cmd := range [][]interface{}{
for _, cmd := range [][]string{
{"attach", "Attach to a running container"},
{"commit", "Create a new image from a container's changes"},
{"diff", "Inspect changes on a container's filesystem"},
@ -65,7 +65,7 @@ func (srv *Server) CmdLogin(stdin io.ReadCloser, stdout io.Writer, args ...strin
// Read a line on raw terminal with support for simple backspace
// sequences and echo.
//
// This function is necessary because the login command must be done a
// This function is necessary because the login command must be done in a
// raw terminal for two reasons:
// - we have to read a password (without echoing it);
// - the rcli "protocol" only supports canonical and raw modes and you
@ -146,12 +146,12 @@ func (srv *Server) CmdLogin(stdin io.ReadCloser, stdout io.Writer, args ...strin
newAuthConfig := auth.NewAuthConfig(username, password, email, srv.runtime.root)
status, err := auth.Login(newAuthConfig)
if err != nil {
fmt.Fprintf(stdout, "Error : %s\n", err)
fmt.Fprintln(stdout, "Error:", err)
} else {
srv.runtime.authConfig = newAuthConfig
}
if status != "" {
fmt.Fprintf(stdout, status)
fmt.Fprint(stdout, status)
}
return nil
}
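
The switch from fmt.Fprintf(stdout, status) to fmt.Fprint(stdout, status) matters because status is dynamic data, not a format string: any '%' in it would be interpreted as a formatting verb. A minimal illustration (the status value is invented):

```
package main

import (
	"fmt"
	"os"
)

func main() {
	status := "Upload 100% done\n" // hypothetical status containing a '%'
	fmt.Fprintf(os.Stdout, status) // prints "Upload 100%!d(MISSING)one": '%' is parsed as a verb
	fmt.Fprint(os.Stdout, status)  // prints the string verbatim
}
```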
@ -207,7 +207,7 @@ func (srv *Server) CmdInfo(stdin io.ReadCloser, stdout io.Writer, args ...string
if !rcli.DEBUG_FLAG {
return nil
}
fmt.Fprintf(stdout, "debug mode enabled\n")
fmt.Fprintln(stdout, "debug mode enabled")
fmt.Fprintf(stdout, "fds: %d\ngoroutines: %d\n", getTotalUsedFds(), runtime.NumGoroutine())
return nil
}
@ -369,10 +369,10 @@ func (srv *Server) CmdHistory(stdin io.ReadCloser, stdout io.Writer, args ...str
}
w := tabwriter.NewWriter(stdout, 20, 1, 3, ' ', 0)
defer w.Flush()
fmt.Fprintf(w, "ID\tCREATED\tCREATED BY\n")
fmt.Fprintln(w, "ID\tCREATED\tCREATED BY")
return image.WalkHistory(func(img *Image) error {
fmt.Fprintf(w, "%s\t%s\t%s\n",
srv.runtime.repositories.ImageName(img.Id),
srv.runtime.repositories.ImageName(img.ShortId()),
HumanDuration(time.Now().Sub(img.Created))+" ago",
strings.Join(img.ContainerConfig.Cmd, " "),
)
@ -438,7 +438,7 @@ func (srv *Server) CmdImport(stdin io.ReadCloser, stdout io.Writer, args ...stri
u.Host = src
u.Path = ""
}
fmt.Fprintf(stdout, "Downloading from %s\n", u.String())
fmt.Fprintln(stdout, "Downloading from", u)
// Download with curl (pretty progress bar)
// If curl is not available, fallback to http.Get()
resp, err = Download(u.String(), stdout)
@ -458,7 +458,7 @@ func (srv *Server) CmdImport(stdin io.ReadCloser, stdout io.Writer, args ...stri
return err
}
}
fmt.Fprintln(stdout, img.Id)
fmt.Fprintln(stdout, img.ShortId())
return nil
}
@ -564,7 +564,7 @@ func (srv *Server) CmdImages(stdin io.ReadCloser, stdout io.Writer, args ...stri
}
w := tabwriter.NewWriter(stdout, 20, 1, 3, ' ', 0)
if !*quiet {
fmt.Fprintf(w, "REPOSITORY\tTAG\tID\tCREATED\tPARENT\n")
fmt.Fprintln(w, "REPOSITORY\tTAG\tID\tCREATED\tPARENT")
}
var allImages map[string]*Image
var err error
@ -591,7 +591,7 @@ func (srv *Server) CmdImages(stdin io.ReadCloser, stdout io.Writer, args ...stri
for idx, field := range []string{
/* REPOSITORY */ name,
/* TAG */ tag,
/* ID */ id,
/* ID */ TruncateId(id),
/* CREATED */ HumanDuration(time.Now().Sub(image.Created)) + " ago",
/* PARENT */ srv.runtime.repositories.ImageName(image.Parent),
} {
@ -603,7 +603,7 @@ func (srv *Server) CmdImages(stdin io.ReadCloser, stdout io.Writer, args ...stri
}
w.Write([]byte{'\n'})
} else {
stdout.Write([]byte(image.Id + "\n"))
stdout.Write([]byte(image.ShortId() + "\n"))
}
}
}
@ -614,7 +614,7 @@ func (srv *Server) CmdImages(stdin io.ReadCloser, stdout io.Writer, args ...stri
for idx, field := range []string{
/* REPOSITORY */ "<none>",
/* TAG */ "<none>",
/* ID */ id,
/* ID */ TruncateId(id),
/* CREATED */ HumanDuration(time.Now().Sub(image.Created)) + " ago",
/* PARENT */ srv.runtime.repositories.ImageName(image.Parent),
} {
@ -626,7 +626,7 @@ func (srv *Server) CmdImages(stdin io.ReadCloser, stdout io.Writer, args ...stri
}
w.Write([]byte{'\n'})
} else {
stdout.Write([]byte(image.Id + "\n"))
stdout.Write([]byte(image.ShortId() + "\n"))
}
}
}
@ -647,7 +647,7 @@ func (srv *Server) CmdPs(stdin io.ReadCloser, stdout io.Writer, args ...string)
}
w := tabwriter.NewWriter(stdout, 12, 1, 3, ' ', 0)
if !*quiet {
fmt.Fprintf(w, "ID\tIMAGE\tCOMMAND\tCREATED\tSTATUS\tCOMMENT\n")
fmt.Fprintln(w, "ID\tIMAGE\tCOMMAND\tCREATED\tSTATUS\tCOMMENT")
}
for _, container := range srv.runtime.List() {
if !container.State.Running && !*flAll {
@ -700,7 +700,7 @@ func (srv *Server) CmdCommit(stdin io.ReadCloser, stdout io.Writer, args ...stri
if err != nil {
return err
}
fmt.Fprintln(stdout, img.Id)
fmt.Fprintln(stdout, img.ShortId())
return nil
}

View file

@ -24,7 +24,7 @@ func closeWrap(args ...io.Closer) error {
return nil
}
func setTimeout(t *testing.T, msg string, d time.Duration, f func(chan bool)) {
func setTimeout(t *testing.T, msg string, d time.Duration, f func()) {
c := make(chan bool)
// Make sure we are not too long
@ -32,9 +32,12 @@ func setTimeout(t *testing.T, msg string, d time.Duration, f func(chan bool)) {
time.Sleep(d)
c <- true
}()
go f(c)
if timeout := <-c; timeout {
t.Fatalf("Timeout: %s", msg)
go func() {
f()
c <- false
}()
if <-c {
t.Fatal(msg)
}
}
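
Pieced together from the hunks above, the reworked helper takes a plain func() and signals completion itself, so test bodies no longer have to remember to write into the timeout channel. Reassembled for readability (same logic as the diff, comments added):

```
func setTimeout(t *testing.T, msg string, d time.Duration, f func()) {
	c := make(chan bool)
	// After d has elapsed, report a timeout.
	go func() {
		time.Sleep(d)
		c <- true
	}()
	// Run f and report completion.
	go func() {
		f()
		c <- false
	}()
	// Whichever signal arrives first wins; true means f did not finish in time.
	if <-c {
		t.Fatal(msg)
	}
}
```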
@ -54,75 +57,112 @@ func assertPipe(input, output string, r io.Reader, w io.Writer, count int) error
return nil
}
// Test the behavior of a client disconnection.
// We expect a client disconnect to leave the stdin of the container open
// Therefore a process will keep his stdin open when a client disconnects
func TestReattachAfterDisconnect(t *testing.T) {
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
// FIXME: low down the timeout (after #230)
setTimeout(t, "TestReattachAfterDisconnect", 12*time.Second, func(timeout chan bool) {
srv := &Server{runtime: runtime}
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c1 := make(chan struct{})
go func() {
if err := srv.CmdRun(stdin, stdoutPipe, "-i", GetTestImage(runtime).Id, "/bin/cat"); err == nil {
t.Fatal("CmdRun should generate a read/write on closed pipe error. No error found.")
}
close(c1)
}()
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
c1 := make(chan struct{})
go func() {
if err := srv.CmdRun(stdin, stdoutPipe, "-i", GetTestImage(runtime).Id, "/bin/cat"); err != nil {
t.Fatal(err)
}
close(c1)
}()
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
// Close pipes (simulate disconnect)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
container := runtime.containers.Back().Value.(*Container)
// Recreate the pipes
stdin, stdinPipe = io.Pipe()
stdout, stdoutPipe = io.Pipe()
// Attach to it
c2 := make(chan struct{})
go func() {
if err := srv.CmdAttach(stdin, stdoutPipe, container.Id); err != nil {
t.Fatal(err)
}
close(c2)
}()
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
// Close pipes
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// FIXME: when #230 will be finished, send SIGINT instead of SIGTERM
// we expect cat to stay alive so SIGTERM will have no effect
// and Stop will timeout
if err := container.Stop(); err != nil {
t.Fatal(err)
}
// Wait for run and attach to finish
<-c1
<-c2
// Finished, no timeout
timeout <- false
})
// Close pipes (simulate disconnect)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// As the pipes are closed, we expect the process to die,
// and therefore CmdRun to unblock. Wait for CmdRun
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
<-c1
})
// Check the status of the container
container := runtime.containers.Back().Value.(*Container)
if container.State.Running {
t.Fatalf("/bin/cat is still running after closing stdin")
}
}
// Expected behaviour: the process stays alive when the client disconnects
func TestAttachDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
container, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).Id,
Memory: 33554432,
Cmd: []string{"/bin/cat"},
OpenStdin: true,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
// Start the process
if err := container.Start(); err != nil {
t.Fatal(err)
}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
// Attach to it
c1 := make(chan struct{})
go func() {
if err := srv.CmdAttach(stdin, stdoutPipe, container.Id); err != nil {
t.Fatal(err)
}
close(c1)
}()
setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
})
// Close pipes (client disconnects)
if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
t.Fatal(err)
}
// Wait for attach to finish: the client disconnected, so CmdAttach has finished its job
setTimeout(t, "Waiting for CmdAttach timed out", 2*time.Second, func() {
<-c1
})
// We closed stdin, expect /bin/cat to still be running
// Wait a little bit to make sure container.monitor() did its thing
err = container.WaitTimeout(500 * time.Millisecond)
if err == nil || !container.State.Running {
t.Fatalf("/bin/cat is not running after closing stdin")
}
// Try to avoid the timeout in destroy. Best effort, don't check error
cStdin, _ := container.StdinPipe()
cStdin.Close()
}

View file

@ -230,6 +230,9 @@ func (container *Container) start() error {
}
func (container *Container) Start() error {
if container.State.Running {
return fmt.Errorf("The container %s is already running.", container.Id)
}
if err := container.EnsureMounted(); err != nil {
return err
}
@ -360,11 +363,10 @@ func (container *Container) allocateNetwork() error {
return nil
}
func (container *Container) releaseNetwork() error {
err := container.network.Release()
func (container *Container) releaseNetwork() {
container.network.Release()
container.network = nil
container.NetworkSettings = &NetworkSettings{}
return err
}
func (container *Container) monitor() {
@ -379,9 +381,7 @@ func (container *Container) monitor() {
exitCode := container.cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
// Cleanup
if err := container.releaseNetwork(); err != nil {
log.Printf("%v: Failed to release network: %v", container.Id, err)
}
container.releaseNetwork()
if container.Config.OpenStdin {
if err := container.stdin.Close(); err != nil {
Debugf("%s: Error close stdin: %s", container.Id, err)
@ -422,7 +422,13 @@ func (container *Container) monitor() {
// Report status back
container.State.setStopped(exitCode)
if err := container.ToDisk(); err != nil {
log.Printf("%s: Failed to dump configuration to the disk: %s", container.Id, err)
// FIXME: there is a race condition here which causes this to fail during the unit tests.
// If another goroutine was waiting for Wait() to return before removing the container's root
// from the filesystem... At this point it may already have done so.
// This is because State.setStopped() has already been called, and has caused Wait()
// to return.
// FIXME: why are we serializing running state to disk in the first place?
//log.Printf("%s: Failed to dump configuration to the disk: %s", container.Id, err)
}
}
@ -452,8 +458,8 @@ func (container *Container) Stop() error {
// 1. Send a SIGTERM
if output, err := exec.Command("lxc-kill", "-n", container.Id, "15").CombinedOutput(); err != nil {
log.Printf(string(output))
log.Printf("Failed to send SIGTERM to the process, force killing")
log.Print(string(output))
log.Print("Failed to send SIGTERM to the process, force killing")
if err := container.Kill(); err != nil {
return err
}
@ -560,11 +566,7 @@ func (container *Container) Unmount() error {
// In case of a collision a lookup with Runtime.Get() will fail, and the caller
will need to use a longer prefix, or the full-length container Id.
func (container *Container) ShortId() string {
shortLen := 12
if len(container.Id) < shortLen {
shortLen = len(container.Id)
}
return container.Id[:shortLen]
return TruncateId(container.Id)
}
func (container *Container) logPath(name string) string {

View file

@ -227,10 +227,48 @@ func TestCommitRun(t *testing.T) {
t.Fatal(err)
}
if string(output) != "hello\n" {
t.Fatalf("Unexpected output. Expected %s, received: %s (err: %s)", "hello\n", string(output), string(output2))
t.Fatalf("Unexpected output. Expected %s, received: %s (err: %s)", "hello\n", output, output2)
}
}
func TestStart(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
container, err := runtime.Create(
&Config{
Image: GetTestImage(runtime).Id,
Memory: 33554432,
Cmd: []string{"/bin/cat"},
OpenStdin: true,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
if err := container.Start(); err != nil {
t.Fatal(err)
}
// Give some time to the process to start
container.WaitTimeout(500 * time.Millisecond)
if !container.State.Running {
t.Errorf("Container should be running")
}
if err := container.Start(); err == nil {
t.Fatalf("A running containter should be able to be started")
}
// Try to avoid the timeout in destroy. Best effort, don't check error
cStdin, _ := container.StdinPipe()
cStdin.Close()
}
func TestRun(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
@ -851,7 +889,7 @@ func BenchmarkRunSequencial(b *testing.B) {
b.Fatal(err)
}
if string(output) != "foo" {
b.Fatalf("Unexecpted output: %v", string(output))
b.Fatalf("Unexpected output: %s", output)
}
if err := runtime.Destroy(container); err != nil {
b.Fatal(err)

View file

@ -35,6 +35,5 @@ do
cp -a /dev/$X dev
done
tar -cf- . | docker put busybox
docker run -i -a -u root busybox /bin/echo Success.
tar -cf- . | docker import - busybox
docker run -i -u root busybox /bin/echo Success.

View file

@ -51,8 +51,51 @@ documenting your bug report or improvement proposal. If it does, it
never hurts to add a quick "+1" or "I have this problem too". This will
help prioritize the most common problems and requests.
Write tests
Conventions
~~~~~~~~~~~
Golang has a great testing suite built in: use it! Take a look at
existing tests for inspiration.
Fork the repo and make changes on your fork in a feature branch:
- If it's a bugfix branch, name it XXX-something where XXX is the number of the
issue
- If it's a feature branch, create an enhancement issue to announce your
intentions, and name it XXX-something where XXX is the number of the issue.
Submit unit tests for your changes. Go has a great test framework built in; use
it! Take a look at existing tests for inspiration. Run the full test suite on
your branch before submitting a pull request.
Make sure you include relevant updates or additions to documentation when
creating or modifying features.
Write clean code. Universally formatted code promotes ease of writing, reading,
and maintenance. Always run ``go fmt`` before committing your changes. Most
editors have plugins that do this automatically, and there's also a git
pre-commit hook:
.. code-block:: bash
curl -o .git/hooks/pre-commit https://raw.github.com/edsrzf/gofmt-git-hook/master/fmt-check && chmod +x .git/hooks/pre-commit
Pull request descriptions should be as clear as possible and include a
reference to all the issues that they address.
Code review comments may be added to your pull request. Discuss, then make the
suggested modifications and push additional commits to your feature branch. Be
sure to post a comment after pushing. The new commits will show up in the pull
request automatically, but the reviewers will not be notified unless you
comment.
Before the pull request is merged, make sure that you squash your commits into
logical units of work using ``git rebase -i`` and ``git push -f``. After every
commit the test suite should be passing. Include documentation changes in the
same commit so that a revert would remove all traces of the feature or fix.
Commits that fix or close an issue should include a reference like ``Closes #XXX``
or ``Fixes #XXX``, which will automatically close the issue when merged.
Add your name to the AUTHORS file, but make sure the list is sorted and your
name and email address match your git configuration. The AUTHORS file is
regenerated occasionally from the git commit history, so a mismatch may result
in your changes being overwritten.

View file

@ -7,7 +7,7 @@ Setting up a dev environment
Instructions that have been verified to work on Ubuntu 12.10,
.. code:: bash
.. code-block:: bash
sudo apt-get -y install lxc wget bsdtar curl golang git
@ -24,7 +24,7 @@ Instructions that have been verified to work on Ubuntu 12.10,
Then run the docker daemon,
.. code:: bash
.. code-block:: bash
sudo $GOPATH/bin/docker -d

View file

@ -10,12 +10,13 @@ import (
"time"
)
// A Graph is a store for versioned filesystem images, and the relationship between them.
// A Graph is a store for versioned filesystem images and the relationship between them.
type Graph struct {
Root string
Root string
idIndex *TruncIndex
}
// NewGraph instanciates a new graph at the given root path in the filesystem.
// NewGraph instantiates a new graph at the given root path in the filesystem.
// `root` will be created if it doesn't exist.
func NewGraph(root string) (*Graph, error) {
abspath, err := filepath.Abs(root)
@ -26,9 +27,26 @@ func NewGraph(root string) (*Graph, error) {
if err := os.Mkdir(root, 0700); err != nil && !os.IsExist(err) {
return nil, err
}
return &Graph{
Root: abspath,
}, nil
graph := &Graph{
Root: abspath,
idIndex: NewTruncIndex(),
}
if err := graph.restore(); err != nil {
return nil, err
}
return graph, nil
}
func (graph *Graph) restore() error {
dir, err := ioutil.ReadDir(graph.Root)
if err != nil {
return err
}
for _, v := range dir {
id := v.Name()
graph.idIndex.Add(id)
}
return nil
}
// FIXME: Implement error subclass instead of looking at the error text
@ -47,7 +65,11 @@ func (graph *Graph) Exists(id string) bool {
}
// Get returns the image with the given id, or an error if the image doesn't exist.
func (graph *Graph) Get(id string) (*Image, error) {
func (graph *Graph) Get(name string) (*Image, error) {
id, err := graph.idIndex.Get(name)
if err != nil {
return nil, err
}
// FIXME: return nil when the image doesn't exist, instead of an error
img, err := LoadImage(graph.imageRoot(id))
if err != nil {
@ -101,6 +123,7 @@ func (graph *Graph) Register(layerData Archive, img *Image) error {
return err
}
img.graph = graph
graph.idIndex.Add(img.Id)
return nil
}
@ -117,14 +140,14 @@ func (graph *Graph) Mktemp(id string) (string, error) {
}
// Garbage returns the "garbage", a staging area for deleted images.
// This allows images ot be deleted atomically by os.Rename(), instead of
// os.RemoveAll() which is prone to race conditions
// This allows images to be deleted atomically by os.Rename(), instead of
// os.RemoveAll() which is prone to race conditions.
func (graph *Graph) Garbage() (*Graph, error) {
return NewGraph(path.Join(graph.Root, ":garbage:"))
}
// Check if given error is "not empty"
// Note: this is the way golang do it internally with os.IsNotExists
// Check if given error is "not empty".
// Note: this is the way golang does it internally with os.IsNotExists.
func isNotEmpty(err error) bool {
switch pe := err.(type) {
case nil:
@ -138,13 +161,20 @@ func isNotEmpty(err error) bool {
}
// Delete atomically removes an image from the graph.
func (graph *Graph) Delete(id string) error {
func (graph *Graph) Delete(name string) error {
id, err := graph.idIndex.Get(name)
if err != nil {
return err
}
garbage, err := graph.Garbage()
if err != nil {
return err
}
graph.idIndex.Delete(id)
err = os.Rename(graph.imageRoot(id), garbage.imageRoot(id))
if err != nil {
// FIXME: this introduces a race condition in Delete() if the image is already present
// in garbage. Let's store them under random names in garbage instead.
if isNotEmpty(err) {
Debugf("The image %s is already present in garbage. Removing it.", id)
if err = os.RemoveAll(garbage.imageRoot(id)); err != nil {
@ -164,16 +194,20 @@ func (graph *Graph) Delete(id string) error {
return nil
}
// Undelete moves an image back from the garbage to the main graph
// Undelete moves an image back from the garbage to the main graph.
func (graph *Graph) Undelete(id string) error {
garbage, err := graph.Garbage()
if err != nil {
return err
}
return os.Rename(garbage.imageRoot(id), graph.imageRoot(id))
if err := os.Rename(garbage.imageRoot(id), graph.imageRoot(id)); err != nil {
return err
}
graph.idIndex.Add(id)
return nil
}
// GarbageCollect definitely deletes all images moved to the garbage
// GarbageCollect definitely deletes all images moved to the garbage.
func (graph *Graph) GarbageCollect() error {
garbage, err := graph.Garbage()
if err != nil {
@ -182,7 +216,7 @@ func (graph *Graph) GarbageCollect() error {
return os.RemoveAll(garbage.Root)
}
// Map returns a list of all images in the graph, addressable by ID
// Map returns a list of all images in the graph, addressable by ID.
func (graph *Graph) Map() (map[string]*Image, error) {
// FIXME: this should replace All()
all, err := graph.All()
@ -196,7 +230,7 @@ func (graph *Graph) Map() (map[string]*Image, error) {
return images, nil
}
// All returns a list of all images in the graph
// All returns a list of all images in the graph.
func (graph *Graph) All() ([]*Image, error) {
var images []*Image
err := graph.WalkAll(func(image *Image) {

View file

@ -120,6 +120,29 @@ func TestMount(t *testing.T) {
}()
}
// Test that an image can be deleted by its shorthand prefix
func TestDeletePrefix(t *testing.T) {
graph := tempGraph(t)
defer os.RemoveAll(graph.Root)
img := createTestImage(graph, t)
if err := graph.Delete(TruncateId(img.Id)); err != nil {
t.Fatal(err)
}
assertNImages(graph, t, 0)
}
func createTestImage(graph *Graph, t *testing.T) *Image {
archive, err := fakeTar()
if err != nil {
t.Fatal(err)
}
img, err := graph.Create(archive, nil, "Test image")
if err != nil {
t.Fatal(err)
}
return img
}
func TestDelete(t *testing.T) {
graph := tempGraph(t)
defer os.RemoveAll(graph.Root)

View file

@ -150,6 +150,10 @@ func (image *Image) Changes(rw string) ([]Change, error) {
return Changes(layers, rw)
}
func (image *Image) ShortId() string {
return TruncateId(image.Id)
}
func ValidateId(id string) error {
if id == "" {
return fmt.Errorf("Image id can't be empty")

View file

@ -1,7 +1,6 @@
package docker
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
@ -30,40 +29,25 @@ func networkRange(network *net.IPNet) (net.IP, net.IP) {
}
// Converts a 4 bytes IP into a 32 bit integer
func ipToInt(ip net.IP) (int32, error) {
buf := bytes.NewBuffer(ip.To4())
var n int32
if err := binary.Read(buf, binary.BigEndian, &n); err != nil {
return 0, err
}
return n, nil
func ipToInt(ip net.IP) int32 {
return int32(binary.BigEndian.Uint32(ip.To4()))
}
// Converts 32 bit integer into a 4 bytes IP address
func intToIp(n int32) (net.IP, error) {
var buf bytes.Buffer
if err := binary.Write(&buf, binary.BigEndian, &n); err != nil {
return net.IP{}, err
}
ip := net.IPv4(0, 0, 0, 0).To4()
for i := 0; i < net.IPv4len; i++ {
ip[i] = buf.Bytes()[i]
}
return ip, nil
func intToIp(n int32) net.IP {
b := make([]byte, 4)
binary.BigEndian.PutUint32(b, uint32(n))
return net.IP(b)
}
// Given a netmask, calculates the number of available hosts
func networkSize(mask net.IPMask) (int32, error) {
func networkSize(mask net.IPMask) int32 {
m := net.IPv4Mask(0, 0, 0, 0)
for i := 0; i < net.IPv4len; i++ {
m[i] = ^mask[i]
}
buf := bytes.NewBuffer(m)
var n int32
if err := binary.Read(buf, binary.BigEndian, &n); err != nil {
return 0, err
}
return n + 1, nil
return int32(binary.BigEndian.Uint32(m)) + 1
}
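
The rewritten helpers drop the error returns: reading or writing 4 bytes with encoding/binary on a fixed-size slice cannot fail for well-formed IPv4 input. A self-contained sketch (the functions are copied from the hunk above) showing the round trip and the size of a /24:

```
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

// Copies of the helpers added above.
func ipToInt(ip net.IP) int32 {
	return int32(binary.BigEndian.Uint32(ip.To4()))
}

func intToIp(n int32) net.IP {
	b := make([]byte, 4)
	binary.BigEndian.PutUint32(b, uint32(n))
	return net.IP(b)
}

func networkSize(mask net.IPMask) int32 {
	m := net.IPv4Mask(0, 0, 0, 0)
	for i := 0; i < net.IPv4len; i++ {
		m[i] = ^mask[i]
	}
	return int32(binary.BigEndian.Uint32(m)) + 1
}

func main() {
	n := ipToInt(net.ParseIP("10.0.0.1"))
	fmt.Println(intToIp(n + 1))                    // 10.0.0.2
	fmt.Println(networkSize(net.CIDRMask(24, 32))) // 256
}
```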
// Wrapper around the iptables command
@ -211,66 +195,97 @@ func newPortAllocator(start, end int) (*PortAllocator, error) {
// IP allocator: Automatically allocate and release networking addresses
type IPAllocator struct {
network *net.IPNet
queue chan (net.IP)
network *net.IPNet
queueAlloc chan allocatedIP
queueReleased chan net.IP
inUse map[int32]struct{}
}
func (alloc *IPAllocator) populate() error {
type allocatedIP struct {
ip net.IP
err error
}
func (alloc *IPAllocator) run() {
firstIP, _ := networkRange(alloc.network)
size, err := networkSize(alloc.network.Mask)
if err != nil {
return err
ipNum := ipToInt(firstIP)
ownIP := ipToInt(alloc.network.IP)
size := networkSize(alloc.network.Mask)
pos := int32(1)
max := size - 2 // -1 for the broadcast address, -1 for the gateway address
for {
var (
newNum int32
inUse bool
)
// Find first unused IP, give up after one whole round
for attempt := int32(0); attempt < max; attempt++ {
newNum = ipNum + pos
pos = pos%max + 1
// The network's IP is never okay to use
if newNum == ownIP {
continue
}
if _, inUse = alloc.inUse[newNum]; !inUse {
// We found an unused IP
break
}
}
ip := allocatedIP{ip: intToIp(newNum)}
if inUse {
ip.err = errors.New("No unallocated IP available")
}
select {
case alloc.queueAlloc <- ip:
alloc.inUse[newNum] = struct{}{}
case released := <-alloc.queueReleased:
r := ipToInt(released)
delete(alloc.inUse, r)
if inUse {
// If we couldn't allocate a new IP, the released one
// will be the only free one now, so instantly use it
// next time
pos = r - ipNum
} else {
// Use same IP as last time
if pos == 1 {
pos = max
} else {
pos--
}
}
}
}
// The queue size should be the network size - 3
// -1 for the network address, -1 for the broadcast address and
// -1 for the gateway address
alloc.queue = make(chan net.IP, size-3)
for i := int32(1); i < size-1; i++ {
ipNum, err := ipToInt(firstIP)
if err != nil {
return err
}
ip, err := intToIp(ipNum + int32(i))
if err != nil {
return err
}
// Discard the network IP (that's the host IP address)
if ip.Equal(alloc.network.IP) {
continue
}
alloc.queue <- ip
}
return nil
}
func (alloc *IPAllocator) Acquire() (net.IP, error) {
select {
case ip := <-alloc.queue:
return ip, nil
default:
return net.IP{}, errors.New("No more IP addresses available")
}
return net.IP{}, nil
ip := <-alloc.queueAlloc
return ip.ip, ip.err
}
func (alloc *IPAllocator) Release(ip net.IP) error {
select {
case alloc.queue <- ip:
return nil
default:
return errors.New("Too many IP addresses have been released")
}
return nil
func (alloc *IPAllocator) Release(ip net.IP) {
alloc.queueReleased <- ip
}
func newIPAllocator(network *net.IPNet) (*IPAllocator, error) {
func newIPAllocator(network *net.IPNet) *IPAllocator {
alloc := &IPAllocator{
network: network,
network: network,
queueAlloc: make(chan allocatedIP),
queueReleased: make(chan net.IP),
inUse: make(map[int32]struct{}),
}
if err := alloc.populate(); err != nil {
return nil, err
}
return alloc, nil
go alloc.run()
return alloc
}
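
The allocator now keeps all of its state (the inUse map and the scan position) inside a single run() goroutine and talks to callers over the queueAlloc and queueReleased channels, so Acquire and Release need no locking. A condensed sketch of how it is driven, mirroring the updated TestIPAllocator further down (the identifiers are unexported, so this fragment lives inside the docker package):

```
// Sketch only, inside the docker package:
gwIP, n, _ := net.ParseCIDR("10.0.3.1/24")
alloc := newIPAllocator(&net.IPNet{IP: gwIP, Mask: n.Mask}) // starts run() in the background

ip, err := alloc.Acquire() // next free host address; the gateway 10.0.3.1 itself is skipped
if err == nil {
	// ... assign ip to a container ...
	alloc.Release(ip) // handed back to run(), which reuses released addresses in order
}
```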
// Network interface represents the networking stack of a container
@ -297,7 +312,7 @@ func (iface *NetworkInterface) AllocatePort(port int) (int, error) {
}
// Release: Network cleanup - release all resources
func (iface *NetworkInterface) Release() error {
func (iface *NetworkInterface) Release() {
for _, port := range iface.extPorts {
if err := iface.manager.portMapper.Unmap(port); err != nil {
log.Printf("Unable to unmap port %v: %v", port, err)
@ -307,7 +322,8 @@ func (iface *NetworkInterface) Release() error {
}
}
return iface.manager.ipAllocator.Release(iface.IPNet.IP)
iface.manager.ipAllocator.Release(iface.IPNet.IP)
}
// Network Manager manages a set of network interfaces
@ -342,10 +358,7 @@ func newNetworkManager(bridgeIface string) (*NetworkManager, error) {
}
network := addr.(*net.IPNet)
ipAllocator, err := newIPAllocator(network)
if err != nil {
return nil, err
}
ipAllocator := newIPAllocator(network)
portAllocator, err := newPortAllocator(portRangeStart, portRangeEnd)
if err != nil {

View file

@ -28,8 +28,8 @@ func TestNetworkRange(t *testing.T) {
if !last.Equal(net.ParseIP("192.168.0.255")) {
t.Error(last.String())
}
if size, err := networkSize(network.Mask); err != nil || size != 256 {
t.Error(size, err)
if size := networkSize(network.Mask); size != 256 {
t.Error(size)
}
// Class A test
@ -41,8 +41,8 @@ func TestNetworkRange(t *testing.T) {
if !last.Equal(net.ParseIP("10.255.255.255")) {
t.Error(last.String())
}
if size, err := networkSize(network.Mask); err != nil || size != 16777216 {
t.Error(size, err)
if size := networkSize(network.Mask); size != 16777216 {
t.Error(size)
}
// Class A, random IP address
@ -64,8 +64,8 @@ func TestNetworkRange(t *testing.T) {
if !last.Equal(net.ParseIP("10.1.2.3")) {
t.Error(last.String())
}
if size, err := networkSize(network.Mask); err != nil || size != 1 {
t.Error(size, err)
if size := networkSize(network.Mask); size != 1 {
t.Error(size)
}
// 31bit mask
@ -77,8 +77,8 @@ func TestNetworkRange(t *testing.T) {
if !last.Equal(net.ParseIP("10.1.2.3")) {
t.Error(last.String())
}
if size, err := networkSize(network.Mask); err != nil || size != 2 {
t.Error(size, err)
if size := networkSize(network.Mask); size != 2 {
t.Error(size)
}
// 26bit mask
@ -90,54 +90,130 @@ func TestNetworkRange(t *testing.T) {
if !last.Equal(net.ParseIP("10.1.2.63")) {
t.Error(last.String())
}
if size, err := networkSize(network.Mask); err != nil || size != 64 {
t.Error(size, err)
if size := networkSize(network.Mask); size != 64 {
t.Error(size)
}
}
func TestConversion(t *testing.T) {
ip := net.ParseIP("127.0.0.1")
i, err := ipToInt(ip)
if err != nil {
t.Fatal(err)
}
i := ipToInt(ip)
if i == 0 {
t.Fatal("converted to zero")
}
conv, err := intToIp(i)
if err != nil {
t.Fatal(err)
}
conv := intToIp(i)
if !ip.Equal(conv) {
t.Error(conv.String())
}
}
func TestIPAllocator(t *testing.T) {
gwIP, n, _ := net.ParseCIDR("127.0.0.1/29")
alloc, err := newIPAllocator(&net.IPNet{IP: gwIP, Mask: n.Mask})
if err != nil {
t.Fatal(err)
expectedIPs := []net.IP{
0: net.IPv4(127, 0, 0, 2),
1: net.IPv4(127, 0, 0, 3),
2: net.IPv4(127, 0, 0, 4),
3: net.IPv4(127, 0, 0, 5),
4: net.IPv4(127, 0, 0, 6),
}
var lastIP net.IP
gwIP, n, _ := net.ParseCIDR("127.0.0.1/29")
alloc := newIPAllocator(&net.IPNet{IP: gwIP, Mask: n.Mask})
// Pool after initialisation (f = free, u = used)
// 2(f) - 3(f) - 4(f) - 5(f) - 6(f)
// ↑
// Check that we get 5 IPs, from 127.0.0.2 to 127.0.0.6, in that
// order.
for i := 0; i < 5; i++ {
ip, err := alloc.Acquire()
if err != nil {
t.Fatal(err)
}
lastIP = ip
assertIPEquals(t, expectedIPs[i], ip)
}
ip, err := alloc.Acquire()
// Before loop begin
// 2(f) - 3(f) - 4(f) - 5(f) - 6(f)
// ↑
// After i = 0
// 2(u) - 3(f) - 4(f) - 5(f) - 6(f)
// ↑
// After i = 1
// 2(u) - 3(u) - 4(f) - 5(f) - 6(f)
// ↑
// After i = 2
// 2(u) - 3(u) - 4(u) - 5(f) - 6(f)
// ↑
// After i = 3
// 2(u) - 3(u) - 4(u) - 5(u) - 6(f)
// ↑
// After i = 4
// 2(u) - 3(u) - 4(u) - 5(u) - 6(u)
// ↑
// Check that there are no more IPs
_, err := alloc.Acquire()
if err == nil {
t.Fatal("There shouldn't be any IP addresses at this point")
}
// Release 1 IP
alloc.Release(lastIP)
ip, err = alloc.Acquire()
if err != nil {
t.Fatal(err)
// Release some IPs in non-sequential order
alloc.Release(expectedIPs[3])
// 2(u) - 3(u) - 4(u) - 5(f) - 6(u)
// ↑
alloc.Release(expectedIPs[2])
// 2(u) - 3(u) - 4(f) - 5(f) - 6(u)
// ↑
alloc.Release(expectedIPs[4])
// 2(u) - 3(u) - 4(f) - 5(f) - 6(f)
// ↑
// Make sure that IPs are reused in sequential order, starting
// with the first released IP
newIPs := make([]net.IP, 3)
for i := 0; i < 3; i++ {
ip, err := alloc.Acquire()
if err != nil {
t.Fatal(err)
}
newIPs[i] = ip
}
if !ip.Equal(lastIP) {
t.Fatal(ip.String())
// Before loop begin
// 2(u) - 3(u) - 4(f) - 5(f) - 6(f)
// ↑
// After i = 0
// 2(u) - 3(u) - 4(f) - 5(u) - 6(f)
// ↑
// After i = 1
// 2(u) - 3(u) - 4(f) - 5(u) - 6(u)
// ↑
// After i = 2
// 2(u) - 3(u) - 4(u) - 5(u) - 6(u)
// ↑
assertIPEquals(t, expectedIPs[3], newIPs[0])
assertIPEquals(t, expectedIPs[4], newIPs[1])
assertIPEquals(t, expectedIPs[2], newIPs[2])
_, err = alloc.Acquire()
if err == nil {
t.Fatal("There shouldn't be any IP addresses at this point")
}
}
func assertIPEquals(t *testing.T, ip1, ip2 net.IP) {
if !ip1.Equal(ip2) {
t.Fatalf("Expected IP %s, got %s", ip1, ip2)
}
}

View file

@ -20,7 +20,7 @@ func ListenAndServeHTTP(addr string, service Service) error {
func(w http.ResponseWriter, r *http.Request) {
cmd, args := URLToCall(r.URL)
if err := call(service, r.Body, &AutoFlush{w}, append([]string{cmd}, args...)...); err != nil {
fmt.Fprintf(w, "Error: "+err.Error()+"\n")
fmt.Fprintln(w, "Error:", err.Error())
}
}))
}

View file

@ -51,8 +51,8 @@ func ListenAndServe(proto, addr string, service Service) error {
CLIENT_SOCKET = conn
}
if err := Serve(conn, service); err != nil {
log.Printf("Error: " + err.Error() + "\n")
fmt.Fprintf(conn, "Error: "+err.Error()+"\n")
log.Println("Error:", err.Error())
fmt.Fprintln(conn, "Error:", err.Error())
}
conn.Close()
}()

View file

@ -1,6 +1,7 @@
package docker
import (
"bytes"
"encoding/json"
"fmt"
"github.com/dotcloud/docker/auth"
@ -20,7 +21,7 @@ func NewImgJson(src []byte) (*Image, error) {
ret := &Image{}
Debugf("Json string: {%s}\n", src)
// FIXME: Is there a cleaner way to "puryfy" the input json?
// FIXME: Is there a cleaner way to "purify" the input json?
if err := json.Unmarshal(src, ret); err != nil {
return nil, err
}
@ -32,7 +33,7 @@ func NewImgJson(src []byte) (*Image, error) {
func NewMultipleImgJson(src []byte) ([]*Image, error) {
ret := []*Image{}
dec := json.NewDecoder(strings.NewReader(string(src)))
dec := json.NewDecoder(bytes.NewReader(src))
for {
m := &Image{}
if err := dec.Decode(m); err == io.EOF {
@ -135,7 +136,7 @@ func (graph *Graph) getRemoteImage(stdout io.Writer, imgId string, authConfig *a
if err != nil {
return nil, nil, err
}
return img, res.Body, nil
return img, ProgressReader(res.Body, int(res.ContentLength), stdout), nil
}
func (graph *Graph) PullImage(stdout io.Writer, imgId string, authConfig *auth.AuthConfig) error {
@ -183,10 +184,10 @@ func (graph *Graph) PullRepository(stdout io.Writer, remote, askedTag string, re
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != 200 {
return fmt.Errorf("HTTP code: %d", res.StatusCode)
}
defer res.Body.Close()
rawJson, err := ioutil.ReadAll(res.Body)
if err != nil {
return err
@ -226,7 +227,7 @@ func (graph *Graph) PushImage(stdout io.Writer, imgOrig *Image, authConfig *auth
fmt.Fprintf(stdout, "Pushing %s metadata\n", img.Id)
// FIXME: try json with UTF8
jsonData := strings.NewReader(string(jsonRaw))
jsonData := bytes.NewReader(jsonRaw)
req, err := http.NewRequest("PUT", REGISTRY_ENDPOINT+"/images/"+img.Id+"/json", jsonData)
if err != nil {
return err
@ -237,6 +238,7 @@ func (graph *Graph) PushImage(stdout io.Writer, imgOrig *Image, authConfig *auth
if err != nil {
return fmt.Errorf("Failed to upload metadata: %s", err)
}
defer res.Body.Close()
if res.StatusCode != 200 {
switch res.StatusCode {
case 204:
@ -256,9 +258,13 @@ func (graph *Graph) PushImage(stdout io.Writer, imgOrig *Image, authConfig *auth
req2, err := http.NewRequest("PUT", REGISTRY_ENDPOINT+"/images/"+img.Id+"/layer", nil)
req2.SetBasicAuth(authConfig.Username, authConfig.Password)
res2, err := client.Do(req2)
if err != nil || res2.StatusCode != 307 {
if err != nil {
return fmt.Errorf("Registry returned error: %s", err)
}
res2.Body.Close()
if res2.StatusCode != 307 {
return fmt.Errorf("Registry returned unexpected HTTP status code %d, expected 307", res2.StatusCode)
}
url, err := res2.Location()
if err != nil || url == nil {
return fmt.Errorf("Failed to retrieve layer upload location: %s", err)
@ -267,25 +273,28 @@ func (graph *Graph) PushImage(stdout io.Writer, imgOrig *Image, authConfig *auth
// FIXME: Don't do this :D. Check the S3 requirement and implement chunks of 5MB
// FIXME2: I won't stress it enough, DON'T DO THIS! very high priority
layerData2, err := Tar(path.Join(graph.Root, img.Id, "layer"), Gzip)
layerData, err := Tar(path.Join(graph.Root, img.Id, "layer"), Gzip)
if err != nil {
return fmt.Errorf("Failed to generate layer archive: %s", err)
}
req3, err := http.NewRequest("PUT", url.String(), layerData)
if err != nil {
return err
}
tmp, err := ioutil.ReadAll(layerData2)
if err != nil {
return err
}
req3.ContentLength = int64(len(tmp))
layerLength := len(tmp)
layerData, err := Tar(path.Join(graph.Root, img.Id, "layer"), Gzip)
if err != nil {
return fmt.Errorf("Failed to generate layer archive: %s", err)
}
req3, err := http.NewRequest("PUT", url.String(), ProgressReader(layerData.(io.ReadCloser), layerLength, stdout))
if err != nil {
return err
}
req3.ContentLength = int64(layerLength)
req3.TransferEncoding = []string{"none"}
res3, err := client.Do(req3)
if err != nil {
return fmt.Errorf("Failed to upload layer: %s", err)
}
res3.Body.Close()
if res3.StatusCode != 200 {
return fmt.Errorf("Received HTTP code %d while uploading layer", res3.StatusCode)
}
@ -315,12 +324,13 @@ func (graph *Graph) pushTag(remote, revision, tag string, authConfig *auth.AuthC
req.Header.Add("Content-type", "application/json")
req.SetBasicAuth(authConfig.Username, authConfig.Password)
res, err := client.Do(req)
if err != nil || (res.StatusCode != 200 && res.StatusCode != 201) {
if res != nil {
return fmt.Errorf("Internal server error: %d trying to push tag %s on %s", res.StatusCode, tag, remote)
}
if err != nil {
return err
}
res.Body.Close()
if res.StatusCode != 200 && res.StatusCode != 201 {
return fmt.Errorf("Internal server error: %d trying to push tag %s on %s", res.StatusCode, tag, remote)
}
Debugf("Result of push tag: %d\n", res.StatusCode)
switch res.StatusCode {
default:

View file

@ -106,7 +106,7 @@ func (store *TagStore) ImageName(id string) string {
if names, exists := store.ById()[id]; exists && len(names) > 0 {
return names[0]
}
return id
return TruncateId(id)
}
func (store *TagStore) Set(repoName, tag, imageName string, force bool) error {

View file

@ -334,3 +334,15 @@ func (idx *TruncIndex) Get(s string) (string, error) {
}
return string(idx.bytes[before:after]), err
}
// TruncateId returns a shorthand version of a string identifier for convenience.
// A collision with other shorthands is very unlikely, but possible.
// In case of a collision a lookup with TruncIndex.Get() will fail, and the caller
// will need to use a longer prefix, or the full-length Id.
func TruncateId(id string) string {
shortLen := 12
if len(id) < shortLen {
shortLen = len(id)
}
return id[:shortLen]
}
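
A short sketch of how the shorthand round-trips through the TruncIndex that Graph now maintains (the id value is hypothetical; TruncIndex.Get fails if the prefix is ambiguous or unknown):

```
// Sketch only, inside the docker package:
idx := NewTruncIndex()
id := "77af4d6b9913e693e8d0b4b294fa62ade6054e6b2f1ffb617ac955dd63fb0182" // hypothetical
idx.Add(id)

short := TruncateId(id)     // "77af4d6b9913"
full, err := idx.Get(short) // resolves the 12-character prefix back to the full id
if err == nil && full == id {
	// short ids are what the CLI now prints for images and history
}
```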
}