Merge branch 'master' of https://github.com/dotcloud/docker
commit ca49da41ee
316 changed files with 10747 additions and 5151 deletions
38 .mailmap

@@ -6,14 +6,16 @@ Guillaume J. Charmes <guillaume.charmes@docker.com> <charmes.guillaume@gmail.com
<guillaume.charmes@docker.com> <guillaume@dotcloud.com>
<guillaume.charmes@docker.com> <guillaume@docker.com>
<guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.com>
<guillaume.charmes@docker.com> <guillaume@charmes.net>
<kencochrane@gmail.com> <KenCochrane@gmail.com>
<sridharr@activestate.com> <github@srid.name>
Thatcher Peskens <thatcher@dotcloud.com> dhrp <thatcher@dotcloud.com>
Thatcher Peskens <thatcher@dotcloud.com> dhrp <thatcher@gmx.net>
Thatcher Peskens <thatcher@docker.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@dotcloud.com>
Thatcher Peskens <thatcher@docker.com> dhrp <thatcher@gmx.net>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com> jpetazzo <jerome.petazzoni@dotcloud.com>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com> <jp@enix.org>
Joffrey F <joffrey@dotcloud.com>
<joffrey@dotcloud.com> <f.joffrey@gmail.com>
Joffrey F <joffrey@docker.com>
Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Joffrey F <joffrey@docker.com> <f.joffrey@gmail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Andy Smith <github@anarkystic.com>
<kalessin@kalessin.fr> <louis@dotcloud.com>

@@ -23,7 +25,6 @@ Andy Smith <github@anarkystic.com>
<victor.vieux@docker.com> <victor@docker.com>
<victor.vieux@docker.com> <vieux@docker.com>
<dominik@honnef.co> <dominikh@fork-bomb.org>
Thatcher Peskens <thatcher@dotcloud.com>
<ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Walter Stanish <walter@pratyeka.org>
<daniel@gasienica.ch> <dgasienica@zynga.com>

@@ -54,7 +55,26 @@ Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
<gurjeet@singh.im> <singh.gurjeet@gmail.com>
<shawn@churchofgit.com> <shawnlandden@gmail.com>
<sjoerd-github@linuxonly.nl> <sjoerd@byte.nl>
<solomon@dotcloud.com> <solomon.hykes@dotcloud.com>
<SvenDowideit@home.org.au> <SvenDowideit@fosiki.com>
Sven Dowideit <SvenDowideit@home.org.au> ¨Sven <¨SvenDowideit@home.org.au¨>
<solomon@docker.com> <solomon.hykes@dotcloud.com>
<solomon@docker.com> <solomon@dotcloud.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@fosiki.com>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@docker.com>
Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨>
unclejack <unclejacksons@gmail.com> <unclejack@users.noreply.github.com>
<alexl@redhat.com> <alexander.larsson@gmail.com>
Alexandr Morozov <lk4d4math@gmail.com>
<git.nivoc@neverbox.com> <kuehnle@online.de>
O.S. Tezer <ostezer@gmail.com>
<ostezer@gmail.com> <ostezer@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
<justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com>
<taim@bosboot.org> <maztaim@users.noreply.github.com>
<viktor.vojnovski@amadeus.com> <vojnovski@gmail.com>
<vbatts@redhat.com> <vbatts@hashbangbash.com>
<altsysrq@gmail.com> <iamironbob@gmail.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Sridhar Ratnakumar <sridharr@activestate.com> <github@srid.name>
Liang-Chi Hsieh <viirya@gmail.com>
Aleksa Sarai <cyphar@cyphar.com>
Will Weaver <monkey@buildingbananas.com>
133 AUTHORS

@@ -1,44 +1,62 @@
# This file lists all individuals having contributed content to the repository.
# If you're submitting a patch, please add your name here in alphabetical order as part of the patch.
#
# For a list of active project maintainers, see the MAINTAINERS file.
#
# For how it is generated, see `.mailmap`.

Aanand Prasad <aanand.prasad@gmail.com>
Aaron Feng <aaron.feng@gmail.com>
Aaron Huslage <huslage@gmail.com>
Abel Muiño <amuino@gmail.com>
Adam Miller <admiller@redhat.com>
Adam Singer <financeCoding@gmail.com>
Aditya <aditya@netroy.in>
Adrian Mouat <adrian.mouat@gmail.com>
alambike <alambike@gmail.com>
Aleksa Sarai <cyphar@cyphar.com>
Alexander Larsson <alexl@redhat.com>
Alexandr Morozov <lk4d4math@gmail.com>
Alexey Kotlyarov <alexey@infoxchange.net.au>
Alexey Shamrin <shamrin@gmail.com>
Alex Gaynor <alex.gaynor@gmail.com>
Alexis THOMAS <fr.alexisthomas@gmail.com>
almoehi <almoehi@users.noreply.github.com>
Al Tobey <al@ooyala.com>
amangoel <amangoel@gmail.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andreas Savvides <andreas@editd.com>
Andreas Tiefenthaler <at@an-ti.eu>
Andrea Turli <andrea.turli@gmail.com>
Andrew Duckworth <grillopress@gmail.com>
Andrew Macgregor <andrew.macgregor@agworld.com.au>
Andrew Munsell <andrew@wizardapps.net>
Andrews Medina <andrewsmedina@gmail.com>
Andrew Williams <williams.andrew@gmail.com>
Andy Chambers <anchambers@paypal.com>
andy diller <dillera@gmail.com>
Andy Goldstein <agoldste@redhat.com>
Andy Kipp <andy@rstudio.com>
Andy Rothfusz <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Anthony Bishopric <git@anthonybishopric.com>
Anton Nikitin <anton.k.nikitin@gmail.com>
Antony Messerli <amesserl@rackspace.com>
apocas <petermdias@gmail.com>
Arnaud Porterie <icecrime@gmail.com>
Asbjørn Enge <asbjorn@hanafjedle.net>
Barnaby Gray <barnaby@pickle.me.uk>
Barry Allard <barry.allard@gmail.com>
Bartłomiej Piotrowski <b@bpiotrowski.pl>
Benjamin Atkin <ben@benatkin.com>
Benoit Chesneau <bchesneau@gmail.com>
Ben Sargent <ben@brokendigits.com>
Ben Toews <mastahyeti@gmail.com>
Ben Wiklund <ben@daisyowl.com>
Bernerd Schaefer <bj.schaefer@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
bin liu <liubin0329@users.noreply.github.com>
Bouke Haarsma <bouke@webatoom.nl>
Brandon Liu <bdon@bdon.org>
Brandon Philips <brandon@ifup.org>
Brian Dorsey <brian@dorseys.org>
Brian Flad <bflad417@gmail.com>
Brian Goff <cpuguy83@gmail.com>
Brian McCallister <brianm@skife.org>
Brian Olsen <brian@maven-group.org>

@@ -46,11 +64,15 @@ Brian Shumate <brian@couchbase.com>
Briehan Lombaard <briehan.lombaard@gmail.com>
Bruno Bigras <bigras.bruno@gmail.com>
Bryan Matsuo <bryan.matsuo@gmail.com>
Bryan Murphy <bmurphy1976@gmail.com>
Caleb Spare <cespare@gmail.com>
Calen Pennington <cale@edx.org>
Cameron Boehmer <cameron.boehmer@gmail.com>
Carl X. Su <bcbcarl@gmail.com>
Charles Hooper <charles.hooper@dotcloud.com>
Charles Lindsay <chaz@chazomatic.us>
Charles Merriam <charles.merriam@gmail.com>
Charlie Lewis <charliel@lab41.org>
Chia-liang Kao <clkao@clkao.org>
Chris St. Pierre <chris.a.st.pierre@gmail.com>
Christopher Currie <codemonkey+github@gmail.com>

@@ -61,6 +83,7 @@ Colin Dunklau <colin.dunklau@gmail.com>
Colin Rice <colin@daedrum.net>
Cory Forsyth <cory.forsyth@gmail.com>
cressie176 <github@stephen-cresswell.net>
Dafydd Crosby <dtcrsby@gmail.com>
Dan Buch <d.buch@modcloth.com>
Dan Hirsch <thequux@upstandinghackers.com>
Daniel Exner <dex@dragonslave.de>

@@ -72,30 +95,45 @@ Daniel Nordberg <dnordberg@gmail.com>
Daniel Robinson <gottagetmac@gmail.com>
Daniel Von Fange <daniel@leancoder.com>
Daniel YC Lin <dlin.tw@gmail.com>
Dan Keder <dan.keder@gmail.com>
Dan McPherson <dmcphers@redhat.com>
Danny Berger <dpb587@gmail.com>
Danny Yates <danny@codeaholics.org>
Dan Stine <sw@stinemail.com>
Dan Walsh <dwalsh@redhat.com>
Dan Williams <me@deedubs.com>
Darren Coxall <darren@darrencoxall.com>
Darren Shepherd <darren.s.shepherd@gmail.com>
David Anderson <dave@natulte.net>
David Calavera <david.calavera@gmail.com>
David Gageot <david@gageot.net>
David Mcanulty <github@hellspark.com>
David Röthlisberger <david@rothlis.net>
David Sissitka <me@dsissitka.com>
Deni Bertovic <deni@kset.org>
Dinesh Subhraveti <dineshs@altiscale.com>
Djibril Koné <kone.djibril@gmail.com>
dkumor <daniel@dkumor.com>
Dmitry Demeshchuk <demeshchuk@gmail.com>
Dolph Mathews <dolph.mathews@gmail.com>
Dominik Honnef <dominik@honnef.co>
Don Spaulding <donspauldingii@gmail.com>
Dražen Lučanin <kermit666@gmail.com>
Dr Nic Williams <drnicwilliams@gmail.com>
Dustin Sallings <dustin@spy.net>
Edmund Wagner <edmund-wagner@web.de>
Eiichi Tsukata <devel@etsukata.com>
Eivind Uggedal <eivind@uggedal.com>
Elias Probst <mail@eliasprobst.eu>
Emil Hernvall <emil@quench.at>
Emily Rose <emily@contactvibe.com>
Eric Hanchrow <ehanchrow@ine.com>
Eric Lee <thenorthsecedes@gmail.com>
Eric Myhre <hash@exultant.us>
Erik Hollensbe <erik+github@hollensbe.org>
Erno Hopearuoho <erno.hopearuoho@gmail.com>
eugenkrizo <eugen.krizo@gmail.com>
Evan Hazlett <ejhazlett@gmail.com>
Evan Krall <krall@yelp.com>
Evan Phoenix <evan@fallingsnow.net>
Evan Wies <evan@neomantra.net>

@@ -106,6 +144,7 @@ Fabio Rehm <fgrehm@gmail.com>
Fabrizio Regini <freegenie@gmail.com>
Faiz Khan <faizkhan00@gmail.com>
Fareed Dudhia <fareeddudhia@googlemail.com>
Felix Rabe <felix@rabe.io>
Fernando <fermayo@gmail.com>
Flavio Castelli <fcastelli@suse.com>
Francisco Souza <f@souza.cc>

@@ -117,8 +156,11 @@ Gabe Rosenhouse <gabe@missionst.com>
Gabriel Monroy <gabriel@opdemand.com>
Galen Sampson <galen.sampson@gmail.com>
Gareth Rushgrove <gareth@morethanseven.net>
Geoffrey Bachelet <grosfrais@gmail.com>
Gereon Frey <gereon.frey@dynport.de>
German DZ <germ@ndz.com.ar>
Gert van Valkenhoef <g.h.m.van.valkenhoef@rug.nl>
Goffert van Gool <goffert@phusion.nl>
Graydon Hoare <graydon@pobox.com>
Greg Thornton <xdissent@me.com>
grunny <mwgrunny@gmail.com>

@@ -127,28 +169,40 @@ Gurjeet Singh <gurjeet@singh.im>
Guruprasad <lgp171188@gmail.com>
Harley Laue <losinggeneration@gmail.com>
Hector Castro <hectcastro@gmail.com>
Hobofan <goisser94@gmail.com>
Hunter Blanks <hunter@twilio.com>
Ian Truslove <ian.truslove@gmail.com>
ILYA Khlopotov <ilya.khlopotov@gmail.com>
inglesp <peter.inglesby@gmail.com>
Isaac Dupree <antispam@idupree.com>
Isabel Jimenez <contact.isabeljimenez@gmail.com>
Isao Jonas <isao.jonas@gmail.com>
Jack Danger Canty <jackdanger@squareup.com>
jakedt <jake@devtable.com>
Jake Moshenko <jake@devtable.com>
James Allen <jamesallen0108@gmail.com>
James Carr <james.r.carr@gmail.com>
James DeFelice <james.defelice@ishisystems.com>
James Harrison Fisher <jameshfisher@gmail.com>
James Mills <prologic@shortcircuit.net.au>
James Turnbull <james@lovedthanlost.net>
jaseg <jaseg@jaseg.net>
Jason McVetta <jason.mcvetta@gmail.com>
Jason Plum <jplum@devonit.com>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
Jeff Lindsay <progrium@gmail.com>
Jeremy Grosser <jeremy@synack.me>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com>
Jesse Dubay <jesse@thefortytwo.net>
Jilles Oldenbeuving <ojilles@gmail.com>
Jim Alateras <jima@comware.com.au>
Jimmy Cuadra <jimmy@jimmycuadra.com>
Joe Beda <joe.github@bedafamily.com>
Joel Handwell <joelhandwell@gmail.com>
Joe Shaw <joe@joeshaw.org>
Joe Van Dyk <joe@tanga.com>
Joffrey F <joffrey@dotcloud.com>
Joffrey F <joffrey@docker.com>
Johan Euphrosine <proppy@google.com>
Johannes 'fish' Ziemke <github@freigeist.org>
Johan Rydberg <johan.rydberg@gmail.com>

@@ -157,7 +211,9 @@ John Feminella <jxf@jxf.me>
John Gardiner Myers <jgmyers@proofpoint.com>
John Warwick <jwarwick@gmail.com>
Jonas Pfenniger <jonas@pfenniger.name>
Jonathan McCrohan <jmccrohan@gmail.com>
Jonathan Mueller <j.mueller@apoveda.ch>
Jonathan Pares <jonathanpa@users.noreply.github.com>
Jonathan Rudenberg <jonathan@titanous.com>
Jon Wedaman <jweede@gmail.com>
Joost Cassee <joost@cassee.net>

@@ -172,13 +228,17 @@ Julien Barbier <write0@gmail.com>
Julien Dubois <julien.dubois@gmail.com>
Justin Force <justin.force@gmail.com>
Justin Plock <jplock@users.noreply.github.com>
Justin Simonelis <justin.p.simonelis@gmail.com>
Karan Lyons <karan@karanlyons.com>
Karl Grzeszczak <karlgrz@gmail.com>
Kato Kazuyoshi <kato.kazuyoshi@gmail.com>
Kawsar Saiyeed <kawsar.saiyeed@projiris.com>
Keli Hu <dev@keli.hu>
Ken Cochrane <kencochrane@gmail.com>
Ken ICHIKAWA <ichikawa.ken@jp.fujitsu.com>
Kevin Clark <kevin.clark@gmail.com>
Kevin J. Lynagh <kevin@keminglabs.com>
Kevin Menard <kevin@nirvdrum.com>
Kevin Wallace <kevin@pentabarf.net>
Keyvan Fatehi <keyvanfatehi@gmail.com>
kim0 <email.ahmedkamal@googlemail.com>

@@ -187,14 +247,20 @@ Kimbro Staken <kstaken@kstaken.com>
Kiran Gangadharan <kiran.daredevil@gmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
Kyle Conroy <kyle.j.conroy@gmail.com>
lalyos <lalyos@yahoo.com>
Lance Chen <cyen0312@gmail.com>
Lars R. Damerow <lars@pixar.com>
Laurie Voss <github@seldo.com>
Lewis Peckover <lew+github@lew.io>
Liang-Chi Hsieh <viirya@gmail.com>
Lokesh Mandvekar <lsm5@redhat.com>
Louis Opter <kalessin@kalessin.fr>
lukaspustina <lukas.pustina@centerdevice.com>
lukemarsden <luke@digital-crocus.com>
Mahesh Tiyyagura <tmahesh@gmail.com>
Manuel Meurer <manuel@krautcomputing.com>
Manuel Woelker <github@manuel.woelker.org>
Marc Abramowitz <marc@marc-abramowitz.com>
Marc Kuo <kuomarc2@gmail.com>
Marco Hennings <marco.hennings@freiheit.com>
Marcus Farkas <toothlessgear@finitebox.com>

@@ -206,23 +272,32 @@ Marko Mikulicic <mmikulicic@gmail.com>
Markus Fix <lispmeister@gmail.com>
Martijn van Oosterhout <kleptog@svana.org>
Martin Redmond <martin@tinychat.com>
Mason Malone <mason.malone@gmail.com>
Mateusz Sulima <sulima.mateusz@gmail.com>
Mathieu Le Marec - Pasquet <kiorky@cryptelium.net>
Matt Apperson <me@mattapperson.com>
Matt Bachmann <bachmann.matt@gmail.com>
Matt Haggard <haggardii@gmail.com>
Matthew Mueller <mattmuelle@gmail.com>
Matthias Klumpp <matthias@tenstral.net>
Matthias Kühnle <git.nivoc@neverbox.com>
mattymo <raytrac3r@gmail.com>
Maxime Petazzoni <max@signalfuse.com>
Maxim Treskin <zerthurd@gmail.com>
Max Shytikov <mshytikov@gmail.com>
meejah <meejah@meejah.ca>
Michael Brown <michael@netdirect.ca>
Michael Crosby <michael@crosbymichael.com>
Michael Gorsuch <gorsuch@github.com>
Michael Neale <michael.neale@gmail.com>
Michael Stapelberg <michael+gh@stapelberg.de>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Mike Gaffney <mike@uberu.com>
Mike MacCana <mike.maccana@gmail.com>
Mike Naberezny <mike@naberezny.com>
Mikhail Sobolev <mss@mawhrin.net>
Mohit Soni <mosoni@ebay.com>
Morgante Pell <morgante.pell@morgante.net>
Morten Siebuhr <sbhr@sbhr.dk>
Nan Monnand Deng <monnand@gmail.com>
Nate Jones <nate@endot.org>

@@ -234,22 +309,26 @@ Nick Stenning <nick.stenning@digital.cabinet-office.gov.uk>
Nick Stinemates <nick@stinemates.org>
Nicolas Dudebout <nicolas.dudebout@gatech.edu>
Nicolas Kaiser <nikai@nikai.net>
noducks <onemannoducks@gmail.com>
Nolan Darilek <nolan@thewordnerd.info>
odk- <github@odkurzacz.org>
Oguz Bilgic <fisyonet@gmail.com>
Ole Reifschneider <mail@ole-reifschneider.de>
O.S.Tezer <ostezer@gmail.com>
O.S. Tezer <ostezer@gmail.com>
pandrew <letters@paulnotcom.se>
Pascal Borreli <pascal@borreli.com>
pattichen <craftsbear@gmail.com>
Paul Annesley <paul@annesley.cc>
Paul Bowsher <pbowsher@globalpersonals.co.uk>
Paul Hammond <paul@paulhammond.org>
Paul Jimenez <pj@place.org>
Paul Lietar <paul@lietar.net>
Paul Morie <pmorie@gmail.com>
Paul Nasrat <pnasrat@gmail.com>
Paul <paul9869@gmail.com>
Peter Braden <peterbraden@peterbraden.co.uk>
Peter Waller <peter@scraperwiki.com>
Phillip Alexander <git@phillipalexander.io>
Phil Spitler <pspitler@gmail.com>
Piergiuliano Bossi <pgbossi@gmail.com>
Pierre-Alain RIVIERE <pariviere@ippon.fr>

@@ -257,6 +336,8 @@ Piotr Bogdan <ppbogdan@gmail.com>
pysqz <randomq@126.com>
Quentin Brossard <qbrossard@gmail.com>
Rafal Jeczalik <rjeczalik@gmail.com>
Rajat Pandit <rp@rajatpandit.com>
Ralph Bean <rbean@redhat.com>
Ramkumar Ramachandra <artagnon@gmail.com>
Ramon van Alteren <ramon@vanalteren.nl>
Renato Riccieri Santos Zannon <renato.riccieri@gmail.com>

@@ -266,54 +347,71 @@ Richo Healey <richo@psych0tik.net>
Rick Bradley <rick@users.noreply.github.com>
Robert Obryk <robryk@gmail.com>
Roberto G. Hashioka <roberto.hashioka@docker.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>
robpc <rpcann@gmail.com>
Rodrigo Vaz <rodrigo.vaz@gmail.com>
Roel Van Nyen <roel.vannyen@gmail.com>
Roger Peppe <rogpeppe@gmail.com>
Rohit Jnagal <jnagal@google.com>
Roland Moriz <rmoriz@users.noreply.github.com>
Rovanion Luckey <rovanion.luckey@gmail.com>
Ryan Aslett <github@mixologic.com>
Ryan Fowler <rwfowler@gmail.com>
Ryan O'Donnell <odonnellryanc@gmail.com>
Ryan Seto <ryanseto@yak.net>
Ryan Thomas <rthomas@atlassian.com>
Sam Alba <sam.alba@gmail.com>
Sam J Sharpe <sam.sharpe@digital.cabinet-office.gov.uk>
Sam Rijs <srijs@airpost.net>
Samuel Andaya <samuel@andaya.net>
Scott Bessler <scottbessler@gmail.com>
Scott Collier <emailscottcollier@gmail.com>
Sean Cronin <seancron@gmail.com>
Sean P. Kane <skane@newrelic.com>
Sébastien Stormacq <sebsto@users.noreply.github.com>
Shawn Landden <shawn@churchofgit.com>
Shawn Siefkas <shawn.siefkas@meredith.com>
Shih-Yuan Lee <fourdollars@gmail.com>
shin- <joffrey@docker.com>
Silas Sewell <silas@sewell.org>
Simon Taranto <simon.taranto@gmail.com>
Sindhu S <sindhus@live.in>
Sjoerd Langkemper <sjoerd-github@linuxonly.nl>
Solomon Hykes <solomon@dotcloud.com>
Solomon Hykes <solomon@docker.com>
Song Gao <song@gao.io>
Soulou <leo@unbekandt.eu>
Sridatta Thatipamala <sthatipamala@gmail.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Steeve Morin <steeve.morin@gmail.com>
Stefan Praszalowicz <stefan@greplin.com>
Steven Burgess <steven.a.burgess@hotmail.com>
sudosurootdev <sudosurootdev@gmail.com>
Sven Dowideit <svendowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au>
Sylvain Bellemare <sylvain.bellemare@ezeep.com>
tang0th <tang0th@gmx.com>
Tatsuki Sugiura <sugi@nemui.org>
Tehmasp Chaudhri <tehmasp@gmail.com>
Thatcher Peskens <thatcher@dotcloud.com>
Thatcher Peskens <thatcher@docker.com>
Thermionix <bond711@gmail.com>
Thijs Terlouw <thijsterlouw@gmail.com>
Thomas Bikeev <thomas.bikeev@mac.com>
Thomas Frössman <thomasf@jossystem.se>
Thomas Hansen <thomas.hansen@gmail.com>
Thomas LEVEIL <thomasleveil@gmail.com>
Thomas Schroeter <thomas@cliqz.com>
Tianon Gravi <admwiggin@gmail.com>
Tim Bosse <maztaim@users.noreply.github.com>
Tibor Vass <teabee89@gmail.com>
Tim Bosse <taim@bosboot.org>
Timothy Hobbs <timothyhobbs@seznam.cz>
Tim Ruffles <oi@truffles.me.uk>
Tim Terhorst <mynamewastaken+git@gmail.com>
tjmehta <tj@init.me>
Tobias Bieniek <Tobias.Bieniek@gmx.de>
Tobias Schmidt <ts@soundcloud.com>
Tobias Schwab <tobias.schwab@dynport.de>
Todd Lunter <tlunter@gmail.com>
Tom Fotherby <tom+github@peopleperhour.com>
Tom Hulihan <hulihan.tom159@gmail.com>
Tommaso Visconti <tommaso.visconti@gmail.com>
Tony Daws <tony@daws.ca>
Travis Cline <travis.cline@gmail.com>
Tyler Brock <tyler.brock@gmail.com>
Tzu-Jung Lee <roylee17@gmail.com>

@@ -322,26 +420,35 @@ unclejack <unclejacksons@gmail.com>
vgeta <gopikannan.venugopalsamy@gmail.com>
Victor Coisne <victor.coisne@dotcloud.com>
Victor Lyuboslavsky <victor@victoreda.com>
Victor Marmol <vmarmol@google.com>
Victor Vieux <victor.vieux@docker.com>
Viktor Vojnovski <viktor.vojnovski@amadeus.com>
Vincent Batts <vbatts@redhat.com>
Vincent Bernat <bernat@luffy.cx>
Vincent Mayers <vincent.mayers@inbloom.org>
Vincent Woo <me@vincentwoo.com>
Vinod Kulkarni <vinod.kulkarni@gmail.com>
Vishnu Kannan <vishnuk@google.com>
Vitor Monteiro <vmrmonteiro@gmail.com>
Vivek Agarwal <me@vivek.im>
Vladimir Bulyga <xx@ccxx.cc>
Vladimir Kirillov <proger@wilab.org.ua>
Vladimir Rutsky <iamironbob@gmail.com>
Vladimir Rutsky <altsysrq@gmail.com>
Walter Leibbrandt <github@wrl.co.za>
Walter Stanish <walter@pratyeka.org>
WarheadsSE <max@warheads.net>
Wes Morgan <cap10morgan@gmail.com>
Will Dietz <w@wdtz.org>
William Delanoue <william.delanoue@gmail.com>
William Henry <whenry@redhat.com>
Will Rouesnel <w.rouesnel@gmail.com>
Will Weaver <monkey@buildingbananas.com>
Xiuming Chen <cc@cxm.cc>
Yang Bai <hamo.by@gmail.com>
Yasunori Mahata <nori@mahata.net>
Yurii Rashkovskii <yrashk@gmail.com>
Zain Memon <zain@inzain.net>
Zaiste! <oh@zaiste.net>
Zilin Du <zilin.du@gmail.com>
zimbatm <zimbatm@zimbatm.com>
zqh <zqhxuyuan@gmail.com>
CHANGELOG.md

@@ -1,5 +1,10 @@
# Changelog

## 0.11.1 (2014-05-07)

#### Registry
- Fix push and pull to private registry

## 0.11.0 (2014-05-07)

#### Notable features since 0.10.0
CONTRIBUTING.md

@@ -182,7 +182,7 @@ One way to automate this, is customise your git ``commit.template`` by adding
a ``prepare-commit-msg`` hook to your docker checkout:

```
curl -o .git/hooks/prepare-commit-msg https://raw.github.com/dotcloud/docker/master/contrib/prepare-commit-msg.hook && chmod +x .git/hooks/prepare-commit-msg
curl -o .git/hooks/prepare-commit-msg https://raw.githubusercontent.com/dotcloud/docker/master/contrib/prepare-commit-msg.hook && chmod +x .git/hooks/prepare-commit-msg
```

* Note: the above script expects to find your GitHub user name in ``git config --get github.user``

@@ -192,7 +192,10 @@ curl -o .git/hooks/prepare-commit-msg https://raw.github.com/dotcloud/docker/mas
There are several exceptions to the signing requirement. Currently these are:

* Your patch fixes spelling or grammar errors.
* Your patch is a single line change to documentation.
* Your patch is a single line change to documentation contained in the
  `docs` directory.
* Your patch fixes Markdown formatting or syntax errors in the
  documentation contained in the `docs` directory.

If you have any questions, please refer to the FAQ in the [docs](http://docs.docker.io)
13 Dockerfile

@@ -24,7 +24,7 @@
#

docker-version 0.6.1
FROM ubuntu:13.10
FROM ubuntu:14.04
MAINTAINER Tianon Gravi <admwiggin@gmail.com> (@tianon)

# Packaged dependencies

@@ -41,6 +41,7 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq \
    libapparmor-dev \
    libcap-dev \
    libsqlite3-dev \
    lxc=1.0* \
    mercurial \
    pandoc \
    reprepro \

@@ -49,10 +50,6 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq \
    s3cmd=1.1.0* \
    --no-install-recommends

# Get and compile LXC 0.8 (since it is the most stable)
RUN git clone --no-checkout https://github.com/lxc/lxc.git /usr/local/lxc && cd /usr/local/lxc && git checkout -q lxc-0.8.0
RUN cd /usr/local/lxc && ./autogen.sh && ./configure --disable-docs && make && make install

# Get lvm2 source for compiling statically
RUN git clone --no-checkout https://git.fedorahosted.org/git/lvm2.git /usr/local/lvm2 && cd /usr/local/lvm2 && git checkout -q v2_02_103
# see https://git.fedorahosted.org/cgit/lvm2.git/refs/tags for release tags

@@ -84,7 +81,7 @@ RUN go get code.google.com/p/go.tools/cmd/cover
RUN gem install --no-rdoc --no-ri fpm --version 1.0.2

# Get the "busybox" image source so we can build locally instead of pulling
RUN git clone https://github.com/jpetazzo/docker-busybox.git /docker-busybox
RUN git clone -b buildroot-2014.02 https://github.com/jpetazzo/docker-busybox.git /docker-busybox

# Setup s3cmd config
RUN /bin/echo -e '[default]\naccess_key=$AWS_ACCESS_KEY\nsecret_key=$AWS_SECRET_KEY' > /.s3cfg

@@ -92,6 +89,10 @@ RUN /bin/echo -e '[default]\naccess_key=$AWS_ACCESS_KEY\nsecret_key=$AWS_SECRET_
# Set user.email so crosbymichael's in-container merge commits go smoothly
RUN git config --global user.email 'docker-dummy@example.com'

# Add an unprivileged user to be used for tests which need it
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser

VOLUME /var/lib/docker
WORKDIR /go/src/github.com/dotcloud/docker
ENV DOCKER_BUILDTAGS apparmor selinux
2 Makefile

@@ -35,7 +35,7 @@ docs-release: docs-build
    $(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)" ./release.sh

test: build
    $(DOCKER_RUN_DOCKER) hack/make.sh binary test-unit test-integration test-integration-cli
    $(DOCKER_RUN_DOCKER) hack/make.sh binary cross test-unit test-integration test-integration-cli

test-unit: build
    $(DOCKER_RUN_DOCKER) hack/make.sh test-unit
NOTICE

@@ -190,3 +190,9 @@ It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.

For more information, please see http://www.bis.doc.gov


Licensing
=========
Docker is licensed under the Apache License, Version 2.0. See LICENSE for full license text.
2 VERSION

@@ -1 +1 @@
0.11.0-dev
0.11.1-dev
5 api/README.md (new file)

@@ -0,0 +1,5 @@
This directory contains code pertaining to the Docker API:

- Used by the docker client when communicating with the docker daemon

- Used by third party tools wishing to interface with the docker daemon
api/client/cli.go

@@ -23,6 +23,9 @@ var funcMap = template.FuncMap{
}

func (cli *DockerCli) getMethod(name string) (func(...string) error, bool) {
    if len(name) == 0 {
        return nil, false
    }
    methodName := "Cmd" + strings.ToUpper(name[:1]) + strings.ToLower(name[1:])
    method := reflect.ValueOf(cli).MethodByName(methodName)
    if !method.IsValid() {

@@ -73,7 +76,7 @@ func NewDockerCli(in io.ReadCloser, out, err io.Writer, proto, addr string, tlsC
    }

    if in != nil {
        if file, ok := in.(*os.File); ok {
        if file, ok := out.(*os.File); ok {
            terminalFd = file.Fd()
            isTerminal = term.IsTerminal(terminalFd)
        }
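The getMethod hunk above resolves a CLI subcommand name to a `Cmd<Name>` method through reflection. Below is a minimal, self-contained sketch of the same dispatch pattern; the `CLI` type and `CmdVersion` command are illustrative stand-ins, not the actual DockerCli API.

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type CLI struct{}

// CmdVersion is a hypothetical subcommand; any exported method
// named Cmd<Name> with this signature becomes dispatchable.
func (c *CLI) CmdVersion(args ...string) error {
	fmt.Println("version 0.11.1-dev", args)
	return nil
}

// getMethod mirrors the pattern in the diff: "version" -> "CmdVersion".
func (c *CLI) getMethod(name string) (func(...string) error, bool) {
	if len(name) == 0 {
		return nil, false
	}
	methodName := "Cmd" + strings.ToUpper(name[:1]) + strings.ToLower(name[1:])
	method := reflect.ValueOf(c).MethodByName(methodName)
	if !method.IsValid() {
		return nil, false
	}
	return method.Interface().(func(...string) error), true
}

func main() {
	cli := &CLI{}
	if cmd, ok := cli.getMethod("version"); ok {
		cmd("--help")
	}
}
```

The length guard added in the hunk matters because `name[:1]` would panic on an empty string.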
api/client/commands.go

@@ -13,7 +13,7 @@ import (
    "os"
    "os/exec"
    "path"
    goruntime "runtime"
    "runtime"
    "strconv"
    "strings"
    "syscall"

@@ -28,6 +28,7 @@ import (
    "github.com/dotcloud/docker/nat"
    "github.com/dotcloud/docker/pkg/signal"
    "github.com/dotcloud/docker/pkg/term"
    "github.com/dotcloud/docker/pkg/units"
    "github.com/dotcloud/docker/registry"
    "github.com/dotcloud/docker/runconfig"
    "github.com/dotcloud/docker/utils"

@@ -109,6 +110,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
    suppressOutput := cmd.Bool([]string{"q", "-quiet"}, false, "Suppress the verbose output generated by the containers")
    noCache := cmd.Bool([]string{"#no-cache", "-no-cache"}, false, "Do not use cache when building the image")
    rm := cmd.Bool([]string{"#rm", "-rm"}, true, "Remove intermediate containers after a successful build")
    forceRm := cmd.Bool([]string{"-force-rm"}, false, "Always remove intermediate containers, even after unsuccessful builds")
    if err := cmd.Parse(args); err != nil {
        return nil
    }

@@ -160,6 +162,9 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
        if _, err = os.Stat(filename); os.IsNotExist(err) {
            return fmt.Errorf("no Dockerfile found in %s", cmd.Arg(0))
        }
        if err = utils.ValidateContextDirectory(root); err != nil {
            return fmt.Errorf("Error checking context is accessible: '%s'. Please check permissions and try again.", err)
        }
        context, err = archive.Tar(root, archive.Uncompressed)
    }
    var body io.Reader

@@ -193,6 +198,12 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
    }
    if *rm {
        v.Set("rm", "1")
    } else {
        v.Set("rm", "0")
    }

    if *forceRm {
        v.Set("forcerm", "1")
    }

    cli.LoadConfigFile()

@@ -359,7 +370,7 @@ func (cli *DockerCli) CmdVersion(args ...string) error {
        fmt.Fprintf(cli.out, "Client version: %s\n", dockerversion.VERSION)
    }
    fmt.Fprintf(cli.out, "Client API version: %s\n", api.APIVERSION)
    fmt.Fprintf(cli.out, "Go version (client): %s\n", goruntime.Version())
    fmt.Fprintf(cli.out, "Go version (client): %s\n", runtime.Version())
    if dockerversion.GITCOMMIT != "" {
        fmt.Fprintf(cli.out, "Git commit (client): %s\n", dockerversion.GITCOMMIT)
    }

@@ -384,16 +395,8 @@ func (cli *DockerCli) CmdVersion(args ...string) error {
    if apiVersion := remoteVersion.Get("ApiVersion"); apiVersion != "" {
        fmt.Fprintf(cli.out, "Server API version: %s\n", apiVersion)
    }
    fmt.Fprintf(cli.out, "Git commit (server): %s\n", remoteVersion.Get("GitCommit"))
    fmt.Fprintf(cli.out, "Go version (server): %s\n", remoteVersion.Get("GoVersion"))
    release := utils.GetReleaseVersion()
    if release != "" {
        fmt.Fprintf(cli.out, "Last stable version: %s", release)
        if (dockerversion.VERSION != "" || remoteVersion.Exists("Version")) && (strings.Trim(dockerversion.VERSION, "-dev") != release || strings.Trim(remoteVersion.Get("Version"), "-dev") != release) {
            fmt.Fprintf(cli.out, ", please update docker")
        }
        fmt.Fprintf(cli.out, "\n")
    }
    fmt.Fprintf(cli.out, "Git commit (server): %s\n", remoteVersion.Get("GitCommit"))
    return nil
}

@@ -884,14 +887,14 @@ func (cli *DockerCli) CmdHistory(args ...string) error {
            fmt.Fprintf(w, "%s\t", utils.TruncateID(outID))
        }

        fmt.Fprintf(w, "%s ago\t", utils.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))))
        fmt.Fprintf(w, "%s ago\t", units.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))))

        if *noTrunc {
            fmt.Fprintf(w, "%s\t", out.Get("CreatedBy"))
        } else {
            fmt.Fprintf(w, "%s\t", utils.Trunc(out.Get("CreatedBy"), 45))
        }
        fmt.Fprintf(w, "%s\n", utils.HumanSize(out.GetInt64("Size")))
        fmt.Fprintf(w, "%s\n", units.HumanSize(out.GetInt64("Size")))
    } else {
        if *noTrunc {
            fmt.Fprintln(w, outID)

@@ -1249,7 +1252,7 @@ func (cli *DockerCli) CmdImages(args ...string) error {
            }

            if !*quiet {
                fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s\n", repo, tag, outID, utils.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), utils.HumanSize(out.GetInt64("VirtualSize")))
                fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s\n", repo, tag, outID, units.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), units.HumanSize(out.GetInt64("VirtualSize")))
            } else {
                fmt.Fprintln(w, outID)
            }

@@ -1323,7 +1326,7 @@ func (cli *DockerCli) printTreeNode(noTrunc bool, image *engine.Env, prefix stri
        imageID = utils.TruncateID(image.Get("Id"))
    }

    fmt.Fprintf(cli.out, "%s%s Virtual Size: %s", prefix, imageID, utils.HumanSize(image.GetInt64("VirtualSize")))
    fmt.Fprintf(cli.out, "%s%s Virtual Size: %s", prefix, imageID, units.HumanSize(image.GetInt64("VirtualSize")))
    if image.GetList("RepoTags")[0] != "<none>:<none>" {
        fmt.Fprintf(cli.out, " Tags: %s\n", strings.Join(image.GetList("RepoTags"), ", "))
    } else {

@@ -1408,12 +1411,12 @@ func (cli *DockerCli) CmdPs(args ...string) error {
            outCommand = utils.Trunc(outCommand, 20)
        }
        ports.ReadListFrom([]byte(out.Get("Ports")))
        fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s\t%s\t%s\t", outID, out.Get("Image"), outCommand, utils.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), out.Get("Status"), api.DisplayablePorts(ports), strings.Join(outNames, ","))
        fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s\t%s\t%s\t", outID, out.Get("Image"), outCommand, units.HumanDuration(time.Now().UTC().Sub(time.Unix(out.GetInt64("Created"), 0))), out.Get("Status"), api.DisplayablePorts(ports), strings.Join(outNames, ","))
        if *size {
            if out.GetInt("SizeRootFs") > 0 {
                fmt.Fprintf(w, "%s (virtual %s)\n", utils.HumanSize(out.GetInt64("SizeRw")), utils.HumanSize(out.GetInt64("SizeRootFs")))
                fmt.Fprintf(w, "%s (virtual %s)\n", units.HumanSize(out.GetInt64("SizeRw")), units.HumanSize(out.GetInt64("SizeRootFs")))
            } else {
                fmt.Fprintf(w, "%s\n", utils.HumanSize(out.GetInt64("SizeRw")))
                fmt.Fprintf(w, "%s\n", units.HumanSize(out.GetInt64("SizeRw")))
            }
        } else {
            fmt.Fprint(w, "\n")

@@ -1839,6 +1842,10 @@ func (cli *DockerCli) CmdRun(args ...string) error {

        v := url.Values{}
        repos, tag := utils.ParseRepositoryTag(config.Image)
        // pull only the image tagged 'latest' if no tag was specified
        if tag == "" {
            tag = "latest"
        }
        v.Set("fromImage", repos)
        v.Set("tag", tag)

@@ -2058,7 +2065,7 @@ func (cli *DockerCli) CmdCp(args ...string) error {
    }

    if statusCode == 200 {
        if err := archive.Untar(stream, copyData.Get("HostPath"), nil); err != nil {
        if err := archive.Untar(stream, copyData.Get("HostPath"), &archive.TarOptions{NoLchown: true}); err != nil {
            return err
        }
    }
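Several CmdBuild hunks above forward the `--rm` and `--force-rm` flags to the daemon as query-string values. A small sketch of that flag-to-query mapping using only the standard library (the `/build` endpoint string is illustrative):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	rm, forceRm := true, false // values parsed from --rm / --force-rm

	v := url.Values{}
	if rm {
		v.Set("rm", "1")
	} else {
		v.Set("rm", "0")
	}
	// forcerm is only sent when explicitly requested, matching the diff.
	if forceRm {
		v.Set("forcerm", "1")
	}
	fmt.Println("/build?" + v.Encode()) // /build?rm=1
}
```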
api/common.go

@@ -2,15 +2,16 @@ package api

import (
    "fmt"
    "mime"
    "strings"

    "github.com/dotcloud/docker/engine"
    "github.com/dotcloud/docker/pkg/version"
    "github.com/dotcloud/docker/utils"
    "mime"
    "strings"
)

const (
    APIVERSION version.Version = "1.11"
    APIVERSION version.Version = "1.12"
    DEFAULTHTTPHOST = "127.0.0.1"
    DEFAULTUNIXSOCKET = "/var/run/docker.sock"
)

@@ -30,7 +31,7 @@ func DisplayablePorts(ports *engine.Table) string {
    ports.Sort()
    for _, port := range ports.Data {
        if port.Get("IP") == "" {
            result = append(result, fmt.Sprintf("%d/%s", port.GetInt("PublicPort"), port.Get("Type")))
            result = append(result, fmt.Sprintf("%d/%s", port.GetInt("PrivatePort"), port.Get("Type")))
        } else {
            result = append(result, fmt.Sprintf("%s:%d->%d/%s", port.Get("IP"), port.GetInt("PublicPort"), port.GetInt("PrivatePort"), port.Get("Type")))
        }
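APIVERSION moves from 1.11 to 1.12 here, and later hunks gate new server defaults on `version.GreaterThanOrEqualTo("1.12")`. A rough sketch of that kind of API-version gate; the comparison helper below is a simplified stand-in, not the actual `pkg/version` implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// atLeast reports whether dotted version a >= b by comparing numeric
// components; a simplified stand-in for pkg/version's comparisons.
func atLeast(a, b string) bool {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) && i < len(bs); i++ {
		x, _ := strconv.Atoi(as[i])
		y, _ := strconv.Atoi(bs[i])
		if x != y {
			return x > y
		}
	}
	return len(as) >= len(bs)
}

func main() {
	clientVersion := "1.12"
	// From API 1.12 on, builds default to removing intermediate containers.
	rmDefault := "0"
	if atLeast(clientVersion, "1.12") {
		rmDefault = "1"
	}
	fmt.Println("rm default:", rmDefault)
}
```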
api/server/server.go

@@ -122,17 +122,17 @@ func postAuth(eng *engine.Engine, version version.Version, w http.ResponseWriter
    var (
        authConfig, err = ioutil.ReadAll(r.Body)
        job = eng.Job("auth")
        status string
        stdoutBuffer = bytes.NewBuffer(nil)
    )
    if err != nil {
        return err
    }
    job.Setenv("authConfig", string(authConfig))
    job.Stdout.AddString(&status)
    job.Stdout.Add(stdoutBuffer)
    if err = job.Run(); err != nil {
        return err
    }
    if status != "" {
    if status := engine.Tail(stdoutBuffer, 1); status != "" {
        var env engine.Env
        env.Set("Status", status)
        return writeJSON(w, http.StatusOK, env)

@@ -244,7 +244,7 @@ func getEvents(eng *engine.Engine, version version.Version, w http.ResponseWrite
        return err
    }

    var job = eng.Job("events", r.RemoteAddr)
    var job = eng.Job("events")
    streamJSON(job, w, true)
    job.Setenv("since", r.Form.Get("since"))
    job.Setenv("until", r.Form.Get("until"))

@@ -338,7 +338,7 @@ func getContainersLogs(eng *engine.Engine, version version.Version, w http.Respo
    }

    var (
        job = eng.Job("inspect", vars["name"], "container")
        job = eng.Job("container_inspect", vars["name"])
        c, err = job.Stdout.AddEnv()
    )
    if err != nil {

@@ -393,9 +393,10 @@ func postCommit(eng *engine.Engine, version version.Version, w http.ResponseWrit
        return err
    }
    var (
        config engine.Env
        env engine.Env
        job = eng.Job("commit", r.Form.Get("container"))
        config       engine.Env
        env          engine.Env
        job          = eng.Job("commit", r.Form.Get("container"))
        stdoutBuffer = bytes.NewBuffer(nil)
    )
    if err := config.Decode(r.Body); err != nil {
        utils.Errorf("%s", err)

@@ -407,12 +408,11 @@ func postCommit(eng *engine.Engine, version version.Version, w http.ResponseWrit
    job.Setenv("comment", r.Form.Get("comment"))
    job.SetenvSubEnv("config", &config)

    var id string
    job.Stdout.AddString(&id)
    job.Stdout.Add(stdoutBuffer)
    if err := job.Run(); err != nil {
        return err
    }
    env.Set("Id", id)
    env.Set("Id", engine.Tail(stdoutBuffer, 1))
    return writeJSON(w, http.StatusCreated, env)
}

@@ -603,17 +603,17 @@ func postContainersCreate(eng *engine.Engine, version version.Version, w http.Re
        return nil
    }
    var (
        out engine.Env
        job = eng.Job("create", r.Form.Get("name"))
        outWarnings []string
        outId string
        warnings = bytes.NewBuffer(nil)
        out          engine.Env
        job          = eng.Job("create", r.Form.Get("name"))
        outWarnings  []string
        stdoutBuffer = bytes.NewBuffer(nil)
        warnings     = bytes.NewBuffer(nil)
    )
    if err := job.DecodeEnv(r.Body); err != nil {
        return err
    }
    // Read container ID from the first line of stdout
    job.Stdout.AddString(&outId)
    job.Stdout.Add(stdoutBuffer)
    // Read warnings from stderr
    job.Stderr.Add(warnings)
    if err := job.Run(); err != nil {

@@ -624,7 +624,7 @@ func postContainersCreate(eng *engine.Engine, version version.Version, w http.Re
    for scanner.Scan() {
        outWarnings = append(outWarnings, scanner.Text())
    }
    out.Set("Id", outId)
    out.Set("Id", engine.Tail(stdoutBuffer, 1))
    out.SetList("Warnings", outWarnings)
    return writeJSON(w, http.StatusCreated, out)
}

@@ -720,20 +720,16 @@ func postContainersWait(eng *engine.Engine, version version.Version, w http.Resp
        return fmt.Errorf("Missing parameter")
    }
    var (
        env engine.Env
        status string
        job = eng.Job("wait", vars["name"])
        env          engine.Env
        stdoutBuffer = bytes.NewBuffer(nil)
        job          = eng.Job("wait", vars["name"])
    )
    job.Stdout.AddString(&status)
    job.Stdout.Add(stdoutBuffer)
    if err := job.Run(); err != nil {
        return err
    }
    // Parse a 16-bit encoded integer to map typical unix exit status.
    _, err := strconv.ParseInt(status, 10, 16)
    if err != nil {
        return err
    }
    env.Set("StatusCode", status)

    env.Set("StatusCode", engine.Tail(stdoutBuffer, 1))
    return writeJSON(w, http.StatusOK, env)
}

@@ -759,7 +755,7 @@ func postContainersAttach(eng *engine.Engine, version version.Version, w http.Re
    }

    var (
        job = eng.Job("inspect", vars["name"], "container")
        job = eng.Job("container_inspect", vars["name"])
        c, err = job.Stdout.AddEnv()
    )
    if err != nil {

@@ -823,7 +819,7 @@ func wsContainersAttach(eng *engine.Engine, version version.Version, w http.Resp
        return fmt.Errorf("Missing parameter")
    }

    if err := eng.Job("inspect", vars["name"], "container").Run(); err != nil {
    if err := eng.Job("container_inspect", vars["name"]).Run(); err != nil {
        return err
    }

@@ -851,9 +847,8 @@ func getContainersByName(eng *engine.Engine, version version.Version, w http.Res
    if vars == nil {
        return fmt.Errorf("Missing parameter")
    }
    var job = eng.Job("inspect", vars["name"], "container")
    var job = eng.Job("container_inspect", vars["name"])
    streamJSON(job, w, false)
    job.SetenvBool("conflict", true) //conflict=true to detect conflict between containers and images in the job
    return job.Run()
}

@@ -861,9 +856,8 @@ func getImagesByName(eng *engine.Engine, version version.Version, w http.Respons
    if vars == nil {
        return fmt.Errorf("Missing parameter")
    }
    var job = eng.Job("inspect", vars["name"], "image")
    var job = eng.Job("image_inspect", vars["name"])
    streamJSON(job, w, false)
    job.SetenvBool("conflict", true) //conflict=true to detect conflict between containers and images in the job
    return job.Run()
}

@@ -872,6 +866,8 @@ func postBuild(eng *engine.Engine, version version.Version, w http.ResponseWrite
        return fmt.Errorf("Multipart upload for build is no longer supported. Please upgrade your docker client.")
    }
    var (
        authEncoded = r.Header.Get("X-Registry-Auth")
        authConfig = &registry.AuthConfig{}
        configFileEncoded = r.Header.Get("X-Registry-Config")
        configFile = &registry.ConfigFile{}
        job = eng.Job("build")

@@ -881,18 +877,12 @@ func postBuild(eng *engine.Engine, version version.Version, w http.ResponseWrite
    // Both headers will be parsed and sent along to the daemon, but if a non-empty
    // ConfigFile is present, any value provided as an AuthConfig directly will
    // be overridden. See BuildFile::CmdFrom for details.
    var (
        authEncoded = r.Header.Get("X-Registry-Auth")
        authConfig = &registry.AuthConfig{}
    )
    if version.LessThan("1.9") && authEncoded != "" {
        authJson := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
        if err := json.NewDecoder(authJson).Decode(authConfig); err != nil {
            // for a pull it is not an error if no auth was given
            // to increase compatibility with the existing api it is defaulting to be empty
            authConfig = &registry.AuthConfig{}
        } else {
            configFile.Configs[authConfig.ServerAddress] = *authConfig
        }
    }

@@ -911,13 +901,22 @@ func postBuild(eng *engine.Engine, version version.Version, w http.ResponseWrite
    } else {
        job.Stdout.Add(utils.NewWriteFlusher(w))
    }

    if r.FormValue("forcerm") == "1" && version.GreaterThanOrEqualTo("1.12") {
        job.Setenv("rm", "1")
    } else if r.FormValue("rm") == "" && version.GreaterThanOrEqualTo("1.12") {
        job.Setenv("rm", "1")
    } else {
        job.Setenv("rm", r.FormValue("rm"))
    }
    job.Stdin.Add(r.Body)
    job.Setenv("remote", r.FormValue("remote"))
    job.Setenv("t", r.FormValue("t"))
    job.Setenv("q", r.FormValue("q"))
    job.Setenv("nocache", r.FormValue("nocache"))
    job.Setenv("rm", r.FormValue("rm"))
    job.SetenvJson("auth", configFile)
    job.Setenv("forcerm", r.FormValue("forcerm"))
    job.SetenvJson("authConfig", authConfig)
    job.SetenvJson("configFile", configFile)

    if err := job.Run(); err != nil {
        if !job.Stdout.Used() {
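A recurring change in this file replaces `job.Stdout.AddString(&s)` with a `bytes.Buffer` plus `engine.Tail(buf, 1)`, so a job's result is read as the last line it wrote to stdout. A self-contained sketch of that last-line pattern; `tail` below is a local stand-in for `engine.Tail`, whose exact behavior is assumed:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// tail returns the last n lines of buf, mimicking the role
// engine.Tail plays in the hunks above (assumed behavior).
func tail(buf *bytes.Buffer, n int) string {
	lines := strings.Split(strings.TrimRight(buf.String(), "\n"), "\n")
	if len(lines) > n {
		lines = lines[len(lines)-n:]
	}
	return strings.Join(lines, "\n")
}

func main() {
	stdoutBuffer := bytes.NewBuffer(nil)

	// A job writes progress plus a final result line to its stdout.
	fmt.Fprintln(stdoutBuffer, "pulling layer ...")
	fmt.Fprintln(stdoutBuffer, "d6e13f26a8a9") // e.g. a hypothetical container ID

	id := tail(stdoutBuffer, 1)
	fmt.Println("Id:", id) // Id: d6e13f26a8a9
}
```

Buffering everything and extracting the last line avoids the write-into-a-string plumbing of AddString while keeping the full stream available for debugging.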
@@ -1196,6 +1195,7 @@ func changeGroup(addr string, nameOrGid string) error {
// ListenAndServe sets up the required http.Server and gets it listening for
// each addr passed in and does protocol specific checking.
func ListenAndServe(proto, addr string, job *engine.Job) error {
    var l net.Listener
    r, err := createRouter(job.Eng, job.GetenvBool("Logging"), job.GetenvBool("EnableCors"), job.Getenv("Version"))
    if err != nil {
        return err

@@ -1211,7 +1211,11 @@ func ListenAndServe(proto, addr string, job *engine.Job) error {
        }
    }

    l, err := listenbuffer.NewListenBuffer(proto, addr, activationLock)
    if job.GetenvBool("BufferRequests") {
        l, err = listenbuffer.NewListenBuffer(proto, addr, activationLock)
    } else {
        l, err = net.Listen(proto, addr)
    }
    if err != nil {
        return err
    }

@@ -1283,10 +1287,6 @@ func ServeApi(job *engine.Job) engine.Status {
    )
    activationLock = make(chan struct{})

    if err := job.Eng.Register("acceptconnections", AcceptConnections); err != nil {
        return job.Error(err)
    }

    for _, protoAddr := range protoAddrs {
        protoAddrParts := strings.SplitN(protoAddr, "://", 2)
        if len(protoAddrParts) != 2 {

@@ -1313,7 +1313,9 @@ func AcceptConnections(job *engine.Job) engine.Status {
    go systemd.SdNotify("READY=1")

    // close the lock so the listeners start accepting connections
    close(activationLock)
    if activationLock != nil {
        close(activationLock)
    }

    return engine.StatusOK
}
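ListenAndServe now only wraps listeners in a listenbuffer when `BufferRequests` is set, and AcceptConnections releases waiting listeners by closing `activationLock`. A minimal sketch of that close-to-broadcast idiom, with the listenbuffer and systemd details stripped out:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	activationLock := make(chan struct{})
	var wg sync.WaitGroup

	// Each "listener" blocks until activation, like buffered listeners
	// waiting for the acceptconnections job.
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			<-activationLock // closing the channel releases all waiters at once
			fmt.Println("listener", n, "accepting connections")
		}(i)
	}

	time.Sleep(10 * time.Millisecond) // stand-in for daemon initialization
	// The nil-guard mirrors the hunk above; in the real code it protects
	// against AcceptConnections running before ServeApi set the channel.
	if activationLock != nil {
		close(activationLock)
	}
	wg.Wait()
}
```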
3 archive/README.md (new file)

@@ -0,0 +1,3 @@
This code provides helper functions for dealing with archive files.

**TODO**: Move this to either `pkg` or (if not possible) to `utils`.
archive/archive.go

@@ -1,14 +1,12 @@
package archive

import (
    "bufio"
    "bytes"
    "compress/bzip2"
    "compress/gzip"
    "errors"
    "fmt"
    "github.com/dotcloud/docker/pkg/system"
    "github.com/dotcloud/docker/utils"
    "github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
    "io"
    "io/ioutil"
    "os"

@@ -17,6 +15,10 @@ import (
    "path/filepath"
    "strings"
    "syscall"

    "github.com/dotcloud/docker/pkg/system"
    "github.com/dotcloud/docker/utils"
    "github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
)

type (

@@ -26,6 +28,7 @@ type (
    TarOptions struct {
        Includes []string
        Compression Compression
        NoLchown bool
    }
)

@@ -41,26 +44,16 @@ const (
)

func DetectCompression(source []byte) Compression {
    sourceLen := len(source)
    for compression, m := range map[Compression][]byte{
        Bzip2: {0x42, 0x5A, 0x68},
        Gzip: {0x1F, 0x8B, 0x08},
        Xz: {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
    } {
        fail := false
        if len(m) > sourceLen {
        if len(source) < len(m) {
            utils.Debugf("Len too short")
            continue
        }
        i := 0
        for _, b := range m {
            if b != source[i] {
                fail = true
                break
            }
            i++
        }
        if !fail {
        if bytes.Compare(m, source[:len(m)]) == 0 {
            return compression
        }
    }
@@ -74,31 +67,24 @@ func xzDecompress(archive io.Reader) (io.ReadCloser, error) {
}

func DecompressStream(archive io.Reader) (io.ReadCloser, error) {
    buf := make([]byte, 10)
    totalN := 0
    for totalN < 10 {
        n, err := archive.Read(buf[totalN:])
        if err != nil {
            if err == io.EOF {
                return nil, fmt.Errorf("Tarball too short")
            }
            return nil, err
        }
        totalN += n
        utils.Debugf("[tar autodetect] n: %d", n)
    buf := bufio.NewReader(archive)
    bs, err := buf.Peek(10)
    if err != nil {
        return nil, err
    }
    compression := DetectCompression(buf)
    wrap := io.MultiReader(bytes.NewReader(buf), archive)
    utils.Debugf("[tar autodetect] n: %v", bs)

    compression := DetectCompression(bs)

    switch compression {
    case Uncompressed:
        return ioutil.NopCloser(wrap), nil
        return ioutil.NopCloser(buf), nil
    case Gzip:
        return gzip.NewReader(wrap)
        return gzip.NewReader(buf)
    case Bzip2:
        return ioutil.NopCloser(bzip2.NewReader(wrap)), nil
        return ioutil.NopCloser(bzip2.NewReader(buf)), nil
    case Xz:
        return xzDecompress(wrap)
        return xzDecompress(buf)
    default:
        return nil, fmt.Errorf("Unsupported compression format %s", (&compression).Extension())
    }
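The rewritten DecompressStream swaps the manual ten-byte read plus `io.MultiReader` stitching for `bufio.Reader.Peek`, which inspects magic bytes without consuming them. A standalone sketch of the sniffing technique (gzip only, for brevity):

```go
package main

import (
	"bufio"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"io/ioutil"
)

// decompress sniffs for gzip via Peek; the buffered reader still
// yields the peeked bytes afterwards, so no MultiReader is needed.
func decompress(r io.Reader) (io.ReadCloser, error) {
	buf := bufio.NewReader(r)
	bs, err := buf.Peek(3)
	if err != nil {
		return nil, err
	}
	if bytes.Equal(bs, []byte{0x1F, 0x8B, 0x08}) { // gzip magic bytes
		return gzip.NewReader(buf)
	}
	return ioutil.NopCloser(buf), nil
}

func main() {
	var raw bytes.Buffer
	zw := gzip.NewWriter(&raw)
	zw.Write([]byte("hello"))
	zw.Close()

	rc, err := decompress(&raw)
	if err != nil {
		panic(err)
	}
	out, _ := ioutil.ReadAll(rc)
	fmt.Println(string(out)) // hello
}
```

Peek is the cleaner choice here because it never loses bytes to a short read, which is exactly the bug-prone part of the old loop.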
@@ -194,7 +180,7 @@ func addTarFile(path, name string, tw *tar.Writer) error {
    return nil
}

func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader) error {
func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader, Lchown bool) error {
    // hdr.Mode is in linux format, which we can use for sycalls,
    // but for os.Foo() calls we need the mode converted to os.FileMode,
    // so use hdrInfo.Mode() (they differ for e.g. setuid bits)

@@ -255,7 +241,7 @@ func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader) e
        return fmt.Errorf("Unhandled tar header type %d\n", hdr.Typeflag)
    }

    if err := os.Lchown(path, hdr.Uid, hdr.Gid); err != nil {
    if err := os.Lchown(path, hdr.Uid, hdr.Gid); err != nil && Lchown {
        return err
    }

@@ -309,8 +295,11 @@ func escapeName(name string) string {
    return string(escaped)
}

// Tar creates an archive from the directory at `path`, only including files whose relative
// paths are included in `filter`. If `filter` is nil, then all files are included.
// TarFilter creates an archive from the directory at `srcPath` with `options`, and returns it as a
// stream of bytes.
//
// Files are included according to `options.Includes`, default to including all files.
// Stream is compressed according to `options.Compression', default to Uncompressed.
func TarFilter(srcPath string, options *TarOptions) (io.ReadCloser, error) {
    pipeReader, pipeWriter := io.Pipe()

@@ -418,14 +407,16 @@ func Untar(archive io.Reader, dest string, options *TarOptions) error {
        // the layer is also a directory. Then we want to merge them (i.e.
        // just apply the metadata from the layer).
        if fi, err := os.Lstat(path); err == nil {
            if fi.IsDir() && hdr.Name == "." {
                continue
            }
            if !(fi.IsDir() && hdr.Typeflag == tar.TypeDir) {
                if err := os.RemoveAll(path); err != nil {
                    return err
                }
            }
        }

        if err := createTarFile(path, dest, hdr, tr); err != nil {
        if err := createTarFile(path, dest, hdr, tr, options == nil || !options.NoLchown); err != nil {
            return err
        }
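createTarFile gains an `Lchown` flag so unprivileged callers, such as the client-side `docker cp` path above, can tolerate chown failures while the daemon keeps enforcing them. A hedged sketch of that guard; the temp file in `main` is purely for demonstration:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// restoreOwner attempts the chown unconditionally but only treats a
// failure as fatal when lchown is enabled, mirroring the hunk above:
// unprivileged callers pass false and keep files owned by themselves
// instead of failing with EPERM.
func restoreOwner(path string, uid, gid int, lchown bool) error {
	if err := os.Lchown(path, uid, gid); err != nil && lchown {
		return err
	}
	return nil
}

func main() {
	tmp, err := ioutil.TempFile("", "demo")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	tmp.Close()

	// With lchown=false, the EPERM from chowning to root is swallowed.
	if err := restoreOwner(tmp.Name(), 0, 0, false); err != nil {
		panic(err)
	}
	fmt.Println("extracted without requiring chown privileges")
}
```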
archive/archive_test.go

@@ -3,7 +3,6 @@ package archive
import (
    "bytes"
    "fmt"
    "github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
    "io"
    "io/ioutil"
    "os"

@@ -11,6 +10,8 @@ import (
    "path"
    "testing"
    "time"

    "github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
)

func TestCmdStreamLargeStderr(t *testing.T) {

@@ -132,8 +133,37 @@ func TestTarUntar(t *testing.T) {
// Failing prevents the archives from being uncompressed during ADD
func TestTypeXGlobalHeaderDoesNotFail(t *testing.T) {
    hdr := tar.Header{Typeflag: tar.TypeXGlobalHeader}
    err := createTarFile("pax_global_header", "some_dir", &hdr, nil)
    err := createTarFile("pax_global_header", "some_dir", &hdr, nil, true)
    if err != nil {
        t.Fatal(err)
    }
}

// Some tar have both GNU specific (huge uid) and Ustar specific (long name) things.
// Not supposed to happen (should use PAX instead of Ustar for long name) but it does and it should still work.
func TestUntarUstarGnuConflict(t *testing.T) {
    f, err := os.Open("testdata/broken.tar")
    if err != nil {
        t.Fatal(err)
    }
    found := false
    tr := tar.NewReader(f)
    // Iterate through the files in the archive.
    for {
        hdr, err := tr.Next()
        if err == io.EOF {
            // end of tar archive
            break
        }
        if err != nil {
            t.Fatal(err)
        }
        if hdr.Name == "root/.cpanm/work/1395823785.24209/Plack-1.0030/blib/man3/Plack::Middleware::LighttpdScriptNameFix.3pm" {
            found = true
            break
        }
    }
    if !found {
        t.Fatal("%s not found in the archive", "root/.cpanm/work/1395823785.24209/Plack-1.0030/blib/man3/Plack::Middleware::LighttpdScriptNameFix.3pm")
    }
}
archive/changes.go

@@ -3,15 +3,16 @@ package archive
import (
    "bytes"
    "fmt"
    "github.com/dotcloud/docker/pkg/system"
    "github.com/dotcloud/docker/utils"
    "github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
    "io"
    "os"
    "path/filepath"
    "strings"
    "syscall"
    "time"

    "github.com/dotcloud/docker/pkg/system"
    "github.com/dotcloud/docker/utils"
    "github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
)

type ChangeType int

@@ -293,13 +294,23 @@ func collectFileInfo(sourceDir string) (*FileInfo, error) {

// Compare two directories and generate an array of Change objects describing the changes
func ChangesDirs(newDir, oldDir string) ([]Change, error) {
    oldRoot, err := collectFileInfo(oldDir)
    if err != nil {
        return nil, err
    }
    newRoot, err := collectFileInfo(newDir)
    if err != nil {
        return nil, err
    var (
        oldRoot, newRoot *FileInfo
        err1, err2       error
        errs             = make(chan error, 2)
    )
    go func() {
        oldRoot, err1 = collectFileInfo(oldDir)
        errs <- err1
    }()
    go func() {
        newRoot, err2 = collectFileInfo(newDir)
        errs <- err2
    }()
    for i := 0; i < 2; i++ {
        if err := <-errs; err != nil {
            return nil, err
        }
    }

    return newRoot.Changes(oldRoot), nil
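ChangesDirs now walks the old and new trees concurrently, collecting one error per goroutine through a buffered channel. The same fan-out shape in isolation; `collect` is a placeholder for `collectFileInfo`, and the directories scanned are arbitrary:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// collect stands in for collectFileInfo: walk a tree, return a summary.
func collect(dir string) (int, error) {
	entries, err := ioutil.ReadDir(dir)
	if err != nil {
		return 0, err
	}
	return len(entries), nil
}

func main() {
	var (
		oldN, newN int
		err1, err2 error
		errs       = make(chan error, 2) // buffered: senders never block
	)
	go func() {
		oldN, err1 = collect(os.TempDir())
		errs <- err1
	}()
	go func() {
		newN, err2 = collect(".")
		errs <- err2
	}()
	// Receiving both results also synchronizes access to oldN/newN.
	for i := 0; i < 2; i++ {
		if err := <-errs; err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
	}
	fmt.Println("entries:", oldN, newN)
}
```

The buffered channel is what lets the function return early on the first error without leaking a blocked goroutine.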
@@ -2,14 +2,14 @@ package archive

import (
	"fmt"
	"github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
	"io"
	"io/ioutil"
	"os"
	"path/filepath"
	"strings"
	"syscall"
	"time"

	"github.com/dotcloud/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
)

// Linux device nodes are a bit weird due to backwards compat with 16 bit device nodes.

@@ -18,15 +18,6 @@ import (
func mkdev(major int64, minor int64) uint32 {
	return uint32(((minor & 0xfff00) << 12) | ((major & 0xfff) << 8) | (minor & 0xff))
}
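As a quick sanity check of this encoding (a worked example, not part of the commit): for a device with major 8 and minor 1 (classically /dev/sda1), no high minor bits are set, so mkdev(8, 1) = (8 << 8) | 1 = 0x801 = 2049, matching the traditional major<<8 | minor layout of 16-bit dev_t values.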
func timeToTimespec(time time.Time) (ts syscall.Timespec) {
	if time.IsZero() {
		// Return UTIME_OMIT special value
		ts.Sec = 0
		ts.Nsec = ((1 << 30) - 2)
		return
	}
	return syscall.NsecToTimespec(time.UnixNano())
}

// ApplyLayer parses a diff in the standard layer format from `layer`, and
// applies it to the directory `dest`.
@@ -89,7 +80,7 @@ func ApplyLayer(dest string, layer ArchiveReader) error {
			}
			defer os.RemoveAll(aufsTempdir)
		}
		if err := createTarFile(filepath.Join(aufsTempdir, basename), dest, hdr, tr); err != nil {
		if err := createTarFile(filepath.Join(aufsTempdir, basename), dest, hdr, tr, true); err != nil {
			return err
		}
	}

@@ -136,7 +127,7 @@ func ApplyLayer(dest string, layer ArchiveReader) error {
			srcData = tmpFile
		}

		if err := createTarFile(path, dest, srcHdr, srcData); err != nil {
		if err := createTarFile(path, dest, srcHdr, srcData, true); err != nil {
			return err
		}
BIN archive/testdata/broken.tar (vendored, new file; binary file not shown)

16 archive/time_linux.go (new file)
@@ -0,0 +1,16 @@
package archive

import (
	"syscall"
	"time"
)

func timeToTimespec(time time.Time) (ts syscall.Timespec) {
	if time.IsZero() {
		// Return UTIME_OMIT special value
		ts.Sec = 0
		ts.Nsec = ((1 << 30) - 2)
		return
	}
	return syscall.NsecToTimespec(time.UnixNano())
}
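The (1 << 30) - 2 nanosecond value is the kernel's UTIME_OMIT sentinel from utimensat(2): a timespec carrying it tells the kernel to leave that timestamp unchanged. A sketch of how such a timespec is consumed on Linux (illustrative only, not from this commit; the path is a placeholder):

    package main

    import (
        "syscall"
        "time"
    )

    func main() {
        // ts is [atime, mtime]; atime carries UTIME_OMIT and is left untouched.
        omit := syscall.Timespec{Sec: 0, Nsec: (1 << 30) - 2}
        mtime := syscall.NsecToTimespec(time.Now().UnixNano())
        _ = syscall.UtimesNano("/tmp/example", []syscall.Timespec{omit, mtime})
    }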
16 archive/time_unsupported.go (new file)

@@ -0,0 +1,16 @@
// +build !linux

package archive

import (
	"syscall"
	"time"
)

func timeToTimespec(time time.Time) (ts syscall.Timespec) {
	nsec := int64(0)
	if !time.IsZero() {
		nsec = time.UnixNano()
	}
	return syscall.NsecToTimespec(nsec)
}
@@ -1,11 +1,16 @@
package builtins

import (
	api "github.com/dotcloud/docker/api/server"
	"runtime"

	"github.com/dotcloud/docker/api"
	apiserver "github.com/dotcloud/docker/api/server"
	"github.com/dotcloud/docker/daemon/networkdriver/bridge"
	"github.com/dotcloud/docker/dockerversion"
	"github.com/dotcloud/docker/engine"
	"github.com/dotcloud/docker/registry"
	"github.com/dotcloud/docker/server"
	"github.com/dotcloud/docker/utils"
)

func Register(eng *engine.Engine) error {

@@ -15,12 +20,18 @@ func Register(eng *engine.Engine) error {
	if err := remote(eng); err != nil {
		return err
	}
	if err := eng.Register("version", dockerVersion); err != nil {
		return err
	}
	return registry.NewService().Install(eng)
}

// remote: a RESTful api for cross-docker communication
func remote(eng *engine.Engine) error {
	return eng.Register("serveapi", api.ServeApi)
	if err := eng.Register("serveapi", apiserver.ServeApi); err != nil {
		return err
	}
	return eng.Register("acceptconnections", apiserver.AcceptConnections)
}

// daemon: a default execution and storage backend for Docker on Linux,
@@ -44,3 +55,21 @@ func daemon(eng *engine.Engine) error {
	}
	return eng.Register("init_networkdriver", bridge.InitDriver)
}

// builtin jobs independent of any subsystem
func dockerVersion(job *engine.Job) engine.Status {
	v := &engine.Env{}
	v.Set("Version", dockerversion.VERSION)
	v.SetJson("ApiVersion", api.APIVERSION)
	v.Set("GitCommit", dockerversion.GITCOMMIT)
	v.Set("GoVersion", runtime.Version())
	v.Set("Os", runtime.GOOS)
	v.Set("Arch", runtime.GOARCH)
	if kernelVersion, err := utils.GetKernelVersion(); err == nil {
		v.Set("KernelVersion", kernelVersion.String())
	}
	if _, err := v.WriteTo(job.Stdout); err != nil {
		return job.Error(err)
	}
	return engine.StatusOK
}
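dockerVersion also illustrates the engine's job convention: a handler takes an *engine.Job, fills an engine.Env, streams it to job.Stdout, and returns an engine.Status. A minimal sketch of a handler in the same shape (the job name and field here are hypothetical, not part of this commit; assumes the engine package used above):

    // hello is a hypothetical job mirroring the dockerVersion pattern.
    func hello(job *engine.Job) engine.Status {
        v := &engine.Env{}
        v.Set("Greeting", "hello") // illustrative field
        if _, err := v.WriteTo(job.Stdout); err != nil {
            return job.Error(err)
        }
        return engine.StatusOK
    }

It would be wired up like the others: eng.Register("hello", hello).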
@@ -116,7 +116,7 @@ fi
flags=(
	NAMESPACES {NET,PID,IPC,UTS}_NS
	DEVPTS_MULTIPLE_INSTANCES
	CGROUPS CGROUP_DEVICE
	CGROUPS CGROUP_CPUACCT CGROUP_DEVICE CGROUP_SCHED
	MACVLAN VETH BRIDGE
	NF_NAT_IPV4 IP_NF_TARGET_MASQUERADE
	NETFILTER_XT_MATCH_{ADDRTYPE,CONNTRACK}
@@ -6,7 +6,7 @@
# /data volume is owned by sysadmin.
# USAGE:
#   # Download data Dockerfile
#   wget http://raw.github.com/dotcloud/docker/master/contrib/desktop-integration/data/Dockerfile
#   wget http://raw.githubusercontent.com/dotcloud/docker/master/contrib/desktop-integration/data/Dockerfile
#
#   # Build data image
#   docker build -t data .

@@ -7,7 +7,7 @@
# sound devices. Tested on Debian 7.2
# USAGE:
#   # Download Iceweasel Dockerfile
#   wget http://raw.github.com/dotcloud/docker/master/contrib/desktop-integration/iceweasel/Dockerfile
#   wget http://raw.githubusercontent.com/dotcloud/docker/master/contrib/desktop-integration/iceweasel/Dockerfile
#
#   # Build iceweasel image
#   docker build -t iceweasel .
@@ -4,6 +4,8 @@
# Provides:           docker
# Required-Start:     $syslog $remote_fs
# Required-Stop:      $syslog $remote_fs
# Should-Start:       cgroupfs-mount cgroup-lite
# Should-Stop:        cgroupfs-mount cgroup-lite
# Default-Start:      2 3 4 5
# Default-Stop:       0 1 6
# Short-Description:  Create lightweight, portable, self-sufficient containers.
@@ -3,7 +3,7 @@
# /etc/rc.d/init.d/docker
#
# Daemon for docker.io
#
#
# chkconfig: 2345 95 95
# description: Daemon for docker.io

@@ -49,6 +49,13 @@ start() {
		$exec -d $other_args &>> $logfile &
		pid=$!
		touch $lockfile
		# wait up to 10 seconds for the pidfile to exist. see
		# https://github.com/dotcloud/docker/issues/5359
		tries=0
		while [ ! -f $pidfile -a $tries -lt 10 ]; do
			sleep 1
			tries=$((tries + 1))
		done
		success
		echo
	else
@@ -1,6 +1,6 @@
description "Docker daemon"

start on filesystem
start on local-filesystems
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576
41 contrib/man/md/Dockerfile.5.md (new file)

@@ -0,0 +1,41 @@
% DOCKERFILE(1) Docker User Manuals
% Zac Dover
% May 2014

# NAME

Dockerfile - automate the steps of creating a Docker image

# INTRODUCTION

**Dockerfile** is a configuration file that automates the steps of creating a Docker image. Docker can act as a builder and can read instructions from **Dockerfile** to automate the steps that you would otherwise manually perform to create an image. To build an image from a source repository, create a description file called **Dockerfile** at the root of your repository. This file describes the steps that will be taken to assemble the image. When **Dockerfile** has been created, call **docker build** with the path of the source repository as the argument.

# SYNOPSIS

INSTRUCTION arguments

For example:

FROM image

# DESCRIPTION

Dockerfile is a file that automates the steps of creating a Docker image.

# USAGE

    $ sudo docker build .
     -- runs the steps and commits them, building a final image
    The path to the source repository defines where to find the context of the build.
    The build is run by the docker daemon, not the CLI. The whole context must be
    transferred to the daemon. The Docker CLI reports "Uploading context" when the
    context is sent to the daemon.

    $ sudo docker build -t repository/tag .
     -- specifies a repository and tag at which to save the new image if the build succeeds.
    The Docker daemon runs the steps one-by-one, committing the result to a new image
    if necessary before finally outputting the ID of the new image. The Docker
    daemon automatically cleans up the context it is given.

Docker re-uses intermediate images whenever possible. This significantly accelerates the *docker build* process.

# HISTORY
May 2014, Compiled by Zac Dover (zdover at redhat dot com) based on docker.io Dockerfile documentation.
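For concreteness, a minimal Dockerfile a build might consume could look like the following (an illustrative example, not part of the man page; the base image, packages, and paths are placeholders):

    FROM debian:wheezy
    RUN apt-get update && apt-get install -y curl
    ADD . /opt/app
    CMD ["/opt/app/run.sh"]

Running `sudo docker build -t someuser/app .` in the directory containing this file executes each instruction in order and tags the resulting image.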
@@ -164,7 +164,7 @@ and foreground Docker containers.
Docker container. This is because by default a container is not allowed to
access any devices. A “privileged” container is given access to all devices.

When the operator executes **docker run -privileged**, Docker will enable access
When the operator executes **docker run --privileged**, Docker will enable access
to all devices on the host as well as set some configuration in AppArmor to
allow the container nearly all the same access to the host as processes running
outside of a container on the host.
@@ -190,18 +190,28 @@ interactive shell. The default value is false.
Set a username or UID for the container.


**-v**, **-volume**=*volume*
   Bind mount a volume to the container. The **-v** option can be used one or
**-v**, **-volume**=*volume*[:ro|:rw]
   Bind mount a volume to the container.

The **-v** option can be used one or
more times to add one or more mounts to a container. These mounts can then be
used in other containers using the **--volumes-from** option. See examples.
used in other containers using the **--volumes-from** option.

The volume may be optionally suffixed with :ro or :rw to mount the volumes in
read-only or read-write mode, respectively. By default, the volumes are mounted
read-write. See examples.

**--volumes-from**=*container-id*
**--volumes-from**=*container-id*[:ro|:rw]
   Will mount volumes from the specified container identified by container-id.
Once a volume is mounted in one container it can be shared with other
containers using the **--volumes-from** option when running those other
containers. The volumes can be shared even if the original container with the
mount is not running.

The container ID may be optionally suffixed with :ro or
:rw to mount the volumes in read-only or read-write mode, respectively. By
default, the volumes are mounted in the same mode (read write or read only) as
the reference container.


**-w**, **-workdir**=*directory*
@@ -307,7 +317,7 @@ fedora-data image:
    # docker run --name=data -v /var/volume1 -v /tmp/volume2 -i -t fedora-data true
    # docker run --volumes-from=data --name=fedora-container1 -i -t fedora bash

Multiple -volumes-from parameters will bring together multiple data volumes from
Multiple --volumes-from parameters will bring together multiple data volumes from
multiple containers. And it's possible to mount the volumes that came from the
DATA container in yet another container via the fedora-container1 intermediary
container, allowing you to abstract the actual data source from users of that data:
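Putting the optional :ro/:rw suffixes described above together, usage looks like this (illustrative commands in the style of the examples above; container names are placeholders):

    # docker run --name=data -v /var/volume1 -i -t fedora-data true
    # docker run --volumes-from=data:ro --name=reader -i -t fedora bash

Here the reader container sees /var/volume1 read-only, regardless of the mode the data container used.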
@@ -73,7 +73,7 @@ port=[4243] or path =[/var/run/docker.sock] is omitted, default values are used.
**-v**=*true*|*false*
   Print version information and quit. Default is false.

**--selinux-enabled=*true*|*false*
**--selinux-enabled**=*true*|*false*
   Enable selinux support. Default is false.

# COMMANDS
@@ -245,7 +245,7 @@ docker run --volumes-from=data --name=fedora-container1 -i -t fedora bash
.RE
.sp
.TP
Multiple -volumes-from parameters will bring together multiple data volumes from multiple containers. And it's possible to mount the volumes that came from the DATA container in yet another container via the fedora-container1 intermidiery container, allowing to abstract the actual data source from users of that data:
Multiple --volumes-from parameters will bring together multiple data volumes from multiple containers. And it's possible to mount the volumes that came from the DATA container in yet another container via the fedora-container1 intermediary container, allowing you to abstract the actual data source from users of that data:
.sp
.RS
docker run --volumes-from=fedora-container1 --name=fedora-container2 -i -t fedora bash
@@ -2,6 +2,10 @@
# Generate a very minimal filesystem based on busybox-static,
# and load it into the local docker under the name "busybox".

echo >&2
echo >&2 'warning: this script is deprecated - see mkimage.sh and mkimage/busybox-static'
echo >&2

BUSYBOX=$(which busybox)
[ "$BUSYBOX" ] || {
	echo "Sorry, I could not locate busybox."

@@ -1,6 +1,10 @@
#!/usr/bin/env bash
set -e

echo >&2
echo >&2 'warning: this script is deprecated - see mkimage.sh and mkimage/debootstrap'
echo >&2

variant='minbase'
include='iproute,iputils-ping'
arch='amd64' # intentionally undocumented for now

@@ -8,6 +8,10 @@

set -e

echo >&2
echo >&2 'warning: this script is deprecated - see mkimage.sh and mkimage/rinse'
echo >&2

repo="$1"
distro="$2"
mirror="$3"
105 contrib/mkimage.sh (new executable file)

@@ -0,0 +1,105 @@
#!/usr/bin/env bash
set -e

mkimg="$(basename "$0")"

usage() {
	echo >&2 "usage: $mkimg [-d dir] [-t tag] script [script-args]"
	echo >&2 "   ie: $mkimg -t someuser/debian debootstrap --variant=minbase jessie"
	echo >&2 "       $mkimg -t someuser/ubuntu debootstrap --include=ubuntu-minimal trusty"
	echo >&2 "       $mkimg -t someuser/busybox busybox-static"
	echo >&2 "       $mkimg -t someuser/centos:5 rinse --distribution centos-5"
	exit 1
}

scriptDir="$(dirname "$(readlink -f "$BASH_SOURCE")")/mkimage"

optTemp=$(getopt --options '+d:t:h' --longoptions 'dir:,tag:,help' --name "$mkimg" -- "$@")
eval set -- "$optTemp"
unset optTemp

dir=
tag=
while true; do
	case "$1" in
		-d|--dir) dir="$2" ; shift 2 ;;
		-t|--tag) tag="$2" ; shift 2 ;;
		-h|--help) usage ;;
		--) shift ; break ;;
	esac
done

script="$1"
[ "$script" ] || usage
shift

if [ ! -x "$scriptDir/$script" ]; then
	echo >&2 "error: $script does not exist or is not executable"
	echo >&2 "  see $scriptDir for possible scripts"
	exit 1
fi

# don't mistake common scripts like .febootstrap-minimize as image-creators
if [[ "$script" == .* ]]; then
	echo >&2 "error: $script is a script helper, not a script"
	echo >&2 "  see $scriptDir for possible scripts"
	exit 1
fi

delDir=
if [ -z "$dir" ]; then
	dir="$(mktemp -d ${TMPDIR:-/tmp}/docker-mkimage.XXXXXXXXXX)"
	delDir=1
fi

rootfsDir="$dir/rootfs"
( set -x; mkdir -p "$rootfsDir" )

# pass all remaining arguments to $script
"$scriptDir/$script" "$rootfsDir" "$@"

# Docker mounts tmpfs at /dev and procfs at /proc so we can remove them
rm -rf "$rootfsDir/dev" "$rootfsDir/proc"
mkdir -p "$rootfsDir/dev" "$rootfsDir/proc"

# make sure /etc/resolv.conf has something useful in it
mkdir -p "$rootfsDir/etc"
cat > "$rootfsDir/etc/resolv.conf" <<'EOF'
nameserver 8.8.8.8
nameserver 8.8.4.4
EOF

tarFile="$dir/rootfs.tar.xz"
touch "$tarFile"

(
	set -x
	tar --numeric-owner -caf "$tarFile" -C "$rootfsDir" --transform='s,^./,,' .
)

echo >&2 "+ cat > '$dir/Dockerfile'"
cat > "$dir/Dockerfile" <<'EOF'
FROM scratch
ADD rootfs.tar.xz /
EOF

# if our generated image has a decent shell, let's set a default command
for shell in /bin/bash /usr/bin/fish /usr/bin/zsh /bin/sh; do
	if [ -x "$rootfsDir/$shell" ]; then
		( set -x; echo 'CMD ["'"$shell"'"]' >> "$dir/Dockerfile" )
		break
	fi
done

( set -x; rm -rf "$rootfsDir" )

if [ "$tag" ]; then
	( set -x; docker build -t "$tag" "$dir" )
elif [ "$delDir" ]; then
	# if we didn't specify a tag and we're going to delete our dir, let's just build an untagged image so that we did _something_
	( set -x; docker build "$dir" )
fi

if [ "$delDir" ]; then
	( set -x; rm -rf "$dir" )
fi
28 contrib/mkimage/.febootstrap-minimize (new executable file)

@@ -0,0 +1,28 @@
#!/usr/bin/env bash
set -e

rootfsDir="$1"
shift

(
	cd "$rootfsDir"

	# effectively: febootstrap-minimize --keep-zoneinfo --keep-rpmdb --keep-services "$target"
	#  locales
	rm -rf usr/{{lib,share}/locale,{lib,lib64}/gconv,bin/localedef,sbin/build-locale-archive}
	#  docs
	rm -rf usr/share/{man,doc,info,gnome/help}
	#  cracklib
	#rm -rf usr/share/cracklib
	#  i18n
	rm -rf usr/share/i18n
	#  yum cache
	rm -rf var/cache/yum
	mkdir -p --mode=0755 var/cache/yum
	#  sln
	rm -rf sbin/sln
	#  ldconfig
	#rm -rf sbin/ldconfig
	rm -rf etc/ld.so.cache var/cache/ldconfig
	mkdir -p --mode=0755 var/cache/ldconfig
)
34 contrib/mkimage/busybox-static (new executable file)

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
set -e

rootfsDir="$1"
shift

busybox="$(which busybox 2>/dev/null || true)"
if [ -z "$busybox" ]; then
	echo >&2 'error: busybox: not found'
	echo >&2 '  install it with your distribution "busybox-static" package'
	exit 1
fi
if ! ldd "$busybox" 2>&1 | grep -q 'not a dynamic executable'; then
	echo >&2 "error: '$busybox' appears to be a dynamic executable"
	echo >&2 '  you should install your distribution "busybox-static" package instead'
	exit 1
fi

mkdir -p "$rootfsDir/bin"
rm -f "$rootfsDir/bin/busybox" # just in case
cp "$busybox" "$rootfsDir/bin/busybox"

(
	cd "$rootfsDir"

	IFS=$'\n'
	modules=( $(bin/busybox --list-modules) )
	unset IFS

	for module in "${modules[@]}"; do
		mkdir -p "$(dirname "$module")"
		ln -sf /bin/busybox "$module"
	done
)
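For context, bin/busybox --list-modules is expected to print one applet path per line (output shape assumed here, e.g. bin/cat or usr/bin/env), so the loop above recreates each listed path as a symlink back to the single static binary:

    bin/cat      -> /bin/busybox
    usr/bin/env  -> /bin/busybox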
125 contrib/mkimage/debootstrap (new executable file)

@@ -0,0 +1,125 @@
#!/usr/bin/env bash
set -e

rootfsDir="$1"
shift

# we have to do a little fancy footwork to make sure "rootfsDir" becomes the second non-option argument to debootstrap

before=()
while [ $# -gt 0 ] && [[ "$1" == -* ]]; do
	before+=( "$1" )
	shift
done

suite="$1"
shift

(
	set -x
	debootstrap "${before[@]}" "$suite" "$rootfsDir" "$@"
)

# now for some Docker-specific tweaks

# prevent init scripts from running during install/update
echo >&2 "+ cat > '$rootfsDir/usr/sbin/policy-rc.d'"
cat > "$rootfsDir/usr/sbin/policy-rc.d" <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x "$rootfsDir/usr/sbin/policy-rc.d"

# prevent upstart scripts from running during install/update
(
	set -x
	chroot "$rootfsDir" dpkg-divert --local --rename --add /sbin/initctl
	ln -sf /bin/true "$rootfsDir/sbin/initctl"
)

# shrink the image, since apt makes us fat (wheezy: ~157.5MB vs ~120MB)
( set -x; chroot "$rootfsDir" apt-get clean )

# Ubuntu 10.04 sucks... :)
if strings "$rootfsDir/usr/bin/dpkg" | grep -q unsafe-io; then
	# force dpkg not to call sync() after package extraction (speeding up installs)
	echo >&2 "+ echo force-unsafe-io > '$rootfsDir/etc/dpkg/dpkg.cfg.d/docker-apt-speedup'"
	echo 'force-unsafe-io' > "$rootfsDir/etc/dpkg/dpkg.cfg.d/docker-apt-speedup"
fi

if [ -d "$rootfsDir/etc/apt/apt.conf.d" ]; then
	# _keep_ us lean by effectively running "apt-get clean" after every install
	aptGetClean='"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true";'
	echo >&2 "+ cat > '$rootfsDir/etc/apt/apt.conf.d/docker-clean'"
	cat > "$rootfsDir/etc/apt/apt.conf.d/docker-clean" <<-EOF
		DPkg::Post-Invoke { ${aptGetClean} };
		APT::Update::Post-Invoke { ${aptGetClean} };

		Dir::Cache::pkgcache "";
		Dir::Cache::srcpkgcache "";
	EOF

	# remove apt-cache translations for fast "apt-get update"
	echo >&2 "+ cat > '$rootfsDir/etc/apt/apt.conf.d/docker-no-languages'"
	echo 'Acquire::Languages "none";' > "$rootfsDir/etc/apt/apt.conf.d/docker-no-languages"
fi

if [ -z "$DONT_TOUCH_SOURCES_LIST" ]; then
	# tweak sources.list, where appropriate
	lsbDist=
	if [ -z "$lsbDist" -a -r "$rootfsDir/etc/os-release" ]; then
		lsbDist="$(. "$rootfsDir/etc/os-release" && echo "$ID")"
	fi
	if [ -z "$lsbDist" -a -r "$rootfsDir/etc/lsb-release" ]; then
		lsbDist="$(. "$rootfsDir/etc/lsb-release" && echo "$DISTRIB_ID")"
	fi
	if [ -z "$lsbDist" -a -r "$rootfsDir/etc/debian_version" ]; then
		lsbDist='Debian'
	fi
	case "$lsbDist" in
		debian|Debian)
			# updates and security!
			if [ "$suite" != 'sid' -a "$suite" != 'unstable' ]; then
				(
					set -x
					sed -i "p; s/ $suite main$/ ${suite}-updates main/" "$rootfsDir/etc/apt/sources.list"
					echo "deb http://security.debian.org $suite/updates main" >> "$rootfsDir/etc/apt/sources.list"
				)
			fi
			;;
		ubuntu|Ubuntu)
			# add the universe, updates, and security repositories
			(
				set -x
				sed -i "
					s/ $suite main$/ $suite main universe/; p;
					s/ $suite main/ ${suite}-updates main/; p;
					s/ $suite-updates main/ ${suite}-security main/
				" "$rootfsDir/etc/apt/sources.list"
			)
			;;
		tanglu|Tanglu)
			# add the updates repository
			if [ "$suite" != 'devel' ]; then
				(
					set -x
					sed -i "p; s/ $suite main$/ ${suite}-updates main/" "$rootfsDir/etc/apt/sources.list"
				)
			fi
			;;
		steamos|SteamOS)
			# add contrib and non-free
			(
				set -x
				sed -i "s/ $suite main$/ $suite main contrib non-free/" "$rootfsDir/etc/apt/sources.list"
			)
			;;
	esac
fi

# make sure we're fully up-to-date, too
(
	set -x
	chroot "$rootfsDir" apt-get update
	chroot "$rootfsDir" apt-get dist-upgrade -y
)
25 contrib/mkimage/rinse (new executable file)

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -e

rootfsDir="$1"
shift

# specifying --arch below is safe because "$@" can override it and the "latest" one wins :)

(
	set -x
	rinse --directory "$rootfsDir" --arch amd64 "$@"
)

"$(dirname "$BASH_SOURCE")/.febootstrap-minimize" "$rootfsDir"

if [ -d "$rootfsDir/etc/sysconfig" ]; then
	# allow networking init scripts inside the container to work without extra steps
	echo 'NETWORKING=yes' > "$rootfsDir/etc/sysconfig/network"
fi

# make sure we're fully up-to-date, too
(
	set -x
	chroot "$rootfsDir" yum update -y
)
10 daemon/README.md (new file)

@@ -0,0 +1,10 @@
This directory contains code pertaining to running containers and storing images.

Code pertaining to running containers:

- execdriver
- networkdriver

Code pertaining to storing images:

- graphdriver
@@ -9,6 +9,7 @@ import (
	"log"
	"os"
	"path"
	"path/filepath"
	"strings"
	"sync"
	"syscall"
@@ -89,7 +90,7 @@ func (container *Container) Inject(file io.Reader, pth string) error {
	defer container.Unmount()

	// Return error if path exists
	destPath := path.Join(container.basefs, pth)
	destPath := container.getResourcePath(pth)
	if _, err := os.Stat(destPath); err == nil {
		// Since err is nil, the path could be stat'd and it exists
		return fmt.Errorf("%s exists", pth)

@@ -101,7 +102,7 @@ func (container *Container) Inject(file io.Reader, pth string) error {
	}

	// Make sure the directory exists
	if err := os.MkdirAll(path.Join(container.basefs, path.Dir(pth)), 0755); err != nil {
	if err := os.MkdirAll(container.getResourcePath(path.Dir(pth)), 0755); err != nil {
		return err
	}
@@ -170,6 +171,16 @@ func (container *Container) WriteHostConfig() (err error) {
	return ioutil.WriteFile(container.hostConfigPath(), data, 0666)
}

func (container *Container) getResourcePath(path string) string {
	cleanPath := filepath.Join("/", path)
	return filepath.Join(container.basefs, cleanPath)
}

func (container *Container) getRootResourcePath(path string) string {
	cleanPath := filepath.Join("/", path)
	return filepath.Join(container.root, cleanPath)
}
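Joining the caller-supplied path onto "/" before prefixing the container root is what stops relative tricks from escaping basefs: filepath.Join cleans the path, and ".." cannot climb above the root. A small illustration (paths hypothetical):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        basefs := "/var/lib/docker/containers/abc/rootfs"
        evil := "../../../../etc/passwd"
        clean := filepath.Join("/", evil)         // "/etc/passwd": ".." collapsed at the root
        fmt.Println(filepath.Join(basefs, clean)) // stays under basefs
    }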

func populateCommand(c *Container, env []string) error {
	var (
		en *execdriver.Network

@@ -215,6 +226,7 @@ func populateCommand(c *Container, env []string) error {
		Memory:     c.Config.Memory,
		MemorySwap: c.Config.MemorySwap,
		CpuShares:  c.Config.CpuShares,
		Cpuset:     c.Config.Cpuset,
	}
	c.command = &execdriver.Command{
		ID: c.ID,
@@ -344,7 +356,7 @@ func (container *Container) StderrLogPipe() io.ReadCloser {
}

func (container *Container) buildHostnameFile() error {
	container.HostnamePath = path.Join(container.root, "hostname")
	container.HostnamePath = container.getRootResourcePath("hostname")
	if container.Config.Domainname != "" {
		return ioutil.WriteFile(container.HostnamePath, []byte(fmt.Sprintf("%s.%s\n", container.Config.Hostname, container.Config.Domainname)), 0644)
	}

@@ -356,7 +368,7 @@ func (container *Container) buildHostnameAndHostsFiles(IP string) error {
		return err
	}

	container.HostsPath = path.Join(container.root, "hosts")
	container.HostsPath = container.getRootResourcePath("hosts")

	extraContent := make(map[string]string)
@@ -640,7 +652,7 @@ func (container *Container) Export() (archive.Archive, error) {
}

func (container *Container) WaitTimeout(timeout time.Duration) error {
	done := make(chan bool)
	done := make(chan bool, 1)
	go func() {
		container.Wait()
		done <- true
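The one-slot buffer is the whole point of this change: when the surrounding select times out and WaitTimeout returns, the goroutine's done <- true completes against the buffer instead of blocking forever, so the goroutine no longer leaks. The pattern in isolation (a sketch, not the daemon's code):

    package main

    import (
        "fmt"
        "time"
    )

    func waitTimeout(wait func(), timeout time.Duration) error {
        done := make(chan bool, 1) // buffered so the sender can never block
        go func() {
            wait()
            done <- true
        }()
        select {
        case <-done:
            return nil
        case <-time.After(timeout):
            return fmt.Errorf("timed out after %s", timeout)
        }
    }

    func main() {
        fmt.Println(waitTimeout(func() { time.Sleep(time.Second) }, 10*time.Millisecond))
    }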
@@ -674,7 +686,7 @@ func (container *Container) Unmount() error {
}

func (container *Container) logPath(name string) string {
	return path.Join(container.root, fmt.Sprintf("%s-%s.log", container.ID, name))
	return container.getRootResourcePath(fmt.Sprintf("%s-%s.log", container.ID, name))
}

func (container *Container) ReadLog(name string) (io.Reader, error) {

@@ -682,11 +694,11 @@ func (container *Container) ReadLog(name string) (io.Reader, error) {
}

func (container *Container) hostConfigPath() string {
	return path.Join(container.root, "hostconfig.json")
	return container.getRootResourcePath("hostconfig.json")
}

func (container *Container) jsonPath() string {
	return path.Join(container.root, "config.json")
	return container.getRootResourcePath("config.json")
}

// This method must be exported to be used from the lxc template

@@ -745,8 +757,10 @@ func (container *Container) Copy(resource string) (io.ReadCloser, error) {
	if err := container.Mount(); err != nil {
		return nil, err
	}

	var filter []string
	basePath := path.Join(container.basefs, resource)

	basePath := container.getResourcePath(resource)
	stat, err := os.Stat(basePath)
	if err != nil {
		container.Unmount()
@@ -844,7 +858,7 @@ func (container *Container) setupContainerDns() error {
		} else if len(daemon.config.DnsSearch) > 0 {
			dnsSearch = daemon.config.DnsSearch
		}
		container.ResolvConfPath = path.Join(container.root, "resolv.conf")
		container.ResolvConfPath = container.getRootResourcePath("resolv.conf")
		return resolvconf.Build(container.ResolvConfPath, dns, dnsSearch)
	} else {
		container.ResolvConfPath = "/etc/resolv.conf"
@@ -865,9 +879,17 @@ func (container *Container) initializeNetworking() error {
			container.Config.Hostname = parts[0]
			container.Config.Domainname = parts[1]
		}
		container.HostsPath = "/etc/hosts"

		return container.buildHostnameFile()
		content, err := ioutil.ReadFile("/etc/hosts")
		if os.IsNotExist(err) {
			return container.buildHostnameAndHostsFiles("")
		}
		if err != nil {
			return err
		}

		container.HostsPath = container.getRootResourcePath("hosts")
		return ioutil.WriteFile(container.HostsPath, content, 0644)
	} else if container.hostConfig.NetworkMode.IsContainer() {
		// we need to get the hosts files from the container to join
		nc, err := container.getNetworkedContainer()
@@ -982,12 +1004,12 @@ func (container *Container) setupWorkingDirectory() error {
	if container.Config.WorkingDir != "" {
		container.Config.WorkingDir = path.Clean(container.Config.WorkingDir)

		pthInfo, err := os.Stat(path.Join(container.basefs, container.Config.WorkingDir))
		pthInfo, err := os.Stat(container.getResourcePath(container.Config.WorkingDir))
		if err != nil {
			if !os.IsNotExist(err) {
				return err
			}
			if err := os.MkdirAll(path.Join(container.basefs, container.Config.WorkingDir), 0755); err != nil {
			if err := os.MkdirAll(container.getResourcePath(container.Config.WorkingDir), 0755); err != nil {
				return err
			}
		}
@@ -64,6 +64,11 @@ type Daemon struct {
	execDriver execdriver.Driver
}

// Install installs daemon capabilities to eng.
func (daemon *Daemon) Install(eng *engine.Engine) error {
	return eng.Register("container_inspect", daemon.ContainerInspect)
}

// Mountpoints should be private to the container
func remountPrivate(mountPoint string) error {
	mounted, err := mount.Mounted(mountPoint)

@@ -85,6 +90,7 @@ func (daemon *Daemon) List() []*Container {
	for e := daemon.containers.Front(); e != nil; e = e.Next() {
		containers.Add(e.Value.(*Container))
	}
	containers.Sort()
	return *containers
}
@@ -141,7 +147,13 @@ func (daemon *Daemon) load(id string) (*Container, error) {
}

// Register makes a container object usable by the daemon as <container.ID>
// This is a wrapper for register
func (daemon *Daemon) Register(container *Container) error {
	return daemon.register(container, true)
}

// register makes a container object usable by the daemon as <container.ID>
func (daemon *Daemon) register(container *Container, updateSuffixarray bool) error {
	if container.daemon != nil || daemon.Exists(container.ID) {
		return fmt.Errorf("Container is already loaded")
	}

@@ -165,7 +177,14 @@ func (daemon *Daemon) Register(container *Container) error {
	}
	// done
	daemon.containers.PushBack(container)
	daemon.idIndex.Add(container.ID)

	// don't update the Suffixarray if we're starting up
	// we'll waste time if we update it for every container
	if updateSuffixarray {
		daemon.idIndex.Add(container.ID)
	} else {
		daemon.idIndex.AddWithoutSuffixarrayUpdate(container.ID)
	}

	// FIXME: if the container is supposed to be running but is not, auto restart it?
	// if so, then we need to restart monitor and init a new lock
@@ -277,6 +296,10 @@ func (daemon *Daemon) Destroy(container *Container) error {
	daemon.idIndex.Delete(container.ID)
	daemon.containers.Remove(element)

	if _, err := daemon.containerGraph.Purge(container.ID); err != nil {
		utils.Debugf("Unable to remove container from link graph: %s", err)
	}

	if err := daemon.driver.Remove(container.ID); err != nil {
		return fmt.Errorf("Driver %s failed to remove root filesystem %s: %s", daemon.driver, container.ID, err)
	}

@@ -286,10 +309,6 @@ func (daemon *Daemon) Destroy(container *Container) error {
		return fmt.Errorf("Driver %s failed to remove init filesystem %s: %s", daemon.driver, initID, err)
	}

	if _, err := daemon.containerGraph.Purge(container.ID); err != nil {
		utils.Debugf("Unable to remove container from link graph: %s", err)
	}

	if err := os.RemoveAll(container.root); err != nil {
		return fmt.Errorf("Unable to remove filesystem for %v: %v", container.ID, err)
	}
@@ -329,8 +348,8 @@ func (daemon *Daemon) restore() error {
		}
	}

	register := func(container *Container) {
		if err := daemon.Register(container); err != nil {
	registerContainer := func(container *Container) {
		if err := daemon.register(container, false); err != nil {
			utils.Debugf("Failed to register container %s: %s", container.ID, err)
		}
	}

@@ -342,7 +361,7 @@ func (daemon *Daemon) restore() error {
		}
		e := entities[p]
		if container, ok := containers[e.ID()]; ok {
			register(container)
			registerContainer(container)
			delete(containers, e.ID())
		}
	}

@@ -359,9 +378,10 @@ func (daemon *Daemon) restore() error {
		if _, err := daemon.containerGraph.Set(container.Name, container.ID); err != nil {
			utils.Debugf("Setting default id - %s", err)
		}
		register(container)
		registerContainer(container)
	}

	daemon.idIndex.UpdateSuffixarray()
	if os.Getenv("DEBUG") == "" && os.Getenv("TEST") == "" {
		fmt.Printf(": done.\n")
	}
@@ -592,15 +612,18 @@ func (daemon *Daemon) Commit(container *Container, repository, tag, comment, aut
		containerID, containerImage string
		containerConfig             *runconfig.Config
	)

	if container != nil {
		containerID = container.ID
		containerImage = container.Image
		containerConfig = container.Config
	}

	img, err := daemon.graph.Create(rwTar, containerID, containerImage, comment, author, containerConfig, config)
	if err != nil {
		return nil, err
	}

	// Register the image if needed
	if repository != "" {
		if err := daemon.repositories.Set(repository, tag, img.ID, true); err != nil {
@@ -667,6 +690,35 @@ func (daemon *Daemon) RegisterLink(parent, child *Container, alias string) error
	return nil
}

func (daemon *Daemon) RegisterLinks(container *Container, hostConfig *runconfig.HostConfig) error {
	if hostConfig != nil && hostConfig.Links != nil {
		for _, l := range hostConfig.Links {
			parts, err := utils.PartParser("name:alias", l)
			if err != nil {
				return err
			}
			child, err := daemon.GetByName(parts["name"])
			if err != nil {
				return err
			}
			if child == nil {
				return fmt.Errorf("Could not get container for %s", parts["name"])
			}
			if err := daemon.RegisterLink(container, child, parts["alias"]); err != nil {
				return err
			}
		}

		// After we load all the links into the daemon
		// set them to nil on the hostconfig
		hostConfig.Links = nil
		if err := container.WriteHostConfig(); err != nil {
			return err
		}
	}
	return nil
}
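As a worked example of the PartParser call above: a link value such as "db:database" parsed against the "name:alias" template yields parts["name"] == "db" and parts["alias"] == "database", which are then resolved to a container and registered via RegisterLink.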

// FIXME: harmonize with NewGraph()
func NewDaemon(config *daemonconfig.Config, eng *engine.Engine) (*Daemon, error) {
	daemon, err := NewDaemonFromDirectory(config, eng)
@@ -680,6 +732,12 @@ func NewDaemonFromDirectory(config *daemonconfig.Config, eng *engine.Engine) (*D
	if !config.EnableSelinuxSupport {
		selinux.SetDisabled()
	}

	// Create the root directory if it doesn't exist
	if err := os.MkdirAll(config.Root, 0700); err != nil && !os.IsExist(err) {
		return nil, err
	}

	// Set the default driver
	graphdriver.DefaultDriver = config.GraphDriver
@@ -842,6 +900,10 @@ func (daemon *Daemon) Close() error {
		utils.Errorf("daemon.containerGraph.Close(): %s", err.Error())
		errorsStrings = append(errorsStrings, err.Error())
	}
	if err := mount.Unmount(daemon.config.Root); err != nil {
		utils.Errorf("daemon.Umount(%s): %s", daemon.config.Root, err.Error())
		errorsStrings = append(errorsStrings, err.Error())
	}
	if len(errorsStrings) > 0 {
		return fmt.Errorf("%s", strings.Join(errorsStrings, ", "))
	}
@@ -103,9 +103,10 @@ type NetworkInterface struct {
}

type Resources struct {
	Memory     int64 `json:"memory"`
	MemorySwap int64 `json:"memory_swap"`
	CpuShares  int64 `json:"cpu_shares"`
	Memory     int64  `json:"memory"`
	MemorySwap int64  `json:"memory_swap"`
	CpuShares  int64  `json:"cpu_shares"`
	Cpuset     string `json:"cpuset"`
}
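The new Cpuset field carries a cpuset-cgroup CPU list (ranges and commas, as accepted by cpuset.cpus) rather than a count. An illustrative literal, with example values only:

    res := Resources{
        Memory:     512 * 1024 * 1024, // bytes
        MemorySwap: -1,                // unlimited swap
        CpuShares:  1024,
        Cpuset:     "0-2,4", // pin to CPUs 0, 1, 2 and 4
    }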

type Mount struct {
@@ -12,7 +12,7 @@ import (
func NewDriver(name, root, initPath string, sysInfo *sysinfo.SysInfo) (execdriver.Driver, error) {
	switch name {
	case "lxc":
		// we want to five the lxc driver the full docker root because it needs
		// we want to give the lxc driver the full docker root because it needs
		// to access and write config and template files in /var/lib/docker/containers/*
		// to be backwards compatible
		return lxc.NewDriver(root, sysInfo.AppArmor)
@@ -15,8 +15,8 @@ import (
	"time"

	"github.com/dotcloud/docker/daemon/execdriver"
	"github.com/dotcloud/docker/pkg/cgroups"
	"github.com/dotcloud/docker/pkg/label"
	"github.com/dotcloud/docker/pkg/libcontainer/cgroups"
	"github.com/dotcloud/docker/pkg/system"
	"github.com/dotcloud/docker/utils"
)
@@ -268,18 +268,14 @@ func (d *driver) waitForStart(c *execdriver.Command, waitLock chan struct{}) (in
		}

		output, err = d.getInfo(c.ID)
		if err != nil {
		output, err = d.getInfo(c.ID)
		if err == nil {
			info, err := parseLxcInfo(string(output))
			if err != nil {
				return -1, err
			}
		}
		info, err := parseLxcInfo(string(output))
		if err != nil {
			return -1, err
		}
		if info.Running {
			return info.Pid, nil
			if info.Running {
				return info.Pid, nil
			}
		}
		time.Sleep(50 * time.Millisecond)
	}
@@ -15,7 +15,9 @@ lxc.network.type = veth
lxc.network.link = {{.Network.Interface.Bridge}}
lxc.network.name = eth0
lxc.network.mtu = {{.Network.Mtu}}
{{else if not .Network.HostNetworking}}
{{else if .Network.HostNetworking}}
lxc.network.type = none
{{else}}
# network is disabled (-n=false)
lxc.network.type = empty
lxc.network.flags = up

@@ -126,6 +128,9 @@ lxc.cgroup.memory.memsw.limit_in_bytes = {{$memSwap}}
{{if .Resources.CpuShares}}
lxc.cgroup.cpu.shares = {{.Resources.CpuShares}}
{{end}}
{{if .Resources.Cpuset}}
lxc.cgroup.cpuset.cpus = {{.Resources.Cpuset}}
{{end}}
{{end}}

{{if .Config.lxc}}
@@ -8,7 +8,7 @@ import (
	"strings"

	"github.com/dotcloud/docker/pkg/libcontainer"
	"github.com/dotcloud/docker/utils"
	"github.com/dotcloud/docker/pkg/units"
)

type Action func(*libcontainer.Container, interface{}, string) error

@@ -75,7 +75,7 @@ func memory(container *libcontainer.Container, context interface{}, value string
		return fmt.Errorf("cannot set cgroups when they are disabled")
	}

	v, err := utils.RAMInBytes(value)
	v, err := units.RAMInBytes(value)
	if err != nil {
		return err
	}

@@ -88,7 +88,7 @@ func memoryReservation(container *libcontainer.Container, context interface{}, v
		return fmt.Errorf("cannot set cgroups when they are disabled")
	}

	v, err := utils.RAMInBytes(value)
	v, err := units.RAMInBytes(value)
	if err != nil {
		return err
	}
@@ -109,12 +109,19 @@ func memorySwap(container *libcontainer.Container, context interface{}, value st
}

func addCap(container *libcontainer.Container, context interface{}, value string) error {
	container.CapabilitiesMask[value] = true
	container.Capabilities = append(container.Capabilities, value)
	return nil
}

func dropCap(container *libcontainer.Container, context interface{}, value string) error {
	container.CapabilitiesMask[value] = false
	// If the capability is specified multiple times, remove all instances.
	// (Filter into a fresh slice; deleting while ranging over the same slice skips entries.)
	kept := container.Capabilities[:0]
	for _, capability := range container.Capabilities {
		if capability != value {
			kept = append(kept, capability)
		}
	}
	container.Capabilities = kept

	// If the capability wasn't present, dropping it is a no-op anyway.
	return nil
}
@@ -4,8 +4,19 @@ import (
	"testing"

	"github.com/dotcloud/docker/daemon/execdriver/native/template"
	"github.com/dotcloud/docker/pkg/libcontainer"
)

// Checks whether the expected capability is specified in the capabilities.
func hasCapability(expected string, capabilities []string) bool {
	for _, capability := range capabilities {
		if capability == expected {
			return true
		}
	}
	return false
}

func TestSetReadonlyRootFs(t *testing.T) {
	var (
		container = template.New()

@@ -39,10 +50,10 @@ func TestConfigurationsDoNotConflict(t *testing.T) {
		t.Fatal(err)
	}

	if !container1.CapabilitiesMask["NET_ADMIN"] {
	if !hasCapability("NET_ADMIN", container1.Capabilities) {
		t.Fatal("container one should have NET_ADMIN enabled")
	}
	if container2.CapabilitiesMask["NET_ADMIN"] {
	if hasCapability("NET_ADMIN", container2.Capabilities) {
		t.Fatal("container two should not have NET_ADMIN enabled")
	}
}

@@ -138,10 +149,10 @@ func TestAddCap(t *testing.T) {
		t.Fatal(err)
	}

	if !container.CapabilitiesMask["MKNOD"] {
	if !hasCapability("MKNOD", container.Capabilities) {
		t.Fatal("container should have MKNOD enabled")
	}
	if !container.CapabilitiesMask["SYS_ADMIN"] {
	if !hasCapability("SYS_ADMIN", container.Capabilities) {
		t.Fatal("container should have SYS_ADMIN enabled")
	}
}

@@ -154,14 +165,12 @@ func TestDropCap(t *testing.T) {
		}
	)
	// enable all caps, as in privileged mode
	for key := range container.CapabilitiesMask {
		container.CapabilitiesMask[key] = true
	}
	container.Capabilities = libcontainer.GetAllCapabilities()
	if err := ParseConfiguration(container, nil, opts); err != nil {
		t.Fatal(err)
	}

	if container.CapabilitiesMask["MKNOD"] {
	if hasCapability("MKNOD", container.Capabilities) {
		t.Fatal("container should not have MKNOD enabled")
	}
}
@@ -3,6 +3,7 @@ package native
import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"

	"github.com/dotcloud/docker/daemon/execdriver"

@@ -10,6 +11,7 @@ import (
	"github.com/dotcloud/docker/daemon/execdriver/native/template"
	"github.com/dotcloud/docker/pkg/apparmor"
	"github.com/dotcloud/docker/pkg/libcontainer"
	"github.com/dotcloud/docker/pkg/libcontainer/mount/nodes"
)

// createContainer populates and configures the container type with the
@@ -34,8 +36,6 @@ func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Container
		if err := d.setPrivileged(container); err != nil {
			return nil, err
		}
	} else {
		container.Mounts = append(container.Mounts, libcontainer.Mount{Type: "devtmpfs"})
	}
	if err := d.setupCgroups(container, c); err != nil {
		return nil, err

@@ -46,7 +46,11 @@ func (d *driver) createContainer(c *execdriver.Command) (*libcontainer.Container
	if err := d.setupLabels(container, c); err != nil {
		return nil, err
	}
	if err := configuration.ParseConfiguration(container, d.activeContainers, c.Config["native"]); err != nil {
	cmds := make(map[string]*exec.Cmd)
	for k, v := range d.activeContainers {
		cmds[k] = v.cmd
	}
	if err := configuration.ParseConfiguration(container, cmds, c.Config["native"]); err != nil {
		return nil, err
	}
	return container, nil
@@ -82,10 +86,12 @@ func (d *driver) createNetwork(container *libcontainer.Container, c *execdriver.
	}

	if c.Network.ContainerID != "" {
		cmd := d.activeContainers[c.Network.ContainerID]
		if cmd == nil || cmd.Process == nil {
		active := d.activeContainers[c.Network.ContainerID]
		if active == nil || active.cmd.Process == nil {
			return fmt.Errorf("%s is not a valid running container to join", c.Network.ContainerID)
		}
		cmd := active.cmd

		nspath := filepath.Join("/proc", fmt.Sprint(cmd.Process.Pid), "ns", "net")
		container.Networks = append(container.Networks, &libcontainer.Network{
			Type: "netns",
@@ -97,14 +103,17 @@ func (d *driver) createNetwork(container *libcontainer.Container, c *execdriver.
	return nil
}

func (d *driver) setPrivileged(container *libcontainer.Container) error {
	for key := range container.CapabilitiesMask {
		container.CapabilitiesMask[key] = true
	}
func (d *driver) setPrivileged(container *libcontainer.Container) (err error) {
	container.Capabilities = libcontainer.GetAllCapabilities()
	container.Cgroups.DeviceAccess = true

	delete(container.Context, "restrictions")

	container.OptionalDeviceNodes = nil
	if container.RequiredDeviceNodes, err = nodes.GetHostDeviceNodes(); err != nil {
		return err
	}

	if apparmor.IsEnabled() {
		container.Context["apparmor_profile"] = "unconfined"
	}

@@ -117,6 +126,7 @@ func (d *driver) setupCgroups(container *libcontainer.Container, c *execdriver.C
		container.Cgroups.Memory = c.Resources.Memory
		container.Cgroups.MemoryReservation = c.Resources.Memory
		container.Cgroups.MemorySwap = c.Resources.MemorySwap
		container.Cgroups.CpusetCpus = c.Resources.Cpuset
	}
	return nil
}
@@ -7,14 +7,14 @@ import (
	"os"
	"os/exec"
	"path/filepath"
	"strconv"
	"strings"
	"syscall"

	"github.com/dotcloud/docker/daemon/execdriver"
	"github.com/dotcloud/docker/pkg/apparmor"
	"github.com/dotcloud/docker/pkg/cgroups"
	"github.com/dotcloud/docker/pkg/libcontainer"
	"github.com/dotcloud/docker/pkg/libcontainer/cgroups/fs"
	"github.com/dotcloud/docker/pkg/libcontainer/cgroups/systemd"
	"github.com/dotcloud/docker/pkg/libcontainer/nsinit"
	"github.com/dotcloud/docker/pkg/system"
)
@@ -53,24 +53,31 @@ func init() {
	})
}

type activeContainer struct {
	container *libcontainer.Container
	cmd       *exec.Cmd
}

type driver struct {
	root             string
	initPath         string
	activeContainers map[string]*exec.Cmd
	activeContainers map[string]*activeContainer
}

func NewDriver(root, initPath string) (*driver, error) {
	if err := os.MkdirAll(root, 0700); err != nil {
		return nil, err
	}

	// native driver root is at docker_root/execdriver/native. Put apparmor at docker_root
	if err := apparmor.InstallDefaultProfile(filepath.Join(root, "../..", BackupApparmorProfilePath)); err != nil {
		return nil, err
	}

	return &driver{
		root:             root,
		initPath:         initPath,
		activeContainers: make(map[string]*exec.Cmd),
		activeContainers: make(map[string]*activeContainer),
	}, nil
}
@@ -80,7 +87,10 @@ func (d *driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, startCallba
	if err != nil {
		return -1, err
	}
	d.activeContainers[c.ID] = &c.Cmd
	d.activeContainers[c.ID] = &activeContainer{
		container: container,
		cmd:       &c.Cmd,
	}

	var (
		dataPath = filepath.Join(d.root, c.ID)
@@ -175,41 +185,18 @@ func (d *driver) Name() string {
	return fmt.Sprintf("%s-%s", DriverName, Version)
}

// TODO: this can be improved with our driver
// there has to be a better way to do this
func (d *driver) GetPidsForContainer(id string) ([]int, error) {
	pids := []int{}
	active := d.activeContainers[id]

	subsystem := "devices"
	cgroupRoot, err := cgroups.FindCgroupMountpoint(subsystem)
	if err != nil {
		return pids, err
	}
	cgroupDir, err := cgroups.GetThisCgroupDir(subsystem)
	if err != nil {
		return pids, err
	if active == nil {
		return nil, fmt.Errorf("active container for %s does not exist", id)
	}
	c := active.container.Cgroups

	filename := filepath.Join(cgroupRoot, cgroupDir, id, "tasks")
	if _, err := os.Stat(filename); os.IsNotExist(err) {
		filename = filepath.Join(cgroupRoot, cgroupDir, "docker", id, "tasks")
	if systemd.UseSystemd() {
		return systemd.GetPids(c)
	}

	output, err := ioutil.ReadFile(filename)
	if err != nil {
		return pids, err
	}
	for _, p := range strings.Split(string(output), "\n") {
		if len(p) == 0 {
			continue
		}
		pid, err := strconv.Atoi(p)
		if err != nil {
			return pids, fmt.Errorf("Invalid pid '%s': %s", p, err)
		}
		pids = append(pids, pid)
	}
	return pids, nil
	return fs.GetPids(c)
}

func (d *driver) writeContainerFile(container *libcontainer.Container, id string) error {

@@ -225,6 +212,8 @@ func (d *driver) createContainerRoot(id string) error {
}

func (d *driver) removeContainerRoot(id string) error {
	delete(d.activeContainers, id)

	return os.RemoveAll(filepath.Join(d.root, id))
}
@@ -2,30 +2,25 @@ package template

import (
	"github.com/dotcloud/docker/pkg/apparmor"
	"github.com/dotcloud/docker/pkg/cgroups"
	"github.com/dotcloud/docker/pkg/libcontainer"
	"github.com/dotcloud/docker/pkg/libcontainer/cgroups"
	"github.com/dotcloud/docker/pkg/libcontainer/mount/nodes"
)

// New returns the docker default configuration for libcontainer
func New() *libcontainer.Container {
	container := &libcontainer.Container{
		CapabilitiesMask: map[string]bool{
			"SETPCAP":        false,
			"SYS_MODULE":     false,
			"SYS_RAWIO":      false,
			"SYS_PACCT":      false,
			"SYS_ADMIN":      false,
			"SYS_NICE":       false,
			"SYS_RESOURCE":   false,
			"SYS_TIME":       false,
			"SYS_TTY_CONFIG": false,
			"AUDIT_WRITE":    false,
			"AUDIT_CONTROL":  false,
			"MAC_OVERRIDE":   false,
			"MAC_ADMIN":      false,
			"NET_ADMIN":      false,
			"MKNOD":          true,
			"SYSLOG":         false,
		Capabilities: []string{
			"CHOWN",
			"DAC_OVERRIDE",
			"FOWNER",
			"MKNOD",
			"NET_RAW",
			"SETGID",
			"SETUID",
			"SETFCAP",
			"SETPCAP",
			"NET_BIND_SERVICE",
		},
		Namespaces: map[string]bool{
			"NEWNS": true,

@@ -38,7 +33,9 @@ func New() *libcontainer.Container {
			Parent:       "docker",
			DeviceAccess: false,
		},
		Context: libcontainer.Context{},
		Context:             libcontainer.Context{},
		RequiredDeviceNodes: nodes.DefaultNodes,
		OptionalDeviceNodes: []string{"/dev/fuse"},
	}
	if apparmor.IsEnabled() {
		container.Context["apparmor_profile"] = "docker-default"
@@ -54,7 +54,7 @@ type Driver struct {
func Init(root string) (graphdriver.Driver, error) {
	// Try to load the aufs kernel module
	if err := supportsAufs(); err != nil {
		return nil, err
		return nil, graphdriver.ErrNotSupported
	}
	paths := []string{
		"mnt",

@@ -19,7 +19,7 @@ var (
func testInit(dir string, t *testing.T) graphdriver.Driver {
	d, err := Init(dir)
	if err != nil {
		if err == ErrAufsNotSupported {
		if err == graphdriver.ErrNotSupported {
			t.Skip(err)
		} else {
			t.Fatal(err)

@@ -31,7 +31,7 @@ func Init(home string) (graphdriver.Driver, error) {
	}

	if buf.Type != 0x9123683E {
		return nil, fmt.Errorf("%s is not a btrfs filesystem", rootdir)
		return nil, graphdriver.ErrNotSupported
	}

	return &Driver{
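Returning the shared graphdriver.ErrNotSupported sentinel (rather than a driver-specific error or message) is what lets driver selection fall through a priority list: callers can skip backends the host cannot support and only fail hard on real errors. A sketch of such a selection loop (simplified and hypothetical; the real logic lives in the graphdriver package, and GetDriver's exact signature is assumed here):

    func pickDriver(home string) (graphdriver.Driver, error) {
        // illustrative priority order
        for _, name := range []string{"aufs", "btrfs", "devicemapper", "vfs"} {
            d, err := graphdriver.GetDriver(name, home)
            if err == graphdriver.ErrNotSupported {
                continue // backend unavailable on this host; try the next one
            }
            return d, err // a usable driver, or a genuine failure
        }
        return nil, graphdriver.ErrNotSupported
    }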

28 daemon/graphdriver/btrfs/btrfs_test.go (new file)

@@ -0,0 +1,28 @@
package btrfs

import (
	"github.com/dotcloud/docker/daemon/graphdriver/graphtest"
	"testing"
)

// This avoids creating a new driver for each test if all tests are run
// Make sure to put new tests between TestBtrfsSetup and TestBtrfsTeardown
func TestBtrfsSetup(t *testing.T) {
	graphtest.GetDriver(t, "btrfs")
}

func TestBtrfsCreateEmpty(t *testing.T) {
	graphtest.DriverTestCreateEmpty(t, "btrfs")
}

func TestBtrfsCreateBase(t *testing.T) {
	graphtest.DriverTestCreateBase(t, "btrfs")
}

func TestBtrfsCreateSnap(t *testing.T) {
	graphtest.DriverTestCreateSnap(t, "btrfs")
}

func TestBtrfsTeardown(t *testing.T) {
	graphtest.PutDriver(t)
}
@@ -4,6 +4,9 @@ package devmapper

 import (
 	"fmt"
+	"os"
+	"syscall"
+
 	"github.com/dotcloud/docker/utils"
 )

@@ -14,7 +17,7 @@ func stringToLoopName(src string) [LoNameSize]uint8 {
 }

 func getNextFreeLoopbackIndex() (int, error) {
-	f, err := osOpenFile("/dev/loop-control", osORdOnly, 0644)
+	f, err := os.OpenFile("/dev/loop-control", os.O_RDONLY, 0644)
 	if err != nil {
 		return 0, err
 	}
@@ -27,27 +30,27 @@ func getNextFreeLoopbackIndex() (int, error) {
 	return index, err
 }

-func openNextAvailableLoopback(index int, sparseFile *osFile) (loopFile *osFile, err error) {
+func openNextAvailableLoopback(index int, sparseFile *os.File) (loopFile *os.File, err error) {
 	// Start looking for a free /dev/loop
 	for {
 		target := fmt.Sprintf("/dev/loop%d", index)
 		index++

-		fi, err := osStat(target)
+		fi, err := os.Stat(target)
 		if err != nil {
-			if osIsNotExist(err) {
+			if os.IsNotExist(err) {
 				utils.Errorf("There are no more loopback device available.")
 			}
 			return nil, ErrAttachLoopbackDevice
 		}

-		if fi.Mode()&osModeDevice != osModeDevice {
+		if fi.Mode()&os.ModeDevice != os.ModeDevice {
 			utils.Errorf("Loopback device %s is not a block device.", target)
 			continue
 		}

 		// OpenFile adds O_CLOEXEC
-		loopFile, err = osOpenFile(target, osORdWr, 0644)
+		loopFile, err = os.OpenFile(target, os.O_RDWR, 0644)
 		if err != nil {
 			utils.Errorf("Error openning loopback device: %s", err)
 			return nil, ErrAttachLoopbackDevice

@@ -58,7 +61,7 @@ func openNextAvailableLoopback(index int, sparseFile *osFile) (loopFile *osFile,
 			loopFile.Close()

 			// If the error is EBUSY, then try the next loopback
-			if err != sysEBusy {
+			if err != syscall.EBUSY {
 				utils.Errorf("Cannot set up loopback device %s: %s", target, err)
 				return nil, ErrAttachLoopbackDevice
 			}

@@ -80,8 +83,8 @@ func openNextAvailableLoopback(index int, sparseFile *osFile) (loopFile *osFile,
 }

 // attachLoopDevice attaches the given sparse file to the next
-// available loopback device. It returns an opened *osFile.
-func attachLoopDevice(sparseName string) (loop *osFile, err error) {
+// available loopback device. It returns an opened *os.File.
+func attachLoopDevice(sparseName string) (loop *os.File, err error) {

 	// Try to retrieve the next available loopback device via syscall.
 	// If it fails, we discard error and start loopking for a

@@ -92,7 +95,7 @@ func attachLoopDevice(sparseName string) (loop *osFile, err error) {
 	}

 	// OpenFile adds O_CLOEXEC
-	sparseFile, err := osOpenFile(sparseName, osORdWr, 0644)
+	sparseFile, err := os.OpenFile(sparseName, os.O_RDWR, 0644)
 	if err != nil {
 		utils.Errorf("Error openning sparse file %s: %s", sparseName, err)
 		return nil, ErrAttachLoopbackDevice
@@ -8,10 +8,11 @@ import (
 	"fmt"
 	"io"
 	"io/ioutil"
 	"os"
+	"os/exec"
 	"path"
 	"path/filepath"
 	"strconv"
 	"strings"
 	"sync"
 	"syscall"
 	"time"

@@ -62,8 +63,7 @@ type DeviceSet struct {
 	devicePrefix     string
 	TransactionId    uint64
 	NewTransactionId uint64
-	nextFreeDevice   int
-	sawBusy          bool
+	nextDeviceId     int
 }

 type DiskUsage struct {
@@ -109,7 +109,19 @@ func (devices *DeviceSet) loopbackDir() string {
 	return path.Join(devices.root, "devicemapper")
 }

-func (devices *DeviceSet) jsonFile() string {
+func (devices *DeviceSet) metadataDir() string {
+	return path.Join(devices.root, "metadata")
+}
+
+func (devices *DeviceSet) metadataFile(info *DevInfo) string {
+	file := info.Hash
+	if file == "" {
+		file = "base"
+	}
+	return path.Join(devices.metadataDir(), file)
+}
+
+func (devices *DeviceSet) oldMetadataFile() string {
 	return path.Join(devices.loopbackDir(), "json")
 }
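With this change each device gets its own JSON file under <root>/metadata, named after the device hash (with "" mapping to the reserved name "base"), instead of one shared "json" blob under the loopback directory. The path logic in isolation, using an illustrative root directory:

    package main

    import (
        "fmt"
        "path"
    )

    // metadataPath mirrors metadataFile above: one file per device, keyed
    // by hash, with the empty hash mapping to "base".
    func metadataPath(root, hash string) string {
        file := hash
        if file == "" {
            file = "base"
        }
        return path.Join(root, "metadata", file)
    }

    func main() {
        fmt.Println(metadataPath("/var/lib/docker", ""))      // /var/lib/docker/metadata/base
        fmt.Println(metadataPath("/var/lib/docker", "abc12")) // /var/lib/docker/metadata/abc12
    }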
@@ -125,7 +137,7 @@ func (devices *DeviceSet) hasImage(name string) bool {
 	dirname := devices.loopbackDir()
 	filename := path.Join(dirname, name)

-	_, err := osStat(filename)
+	_, err := os.Stat(filename)
 	return err == nil
 }
@@ -137,16 +149,16 @@ func (devices *DeviceSet) ensureImage(name string, size int64) (string, error) {
 	dirname := devices.loopbackDir()
 	filename := path.Join(dirname, name)

-	if err := osMkdirAll(dirname, 0700); err != nil && !osIsExist(err) {
+	if err := os.MkdirAll(dirname, 0700); err != nil && !os.IsExist(err) {
 		return "", err
 	}

-	if _, err := osStat(filename); err != nil {
-		if !osIsNotExist(err) {
+	if _, err := os.Stat(filename); err != nil {
+		if !os.IsNotExist(err) {
 			return "", err
 		}
 		utils.Debugf("Creating loopback file %s for device-manage use", filename)
-		file, err := osOpenFile(filename, osORdWr|osOCreate, 0600)
+		file, err := os.OpenFile(filename, os.O_RDWR|os.O_CREATE, 0600)
 		if err != nil {
 			return "", err
 		}
@@ -159,26 +171,24 @@ func (devices *DeviceSet) ensureImage(name string, size int64) (string, error) {
 	return filename, nil
 }

-func (devices *DeviceSet) allocateDeviceId() int {
-	// TODO: Add smarter reuse of deleted devices
-	id := devices.nextFreeDevice
-	devices.nextFreeDevice = devices.nextFreeDevice + 1
-	return id
-}
-
 func (devices *DeviceSet) allocateTransactionId() uint64 {
 	devices.NewTransactionId = devices.NewTransactionId + 1
 	return devices.NewTransactionId
 }

-func (devices *DeviceSet) saveMetadata() error {
-	devices.devicesLock.Lock()
-	jsonData, err := json.Marshal(devices.MetaData)
-	devices.devicesLock.Unlock()
+func (devices *DeviceSet) removeMetadata(info *DevInfo) error {
+	if err := os.RemoveAll(devices.metadataFile(info)); err != nil {
+		return fmt.Errorf("Error removing metadata file %s: %s", devices.metadataFile(info), err)
+	}
+	return nil
+}
+
+func (devices *DeviceSet) saveMetadata(info *DevInfo) error {
+	jsonData, err := json.Marshal(info)
 	if err != nil {
 		return fmt.Errorf("Error encoding metadata to json: %s", err)
 	}
-	tmpFile, err := ioutil.TempFile(filepath.Dir(devices.jsonFile()), ".json")
+	tmpFile, err := ioutil.TempFile(devices.metadataDir(), ".tmp")
 	if err != nil {
 		return fmt.Errorf("Error creating metadata file: %s", err)
 	}
@@ -196,7 +206,7 @@ func (devices *DeviceSet) saveMetadata() error {
 	if err := tmpFile.Close(); err != nil {
 		return fmt.Errorf("Error closing metadata file %s: %s", tmpFile.Name(), err)
 	}
-	if err := osRename(tmpFile.Name(), devices.jsonFile()); err != nil {
+	if err := os.Rename(tmpFile.Name(), devices.metadataFile(info)); err != nil {
 		return fmt.Errorf("Error committing metadata file %s: %s", tmpFile.Name(), err)
 	}
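saveMetadata writes to a temp file in the metadata directory and then renames it over the target. Because rename(2) is atomic within one filesystem, a crash leaves either the old file or the complete new file, never a torn write. A self-contained sketch of the same pattern:

    package main

    import (
        "io/ioutil"
        "os"
        "path/filepath"
    )

    // writeFileAtomic uses the temp-file-plus-rename pattern from
    // saveMetadata. The temp file must live in the target's directory so
    // the final rename stays on a single filesystem.
    func writeFileAtomic(target string, data []byte) error {
        tmp, err := ioutil.TempFile(filepath.Dir(target), ".tmp")
        if err != nil {
            return err
        }
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            os.Remove(tmp.Name())
            return err
        }
        if err := tmp.Close(); err != nil {
            os.Remove(tmp.Name())
            return err
        }
        return os.Rename(tmp.Name(), target)
    }

    func main() {
        _ = writeFileAtomic("/tmp/example.json", []byte(`{"ok":true}`))
    }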
@@ -214,7 +224,12 @@ func (devices *DeviceSet) lookupDevice(hash string) (*DevInfo, error) {
 	defer devices.devicesLock.Unlock()
 	info := devices.Devices[hash]
 	if info == nil {
-		return nil, fmt.Errorf("Unknown device %s", hash)
+		info = devices.loadMetadata(hash)
+		if info == nil {
+			return nil, fmt.Errorf("Unknown device %s", hash)
+		}
+
+		devices.Devices[hash] = info
 	}
 	return info, nil
 }
@@ -234,7 +249,7 @@ func (devices *DeviceSet) registerDevice(id int, hash string, size uint64) (*Dev
 	devices.Devices[hash] = info
 	devices.devicesLock.Unlock()

-	if err := devices.saveMetadata(); err != nil {
+	if err := devices.saveMetadata(info); err != nil {
 		// Try to remove unused device
 		devices.devicesLock.Lock()
 		delete(devices.Devices, hash)
@@ -258,9 +273,9 @@ func (devices *DeviceSet) activateDeviceIfNeeded(info *DevInfo) error {
 func (devices *DeviceSet) createFilesystem(info *DevInfo) error {
 	devname := info.DevName()

-	err := execRun("mkfs.ext4", "-E", "discard,lazy_itable_init=0,lazy_journal_init=0", devname)
+	err := exec.Command("mkfs.ext4", "-E", "nodiscard,lazy_itable_init=0,lazy_journal_init=0", devname).Run()
 	if err != nil {
-		err = execRun("mkfs.ext4", "-E", "discard,lazy_itable_init=0", devname)
+		err = exec.Command("mkfs.ext4", "-E", "nodiscard,lazy_itable_init=0", devname).Run()
 	}
 	if err != nil {
 		utils.Debugf("\n--->Err: %s\n", err)
@@ -269,9 +284,7 @@ func (devices *DeviceSet) createFilesystem(info *DevInfo) error {
 	return nil
 }

-func (devices *DeviceSet) loadMetaData() error {
-	utils.Debugf("loadMetadata()")
-	defer utils.Debugf("loadMetadata END")
+func (devices *DeviceSet) initMetaData() error {
 	_, _, _, params, err := getStatus(devices.getPoolName())
 	if err != nil {
 		utils.Debugf("\n--->Err: %s\n", err)

@@ -284,37 +297,59 @@ func (devices *DeviceSet) loadMetaData() error {
 	}
 	devices.NewTransactionId = devices.TransactionId

-	jsonData, err := ioutil.ReadFile(devices.jsonFile())
-	if err != nil && !osIsNotExist(err) {
+	// Migrate old metadatafile
+
+	jsonData, err := ioutil.ReadFile(devices.oldMetadataFile())
+	if err != nil && !os.IsNotExist(err) {
 		utils.Debugf("\n--->Err: %s\n", err)
 		return err
 	}

-	devices.MetaData.Devices = make(map[string]*DevInfo)
 	if jsonData != nil {
-		if err := json.Unmarshal(jsonData, &devices.MetaData); err != nil {
+		m := MetaData{Devices: make(map[string]*DevInfo)}
+
+		if err := json.Unmarshal(jsonData, &m); err != nil {
 			utils.Debugf("\n--->Err: %s\n", err)
 			return err
 		}
-	}

-	for hash, d := range devices.Devices {
-		d.Hash = hash
-		d.devices = devices
+		for hash, info := range m.Devices {
+			info.Hash = hash

-		if d.DeviceId >= devices.nextFreeDevice {
-			devices.nextFreeDevice = d.DeviceId + 1
+			// If the transaction id is larger than the actual one we lost the device due to some crash
+			if info.TransactionId <= devices.TransactionId {
+				devices.saveMetadata(info)
+			}
 		}
+		if err := os.Rename(devices.oldMetadataFile(), devices.oldMetadataFile()+".migrated"); err != nil {
+			return err
 		}

-		// If the transaction id is larger than the actual one we lost the device due to some crash
-		if d.TransactionId > devices.TransactionId {
-			utils.Debugf("Removing lost device %s with id %d", hash, d.TransactionId)
-			delete(devices.Devices, hash)
-		}
-	}

 	return nil
 }

+func (devices *DeviceSet) loadMetadata(hash string) *DevInfo {
+	info := &DevInfo{Hash: hash, devices: devices}
+
+	jsonData, err := ioutil.ReadFile(devices.metadataFile(info))
+	if err != nil {
+		return nil
+	}
+
+	if err := json.Unmarshal(jsonData, &info); err != nil {
+		return nil
+	}
+
+	// If the transaction id is larger than the actual one we lost the device due to some crash
+	if info.TransactionId > devices.TransactionId {
+		return nil
+	}
+
+	return info
+}

 func (devices *DeviceSet) setupBaseImage() error {
 	oldInfo, _ := devices.lookupDevice("")
 	if oldInfo != nil && oldInfo.Initialized {
@@ -331,14 +366,17 @@ func (devices *DeviceSet) setupBaseImage() error {

 	utils.Debugf("Initializing base device-manager snapshot")

-	id := devices.allocateDeviceId()
+	id := devices.nextDeviceId

 	// Create initial device
-	if err := createDevice(devices.getPoolDevName(), id); err != nil {
+	if err := createDevice(devices.getPoolDevName(), &id); err != nil {
 		utils.Debugf("\n--->Err: %s\n", err)
 		return err
 	}

+	// Ids are 24bit, so wrap around
+	devices.nextDeviceId = (id + 1) & 0xffffff
+
 	utils.Debugf("Registering base device (id %v) with FS size %v", id, DefaultBaseFsSize)
 	info, err := devices.registerDevice(id, "", DefaultBaseFsSize)
 	if err != nil {

@@ -360,7 +398,7 @@ func (devices *DeviceSet) setupBaseImage() error {
 	}

 	info.Initialized = true
-	if err = devices.saveMetadata(); err != nil {
+	if err = devices.saveMetadata(info); err != nil {
 		info.Initialized = false
 		utils.Debugf("\n--->Err: %s\n", err)
 		return err
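Thin-provisioning device ids are 24-bit, so the allocator masks with 0xffffff after incrementing; combined with the retry-on-exists loop in createDevice below, ids still in use after a wrap are simply skipped. The arithmetic in isolation:

    package main

    import "fmt"

    // nextId advances a 24-bit device id, wrapping past 0xffffff back to 0.
    func nextId(id int) int {
        return (id + 1) & 0xffffff
    }

    func main() {
        fmt.Println(nextId(5))        // 6
        fmt.Println(nextId(0xffffff)) // 0 (wrap around)
    }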
@@ -372,11 +410,11 @@ func (devices *DeviceSet) setupBaseImage() error {
 func setCloseOnExec(name string) {
 	if fileInfos, _ := ioutil.ReadDir("/proc/self/fd"); fileInfos != nil {
 		for _, i := range fileInfos {
-			link, _ := osReadlink(filepath.Join("/proc/self/fd", i.Name()))
+			link, _ := os.Readlink(filepath.Join("/proc/self/fd", i.Name()))
 			if link == name {
 				fd, err := strconv.Atoi(i.Name())
 				if err == nil {
-					sysCloseOnExec(fd)
+					syscall.CloseOnExec(fd)
 				}
 			}
 		}
@@ -388,10 +426,6 @@ func (devices *DeviceSet) log(level int, file string, line int, dmError int, mes
 		return // Ignore _LOG_DEBUG
 	}

-	if strings.Contains(message, "busy") {
-		devices.sawBusy = true
-	}
-
 	utils.Debugf("libdevmapper(%d): %s:%d (%d) %s", level, file, line, dmError, message)
 }
@@ -408,7 +442,7 @@ func (devices *DeviceSet) ResizePool(size int64) error {
 	datafilename := path.Join(dirname, "data")
 	metadatafilename := path.Join(dirname, "metadata")

-	datafile, err := osOpenFile(datafilename, osORdWr, 0)
+	datafile, err := os.OpenFile(datafilename, os.O_RDWR, 0)
 	if datafile == nil {
 		return err
 	}

@@ -429,7 +463,7 @@ func (devices *DeviceSet) ResizePool(size int64) error {
 	}
 	defer dataloopback.Close()

-	metadatafile, err := osOpenFile(metadatafilename, osORdWr, 0)
+	metadatafile, err := os.OpenFile(metadatafilename, os.O_RDWR, 0)
 	if metadatafile == nil {
 		return err
 	}
@@ -472,39 +506,17 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
 	logInit(devices)

-	// Make sure the sparse images exist in <root>/devicemapper/data and
-	// <root>/devicemapper/metadata
-
-	hasData := devices.hasImage("data")
-	hasMetadata := devices.hasImage("metadata")
-
-	if !doInit && !hasData {
-		return errors.New("Loopback data file not found")
-	}
-
-	if !doInit && !hasMetadata {
-		return errors.New("Loopback metadata file not found")
-	}
-
-	createdLoopback := !hasData || !hasMetadata
-	data, err := devices.ensureImage("data", DefaultDataLoopbackSize)
-	if err != nil {
-		utils.Debugf("Error device ensureImage (data): %s\n", err)
-		return err
-	}
-	metadata, err := devices.ensureImage("metadata", DefaultMetaDataLoopbackSize)
-	if err != nil {
-		utils.Debugf("Error device ensureImage (metadata): %s\n", err)
+	if err := os.MkdirAll(devices.metadataDir(), 0700); err != nil && !os.IsExist(err) {
 		return err
 	}

 	// Set the device prefix from the device id and inode of the docker root dir

-	st, err := osStat(devices.root)
+	st, err := os.Stat(devices.root)
 	if err != nil {
 		return fmt.Errorf("Error looking up dir %s: %s", devices.root, err)
 	}
-	sysSt := toSysStatT(st.Sys())
+	sysSt := st.Sys().(*syscall.Stat_t)
 	// "reg-" stands for "regular file".
 	// In the future we might use "dev-" for "device file", etc.
 	// docker-maj,min[-inode] stands for:
@@ -527,10 +539,38 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
 	// so we add this badhack to make sure it closes itself
 	setCloseOnExec("/dev/mapper/control")

+	// Make sure the sparse images exist in <root>/devicemapper/data and
+	// <root>/devicemapper/metadata
+
+	createdLoopback := false
+
+	// If the pool doesn't exist, create it
+	if info.Exists == 0 {
+		utils.Debugf("Pool doesn't exist. Creating it.")
+
+		hasData := devices.hasImage("data")
+		hasMetadata := devices.hasImage("metadata")
+
+		if !doInit && !hasData {
+			return errors.New("Loopback data file not found")
+		}
+
+		if !doInit && !hasMetadata {
+			return errors.New("Loopback metadata file not found")
+		}
+
+		createdLoopback = !hasData || !hasMetadata
+		data, err := devices.ensureImage("data", DefaultDataLoopbackSize)
+		if err != nil {
+			utils.Debugf("Error device ensureImage (data): %s\n", err)
+			return err
+		}
+		metadata, err := devices.ensureImage("metadata", DefaultMetaDataLoopbackSize)
+		if err != nil {
+			utils.Debugf("Error device ensureImage (metadata): %s\n", err)
+			return err
+		}
+
 		dataFile, err := attachLoopDevice(data)
 		if err != nil {
 			utils.Debugf("\n--->Err: %s\n", err)
@@ -552,9 +592,9 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {
 	}

 	// If we didn't just create the data or metadata image, we need to
-	// load the metadata from the existing file.
+	// load the transaction id and migrate old metadata
 	if !createdLoopback {
-		if err = devices.loadMetaData(); err != nil {
+		if err = devices.initMetaData(); err != nil {
 			utils.Debugf("\n--->Err: %s\n", err)
 			return err
 		}
@@ -587,13 +627,16 @@ func (devices *DeviceSet) AddDevice(hash, baseHash string) error {
 		return fmt.Errorf("device %s already exists", hash)
 	}

-	deviceId := devices.allocateDeviceId()
+	deviceId := devices.nextDeviceId

-	if err := devices.createSnapDevice(devices.getPoolDevName(), deviceId, baseInfo.Name(), baseInfo.DeviceId); err != nil {
+	if err := createSnapDevice(devices.getPoolDevName(), &deviceId, baseInfo.Name(), baseInfo.DeviceId); err != nil {
 		utils.Debugf("Error creating snap device: %s\n", err)
 		return err
 	}

+	// Ids are 24bit, so wrap around
+	devices.nextDeviceId = (deviceId + 1) & 0xffffff
+
 	if _, err := devices.registerDevice(deviceId, hash, baseInfo.Size); err != nil {
 		deleteDevice(devices.getPoolDevName(), deviceId)
 		utils.Debugf("Error registering device: %s\n", err)
@@ -620,14 +663,6 @@ func (devices *DeviceSet) deleteDevice(info *DevInfo) error {
 		}
 	}

-	if info.Initialized {
-		info.Initialized = false
-		if err := devices.saveMetadata(); err != nil {
-			utils.Debugf("Error saving meta data: %s\n", err)
-			return err
-		}
-	}
-
 	if err := deleteDevice(devices.getPoolDevName(), info.DeviceId); err != nil {
 		utils.Debugf("Error deleting device: %s\n", err)
 		return err
@@ -638,11 +673,11 @@ func (devices *DeviceSet) deleteDevice(info *DevInfo) error {
 	delete(devices.Devices, info.Hash)
 	devices.devicesLock.Unlock()

-	if err := devices.saveMetadata(); err != nil {
+	if err := devices.removeMetadata(info); err != nil {
 		devices.devicesLock.Lock()
 		devices.Devices[info.Hash] = info
 		devices.devicesLock.Unlock()
-		utils.Debugf("Error saving meta data: %s\n", err)
+		utils.Debugf("Error removing meta data: %s\n", err)
 		return err
 	}
@@ -711,12 +746,11 @@ func (devices *DeviceSet) removeDeviceAndWait(devname string) error {
 	var err error

 	for i := 0; i < 1000; i++ {
-		devices.sawBusy = false
 		err = removeDevice(devname)
 		if err == nil {
 			break
 		}
-		if !devices.sawBusy {
+		if err != ErrBusy {
 			return err
 		}
@@ -813,7 +847,7 @@ func (devices *DeviceSet) Shutdown() error {
 	// We use MNT_DETACH here in case it is still busy in some running
 	// container. This means it'll go away from the global scope directly,
 	// and the device will be released when that container dies.
-	if err := sysUnmount(info.mountPath, syscall.MNT_DETACH); err != nil {
+	if err := syscall.Unmount(info.mountPath, syscall.MNT_DETACH); err != nil {
 		utils.Debugf("Shutdown unmounting %s, error: %s\n", info.mountPath, err)
 	}
@@ -871,13 +905,13 @@ func (devices *DeviceSet) MountDevice(hash, path, mountLabel string) error {
 		return fmt.Errorf("Error activating devmapper device for '%s': %s", hash, err)
 	}

-	var flags uintptr = sysMsMgcVal
+	var flags uintptr = syscall.MS_MGC_VAL

 	mountOptions := label.FormatMountLabel("discard", mountLabel)
-	err = sysMount(info.DevName(), path, "ext4", flags, mountOptions)
-	if err != nil && err == sysEInval {
+	err = syscall.Mount(info.DevName(), path, "ext4", flags, mountOptions)
+	if err != nil && err == syscall.EINVAL {
 		mountOptions = label.FormatMountLabel("", mountLabel)
-		err = sysMount(info.DevName(), path, "ext4", flags, mountOptions)
+		err = syscall.Mount(info.DevName(), path, "ext4", flags, mountOptions)
 	}
 	if err != nil {
 		return fmt.Errorf("Error mounting '%s' on '%s': %s", info.DevName(), path, err)
@@ -886,7 +920,7 @@ func (devices *DeviceSet) MountDevice(hash, path, mountLabel string) error {
 	info.mountCount = 1
 	info.mountPath = path

-	return devices.setInitialized(info)
+	return nil
 }

 func (devices *DeviceSet) UnmountDevice(hash string) error {
@@ -914,7 +948,7 @@ func (devices *DeviceSet) UnmountDevice(hash string) error {
 	}

 	utils.Debugf("[devmapper] Unmount(%s)", info.mountPath)
-	if err := sysUnmount(info.mountPath, 0); err != nil {
+	if err := syscall.Unmount(info.mountPath, 0); err != nil {
 		utils.Debugf("\n--->Err: %s\n", err)
 		return err
 	}
@@ -937,14 +971,6 @@ func (devices *DeviceSet) HasDevice(hash string) bool {
 	return info != nil
 }

-func (devices *DeviceSet) HasInitializedDevice(hash string) bool {
-	devices.Lock()
-	defer devices.Unlock()
-
-	info, _ := devices.lookupDevice(hash)
-	return info != nil && info.Initialized
-}
-
 func (devices *DeviceSet) HasActivatedDevice(hash string) bool {
 	info, _ := devices.lookupDevice(hash)
 	if info == nil {
@@ -961,17 +987,6 @@ func (devices *DeviceSet) HasActivatedDevice(hash string) bool {
 	return devinfo != nil && devinfo.Exists != 0
 }

-func (devices *DeviceSet) setInitialized(info *DevInfo) error {
-	info.Initialized = true
-	if err := devices.saveMetadata(); err != nil {
-		info.Initialized = false
-		utils.Debugf("\n--->Err: %s\n", err)
-		return err
-	}
-
-	return nil
-}
-
 func (devices *DeviceSet) List() []string {
 	devices.Lock()
 	defer devices.Unlock()
@@ -5,9 +5,11 @@ package devmapper

 import (
 	"errors"
 	"fmt"
-	"github.com/dotcloud/docker/utils"
+	"os"
 	"runtime"
 	"syscall"
+
+	"github.com/dotcloud/docker/utils"
 )

 type DevmapperLogger interface {
@@ -62,6 +64,10 @@ var (
 	ErrInvalidAddNode         = errors.New("Invalide AddNoce type")
 	ErrGetLoopbackBackingFile = errors.New("Unable to get loopback backing file")
 	ErrLoopbackSetCapacity    = errors.New("Unable set loopback capacity")
+	ErrBusy                   = errors.New("Device is Busy")
+
+	dmSawBusy  bool
+	dmSawExist bool
 )

 type (
@@ -180,7 +186,7 @@ func (t *Task) GetNextTarget(next uintptr) (nextPtr uintptr, start uint64,
 		start, length, targetType, params
 }

-func getLoopbackBackingFile(file *osFile) (uint64, uint64, error) {
+func getLoopbackBackingFile(file *os.File) (uint64, uint64, error) {
 	loopInfo, err := ioctlLoopGetStatus64(file.Fd())
 	if err != nil {
 		utils.Errorf("Error get loopback backing file: %s\n", err)
@@ -189,7 +195,7 @@ func getLoopbackBackingFile(file *osFile) (uint64, uint64, error) {
 	return loopInfo.loDevice, loopInfo.loInode, nil
 }

-func LoopbackSetCapacity(file *osFile) error {
+func LoopbackSetCapacity(file *os.File) error {
 	if err := ioctlLoopSetCapacity(file.Fd(), 0); err != nil {
 		utils.Errorf("Error loopbackSetCapacity: %s", err)
 		return ErrLoopbackSetCapacity
@@ -197,20 +203,20 @@ func LoopbackSetCapacity(file *osFile) error {
 	return nil
 }

-func FindLoopDeviceFor(file *osFile) *osFile {
+func FindLoopDeviceFor(file *os.File) *os.File {
 	stat, err := file.Stat()
 	if err != nil {
 		return nil
 	}
-	targetInode := stat.Sys().(*sysStatT).Ino
-	targetDevice := stat.Sys().(*sysStatT).Dev
+	targetInode := stat.Sys().(*syscall.Stat_t).Ino
+	targetDevice := stat.Sys().(*syscall.Stat_t).Dev

 	for i := 0; true; i++ {
 		path := fmt.Sprintf("/dev/loop%d", i)

-		file, err := osOpenFile(path, osORdWr, 0)
+		file, err := os.OpenFile(path, os.O_RDWR, 0)
 		if err != nil {
-			if osIsNotExist(err) {
+			if os.IsNotExist(err) {
 				return nil
 			}
@@ -280,7 +286,7 @@ func RemoveDevice(name string) error {
 	return nil
 }

-func GetBlockDeviceSize(file *osFile) (uint64, error) {
+func GetBlockDeviceSize(file *os.File) (uint64, error) {
 	size, err := ioctlBlkGetSize64(file.Fd())
 	if err != nil {
 		utils.Errorf("Error getblockdevicesize: %s", err)
@@ -290,7 +296,7 @@ func GetBlockDeviceSize(file *osFile) (uint64, error) {
 }

 func BlockDeviceDiscard(path string) error {
-	file, err := osOpenFile(path, osORdWr, 0)
+	file, err := os.OpenFile(path, os.O_RDWR, 0)
 	if err != nil {
 		return err
 	}
@@ -313,7 +319,7 @@ func BlockDeviceDiscard(path string) error {
 }

 // This is the programmatic example of "dmsetup create"
-func createPool(poolName string, dataFile, metadataFile *osFile) error {
+func createPool(poolName string, dataFile, metadataFile *os.File) error {
 	task, err := createTask(DeviceCreate, poolName)
 	if task == nil {
 		return err
@@ -343,7 +349,7 @@ func createPool(poolName string, dataFile, metadataFile *osFile) error {
 	return nil
 }

-func reloadPool(poolName string, dataFile, metadataFile *osFile) error {
+func reloadPool(poolName string, dataFile, metadataFile *os.File) error {
 	task, err := createTask(DeviceReload, poolName)
 	if task == nil {
 		return err
@@ -464,23 +470,33 @@ func resumeDevice(name string) error {
 	return nil
 }

-func createDevice(poolName string, deviceId int) error {
-	utils.Debugf("[devmapper] createDevice(poolName=%v, deviceId=%v)", poolName, deviceId)
-	task, err := createTask(DeviceTargetMsg, poolName)
-	if task == nil {
-		return err
-	}
+func createDevice(poolName string, deviceId *int) error {
+	utils.Debugf("[devmapper] createDevice(poolName=%v, deviceId=%v)", poolName, *deviceId)

-	if err := task.SetSector(0); err != nil {
-		return fmt.Errorf("Can't set sector")
-	}
+	for {
+		task, err := createTask(DeviceTargetMsg, poolName)
+		if task == nil {
+			return err
+		}

-	if err := task.SetMessage(fmt.Sprintf("create_thin %d", deviceId)); err != nil {
-		return fmt.Errorf("Can't set message")
-	}
+		if err := task.SetSector(0); err != nil {
+			return fmt.Errorf("Can't set sector")
+		}

-	if err := task.Run(); err != nil {
-		return fmt.Errorf("Error running createDevice")
+		if err := task.SetMessage(fmt.Sprintf("create_thin %d", *deviceId)); err != nil {
+			return fmt.Errorf("Can't set message")
+		}
+
+		dmSawExist = false
+		if err := task.Run(); err != nil {
+			if dmSawExist {
+				// Already exists, try next id
+				*deviceId++
+				continue
+			}
+			return fmt.Errorf("Error running createDevice")
+		}
+		break
 	}
 	return nil
 }
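Because libdevmapper's only signal for an id collision is a log line, createDevice clears dmSawExist before Run and retries with the next id whenever the log callback flips the flag. A stripped-down sketch of that retry shape, with a stubbed run function standing in for the libdevmapper task:

    package main

    import "fmt"

    var sawExist bool

    // run stands in for task.Run(); it "fails" while the id is taken.
    func run(id int, taken map[int]bool) error {
        if taken[id] {
            sawExist = true // the real code's log callback sets this flag
            return fmt.Errorf("create failed")
        }
        return nil
    }

    // createWithRetry mirrors the loop above: retry with id+1 only when
    // the failure was an "already exists" collision.
    func createWithRetry(id *int, taken map[int]bool) error {
        for {
            sawExist = false
            if err := run(*id, taken); err != nil {
                if sawExist {
                    *id++
                    continue
                }
                return err
            }
            return nil
        }
    }

    func main() {
        id := 0
        _ = createWithRetry(&id, map[int]bool{0: true, 1: true})
        fmt.Println(id) // 2, the first free id
    }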
@@ -512,7 +528,11 @@ func removeDevice(name string) error {
 	if task == nil {
 		return err
 	}
+	dmSawBusy = false
 	if err = task.Run(); err != nil {
+		if dmSawBusy {
+			return ErrBusy
+		}
 		return fmt.Errorf("Error running removeDevice")
 	}
 	return nil
@@ -546,7 +566,7 @@ func activateDevice(poolName string, name string, deviceId int, size uint64) err
 	return nil
 }

-func (devices *DeviceSet) createSnapDevice(poolName string, deviceId int, baseName string, baseDeviceId int) error {
+func createSnapDevice(poolName string, deviceId *int, baseName string, baseDeviceId int) error {
 	devinfo, _ := getInfo(baseName)
 	doSuspend := devinfo != nil && devinfo.Exists != 0

@@ -556,33 +576,44 @@ func (devices *DeviceSet) createSnapDevice(poolName string, deviceId int, baseNa
 		}
 	}

-	task, err := createTask(DeviceTargetMsg, poolName)
-	if task == nil {
-		if doSuspend {
-			resumeDevice(baseName)
+	for {
+		task, err := createTask(DeviceTargetMsg, poolName)
+		if task == nil {
+			if doSuspend {
+				resumeDevice(baseName)
+			}
+			return err
 		}
-		return err
-	}

-	if err := task.SetSector(0); err != nil {
-		if doSuspend {
-			resumeDevice(baseName)
+		if err := task.SetSector(0); err != nil {
+			if doSuspend {
+				resumeDevice(baseName)
+			}
+			return fmt.Errorf("Can't set sector")
 		}
-		return fmt.Errorf("Can't set sector")
-	}

-	if err := task.SetMessage(fmt.Sprintf("create_snap %d %d", deviceId, baseDeviceId)); err != nil {
-		if doSuspend {
-			resumeDevice(baseName)
+		if err := task.SetMessage(fmt.Sprintf("create_snap %d %d", *deviceId, baseDeviceId)); err != nil {
+			if doSuspend {
+				resumeDevice(baseName)
+			}
+			return fmt.Errorf("Can't set message")
 		}
-		return fmt.Errorf("Can't set message")
-	}

-	if err := task.Run(); err != nil {
-		if doSuspend {
-			resumeDevice(baseName)
+		dmSawExist = false
+		if err := task.Run(); err != nil {
+			if dmSawExist {
+				// Already exists, try next id
+				*deviceId++
+				continue
+			}
+
+			if doSuspend {
+				resumeDevice(baseName)
+			}
+			return fmt.Errorf("Error running DeviceCreate (createSnapDevice)")
 		}
-		return fmt.Errorf("Error running DeviceCreate (createSnapDevice)")
+
+		break
 	}

 	if doSuspend {
@@ -4,12 +4,27 @@ package devmapper

 import "C"

+import (
+	"strings"
+)
+
 // Due to the way cgo works this has to be in a separate file, as devmapper.go has
 // definitions in the cgo block, which is incompatible with using "//export"

 //export DevmapperLogCallback
 func DevmapperLogCallback(level C.int, file *C.char, line C.int, dm_errno_or_class C.int, message *C.char) {
+	msg := C.GoString(message)
+	if level < 7 {
+		if strings.Contains(msg, "busy") {
+			dmSawBusy = true
+		}
+
+		if strings.Contains(msg, "File exists") {
+			dmSawExist = true
+		}
+	}
+
 	if dmLogger != nil {
-		dmLogger.log(int(level), C.GoString(file), int(line), int(dm_errno_or_class), C.GoString(message))
+		dmLogger.log(int(level), C.GoString(file), int(line), int(dm_errno_or_class), msg)
 	}
 }
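As the comment above notes, a file that uses //export may not define anything in its cgo preamble, so the callback must live apart from devmapper.go, whose cgo block contains definitions. A minimal sketch of that constraint, assuming a hypothetical standalone file (the names are illustrative):

    // A file using //export: its cgo preamble may hold declarations and
    // comments only; definitions belong in a separate file.

    package sketch

    import "C"

    // dmSawBusySketch mimics the flag the real callback sets on "busy" lines.
    var dmSawBusySketch bool

    //export SketchLogCallback
    func SketchLogCallback(level C.int, message *C.char) {
        // The real callback inspects the message text, since the log line
        // is the only channel libdevmapper exposes for these conditions.
        if C.GoString(message) == "busy" {
            dmSawBusySketch = true
        }
    }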
@@ -3,285 +3,35 @@
 package devmapper

 import (
+	"github.com/dotcloud/docker/daemon/graphdriver/graphtest"
 	"testing"
 )

-func TestTaskCreate(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	// Test success
-	taskCreate(t, DeviceInfo)
-
-	// Test Failure
-	DmTaskCreate = dmTaskCreateFail
-	defer func() { DmTaskCreate = dmTaskCreateFct }()
-	if task := TaskCreate(-1); task != nil {
-		t.Fatalf("An error should have occured while creating an invalid task.")
-	}
+func init() {
+	// Reduce the size the the base fs and loopback for the tests
+	DefaultDataLoopbackSize = 300 * 1024 * 1024
+	DefaultMetaDataLoopbackSize = 200 * 1024 * 1024
+	DefaultBaseFsSize = 300 * 1024 * 1024
 }

-func TestTaskRun(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	task := taskCreate(t, DeviceInfo)
-
-	// Test success
-	// Perform the RUN
-	if err := task.Run(); err != nil {
-		t.Fatal(err)
-	}
-	// Make sure we don't have error with GetInfo
-	if _, err := task.GetInfo(); err != nil {
-		t.Fatal(err)
-	}
-
-	// Test failure
-	DmTaskRun = dmTaskRunFail
-	defer func() { DmTaskRun = dmTaskRunFct }()
-
-	task = taskCreate(t, DeviceInfo)
-	// Perform the RUN
-	if err := task.Run(); err != ErrTaskRun {
-		t.Fatalf("An error should have occured while running task.")
-	}
-	// Make sure GetInfo also fails
-	if _, err := task.GetInfo(); err != ErrTaskGetInfo {
-		t.Fatalf("GetInfo should fail if task.Run() failed.")
-	}
+// This avoids creating a new driver for each test if all tests are run
+// Make sure to put new tests between TestDevmapperSetup and TestDevmapperTeardown
+func TestDevmapperSetup(t *testing.T) {
+	graphtest.GetDriver(t, "devicemapper")
 }

-func TestTaskSetName(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	task := taskCreate(t, DeviceInfo)
-
-	// Test success
-	if err := task.SetName("test"); err != nil {
-		t.Fatal(err)
-	}
-
-	// Test failure
-	DmTaskSetName = dmTaskSetNameFail
-	defer func() { DmTaskSetName = dmTaskSetNameFct }()
-
-	if err := task.SetName("test"); err != ErrTaskSetName {
-		t.Fatalf("An error should have occured while runnign SetName.")
-	}
+func TestDevmapperCreateEmpty(t *testing.T) {
+	graphtest.DriverTestCreateEmpty(t, "devicemapper")
 }

-func TestTaskSetMessage(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	task := taskCreate(t, DeviceInfo)
-
-	// Test success
-	if err := task.SetMessage("test"); err != nil {
-		t.Fatal(err)
-	}
-
-	// Test failure
-	DmTaskSetMessage = dmTaskSetMessageFail
-	defer func() { DmTaskSetMessage = dmTaskSetMessageFct }()
-
-	if err := task.SetMessage("test"); err != ErrTaskSetMessage {
-		t.Fatalf("An error should have occured while runnign SetMessage.")
-	}
+func TestDevmapperCreateBase(t *testing.T) {
+	graphtest.DriverTestCreateBase(t, "devicemapper")
 }

-func TestTaskSetSector(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	task := taskCreate(t, DeviceInfo)
-
-	// Test success
-	if err := task.SetSector(128); err != nil {
-		t.Fatal(err)
-	}
-
-	DmTaskSetSector = dmTaskSetSectorFail
-	defer func() { DmTaskSetSector = dmTaskSetSectorFct }()
-
-	// Test failure
-	if err := task.SetSector(0); err != ErrTaskSetSector {
-		t.Fatalf("An error should have occured while running SetSector.")
-	}
+func TestDevmapperCreateSnap(t *testing.T) {
+	graphtest.DriverTestCreateSnap(t, "devicemapper")
 }

-func TestTaskSetCookie(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	var (
-		cookie uint = 0
-		task        = taskCreate(t, DeviceInfo)
-	)
-
-	// Test success
-	if err := task.SetCookie(&cookie, 0); err != nil {
-		t.Fatal(err)
-	}
-
-	// Test failure
-	if err := task.SetCookie(nil, 0); err != ErrNilCookie {
-		t.Fatalf("An error should have occured while running SetCookie with nil cookie.")
-	}
-
-	DmTaskSetCookie = dmTaskSetCookieFail
-	defer func() { DmTaskSetCookie = dmTaskSetCookieFct }()
-
-	if err := task.SetCookie(&cookie, 0); err != ErrTaskSetCookie {
-		t.Fatalf("An error should have occured while running SetCookie.")
-	}
-}
-
-func TestTaskSetAddNode(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	task := taskCreate(t, DeviceInfo)
-
-	// Test success
-	if err := task.SetAddNode(0); err != nil {
-		t.Fatal(err)
-	}
-
-	// Test failure
-	if err := task.SetAddNode(-1); err != ErrInvalidAddNode {
-		t.Fatalf("An error should have occured running SetAddNode with wrong node.")
-	}
-
-	DmTaskSetAddNode = dmTaskSetAddNodeFail
-	defer func() { DmTaskSetAddNode = dmTaskSetAddNodeFct }()
-
-	if err := task.SetAddNode(0); err != ErrTaskSetAddNode {
-		t.Fatalf("An error should have occured running SetAddNode.")
-	}
-}
-
-func TestTaskSetRo(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	task := taskCreate(t, DeviceInfo)
-
-	// Test success
-	if err := task.SetRo(); err != nil {
-		t.Fatal(err)
-	}
-
-	// Test failure
-	DmTaskSetRo = dmTaskSetRoFail
-	defer func() { DmTaskSetRo = dmTaskSetRoFct }()
-
-	if err := task.SetRo(); err != ErrTaskSetRo {
-		t.Fatalf("An error should have occured running SetRo.")
-	}
-}
-
-func TestTaskAddTarget(t *testing.T) {
-	t.Skip("FIXME: not a unit test")
-	task := taskCreate(t, DeviceInfo)
-
-	// Test success
-	if err := task.AddTarget(0, 128, "thinp", ""); err != nil {
-		t.Fatal(err)
-	}
-
-	// Test failure
-	DmTaskAddTarget = dmTaskAddTargetFail
-	defer func() { DmTaskAddTarget = dmTaskAddTargetFct }()
-
-	if err := task.AddTarget(0, 128, "thinp", ""); err != ErrTaskAddTarget {
-		t.Fatalf("An error should have occured running AddTarget.")
-	}
-}
-
-// func TestTaskGetInfo(t *testing.T) {
-// 	task := taskCreate(t, DeviceInfo)
-
-// 	// Test success
-// 	if _, err := task.GetInfo(); err != nil {
-// 		t.Fatal(err)
-// 	}
-
-// 	// Test failure
-// 	DmTaskGetInfo = dmTaskGetInfoFail
-// 	defer func() { DmTaskGetInfo = dmTaskGetInfoFct }()
-
-// 	if _, err := task.GetInfo(); err != ErrTaskGetInfo {
-// 		t.Fatalf("An error should have occured running GetInfo.")
-// 	}
-// }
-
-// func TestTaskGetNextTarget(t *testing.T) {
-// 	task := taskCreate(t, DeviceInfo)
-
-// 	if next, _, _, _, _ := task.GetNextTarget(0); next == 0 {
-// 		t.Fatalf("The next target should not be 0.")
-// 	}
-// }
-
-/// Utils
-func taskCreate(t *testing.T, taskType TaskType) *Task {
-	task := TaskCreate(taskType)
-	if task == nil {
-		t.Fatalf("Error creating task")
-	}
-	return task
-}
-
-/// Failure function replacement
-func dmTaskCreateFail(t int) *CDmTask {
-	return nil
-}
-
-func dmTaskRunFail(task *CDmTask) int {
-	return -1
-}
-
-func dmTaskSetNameFail(task *CDmTask, name string) int {
-	return -1
-}
-
-func dmTaskSetMessageFail(task *CDmTask, message string) int {
-	return -1
-}
-
-func dmTaskSetSectorFail(task *CDmTask, sector uint64) int {
-	return -1
-}
-
-func dmTaskSetCookieFail(task *CDmTask, cookie *uint, flags uint16) int {
-	return -1
-}
-
-func dmTaskSetAddNodeFail(task *CDmTask, addNode AddNodeType) int {
-	return -1
-}
-
-func dmTaskSetRoFail(task *CDmTask) int {
-	return -1
-}
-
-func dmTaskAddTargetFail(task *CDmTask,
-	start, size uint64, ttype, params string) int {
-	return -1
-}
-
-func dmTaskGetInfoFail(task *CDmTask, info *Info) int {
-	return -1
-}
-
-func dmGetNextTargetFail(task *CDmTask, next uintptr, start, length *uint64,
-	target, params *string) uintptr {
-	return 0
-}
-
-func dmAttachLoopDeviceFail(filename string, fd *int) string {
-	return ""
-}
-
-func sysGetBlockSizeFail(fd uintptr, size *uint64) sysErrno {
-	return 1
-}
-
-func dmUdevWaitFail(cookie uint) int {
-	return -1
-}
-
-func dmSetDevDirFail(dir string) int {
-	return -1
-}
-
-func dmGetLibraryVersionFail(version *string) int {
-	return -1
+func TestDevmapperTeardown(t *testing.T) {
+	graphtest.PutDriver(t)
 }
@@ -26,7 +26,7 @@ type Driver struct {
 	home string
 }

-var Init = func(home string) (graphdriver.Driver, error) {
+func Init(home string) (graphdriver.Driver, error) {
 	deviceSet, err := NewDeviceSet(home, true)
 	if err != nil {
 		return nil, err
@@ -94,7 +94,7 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
 	mp := path.Join(d.home, "mnt", id)

 	// Create the target directories if they don't exist
-	if err := osMkdirAll(mp, 0755); err != nil && !osIsExist(err) {
+	if err := os.MkdirAll(mp, 0755); err != nil && !os.IsExist(err) {
 		return "", err
 	}

@@ -104,13 +104,13 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
 	}

 	rootFs := path.Join(mp, "rootfs")
-	if err := osMkdirAll(rootFs, 0755); err != nil && !osIsExist(err) {
+	if err := os.MkdirAll(rootFs, 0755); err != nil && !os.IsExist(err) {
 		d.DeviceSet.UnmountDevice(id)
 		return "", err
 	}

 	idFile := path.Join(mp, "id")
-	if _, err := osStat(idFile); err != nil && osIsNotExist(err) {
+	if _, err := os.Stat(idFile); err != nil && os.IsNotExist(err) {
 		// Create an "id" file with the container/image id in it to help reconscruct this in case
 		// of later problems
 		if err := ioutil.WriteFile(idFile, []byte(id), 0600); err != nil {
@ -1,880 +0,0 @@
|
|||
// +build linux,amd64
|
||||
|
||||
package devmapper
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"github.com/dotcloud/docker/daemon/graphdriver"
|
||||
"io/ioutil"
|
||||
"path"
|
||||
"runtime"
|
||||
"strings"
|
||||
"syscall"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func init() {
|
||||
// Reduce the size the the base fs and loopback for the tests
|
||||
DefaultDataLoopbackSize = 300 * 1024 * 1024
|
||||
DefaultMetaDataLoopbackSize = 200 * 1024 * 1024
|
||||
DefaultBaseFsSize = 300 * 1024 * 1024
|
||||
}
|
||||
|
||||
// denyAllDevmapper mocks all calls to libdevmapper in the unit tests, and denies them by default
|
||||
func denyAllDevmapper() {
|
||||
// Hijack all calls to libdevmapper with default panics.
|
||||
// Authorized calls are selectively hijacked in each tests.
|
||||
DmTaskCreate = func(t int) *CDmTask {
|
||||
panic("DmTaskCreate: this method should not be called here")
|
||||
}
|
||||
DmTaskRun = func(task *CDmTask) int {
|
||||
panic("DmTaskRun: this method should not be called here")
|
||||
}
|
||||
DmTaskSetName = func(task *CDmTask, name string) int {
|
||||
panic("DmTaskSetName: this method should not be called here")
|
||||
}
|
||||
DmTaskSetMessage = func(task *CDmTask, message string) int {
|
||||
panic("DmTaskSetMessage: this method should not be called here")
|
||||
}
|
||||
DmTaskSetSector = func(task *CDmTask, sector uint64) int {
|
||||
panic("DmTaskSetSector: this method should not be called here")
|
||||
}
|
||||
DmTaskSetCookie = func(task *CDmTask, cookie *uint, flags uint16) int {
|
||||
panic("DmTaskSetCookie: this method should not be called here")
|
||||
}
|
||||
DmTaskSetAddNode = func(task *CDmTask, addNode AddNodeType) int {
|
||||
panic("DmTaskSetAddNode: this method should not be called here")
|
||||
}
|
||||
DmTaskSetRo = func(task *CDmTask) int {
|
||||
panic("DmTaskSetRo: this method should not be called here")
|
||||
}
|
||||
DmTaskAddTarget = func(task *CDmTask, start, size uint64, ttype, params string) int {
|
||||
panic("DmTaskAddTarget: this method should not be called here")
|
||||
}
|
||||
DmTaskGetInfo = func(task *CDmTask, info *Info) int {
|
||||
panic("DmTaskGetInfo: this method should not be called here")
|
||||
}
|
||||
DmGetNextTarget = func(task *CDmTask, next uintptr, start, length *uint64, target, params *string) uintptr {
|
||||
panic("DmGetNextTarget: this method should not be called here")
|
||||
}
|
||||
DmUdevWait = func(cookie uint) int {
|
||||
panic("DmUdevWait: this method should not be called here")
|
||||
}
|
||||
DmSetDevDir = func(dir string) int {
|
||||
panic("DmSetDevDir: this method should not be called here")
|
||||
}
|
||||
DmGetLibraryVersion = func(version *string) int {
|
||||
panic("DmGetLibraryVersion: this method should not be called here")
|
||||
}
|
||||
DmLogInitVerbose = func(level int) {
|
||||
panic("DmLogInitVerbose: this method should not be called here")
|
||||
}
|
||||
DmTaskDestroy = func(task *CDmTask) {
|
||||
panic("DmTaskDestroy: this method should not be called here")
|
||||
}
|
||||
LogWithErrnoInit = func() {
|
||||
panic("LogWithErrnoInit: this method should not be called here")
|
||||
}
|
||||
}
|
||||
|
||||
func denyAllSyscall() {
|
||||
sysMount = func(source, target, fstype string, flags uintptr, data string) (err error) {
|
||||
panic("sysMount: this method should not be called here")
|
||||
}
|
||||
sysUnmount = func(target string, flags int) (err error) {
|
||||
panic("sysUnmount: this method should not be called here")
|
||||
}
|
||||
sysCloseOnExec = func(fd int) {
|
||||
panic("sysCloseOnExec: this method should not be called here")
|
||||
}
|
||||
sysSyscall = func(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) {
|
||||
panic("sysSyscall: this method should not be called here")
|
||||
}
|
||||
// Not a syscall, but forbidding it here anyway
|
||||
Mounted = func(mnt string) (bool, error) {
|
||||
panic("devmapper.Mounted: this method should not be called here")
|
||||
}
|
||||
// osOpenFile = os.OpenFile
|
||||
// osNewFile = os.NewFile
|
||||
// osCreate = os.Create
|
||||
// osStat = os.Stat
|
||||
// osIsNotExist = os.IsNotExist
|
||||
// osIsExist = os.IsExist
|
||||
// osMkdirAll = os.MkdirAll
|
||||
// osRemoveAll = os.RemoveAll
|
||||
// osRename = os.Rename
|
||||
// osReadlink = os.Readlink
|
||||
|
||||
// execRun = func(name string, args ...string) error {
|
||||
// return exec.Command(name, args...).Run()
|
||||
// }
|
||||
}
|
||||
|
||||
func mkTestDirectory(t *testing.T) string {
|
||||
dir, err := ioutil.TempDir("", "docker-test-devmapper-")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
return dir
|
||||
}
|
||||
|
||||
func newDriver(t *testing.T) *Driver {
|
||||
home := mkTestDirectory(t)
|
||||
d, err := Init(home)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
return d.(*Driver)
|
||||
}
|
||||
|
||||
func cleanup(d *Driver) {
|
||||
d.Cleanup()
|
||||
osRemoveAll(d.home)
|
||||
}
|
||||
|
||||
type Set map[string]bool
|
||||
|
||||
func (r Set) Assert(t *testing.T, names ...string) {
|
||||
for _, key := range names {
|
||||
required := true
|
||||
if strings.HasPrefix(key, "?") {
|
||||
key = key[1:]
|
||||
required = false
|
||||
}
|
||||
if _, exists := r[key]; !exists && required {
|
||||
t.Fatalf("Key not set: %s", key)
|
||||
}
|
||||
delete(r, key)
|
||||
}
|
||||
if len(r) != 0 {
|
||||
t.Fatalf("Unexpected keys: %v", r)
|
||||
}
|
||||
}
|
||||
|
||||
func TestInit(t *testing.T) {
|
||||
var (
|
||||
calls = make(Set)
|
||||
taskMessages = make(Set)
|
||||
taskTypes = make(Set)
|
||||
home = mkTestDirectory(t)
|
||||
)
|
||||
defer osRemoveAll(home)
|
||||
|
||||
func() {
|
||||
denyAllDevmapper()
|
||||
DmSetDevDir = func(dir string) int {
|
||||
calls["DmSetDevDir"] = true
|
||||
expectedDir := "/dev"
|
||||
if dir != expectedDir {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmSetDevDir(%v)\nReceived: DmSetDevDir(%v)\n", expectedDir, dir)
|
||||
}
|
||||
return 0
|
||||
}
|
||||
LogWithErrnoInit = func() {
|
||||
calls["DmLogWithErrnoInit"] = true
|
||||
}
|
||||
var task1 CDmTask
|
||||
DmTaskCreate = func(taskType int) *CDmTask {
|
||||
calls["DmTaskCreate"] = true
|
||||
taskTypes[fmt.Sprintf("%d", taskType)] = true
|
||||
return &task1
|
||||
}
|
||||
DmTaskSetName = func(task *CDmTask, name string) int {
|
||||
calls["DmTaskSetName"] = true
|
||||
expectedTask := &task1
|
||||
if task != expectedTask {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskSetName(%v)\nReceived: DmTaskSetName(%v)\n", expectedTask, task)
|
||||
}
|
||||
// FIXME: use Set.AssertRegexp()
|
||||
if !strings.HasPrefix(name, "docker-") && !strings.HasPrefix(name, "/dev/mapper/docker-") ||
|
||||
!strings.HasSuffix(name, "-pool") && !strings.HasSuffix(name, "-base") {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskSetName(%v)\nReceived: DmTaskSetName(%v)\n", "docker-...-pool", name)
|
||||
}
|
||||
return 1
|
||||
}
|
||||
DmTaskRun = func(task *CDmTask) int {
|
||||
calls["DmTaskRun"] = true
|
||||
expectedTask := &task1
|
||||
if task != expectedTask {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskRun(%v)\nReceived: DmTaskRun(%v)\n", expectedTask, task)
|
||||
}
|
||||
return 1
|
||||
}
|
||||
DmTaskGetInfo = func(task *CDmTask, info *Info) int {
|
||||
calls["DmTaskGetInfo"] = true
|
||||
expectedTask := &task1
|
||||
if task != expectedTask {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskGetInfo(%v)\nReceived: DmTaskGetInfo(%v)\n", expectedTask, task)
|
||||
}
|
||||
// This will crash if info is not dereferenceable
|
||||
info.Exists = 0
|
||||
return 1
|
||||
}
|
||||
DmTaskSetSector = func(task *CDmTask, sector uint64) int {
|
||||
calls["DmTaskSetSector"] = true
|
||||
expectedTask := &task1
|
||||
if task != expectedTask {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskSetSector(%v)\nReceived: DmTaskSetSector(%v)\n", expectedTask, task)
|
||||
}
|
||||
if expectedSector := uint64(0); sector != expectedSector {
|
||||
t.Fatalf("Wrong libdevmapper call to DmTaskSetSector\nExpected: %v\nReceived: %v\n", expectedSector, sector)
|
||||
}
|
||||
return 1
|
||||
}
|
||||
DmTaskSetMessage = func(task *CDmTask, message string) int {
|
||||
calls["DmTaskSetMessage"] = true
|
||||
expectedTask := &task1
|
||||
if task != expectedTask {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskSetSector(%v)\nReceived: DmTaskSetSector(%v)\n", expectedTask, task)
|
||||
}
|
||||
taskMessages[message] = true
|
||||
return 1
|
||||
}
|
||||
DmTaskDestroy = func(task *CDmTask) {
|
||||
calls["DmTaskDestroy"] = true
|
||||
expectedTask := &task1
|
||||
if task != expectedTask {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskDestroy(%v)\nReceived: DmTaskDestroy(%v)\n", expectedTask, task)
|
||||
}
|
||||
}
|
||||
DmTaskAddTarget = func(task *CDmTask, start, size uint64, ttype, params string) int {
|
||||
calls["DmTaskSetTarget"] = true
|
||||
expectedTask := &task1
|
||||
if task != expectedTask {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskDestroy(%v)\nReceived: DmTaskDestroy(%v)\n", expectedTask, task)
|
||||
}
|
||||
if start != 0 {
|
||||
t.Fatalf("Wrong start: %d != %d", start, 0)
|
||||
}
|
||||
if ttype != "thin" && ttype != "thin-pool" {
|
||||
t.Fatalf("Wrong ttype: %s", ttype)
|
||||
}
|
||||
// Quick smoke test
|
||||
if params == "" {
|
||||
t.Fatalf("Params should not be empty")
|
||||
}
|
||||
return 1
|
||||
}
|
||||
fakeCookie := uint(4321)
|
||||
DmTaskSetCookie = func(task *CDmTask, cookie *uint, flags uint16) int {
|
||||
calls["DmTaskSetCookie"] = true
|
||||
expectedTask := &task1
|
||||
if task != expectedTask {
|
||||
t.Fatalf("Wrong libdevmapper call\nExpected: DmTaskDestroy(%v)\nReceived: DmTaskDestroy(%v)\n", expectedTask, task)
|
||||
}
|
||||
if flags != 0 {
|
||||
t.Fatalf("Cookie flags should be 0 (not %x)", flags)
|
||||
}
|
||||
*cookie = fakeCookie
|
||||
return 1
|
		}
		DmUdevWait = func(cookie uint) int {
			calls["DmUdevWait"] = true
			if cookie != fakeCookie {
				t.Fatalf("Wrong cookie: %d != %d", cookie, fakeCookie)
			}
			return 1
		}
		DmTaskSetAddNode = func(task *CDmTask, addNode AddNodeType) int {
			if addNode != AddNodeOnCreate {
				t.Fatalf("Wrong AddNodeType: %v (expected %v)", addNode, AddNodeOnCreate)
			}
			calls["DmTaskSetAddNode"] = true
			return 1
		}
		execRun = func(name string, args ...string) error {
			calls["execRun"] = true
			if name != "mkfs.ext4" {
				t.Fatalf("Expected %s to be executed, not %s", "mkfs.ext4", name)
			}
			return nil
		}
		driver, err := Init(home)
		if err != nil {
			t.Fatal(err)
		}
		defer func() {
			if err := driver.Cleanup(); err != nil {
				t.Fatal(err)
			}
		}()
	}()
	// Put all tests in a function to make sure the garbage collection will
	// occur.

	// Call GC to cleanup runtime.Finalizers
	runtime.GC()

	calls.Assert(t,
		"DmSetDevDir",
		"DmLogWithErrnoInit",
		"DmTaskSetName",
		"DmTaskRun",
		"DmTaskGetInfo",
		"DmTaskDestroy",
		"execRun",
		"DmTaskCreate",
		"DmTaskSetTarget",
		"DmTaskSetCookie",
		"DmUdevWait",
		"DmTaskSetSector",
		"DmTaskSetMessage",
		"DmTaskSetAddNode",
	)
	taskTypes.Assert(t, "0", "6", "17")
	taskMessages.Assert(t, "create_thin 0", "set_transaction_id 0 1")
}

// fakeInit swaps Init for a stub that returns a bare Driver; it returns the
// original so tests can restore it afterwards.
func fakeInit() func(home string) (graphdriver.Driver, error) {
	oldInit := Init
	Init = func(home string) (graphdriver.Driver, error) {
		return &Driver{
			home: home,
		}, nil
	}
	return oldInit
}

func restoreInit(init func(home string) (graphdriver.Driver, error)) {
	Init = init
}

// mockAllDevmapper replaces every libdevmapper entry point with a stub that
// records the call in calls and reports success.
func mockAllDevmapper(calls Set) {
	DmSetDevDir = func(dir string) int {
		calls["DmSetDevDir"] = true
		return 0
	}
	LogWithErrnoInit = func() {
		calls["DmLogWithErrnoInit"] = true
	}
	DmTaskCreate = func(taskType int) *CDmTask {
		calls["DmTaskCreate"] = true
		return &CDmTask{}
	}
	DmTaskSetName = func(task *CDmTask, name string) int {
		calls["DmTaskSetName"] = true
		return 1
	}
	DmTaskRun = func(task *CDmTask) int {
		calls["DmTaskRun"] = true
		return 1
	}
	DmTaskGetInfo = func(task *CDmTask, info *Info) int {
		calls["DmTaskGetInfo"] = true
		return 1
	}
	DmTaskSetSector = func(task *CDmTask, sector uint64) int {
		calls["DmTaskSetSector"] = true
		return 1
	}
	DmTaskSetMessage = func(task *CDmTask, message string) int {
		calls["DmTaskSetMessage"] = true
		return 1
	}
	DmTaskDestroy = func(task *CDmTask) {
		calls["DmTaskDestroy"] = true
	}
	DmTaskAddTarget = func(task *CDmTask, start, size uint64, ttype, params string) int {
		calls["DmTaskSetTarget"] = true
		return 1
	}
	DmTaskSetCookie = func(task *CDmTask, cookie *uint, flags uint16) int {
		calls["DmTaskSetCookie"] = true
		return 1
	}
	DmUdevWait = func(cookie uint) int {
		calls["DmUdevWait"] = true
		return 1
	}
	DmTaskSetAddNode = func(task *CDmTask, addNode AddNodeType) int {
		calls["DmTaskSetAddNode"] = true
		return 1
	}
	execRun = func(name string, args ...string) error {
		calls["execRun"] = true
		return nil
	}
}
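Note: the `Set` type and its `Assert` helper used throughout these tests are defined elsewhere in the package and are not part of this hunk. A minimal sketch of what they might look like, inferred from the calls above (a leading `?` marks a call that is tolerated but not required), could be:

// Hypothetical sketch; the real definition lives elsewhere in the package.
type Set map[string]bool

// Assert checks that exactly the listed calls were recorded. Names prefixed
// with "?" are optional. Checked entries are deleted so that any leftover
// key signals an unexpected call.
func (s Set) Assert(t *testing.T, names ...string) {
	for _, key := range names {
		required := true
		if strings.HasPrefix(key, "?") {
			key = key[1:]
			required = false
		}
		if required && !s[key] {
			t.Fatalf("Key not set: %s", key)
		}
		delete(s, key)
	}
	if len(s) != 0 {
		t.Fatalf("Unexpected keys: %v", s)
	}
}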

func TestDriverName(t *testing.T) {
	denyAllDevmapper()
	defer denyAllDevmapper()

	oldInit := fakeInit()
	defer restoreInit(oldInit)

	d := newDriver(t)
	if d.String() != "devicemapper" {
		t.Fatalf("Expected driver name to be devicemapper got %s", d.String())
	}
}

func TestDriverCreate(t *testing.T) {
	denyAllDevmapper()
	denyAllSyscall()
	defer denyAllSyscall()
	defer denyAllDevmapper()

	calls := make(Set)
	mockAllDevmapper(calls)

	sysMount = func(source, target, fstype string, flags uintptr, data string) (err error) {
		calls["sysMount"] = true
		// FIXME: compare the exact source and target strings (inodes + devname)
		if expectedSource := "/dev/mapper/docker-"; !strings.HasPrefix(source, expectedSource) {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedSource, source)
		}
		if expectedTarget := "/tmp/docker-test-devmapper-"; !strings.HasPrefix(target, expectedTarget) {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedTarget, target)
		}
		if expectedFstype := "ext4"; fstype != expectedFstype {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedFstype, fstype)
		}
		if expectedFlags := uintptr(3236757504); flags != expectedFlags {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedFlags, flags)
		}
		return nil
	}

	sysUnmount = func(target string, flag int) error {
		//calls["sysUnmount"] = true
		return nil
	}

	Mounted = func(mnt string) (bool, error) {
		calls["Mounted"] = true
		if !strings.HasPrefix(mnt, "/tmp/docker-test-devmapper-") || !strings.HasSuffix(mnt, "/mnt/1") {
			t.Fatalf("Wrong mounted call\nExpected: Mounted(%v)\nReceived: Mounted(%v)\n", "/tmp/docker-test-devmapper-.../mnt/1", mnt)
		}
		return false, nil
	}

	sysSyscall = func(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) {
		calls["sysSyscall"] = true
		if trap != sysSysIoctl {
			t.Fatalf("Unexpected syscall. Expecting SYS_IOCTL, received: %d", trap)
		}
		switch a2 {
		case LoopSetFd:
			calls["ioctl.loopsetfd"] = true
		case LoopCtlGetFree:
			calls["ioctl.loopctlgetfree"] = true
		case LoopGetStatus64:
			calls["ioctl.loopgetstatus"] = true
		case LoopSetStatus64:
			calls["ioctl.loopsetstatus"] = true
		case LoopClrFd:
			calls["ioctl.loopclrfd"] = true
		case LoopSetCapacity:
			calls["ioctl.loopsetcapacity"] = true
		case BlkGetSize64:
			calls["ioctl.blkgetsize"] = true
		default:
			t.Fatalf("Unexpected IOCTL. Received %d", a2)
		}
		return 0, 0, 0
	}

	func() {
		d := newDriver(t)

		calls.Assert(t,
			"DmSetDevDir",
			"DmLogWithErrnoInit",
			"DmTaskSetName",
			"DmTaskRun",
			"DmTaskGetInfo",
			"execRun",
			"DmTaskCreate",
			"DmTaskSetTarget",
			"DmTaskSetCookie",
			"DmUdevWait",
			"DmTaskSetSector",
			"DmTaskSetMessage",
			"DmTaskSetAddNode",
			"sysSyscall",
			"ioctl.blkgetsize",
			"ioctl.loopsetfd",
			"ioctl.loopsetstatus",
			"?ioctl.loopctlgetfree",
		)

		if err := d.Create("1", ""); err != nil {
			t.Fatal(err)
		}
		calls.Assert(t,
			"DmTaskCreate",
			"DmTaskGetInfo",
			"DmTaskRun",
			"DmTaskSetSector",
			"DmTaskSetName",
			"DmTaskSetMessage",
		)
	}()

	runtime.GC()

	calls.Assert(t,
		"DmTaskDestroy",
	)
}

func TestDriverRemove(t *testing.T) {
	denyAllDevmapper()
	denyAllSyscall()
	defer denyAllSyscall()
	defer denyAllDevmapper()

	calls := make(Set)
	mockAllDevmapper(calls)

	sysMount = func(source, target, fstype string, flags uintptr, data string) (err error) {
		calls["sysMount"] = true
		// FIXME: compare the exact source and target strings (inodes + devname)
		if expectedSource := "/dev/mapper/docker-"; !strings.HasPrefix(source, expectedSource) {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedSource, source)
		}
		if expectedTarget := "/tmp/docker-test-devmapper-"; !strings.HasPrefix(target, expectedTarget) {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedTarget, target)
		}
		if expectedFstype := "ext4"; fstype != expectedFstype {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedFstype, fstype)
		}
		if expectedFlags := uintptr(3236757504); flags != expectedFlags {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedFlags, flags)
		}
		return nil
	}
	sysUnmount = func(target string, flags int) (err error) {
		// FIXME: compare the exact source and target strings (inodes + devname)
		if expectedTarget := "/tmp/docker-test-devmapper-"; !strings.HasPrefix(target, expectedTarget) {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedTarget, target)
		}
		if expectedFlags := 0; flags != expectedFlags {
			t.Fatalf("Wrong syscall call\nExpected: Mount(%v)\nReceived: Mount(%v)\n", expectedFlags, flags)
		}
		return nil
	}
	Mounted = func(mnt string) (bool, error) {
		calls["Mounted"] = true
		return false, nil
	}

	sysSyscall = func(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) {
		calls["sysSyscall"] = true
		if trap != sysSysIoctl {
			t.Fatalf("Unexpected syscall. Expecting SYS_IOCTL, received: %d", trap)
		}
		switch a2 {
		case LoopSetFd:
			calls["ioctl.loopsetfd"] = true
		case LoopCtlGetFree:
			calls["ioctl.loopctlgetfree"] = true
		case LoopGetStatus64:
			calls["ioctl.loopgetstatus"] = true
		case LoopSetStatus64:
			calls["ioctl.loopsetstatus"] = true
		case LoopClrFd:
			calls["ioctl.loopclrfd"] = true
		case LoopSetCapacity:
			calls["ioctl.loopsetcapacity"] = true
		case BlkGetSize64:
			calls["ioctl.blkgetsize"] = true
		default:
			t.Fatalf("Unexpected IOCTL. Received %d", a2)
		}
		return 0, 0, 0
	}

	func() {
		d := newDriver(t)

		calls.Assert(t,
			"DmSetDevDir",
			"DmLogWithErrnoInit",
			"DmTaskSetName",
			"DmTaskRun",
			"DmTaskGetInfo",
			"execRun",
			"DmTaskCreate",
			"DmTaskSetTarget",
			"DmTaskSetCookie",
			"DmUdevWait",
			"DmTaskSetSector",
			"DmTaskSetMessage",
			"DmTaskSetAddNode",
			"sysSyscall",
			"ioctl.blkgetsize",
			"ioctl.loopsetfd",
			"ioctl.loopsetstatus",
			"?ioctl.loopctlgetfree",
		)

		if err := d.Create("1", ""); err != nil {
			t.Fatal(err)
		}

		calls.Assert(t,
			"DmTaskCreate",
			"DmTaskGetInfo",
			"DmTaskRun",
			"DmTaskSetSector",
			"DmTaskSetName",
			"DmTaskSetMessage",
		)

		Mounted = func(mnt string) (bool, error) {
			calls["Mounted"] = true
			return true, nil
		}

		if err := d.Remove("1"); err != nil {
			t.Fatal(err)
		}

		calls.Assert(t,
			"DmTaskRun",
			"DmTaskSetSector",
			"DmTaskSetName",
			"DmTaskSetMessage",
			"DmTaskCreate",
			"DmTaskGetInfo",
			"DmTaskSetCookie",
			"DmTaskSetTarget",
			"DmTaskSetAddNode",
			"DmUdevWait",
		)
	}()
	runtime.GC()

	calls.Assert(t,
		"DmTaskDestroy",
	)
}
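Both tests above wrap the driver work in an anonymous func and then call runtime.GC(): the devmapper bindings are expected to release each *CDmTask through a runtime finalizer, so DmTaskDestroy only fires once the tasks become unreachable and a collection runs. A hypothetical sketch of the pattern the assertions rely on (createTask is illustrative, not the package's real constructor):

// Hypothetical sketch of the finalizer pattern the GC assertions exercise.
func createTask(taskType int) *CDmTask {
	task := DmTaskCreate(taskType)
	// When the Go value becomes unreachable, destroy the C-side task.
	runtime.SetFinalizer(task, func(t *CDmTask) {
		DmTaskDestroy(t)
	})
	return task
}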

func TestCleanup(t *testing.T) {
	t.Skip("FIXME: not a unit test")
	t.Skip("Unimplemented")
	d := newDriver(t)
	defer osRemoveAll(d.home)

	mountPoints := make([]string, 2)

	if err := d.Create("1", ""); err != nil {
		t.Fatal(err)
	}
	// Mount the id
	p, err := d.Get("1", "")
	if err != nil {
		t.Fatal(err)
	}
	mountPoints[0] = p

	if err := d.Create("2", "1"); err != nil {
		t.Fatal(err)
	}

	p, err = d.Get("2", "")
	if err != nil {
		t.Fatal(err)
	}
	mountPoints[1] = p

	// Ensure that all the mount points are currently mounted
	for _, p := range mountPoints {
		if mounted, err := Mounted(p); err != nil {
			t.Fatal(err)
		} else if !mounted {
			t.Fatalf("Expected %s to be mounted", p)
		}
	}

	// Ensure that devices are active
	for _, p := range []string{"1", "2"} {
		if !d.HasActivatedDevice(p) {
			t.Fatalf("Expected %s to have an active device", p)
		}
	}

	if err := d.Cleanup(); err != nil {
		t.Fatal(err)
	}

	// Ensure that all the mount points are no longer mounted
	for _, p := range mountPoints {
		if mounted, err := Mounted(p); err != nil {
			t.Fatal(err)
		} else if mounted {
			t.Fatalf("Expected %s to not be mounted", p)
		}
	}

	// Ensure that devices are no longer activated
	for _, p := range []string{"1", "2"} {
		if d.HasActivatedDevice(p) {
			t.Fatalf("Expected %s not to be an active device", p)
		}
	}
}

func TestNotMounted(t *testing.T) {
	t.Skip("FIXME: not a unit test")
	t.Skip("Not implemented")
	d := newDriver(t)
	defer cleanup(d)

	if err := d.Create("1", ""); err != nil {
		t.Fatal(err)
	}

	mounted, err := Mounted(path.Join(d.home, "mnt", "1"))
	if err != nil {
		t.Fatal(err)
	}
	if mounted {
		t.Fatal("Id 1 should not be mounted")
	}
}

func TestMounted(t *testing.T) {
	t.Skip("FIXME: not a unit test")
	d := newDriver(t)
	defer cleanup(d)

	if err := d.Create("1", ""); err != nil {
		t.Fatal(err)
	}
	if _, err := d.Get("1", ""); err != nil {
		t.Fatal(err)
	}

	mounted, err := Mounted(path.Join(d.home, "mnt", "1"))
	if err != nil {
		t.Fatal(err)
	}
	if !mounted {
		t.Fatal("Id 1 should be mounted")
	}
}

func TestInitCleanedDriver(t *testing.T) {
	t.Skip("FIXME: not a unit test")
	d := newDriver(t)

	if err := d.Create("1", ""); err != nil {
		t.Fatal(err)
	}
	if _, err := d.Get("1", ""); err != nil {
		t.Fatal(err)
	}

	if err := d.Cleanup(); err != nil {
		t.Fatal(err)
	}

	driver, err := Init(d.home)
	if err != nil {
		t.Fatal(err)
	}
	d = driver.(*Driver)
	defer cleanup(d)

	if _, err := d.Get("1", ""); err != nil {
		t.Fatal(err)
	}
}

func TestMountMountedDriver(t *testing.T) {
	t.Skip("FIXME: not a unit test")
	d := newDriver(t)
	defer cleanup(d)

	if err := d.Create("1", ""); err != nil {
		t.Fatal(err)
	}

	// Perform get on same id to ensure that it will
	// not be mounted twice
	if _, err := d.Get("1", ""); err != nil {
		t.Fatal(err)
	}
	if _, err := d.Get("1", ""); err != nil {
		t.Fatal(err)
	}
}

func TestGetReturnsValidDevice(t *testing.T) {
	t.Skip("FIXME: not a unit test")
	d := newDriver(t)
	defer cleanup(d)

	if err := d.Create("1", ""); err != nil {
		t.Fatal(err)
	}

	if !d.HasDevice("1") {
		t.Fatalf("Expected id 1 to be in device set")
	}

	if _, err := d.Get("1", ""); err != nil {
		t.Fatal(err)
	}

	if !d.HasActivatedDevice("1") {
		t.Fatalf("Expected id 1 to be activated")
	}

	if !d.HasInitializedDevice("1") {
		t.Fatalf("Expected id 1 to be initialized")
	}
}

func TestDriverGetSize(t *testing.T) {
	t.Skip("FIXME: not a unit test")
	t.Skip("Size is currently not implemented")

	d := newDriver(t)
	defer cleanup(d)

	if err := d.Create("1", ""); err != nil {
		t.Fatal(err)
	}

	mountPoint, err := d.Get("1", "")
	if err != nil {
		t.Fatal(err)
	}

	size := int64(1024)

	f, err := osCreate(path.Join(mountPoint, "test_file"))
	if err != nil {
		t.Fatal(err)
	}
	if err := f.Truncate(size); err != nil {
		t.Fatal(err)
	}
	f.Close()

	// diffSize, err := d.DiffSize("1")
	// if err != nil {
	// 	t.Fatal(err)
	// }
	// if diffSize != size {
	// 	t.Fatalf("Expected size %d got %d", size, diffSize)
	// }
}

func assertMap(t *testing.T, m map[string]bool, keys ...string) {
	for _, key := range keys {
		if _, exists := m[key]; !exists {
			t.Fatalf("Key not set: %s", key)
		}
		delete(m, key)
	}
	if len(m) != 0 {
		t.Fatalf("Unexpected keys: %v", m)
	}
}

@@ -3,11 +3,12 @@
 package devmapper

 import (
+	"syscall"
 	"unsafe"
 )

 func ioctlLoopCtlGetFree(fd uintptr) (int, error) {
-	index, _, err := sysSyscall(sysSysIoctl, fd, LoopCtlGetFree, 0)
+	index, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, LoopCtlGetFree, 0)
 	if err != 0 {
 		return 0, err
 	}
@@ -15,21 +16,21 @@ func ioctlLoopCtlGetFree(fd uintptr) (int, error) {
 }

 func ioctlLoopSetFd(loopFd, sparseFd uintptr) error {
-	if _, _, err := sysSyscall(sysSysIoctl, loopFd, LoopSetFd, sparseFd); err != 0 {
+	if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, loopFd, LoopSetFd, sparseFd); err != 0 {
 		return err
 	}
 	return nil
 }

 func ioctlLoopSetStatus64(loopFd uintptr, loopInfo *LoopInfo64) error {
-	if _, _, err := sysSyscall(sysSysIoctl, loopFd, LoopSetStatus64, uintptr(unsafe.Pointer(loopInfo))); err != 0 {
+	if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, loopFd, LoopSetStatus64, uintptr(unsafe.Pointer(loopInfo))); err != 0 {
 		return err
 	}
 	return nil
 }

 func ioctlLoopClrFd(loopFd uintptr) error {
-	if _, _, err := sysSyscall(sysSysIoctl, loopFd, LoopClrFd, 0); err != 0 {
+	if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, loopFd, LoopClrFd, 0); err != 0 {
 		return err
 	}
 	return nil
@@ -38,14 +39,14 @@ func ioctlLoopClrFd(loopFd uintptr) error {
 func ioctlLoopGetStatus64(loopFd uintptr) (*LoopInfo64, error) {
 	loopInfo := &LoopInfo64{}

-	if _, _, err := sysSyscall(sysSysIoctl, loopFd, LoopGetStatus64, uintptr(unsafe.Pointer(loopInfo))); err != 0 {
+	if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, loopFd, LoopGetStatus64, uintptr(unsafe.Pointer(loopInfo))); err != 0 {
 		return nil, err
 	}
 	return loopInfo, nil
 }

 func ioctlLoopSetCapacity(loopFd uintptr, value int) error {
-	if _, _, err := sysSyscall(sysSysIoctl, loopFd, LoopSetCapacity, uintptr(value)); err != 0 {
+	if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, loopFd, LoopSetCapacity, uintptr(value)); err != 0 {
 		return err
 	}
 	return nil
@@ -53,7 +54,7 @@ func ioctlLoopSetCapacity(loopFd uintptr, value int) error {

 func ioctlBlkGetSize64(fd uintptr) (int64, error) {
 	var size int64
-	if _, _, err := sysSyscall(sysSysIoctl, fd, BlkGetSize64, uintptr(unsafe.Pointer(&size))); err != 0 {
+	if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, BlkGetSize64, uintptr(unsafe.Pointer(&size))); err != 0 {
 		return 0, err
 	}
 	return size, nil
@@ -64,7 +65,7 @@ func ioctlBlkDiscard(fd uintptr, offset, length uint64) error {
 	r[0] = offset
 	r[1] = length

-	if _, _, err := sysSyscall(sysSysIoctl, fd, BlkDiscard, uintptr(unsafe.Pointer(&r[0]))); err != 0 {
+	if _, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, BlkDiscard, uintptr(unsafe.Pointer(&r[0]))); err != 0 {
 		return err
 	}
 	return nil
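Taken together, these wrappers implement the usual loopback-attach sequence. A rough usage sketch, assuming an already opened backing file and a host with /dev/loop-control (paths and the helper name are illustrative):

// Sketch: attach a file to the first free loop device.
func attachLoop(backing *os.File) (string, error) {
	ctl, err := os.OpenFile("/dev/loop-control", os.O_RDONLY, 0)
	if err != nil {
		return "", err
	}
	defer ctl.Close()

	index, err := ioctlLoopCtlGetFree(ctl.Fd()) // ask the kernel for a free slot
	if err != nil {
		return "", err
	}
	name := fmt.Sprintf("/dev/loop%d", index)

	loop, err := os.OpenFile(name, os.O_RDWR, 0)
	if err != nil {
		return "", err
	}
	defer loop.Close()

	// Bind the backing file to the loop device.
	if err := ioctlLoopSetFd(loop.Fd(), backing.Fd()); err != nil {
		return "", err
	}
	return name, nil
}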

@@ -3,25 +3,27 @@
 package devmapper

 import (
+	"os"
 	"path/filepath"
+	"syscall"
 )

 // FIXME: this is copy-pasted from the aufs driver.
 // It should be moved into the core.

-var Mounted = func(mountpoint string) (bool, error) {
-	mntpoint, err := osStat(mountpoint)
+func Mounted(mountpoint string) (bool, error) {
+	mntpoint, err := os.Stat(mountpoint)
 	if err != nil {
-		if osIsNotExist(err) {
+		if os.IsNotExist(err) {
 			return false, nil
 		}
 		return false, err
 	}
-	parent, err := osStat(filepath.Join(mountpoint, ".."))
+	parent, err := os.Stat(filepath.Join(mountpoint, ".."))
 	if err != nil {
 		return false, err
 	}
-	mntpointSt := toSysStatT(mntpoint.Sys())
-	parentSt := toSysStatT(parent.Sys())
+	mntpointSt := mntpoint.Sys().(*syscall.Stat_t)
+	parentSt := parent.Sys().(*syscall.Stat_t)
 	return mntpointSt.Dev != parentSt.Dev, nil
 }
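The check is purely stat-based: a directory is a mount point exactly when its device number (st_dev) differs from its parent's, so no /proc/mounts parsing is needed. A usage sketch (the path is illustrative):

// Sketch: Mounted reports true only when a filesystem is mounted at mnt.
func reportMount(mnt string) {
	if ok, err := Mounted(mnt); err == nil && ok {
		fmt.Println("something is mounted at", mnt)
	}
}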

@@ -1,57 +0,0 @@
-// +build linux,amd64
-
-package devmapper
-
-import (
-	"os"
-	"os/exec"
-	"syscall"
-)
-
-type (
-	sysStatT syscall.Stat_t
-	sysErrno syscall.Errno
-
-	osFile struct{ *os.File }
-)
-
-var (
-	sysMount       = syscall.Mount
-	sysUnmount     = syscall.Unmount
-	sysCloseOnExec = syscall.CloseOnExec
-	sysSyscall     = syscall.Syscall
-
-	osOpenFile = func(name string, flag int, perm os.FileMode) (*osFile, error) {
-		f, err := os.OpenFile(name, flag, perm)
-		return &osFile{File: f}, err
-	}
-	osOpen       = func(name string) (*osFile, error) { f, err := os.Open(name); return &osFile{File: f}, err }
-	osNewFile    = os.NewFile
-	osCreate     = os.Create
-	osStat       = os.Stat
-	osIsNotExist = os.IsNotExist
-	osIsExist    = os.IsExist
-	osMkdirAll   = os.MkdirAll
-	osRemoveAll  = os.RemoveAll
-	osRename     = os.Rename
-	osReadlink   = os.Readlink
-
-	execRun = func(name string, args ...string) error { return exec.Command(name, args...).Run() }
-)
-
-const (
-	sysMsMgcVal = syscall.MS_MGC_VAL
-	sysMsRdOnly = syscall.MS_RDONLY
-	sysEInval   = syscall.EINVAL
-	sysSysIoctl = syscall.SYS_IOCTL
-	sysEBusy    = syscall.EBUSY
-
-	osORdOnly    = os.O_RDONLY
-	osORdWr      = os.O_RDWR
-	osOCreate    = os.O_CREATE
-	osModeDevice = os.ModeDevice
-)
-
-func toSysStatT(i interface{}) *sysStatT {
-	return (*sysStatT)(i.(*syscall.Stat_t))
-}

@@ -1,9 +1,9 @@
 package graphdriver

 import (
+	"errors"
 	"fmt"
 	"github.com/dotcloud/docker/archive"
-	"github.com/dotcloud/docker/utils"
 	"os"
 	"path"
 )
@@ -43,6 +43,8 @@ var (
 		"devicemapper",
 		"vfs",
 	}
+
+	ErrNotSupported = errors.New("driver not supported")
 )

 func init() {
@@ -62,7 +64,7 @@ func GetDriver(name, home string) (Driver, error) {
 	if initFunc, exists := drivers[name]; exists {
 		return initFunc(path.Join(home, name))
 	}
-	return nil, fmt.Errorf("No such driver: %s", name)
+	return nil, ErrNotSupported
 }

 func New(root string) (driver Driver, err error) {
@@ -74,9 +76,12 @@ func New(root string) (driver Driver, err error) {

 	// Check for priority drivers first
 	for _, name := range priority {
-		if driver, err = GetDriver(name, root); err != nil {
-			utils.Debugf("Error loading driver %s: %s", name, err)
-			continue
+		driver, err = GetDriver(name, root)
+		if err != nil {
+			if err == ErrNotSupported {
+				continue
+			}
+			return nil, err
 		}
 		return driver, nil
 	}
@@ -84,9 +89,12 @@ func New(root string) (driver Driver, err error) {
 	// Check all registered drivers if no priority driver is found
 	for _, initFunc := range drivers {
 		if driver, err = initFunc(root); err != nil {
-			continue
+			if err == ErrNotSupported {
+				continue
+			}
+			return nil, err
 		}
 		return driver, nil
 	}
-	return nil, err
+	return nil, fmt.Errorf("No supported storage backend found")
 }
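With ErrNotSupported in place, driver probing becomes explicit: unsupported backends are skipped, while real initialization failures abort immediately instead of silently falling through to the next driver. A usage sketch (the root path is illustrative):

// Sketch: pick the first storage backend the host supports.
func pickDriver() {
	driver, err := graphdriver.New("/var/lib/docker")
	if err != nil {
		log.Fatalf("no usable storage backend: %v", err)
	}
	fmt.Println("using graph driver:", driver)
}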

daemon/graphdriver/graphtest/graphtest.go (new file, 228 lines)
@@ -0,0 +1,228 @@
package graphtest

import (
	"github.com/dotcloud/docker/daemon/graphdriver"
	"io/ioutil"
	"os"
	"path"
	"syscall"
	"testing"
)

var (
	drv *Driver
)

type Driver struct {
	graphdriver.Driver
	root     string
	refCount int
}

func newDriver(t *testing.T, name string) *Driver {
	root, err := ioutil.TempDir("/var/tmp", "docker-graphtest-")
	if err != nil {
		t.Fatal(err)
	}

	if err := os.MkdirAll(root, 0755); err != nil {
		t.Fatal(err)
	}

	d, err := graphdriver.GetDriver(name, root)
	if err != nil {
		if err == graphdriver.ErrNotSupported {
			t.Skipf("Driver %s not supported", name)
		}
		t.Fatal(err)
	}
	return &Driver{d, root, 1}
}

func cleanup(t *testing.T, d *Driver) {
	if err := d.Cleanup(); err != nil {
		t.Fatal(err)
	}
	os.RemoveAll(d.root)
}

func GetDriver(t *testing.T, name string) graphdriver.Driver {
	if drv == nil {
		drv = newDriver(t, name)
	} else {
		drv.refCount++
	}
	return drv
}

func PutDriver(t *testing.T) {
	if drv == nil {
		t.Skip("No driver to put!")
	}
	drv.refCount--
	if drv.refCount == 0 {
		cleanup(t, drv)
		drv = nil
	}
}

func verifyFile(t *testing.T, path string, mode os.FileMode, uid, gid uint32) {
	fi, err := os.Stat(path)
	if err != nil {
		t.Fatal(err)
	}

	if fi.Mode()&os.ModeType != mode&os.ModeType {
		t.Fatalf("Expected %s type 0x%x, got 0x%x", path, mode&os.ModeType, fi.Mode()&os.ModeType)
	}

	if fi.Mode()&os.ModePerm != mode&os.ModePerm {
		t.Fatalf("Expected %s mode %o, got %o", path, mode&os.ModePerm, fi.Mode()&os.ModePerm)
	}

	if fi.Mode()&os.ModeSticky != mode&os.ModeSticky {
		t.Fatalf("Expected %s sticky 0x%x, got 0x%x", path, mode&os.ModeSticky, fi.Mode()&os.ModeSticky)
	}

	if fi.Mode()&os.ModeSetuid != mode&os.ModeSetuid {
		t.Fatalf("Expected %s setuid 0x%x, got 0x%x", path, mode&os.ModeSetuid, fi.Mode()&os.ModeSetuid)
	}

	if fi.Mode()&os.ModeSetgid != mode&os.ModeSetgid {
		t.Fatalf("Expected %s setgid 0x%x, got 0x%x", path, mode&os.ModeSetgid, fi.Mode()&os.ModeSetgid)
	}

	if stat, ok := fi.Sys().(*syscall.Stat_t); ok {
		if stat.Uid != uid {
			t.Fatalf("%s not owned by uid %d", path, uid)
		}
		if stat.Gid != gid {
			t.Fatalf("%s not owned by gid %d", path, gid)
		}
	}
}

// Creates a new image and verifies that it is empty and has the right metadata
func DriverTestCreateEmpty(t *testing.T, drivername string) {
	driver := GetDriver(t, drivername)
	defer PutDriver(t)

	if err := driver.Create("empty", ""); err != nil {
		t.Fatal(err)
	}

	if !driver.Exists("empty") {
		t.Fatal("Newly created image doesn't exist")
	}

	dir, err := driver.Get("empty", "")
	if err != nil {
		t.Fatal(err)
	}

	verifyFile(t, dir, 0755|os.ModeDir, 0, 0)

	// Verify that the directory is empty
	fis, err := ioutil.ReadDir(dir)
	if err != nil {
		t.Fatal(err)
	}

	if len(fis) != 0 {
		t.Fatal("New directory not empty")
	}

	driver.Put("empty")

	if err := driver.Remove("empty"); err != nil {
		t.Fatal(err)
	}
}

func createBase(t *testing.T, driver graphdriver.Driver, name string) {
	// We need to be able to set any perms
	oldmask := syscall.Umask(0)
	defer syscall.Umask(oldmask)

	if err := driver.Create(name, ""); err != nil {
		t.Fatal(err)
	}

	dir, err := driver.Get(name, "")
	if err != nil {
		t.Fatal(err)
	}
	defer driver.Put(name)

	subdir := path.Join(dir, "a subdir")
	if err := os.Mkdir(subdir, 0705|os.ModeSticky); err != nil {
		t.Fatal(err)
	}
	if err := os.Chown(subdir, 1, 2); err != nil {
		t.Fatal(err)
	}

	file := path.Join(dir, "a file")
	if err := ioutil.WriteFile(file, []byte("Some data"), 0222|os.ModeSetuid); err != nil {
		t.Fatal(err)
	}
}
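The Umask(0) dance in createBase matters because os.Mkdir and ioutil.WriteFile pass their permission bits through the process umask, which would strip bits and break the exact-permission assertions in verifyFile. A small sketch of the difference, assuming the same imports:

// Sketch: clear the umask so the requested mode is applied verbatim.
func mkdirExact(path string, perm os.FileMode) error {
	old := syscall.Umask(0) // save and clear the umask
	defer syscall.Umask(old) // restore it for the rest of the process
	return os.Mkdir(path, perm) // created with perm exactly, not perm &^ umask
}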

func verifyBase(t *testing.T, driver graphdriver.Driver, name string) {
	dir, err := driver.Get(name, "")
	if err != nil {
		t.Fatal(err)
	}
	defer driver.Put(name)

	subdir := path.Join(dir, "a subdir")
	verifyFile(t, subdir, 0705|os.ModeDir|os.ModeSticky, 1, 2)

	file := path.Join(dir, "a file")
	verifyFile(t, file, 0222|os.ModeSetuid, 0, 0)

	fis, err := ioutil.ReadDir(dir)
	if err != nil {
		t.Fatal(err)
	}

	if len(fis) != 2 {
		t.Fatal("Unexpected files in base image")
	}
}

func DriverTestCreateBase(t *testing.T, drivername string) {
	driver := GetDriver(t, drivername)
	defer PutDriver(t)

	createBase(t, driver, "Base")
	verifyBase(t, driver, "Base")

	if err := driver.Remove("Base"); err != nil {
		t.Fatal(err)
	}
}

func DriverTestCreateSnap(t *testing.T, drivername string) {
	driver := GetDriver(t, drivername)
	defer PutDriver(t)

	createBase(t, driver, "Base")

	if err := driver.Create("Snap", "Base"); err != nil {
		t.Fatal(err)
	}

	verifyBase(t, driver, "Snap")

	if err := driver.Remove("Snap"); err != nil {
		t.Fatal(err)
	}

	if err := driver.Remove("Base"); err != nil {
		t.Fatal(err)
	}
}

@@ -47,7 +47,7 @@ func (d *Driver) Create(id, parent string) error {
 	if err := os.MkdirAll(path.Dir(dir), 0700); err != nil {
 		return err
 	}
-	if err := os.Mkdir(dir, 0700); err != nil {
+	if err := os.Mkdir(dir, 0755); err != nil {
 		return err
 	}
 	if parent == "" {

daemon/graphdriver/vfs/vfs_test.go (new file, 28 lines)
@@ -0,0 +1,28 @@
package vfs

import (
	"github.com/dotcloud/docker/daemon/graphdriver/graphtest"
	"testing"
)

// This avoids creating a new driver for each test if all tests are run
// Make sure to put new tests between TestVfsSetup and TestVfsTeardown
func TestVfsSetup(t *testing.T) {
	graphtest.GetDriver(t, "vfs")
}

func TestVfsCreateEmpty(t *testing.T) {
	graphtest.DriverTestCreateEmpty(t, "vfs")
}

func TestVfsCreateBase(t *testing.T) {
	graphtest.DriverTestCreateBase(t, "vfs")
}

func TestVfsCreateSnap(t *testing.T) {
	graphtest.DriverTestCreateSnap(t, "vfs")
}

func TestVfsTeardown(t *testing.T) {
	graphtest.PutDriver(t)
}

@@ -26,5 +26,8 @@ func (history *History) Swap(i, j int) {

 func (history *History) Add(container *Container) {
 	*history = append(*history, container)
 }
+
+func (history *History) Sort() {
+	sort.Sort(history)
+}
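Sort works because *History already satisfies sort.Interface (Len, Less, and the Swap shown above), so the new method just hands the receiver to sort.Sort. The same pattern in miniature:

// Sketch: any type with Len/Less/Swap can be sorted this way.
type byAge []int

func (a byAge) Len() int           { return len(a) }
func (a byAge) Less(i, j int) bool { return a[i] < a[j] }
func (a byAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }

func sortAges(ages byAge) { sort.Sort(ages) }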

daemon/inspect.go (new file, 27 lines)
@@ -0,0 +1,27 @@
package daemon

import (
	"encoding/json"

	"github.com/dotcloud/docker/engine"
	"github.com/dotcloud/docker/runconfig"
)

func (daemon *Daemon) ContainerInspect(job *engine.Job) engine.Status {
	if len(job.Args) != 1 {
		return job.Errorf("usage: %s NAME", job.Name)
	}
	name := job.Args[0]
	if container := daemon.Get(name); container != nil {
		b, err := json.Marshal(&struct {
			*Container
			HostConfig *runconfig.HostConfig
		}{container, container.HostConfig()})
		if err != nil {
			return job.Error(err)
		}
		job.Stdout.Write(b)
		return engine.StatusOK
	}
	return job.Errorf("No such container: %s", name)
}
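The anonymous struct is what keeps the response flat: embedding *Container promotes all of its JSON fields to the top level, with HostConfig serialized alongside them. The same trick in miniature (types here are illustrative):

// Sketch: embedding merges the inner type's fields into the output object.
type inner struct {
	Name string
	ID   string
}

func embedExample() ([]byte, error) {
	in := &inner{Name: "web", ID: "abc123"}
	return json.Marshal(&struct {
		*inner
		Extra string
	}{in, "more"})
	// Produces {"Name":"web","ID":"abc123","Extra":"more"}.
}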

@@ -23,7 +23,7 @@ func (settings *NetworkSettings) PortMappingAPI() *engine.Table {
 		p, _ := nat.ParsePort(port.Port())
 		if len(bindings) == 0 {
 			out := &engine.Env{}
 			out.SetInt("PublicPort", p)
 			out.SetInt("PrivatePort", p)
 			out.Set("Type", port.Proto())
 			outs.Add(out)
 			continue

@@ -380,7 +380,7 @@ func AllocatePort(job *engine.Job) engine.Status {
 		ip            = defaultBindingIP
 		id            = job.Args[0]
 		hostIP        = job.Getenv("HostIP")
-		hostPort      = job.GetenvInt("HostPort")
+		origHostPort  = job.GetenvInt("HostPort")
 		containerPort = job.GetenvInt("ContainerPort")
 		proto         = job.Getenv("Proto")
 		network       = currentInterfaces[id]
@@ -390,29 +390,45 @@ func AllocatePort(job *engine.Job) engine.Status {
 		ip = net.ParseIP(hostIP)
 	}

-	// host ip, proto, and host port
-	hostPort, err = portallocator.RequestPort(ip, proto, hostPort)
-	if err != nil {
-		return job.Error(err)
-	}
-
 	var (
+		hostPort  int
 		container net.Addr
 		host      net.Addr
 	)

-	if proto == "tcp" {
-		host = &net.TCPAddr{IP: ip, Port: hostPort}
-		container = &net.TCPAddr{IP: network.IP, Port: containerPort}
-	} else {
-		host = &net.UDPAddr{IP: ip, Port: hostPort}
-		container = &net.UDPAddr{IP: network.IP, Port: containerPort}
+	/*
+	 Try up to 10 times to get a port that's not already allocated.
+
+	 In the event of failure to bind, return the error that portmapper.Map
+	 yields.
+	*/
+	for i := 0; i < 10; i++ {
+		// host ip, proto, and host port
+		hostPort, err = portallocator.RequestPort(ip, proto, origHostPort)
+		if err != nil {
+			return job.Error(err)
+		}
+
+		if proto == "tcp" {
+			host = &net.TCPAddr{IP: ip, Port: hostPort}
+			container = &net.TCPAddr{IP: network.IP, Port: containerPort}
+		} else {
+			host = &net.UDPAddr{IP: ip, Port: hostPort}
+			container = &net.UDPAddr{IP: network.IP, Port: containerPort}
+		}
+
+		if err = portmapper.Map(container, ip, hostPort); err == nil {
+			break
+		}
+
+		job.Logf("Failed to bind %s:%d for container address %s:%d. Trying another port.", ip.String(), hostPort, network.IP.String(), containerPort)
 	}

-	if err := portmapper.Map(container, ip, hostPort); err != nil {
-		portallocator.ReleasePort(ip, proto, hostPort)
+	if err != nil {
 		return job.Error(err)
 	}

 	network.PortMappings = append(network.PortMappings, host)

 	out := engine.Env{}
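Note how err carries the last portmapper.Map failure out of the loop, so after ten misses the job fails with the real bind error rather than a generic one. A stripped-down sketch of the same allocate-then-bind retry pattern (allocate and bind are stand-ins for portallocator.RequestPort and portmapper.Map):

// Sketch: retry allocation when binding races with another process.
func bindWithRetry(allocate func() int, bind func(int) error, tries int) (int, error) {
	var err error
	for i := 0; i < tries; i++ {
		port := allocate() // reserve a fresh candidate each attempt
		if err = bind(port); err == nil {
			return port, nil // bound successfully
		}
		// lost the race for this port; try another candidate
	}
	return 0, err // surface the last bind error, as AllocatePort does
}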

@@ -7,9 +7,19 @@ import (
 	"github.com/dotcloud/docker/pkg/collections"
 	"net"
 	"sync"
+	"sync/atomic"
 )

-type networkSet map[string]*collections.OrderedIntSet
+type allocatedMap struct {
+	*collections.OrderedIntSet
+	last int32
+}
+
+func newAllocatedMap() *allocatedMap {
+	return &allocatedMap{OrderedIntSet: collections.NewOrderedIntSet()}
+}
+
+type networkSet map[string]*allocatedMap

 var (
 	ErrNoAvailableIPs = errors.New("no available ip addresses on network")
@@ -19,7 +29,6 @@ var (
 var (
 	lock         = sync.Mutex{}
 	allocatedIPs = networkSet{}
-	availableIPS = networkSet{}
 )

 // RequestIP requests an available ip from the given network. It
@@ -55,13 +64,11 @@ func ReleaseIP(address *net.IPNet, ip *net.IP) error {
 	checkAddress(address)

 	var (
-		existing  = allocatedIPs[address.String()]
-		available = availableIPS[address.String()]
+		allocated = allocatedIPs[address.String()]
 		pos       = getPosition(address, ip)
 	)

-	existing.Remove(int(pos))
-	available.Push(int(pos))
+	allocated.Remove(int(pos))

 	return nil
 }
@@ -82,29 +89,19 @@ func getPosition(address *net.IPNet, ip *net.IP) int32 {
 func getNextIp(address *net.IPNet) (*net.IP, error) {
 	var (
 		ownIP     = ipToInt(&address.IP)
-		available = availableIPS[address.String()]
 		allocated = allocatedIPs[address.String()]
 		first, _  = networkdriver.NetworkRange(address)
 		base      = ipToInt(&first)
 		size      = int(networkdriver.NetworkSize(address.Mask))
 		max       = int32(size - 2) // size -1 for the broadcast address, -1 for the gateway address
-		pos       = int32(available.Pop())
+		pos       = atomic.LoadInt32(&allocated.last)
 	)

-	// We pop and push the position not the ip
-	if pos != 0 {
-		ip := intToIP(int32(base + pos))
-		allocated.Push(int(pos))
-
-		return ip, nil
-	}
-
-	var (
-		firstNetIP = address.IP.To4().Mask(address.Mask)
-		firstAsInt = ipToInt(&firstNetIP) + 1
-	)
-
-	pos = int32(allocated.PullBack())
 	for i := int32(0); i < max; i++ {
 		pos = pos%max + 1
 		next := int32(base + pos)
@@ -116,6 +113,7 @@ func getNextIp(address *net.IPNet) (*net.IP, error) {
 		if !allocated.Exists(int(pos)) {
 			ip := intToIP(next)
 			allocated.Push(int(pos))
+			atomic.StoreInt32(&allocated.last, pos)
 			return ip, nil
 		}
 	}
@@ -124,15 +122,14 @@ func getNextIp(address *net.IPNet) (*net.IP, error) {

 func registerIP(address *net.IPNet, ip *net.IP) error {
 	var (
-		existing  = allocatedIPs[address.String()]
-		available = availableIPS[address.String()]
+		allocated = allocatedIPs[address.String()]
 		pos       = getPosition(address, ip)
 	)

-	if existing.Exists(int(pos)) {
+	if allocated.Exists(int(pos)) {
 		return ErrIPAlreadyAllocated
 	}
-	available.Remove(int(pos))
+	atomic.StoreInt32(&allocated.last, pos)

 	return nil
 }
@@ -153,7 +150,6 @@ func intToIP(n int32) *net.IP {
 func checkAddress(address *net.IPNet) {
 	key := address.String()
 	if _, exists := allocatedIPs[key]; !exists {
-		allocatedIPs[key] = collections.NewOrderedIntSet()
-		availableIPS[key] = collections.NewOrderedIntSet()
+		allocatedIPs[key] = newAllocatedMap()
 	}
 }
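Allocation is now round-robin: `last` remembers the previous offset and `pos = pos%max + 1` walks the range 1..max with wraparound, so freshly released addresses are not immediately handed back out. The stepping in isolation:

// Sketch: positions cycle 1..max, resuming after the last allocation.
func nextPositions(last, max, n int32) []int32 {
	out := make([]int32, 0, n)
	pos := last
	for i := int32(0); i < n; i++ {
		pos = pos%max + 1 // 1, 2, ..., max, then back to 1
		out = append(out, pos)
	}
	return out
}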

@@ -8,7 +8,6 @@ import (

 func reset() {
 	allocatedIPs = networkSet{}
-	availableIPS = networkSet{}
 }

 func TestRequestNewIps(t *testing.T) {
@@ -18,8 +17,10 @@ func TestRequestNewIps(t *testing.T) {
 		Mask: []byte{255, 255, 255, 0},
 	}

+	var ip *net.IP
+	var err error
 	for i := 2; i < 10; i++ {
-		ip, err := RequestIP(network, nil)
+		ip, err = RequestIP(network, nil)
 		if err != nil {
 			t.Fatal(err)
 		}
@@ -28,6 +29,17 @@ func TestRequestNewIps(t *testing.T) {
 			t.Fatalf("Expected ip %s got %s", expected, ip.String())
 		}
 	}
+	value := intToIP(ipToInt(ip) + 1).String()
+	if err := ReleaseIP(network, ip); err != nil {
+		t.Fatal(err)
+	}
+	ip, err = RequestIP(network, nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if ip.String() != value {
+		t.Fatalf("Expected to receive the next ip %s got %s", value, ip.String())
+	}
 }

 func TestReleaseIp(t *testing.T) {
@@ -64,6 +76,17 @@ func TestGetReleasedIp(t *testing.T) {
 		t.Fatal(err)
 	}

+	for i := 0; i < 252; i++ {
+		_, err = RequestIP(network, nil)
+		if err != nil {
+			t.Fatal(err)
+		}
+		err = ReleaseIP(network, ip)
+		if err != nil {
+			t.Fatal(err)
+		}
+	}
+
 	ip, err = RequestIP(network, nil)
 	if err != nil {
 		t.Fatal(err)
@@ -185,24 +208,6 @@ func TestIPAllocator(t *testing.T) {

 		newIPs[i] = ip
 	}
-	// Before loop begin
-	// 2(u) - 3(u) - 4(f) - 5(f) - 6(f)
-	// ↑
-
-	// After i = 0
-	// 2(u) - 3(u) - 4(f) - 5(u) - 6(f)
-	// ↑
-
-	// After i = 1
-	// 2(u) - 3(u) - 4(f) - 5(u) - 6(u)
-	// ↑
-
-	// After i = 2
-	// 2(u) - 3(u) - 4(u) - 5(u) - 6(u)
-	// ↑
-
+	// Reordered these because the new set will always return the
+	// lowest ips first and not in the order that they were released
 	assertIPEquals(t, &expectedIPs[2], newIPs[0])
 	assertIPEquals(t, &expectedIPs[3], newIPs[1])
 	assertIPEquals(t, &expectedIPs[4], newIPs[2])
@@ -234,6 +239,86 @@ func TestAllocateFirstIP(t *testing.T) {
 	}
 }

+func TestAllocateAllIps(t *testing.T) {
+	defer reset()
+	network := &net.IPNet{
+		IP:   []byte{192, 168, 0, 1},
+		Mask: []byte{255, 255, 255, 0},
+	}
+
+	var (
+		current, first *net.IP
+		err            error
+		isFirst        = true
+	)
+
+	for err == nil {
+		current, err = RequestIP(network, nil)
+		if isFirst {
+			first = current
+			isFirst = false
+		}
+	}
+
+	if err != ErrNoAvailableIPs {
+		t.Fatal(err)
+	}
+
+	if _, err := RequestIP(network, nil); err != ErrNoAvailableIPs {
+		t.Fatal(err)
+	}
+
+	if err := ReleaseIP(network, first); err != nil {
+		t.Fatal(err)
+	}
+
+	again, err := RequestIP(network, nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	assertIPEquals(t, first, again)
+}
+
+func TestAllocateDifferentSubnets(t *testing.T) {
+	defer reset()
+	network1 := &net.IPNet{
+		IP:   []byte{192, 168, 0, 1},
+		Mask: []byte{255, 255, 255, 0},
+	}
+	network2 := &net.IPNet{
+		IP:   []byte{127, 0, 0, 1},
+		Mask: []byte{255, 255, 255, 0},
+	}
+	expectedIPs := []net.IP{
+		0: net.IPv4(192, 168, 0, 2),
+		1: net.IPv4(192, 168, 0, 3),
+		2: net.IPv4(127, 0, 0, 2),
+		3: net.IPv4(127, 0, 0, 3),
+	}
+
+	ip11, err := RequestIP(network1, nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+	ip12, err := RequestIP(network1, nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+	ip21, err := RequestIP(network2, nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+	ip22, err := RequestIP(network2, nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+	assertIPEquals(t, &expectedIPs[0], ip11)
+	assertIPEquals(t, &expectedIPs[1], ip12)
+	assertIPEquals(t, &expectedIPs[2], ip21)
+	assertIPEquals(t, &expectedIPs[3], ip22)
+}
+
 func assertIPEquals(t *testing.T, ip1, ip2 *net.IP) {
 	if !ip1.Equal(*ip2) {
 		t.Fatalf("Expected IP %s, got %s", ip1, ip2)

@@ -2,21 +2,21 @@ package portallocator

 import (
 	"errors"
-	"github.com/dotcloud/docker/pkg/collections"
 	"net"
 	"sync"
 )

+type (
+	portMap     map[int]bool
+	protocolMap map[string]portMap
+	ipMapping   map[string]protocolMap
+)
+
 const (
 	BeginPortRange = 49153
 	EndPortRange   = 65535
 )

-type (
-	portMappings map[string]*collections.OrderedIntSet
-	ipMapping    map[string]portMappings
-)
-
 var (
 	ErrAllPortsAllocated    = errors.New("all ports are allocated")
 	ErrPortAlreadyAllocated = errors.New("port has already been allocated")
@@ -24,165 +24,106 @@ var (
 )

 var (
-	currentDynamicPort = map[string]int{
-		"tcp": BeginPortRange - 1,
-		"udp": BeginPortRange - 1,
-	}
-	defaultIP             = net.ParseIP("0.0.0.0")
-	defaultAllocatedPorts = portMappings{}
-	otherAllocatedPorts   = ipMapping{}
-	lock                  = sync.Mutex{}
+	mutex sync.Mutex
+
+	defaultIP = net.ParseIP("0.0.0.0")
+	globalMap = ipMapping{}
 )

-func init() {
-	defaultAllocatedPorts["tcp"] = collections.NewOrderedIntSet()
-	defaultAllocatedPorts["udp"] = collections.NewOrderedIntSet()
-}
-
 // RequestPort returns an available port if the port is 0
 // If the provided port is not 0 then it will be checked if
 // it is available for allocation
 func RequestPort(ip net.IP, proto string, port int) (int, error) {
-	lock.Lock()
-	defer lock.Unlock()
+	mutex.Lock()
+	defer mutex.Unlock()

-	if err := validateProtocol(proto); err != nil {
+	if err := validateProto(proto); err != nil {
 		return 0, err
 	}

-	// If the user requested a specific port to be allocated
+	ip = getDefault(ip)
+
+	mapping := getOrCreate(ip)
+
 	if port > 0 {
-		if err := registerSetPort(ip, proto, port); err != nil {
-			return 0, err
+		if !mapping[proto][port] {
+			mapping[proto][port] = true
+			return port, nil
+		} else {
+			return 0, ErrPortAlreadyAllocated
 		}
-		return port, nil
+	} else {
+		port, err := findPort(ip, proto)
+		if err != nil {
+			return 0, err
+		}
+		return port, nil
 	}
-	return registerDynamicPort(ip, proto)
 }

 // ReleasePort will return the provided port back into the
 // pool for reuse
 func ReleasePort(ip net.IP, proto string, port int) error {
-	lock.Lock()
-	defer lock.Unlock()
+	mutex.Lock()
+	defer mutex.Unlock()

-	if err := validateProtocol(proto); err != nil {
-		return err
-	}
+	ip = getDefault(ip)

-	allocated := defaultAllocatedPorts[proto]
-	allocated.Remove(port)
+	mapping := getOrCreate(ip)
+	delete(mapping[proto], port)

-	if !equalsDefault(ip) {
-		registerIP(ip)
-
-		// Remove the port for the specific ip address
-		allocated = otherAllocatedPorts[ip.String()][proto]
-		allocated.Remove(port)
-	}
 	return nil
 }

 func ReleaseAll() error {
-	lock.Lock()
-	defer lock.Unlock()
+	mutex.Lock()
+	defer mutex.Unlock()

-	currentDynamicPort["tcp"] = BeginPortRange - 1
-	currentDynamicPort["udp"] = BeginPortRange - 1
-
-	defaultAllocatedPorts = portMappings{}
-	defaultAllocatedPorts["tcp"] = collections.NewOrderedIntSet()
-	defaultAllocatedPorts["udp"] = collections.NewOrderedIntSet()
-
-	otherAllocatedPorts = ipMapping{}
+	globalMap = ipMapping{}

 	return nil
 }

-func registerDynamicPort(ip net.IP, proto string) (int, error) {
-	if !equalsDefault(ip) {
-		registerIP(ip)
-
-		ipAllocated := otherAllocatedPorts[ip.String()][proto]
-
-		port, err := findNextPort(proto, ipAllocated)
-		if err != nil {
-			return 0, err
-		}
-		ipAllocated.Push(port)
-		return port, nil
-
-	} else {
-
-		allocated := defaultAllocatedPorts[proto]
-
-		port, err := findNextPort(proto, allocated)
-		if err != nil {
-			return 0, err
-		}
-		allocated.Push(port)
-		return port, nil
-	}
-}
-
-func registerSetPort(ip net.IP, proto string, port int) error {
-	allocated := defaultAllocatedPorts[proto]
-	if allocated.Exists(port) {
-		return ErrPortAlreadyAllocated
-	}
-
-	if !equalsDefault(ip) {
-		registerIP(ip)
-
-		ipAllocated := otherAllocatedPorts[ip.String()][proto]
-		if ipAllocated.Exists(port) {
-			return ErrPortAlreadyAllocated
-		}
-		ipAllocated.Push(port)
-	} else {
-		allocated.Push(port)
-	}
-	return nil
-}
-
-func equalsDefault(ip net.IP) bool {
-	return ip == nil || ip.Equal(defaultIP)
-}
-
-func findNextPort(proto string, allocated *collections.OrderedIntSet) (int, error) {
-	port := nextPort(proto)
-	startSearchPort := port
-	for allocated.Exists(port) {
-		port = nextPort(proto)
-		if startSearchPort == port {
-			return 0, ErrAllPortsAllocated
-		}
-	}
-	return port, nil
-}
-
-func nextPort(proto string) int {
-	c := currentDynamicPort[proto] + 1
-	if c > EndPortRange {
-		c = BeginPortRange
-	}
-	currentDynamicPort[proto] = c
-	return c
-}
-
-func registerIP(ip net.IP) {
-	if _, exists := otherAllocatedPorts[ip.String()]; !exists {
-		otherAllocatedPorts[ip.String()] = portMappings{
-			"tcp": collections.NewOrderedIntSet(),
-			"udp": collections.NewOrderedIntSet(),
-		}
-	}
-}
-
-func validateProtocol(proto string) error {
-	if _, exists := defaultAllocatedPorts[proto]; !exists {
+func getOrCreate(ip net.IP) protocolMap {
+	ipstr := ip.String()
+
+	if _, ok := globalMap[ipstr]; !ok {
+		globalMap[ipstr] = protocolMap{
+			"tcp": portMap{},
+			"udp": portMap{},
+		}
+	}
+
+	return globalMap[ipstr]
+}
+
+func findPort(ip net.IP, proto string) (int, error) {
+	port := BeginPortRange
+
+	mapping := getOrCreate(ip)
+
+	for mapping[proto][port] {
+		port++
+
+		if port > EndPortRange {
+			return 0, ErrAllPortsAllocated
+		}
+	}
+
+	mapping[proto][port] = true
+
+	return port, nil
+}
+
+func getDefault(ip net.IP) net.IP {
+	if ip == nil {
+		return defaultIP
+	}
+
+	return ip
+}
+
+func validateProto(proto string) error {
+	if proto != "tcp" && proto != "udp" {
 		return ErrUnknownProtocol
 	}

 	return nil
 }
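End to end, the new map-based allocator behaves like this (a usage sketch from within the package; passing 0 means "pick any dynamic port in 49153-65535", and a nil IP defaults to 0.0.0.0):

// Sketch: dynamic allocation, a conflicting explicit request, then release.
func demo() {
	port, err := RequestPort(nil, "tcp", 0)
	if err != nil {
		log.Fatal(err)
	}
	// Requesting the same port again now fails.
	if _, err := RequestPort(nil, "tcp", port); err != nil {
		fmt.Println(err) // "port has already been allocated"
	}
	ReleasePort(nil, "tcp", port) // back into the pool
}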

@@ -2,9 +2,10 @@ package daemon

 import (
 	"fmt"
-	"github.com/dotcloud/docker/utils"
 	"sync"
 	"time"
+
+	"github.com/dotcloud/docker/pkg/units"
 )

 type State struct {
@@ -22,12 +23,12 @@ func (s *State) String() string {
 	defer s.RUnlock()

 	if s.Running {
-		return fmt.Sprintf("Up %s", utils.HumanDuration(time.Now().UTC().Sub(s.StartedAt)))
+		return fmt.Sprintf("Up %s", units.HumanDuration(time.Now().UTC().Sub(s.StartedAt)))
 	}
 	if s.FinishedAt.IsZero() {
 		return ""
 	}
-	return fmt.Sprintf("Exited (%d) %s ago", s.ExitCode, utils.HumanDuration(time.Now().UTC().Sub(s.FinishedAt)))
+	return fmt.Sprintf("Exited (%d) %s ago", s.ExitCode, units.HumanDuration(time.Now().UTC().Sub(s.FinishedAt)))
 }

 func (s *State) IsRunning() bool {
|
||||
|
|
|
@ -10,7 +10,7 @@ import (
|
|||
|
||||
"github.com/dotcloud/docker/archive"
|
||||
"github.com/dotcloud/docker/daemon/execdriver"
|
||||
"github.com/dotcloud/docker/utils"
|
||||
"github.com/dotcloud/docker/pkg/symlink"
|
||||
)
|
||||
|
||||
type BindMap struct {
|
||||
|
@ -40,8 +40,11 @@ func setupMountsForContainer(container *Container) error {
|
|||
{container.ResolvConfPath, "/etc/resolv.conf", false, true},
|
||||
}
|
||||
|
||||
if container.HostnamePath != "" && container.HostsPath != "" {
|
||||
if container.HostnamePath != "" {
|
||||
mounts = append(mounts, execdriver.Mount{container.HostnamePath, "/etc/hostname", false, true})
|
||||
}
|
||||
|
||||
if container.HostsPath != "" {
|
||||
mounts = append(mounts, execdriver.Mount{container.HostsPath, "/etc/hosts", false, true})
|
||||
}
|
||||
|
||||
|
@ -94,11 +97,11 @@ func applyVolumesFrom(container *Container) error {
|
|||
if _, exists := container.Volumes[volPath]; exists {
|
||||
continue
|
||||
}
|
||||
stat, err := os.Stat(filepath.Join(c.basefs, volPath))
|
||||
stat, err := os.Stat(c.getResourcePath(volPath))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := createIfNotExists(filepath.Join(container.basefs, volPath), stat.IsDir()); err != nil {
|
||||
if err := createIfNotExists(container.getResourcePath(volPath), stat.IsDir()); err != nil {
|
||||
return err
|
||||
}
|
||||
container.Volumes[volPath] = id
|
||||
|
@ -162,115 +165,17 @@ func createVolumes(container *Container) error {
|
|||
return err
|
||||
}
|
||||
|
||||
volumesDriver := container.daemon.volumes.Driver()
|
||||
// Create the requested volumes if they don't exist
|
||||
for volPath := range container.Config.Volumes {
|
||||
volPath = filepath.Clean(volPath)
|
||||
volIsDir := true
|
||||
// Skip existing volumes
|
||||
if _, exists := container.Volumes[volPath]; exists {
|
||||
continue
|
||||
}
|
||||
var srcPath string
|
||||
var isBindMount bool
|
||||
srcRW := false
|
||||
// If an external bind is defined for this volume, use that as a source
|
||||
if bindMap, exists := binds[volPath]; exists {
|
||||
isBindMount = true
|
||||
srcPath = bindMap.SrcPath
|
||||
if !filepath.IsAbs(srcPath) {
|
||||
return fmt.Errorf("%s must be an absolute path", srcPath)
|
||||
}
|
||||
if strings.ToLower(bindMap.Mode) == "rw" {
|
||||
srcRW = true
|
||||
}
|
||||
if stat, err := os.Stat(bindMap.SrcPath); err != nil {
|
||||
return err
|
||||
} else {
|
||||
volIsDir = stat.IsDir()
|
||||
}
|
||||
// Otherwise create an directory in $ROOT/volumes/ and use that
|
||||
} else {
|
||||
|
||||
// Do not pass a container as the parameter for the volume creation.
|
||||
// The graph driver using the container's information ( Image ) to
|
||||
// create the parent.
|
||||
c, err := container.daemon.volumes.Create(nil, "", "", "", "", nil, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
srcPath, err = volumesDriver.Get(c.ID, "")
|
||||
if err != nil {
|
||||
return fmt.Errorf("Driver %s failed to get volume rootfs %s: %s", volumesDriver, c.ID, err)
|
||||
}
|
||||
srcRW = true // RW by default
|
||||
}
|
||||
|
||||
if p, err := filepath.EvalSymlinks(srcPath); err != nil {
|
||||
return err
|
||||
} else {
|
||||
srcPath = p
|
||||
}
|
||||
|
||||
// Create the mountpoint
|
||||
rootVolPath, err := utils.FollowSymlinkInScope(filepath.Join(container.basefs, volPath), container.basefs)
|
||||
if err != nil {
|
||||
if err := initializeVolume(container, volPath, binds); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
newVolPath, err := filepath.Rel(container.basefs, rootVolPath)
|
||||
if err != nil {
|
||||
for volPath := range binds {
|
||||
if err := initializeVolume(container, volPath, binds); err != nil {
|
||||
return err
|
||||
}
|
||||
newVolPath = "/" + newVolPath
|
||||
|
||||
if volPath != newVolPath {
|
||||
delete(container.Volumes, volPath)
|
||||
delete(container.VolumesRW, volPath)
|
||||
}
|
||||
|
||||
container.Volumes[newVolPath] = srcPath
|
||||
container.VolumesRW[newVolPath] = srcRW
|
||||
|
||||
if err := createIfNotExists(rootVolPath, volIsDir); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Do not copy or change permissions if we are mounting from the host
|
||||
if srcRW && !isBindMount {
|
||||
volList, err := ioutil.ReadDir(rootVolPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(volList) > 0 {
|
||||
srcList, err := ioutil.ReadDir(srcPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(srcList) == 0 {
|
||||
// If the source volume is empty copy files from the root into the volume
|
||||
if err := archive.CopyWithTar(rootVolPath, srcPath); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var stat syscall.Stat_t
|
||||
if err := syscall.Stat(rootVolPath, &stat); err != nil {
|
||||
return err
|
||||
}
|
||||
var srcStat syscall.Stat_t
|
||||
if err := syscall.Stat(srcPath, &srcStat); err != nil {
|
||||
return err
|
||||
}
|
||||
// Change the source volume's ownership if it differs from the root
|
||||
// files that were just copied
|
||||
if stat.Uid != srcStat.Uid || stat.Gid != srcStat.Gid {
|
||||
if err := os.Chown(srcPath, int(stat.Uid), int(stat.Gid)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
@ -296,3 +201,130 @@ func createIfNotExists(path string, isDir bool) error {
|
|||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func initializeVolume(container *Container, volPath string, binds map[string]BindMap) error {
|
||||
volumesDriver := container.daemon.volumes.Driver()
|
||||
volPath = filepath.Clean(volPath)
|
||||
// Skip existing volumes
|
||||
if _, exists := container.Volumes[volPath]; exists {
|
||||
return nil
|
||||
}
|
||||
|
||||
var (
|
||||
srcPath string
|
||||
isBindMount bool
|
||||
volIsDir = true
|
||||
|
||||
srcRW = false
|
||||
)
|
||||
|
||||
// If an external bind is defined for this volume, use that as a source
|
||||
if bindMap, exists := binds[volPath]; exists {
|
||||
isBindMount = true
|
||||
srcPath = bindMap.SrcPath
|
||||
if !filepath.IsAbs(srcPath) {
|
||||
return fmt.Errorf("%s must be an absolute path", srcPath)
|
||||
}
|
||||
if strings.ToLower(bindMap.Mode) == "rw" {
|
||||
srcRW = true
|
||||
}
|
||||
if stat, err := os.Stat(bindMap.SrcPath); err != nil {
|
||||
return err
|
||||
} else {
|
||||
volIsDir = stat.IsDir()
|
||||
}
|
||||
// Otherwise create an directory in $ROOT/volumes/ and use that
|
||||
} else {
|
||||
// Do not pass a container as the parameter for the volume creation.
|
||||
// The graph driver using the container's information ( Image ) to
|
||||
// create the parent.
|
||||
c, err := container.daemon.volumes.Create(nil, "", "", "", "", nil, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
srcPath, err = volumesDriver.Get(c.ID, "")
|
||||
if err != nil {
|
||||
return fmt.Errorf("Driver %s failed to get volume rootfs %s: %s", volumesDriver, c.ID, err)
|
||||
}
|
||||
srcRW = true // RW by default
|
||||
}
|
||||
|
||||
if p, err := filepath.EvalSymlinks(srcPath); err != nil {
|
||||
return err
|
||||
} else {
|
||||
srcPath = p
|
||||
}
|
||||
|
||||
// Create the mountpoint
|
||||
rootVolPath, err := symlink.FollowSymlinkInScope(filepath.Join(container.basefs, volPath), container.basefs)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
newVolPath, err := filepath.Rel(container.basefs, rootVolPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
newVolPath = "/" + newVolPath
|
||||
|
||||
if volPath != newVolPath {
|
||||
delete(container.Volumes, volPath)
|
||||
delete(container.VolumesRW, volPath)
|
||||
}
|
||||
|
||||
container.Volumes[newVolPath] = srcPath
|
||||
container.VolumesRW[newVolPath] = srcRW
|
||||
|
||||
if err := createIfNotExists(rootVolPath, volIsDir); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Do not copy or change permissions if we are mounting from the host
|
||||
if srcRW && !isBindMount {
|
||||
if err := copyExistingContents(rootVolPath, srcPath); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func copyExistingContents(rootVolPath, srcPath string) error {
|
||||
volList, err := ioutil.ReadDir(rootVolPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if len(volList) > 0 {
|
||||
srcList, err := ioutil.ReadDir(srcPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if len(srcList) == 0 {
|
||||
// If the source volume is empty copy files from the root into the volume
|
||||
if err := archive.CopyWithTar(rootVolPath, srcPath); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var (
|
||||
stat syscall.Stat_t
|
||||
srcStat syscall.Stat_t
|
||||
)
|
||||
|
||||
if err := syscall.Stat(rootVolPath, &stat); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := syscall.Stat(srcPath, &srcStat); err != nil {
|
||||
return err
|
||||
}
|
||||
// Change the source volume's ownership if it differs from the root
|
||||
// files that were just copied
|
||||
if stat.Uid != srcStat.Uid || stat.Gid != srcStat.Gid {
|
||||
if err := os.Chown(srcPath, int(stat.Uid), int(stat.Gid)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
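
The hunk header above references `createIfNotExists(path string, isDir bool) error` without showing its full body. As a minimal sketch of what such a helper can look like — an assumption inferred from the `createIfNotExists(rootVolPath, volIsDir)` call in `initializeVolume`, not the implementation in this diff:

    // Sketch only -- assumes the usual imports ("os", "path/filepath").
    func createIfNotExists(path string, isDir bool) error {
        if _, err := os.Stat(path); err == nil || !os.IsNotExist(err) {
            // Path already exists (err == nil), or stat failed for a
            // reason other than non-existence.
            return err
        }
        if isDir {
            return os.MkdirAll(path, 0755)
        }
        // For a file volume, make sure the parent directory exists first.
        if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
            return err
        }
        f, err := os.OpenFile(path, os.O_CREATE, 0755)
        if err != nil {
            return err
        }
        return f.Close()
    }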
|
||||
|
|
3
daemonconfig/README.md
Normal file
|
@@ -0,0 +1,3 @@
|
|||
This directory contains code pertaining to the configuration of the docker daemon.
|
||||
|
||||
These are the configuration settings that you pass to the docker daemon when you launch it, for example: `docker -d -e lxc`
|
|
@@ -98,6 +98,9 @@ func main() {
|
|||
}
|
||||
|
||||
if *flDaemon {
|
||||
if runtime.GOOS != "linux" {
|
||||
log.Fatalf("The Docker daemon is only supported on linux")
|
||||
}
|
||||
if os.Geteuid() != 0 {
|
||||
log.Fatalf("The Docker daemon needs to be run as root")
|
||||
}
|
||||
|
@@ -185,6 +188,7 @@ func main() {
|
|||
job.Setenv("TlsCa", *flCa)
|
||||
job.Setenv("TlsCert", *flCert)
|
||||
job.Setenv("TlsKey", *flKey)
|
||||
job.SetenvBool("BufferRequests", true)
|
||||
if err := job.Run(); err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
|
|
@@ -4,7 +4,7 @@
|
|||
FROM debian:jessie
|
||||
MAINTAINER Sven Dowideit <SvenDowideit@docker.com> (@SvenDowideit)
|
||||
|
||||
RUN apt-get update && apt-get install -yq make python-pip python-setuptools vim-tiny git pandoc
|
||||
RUN apt-get update && apt-get install -yq make python-pip python-setuptools vim-tiny git gettext
|
||||
|
||||
RUN pip install mkdocs
|
||||
|
||||
|
|
100
docs/README.md
|
@@ -1,37 +1,35 @@
|
|||
# Docker Documentation
|
||||
|
||||
The source for Docker documentation is here under `sources/` and uses
|
||||
extended Markdown, as implemented by [mkdocs](http://mkdocs.org).
|
||||
The source for Docker documentation is here under `sources/` and uses extended
|
||||
Markdown, as implemented by [MkDocs](http://mkdocs.org).
|
||||
|
||||
The HTML files are built and hosted on `https://docs.docker.io`, and
|
||||
update automatically after each change to the master or release branch
|
||||
of [Docker on GitHub](https://github.com/dotcloud/docker)
|
||||
thanks to post-commit hooks. The "docs" branch maps to the "latest"
|
||||
documentation and the "master" (unreleased development) branch maps to
|
||||
the "master" documentation.
|
||||
The HTML files are built and hosted on `https://docs.docker.io`, and update
|
||||
automatically after each change to the master or release branch of [Docker on
|
||||
GitHub](https://github.com/dotcloud/docker) thanks to post-commit hooks. The
|
||||
`docs` branch maps to the "latest" documentation and the `master` (unreleased
|
||||
development) branch maps to the "master" documentation.
|
||||
|
||||
## Branches
|
||||
|
||||
**There are two branches related to editing docs**: `master` and a
|
||||
`docs` branch. You should always edit documentation on a local branch
|
||||
of the `master` branch, and send a PR against `master`.
|
||||
**There are two branches related to editing docs**: `master` and a `docs`
|
||||
branch. You should always edit documentation on a local branch of the `master`
|
||||
branch, and send a PR against `master`.
|
||||
|
||||
That way your fixes will automatically get included in later releases,
|
||||
and docs maintainers can easily cherry-pick your changes into the
|
||||
`docs` release branch. In the rare case where your change is not
|
||||
forward-compatible, you may need to base your changes on the `docs`
|
||||
branch.
|
||||
That way your fixes will automatically get included in later releases, and docs
|
||||
maintainers can easily cherry-pick your changes into the `docs` release branch.
|
||||
In the rare case where your change is not forward-compatible, you may need to
|
||||
base your changes on the `docs` branch.
|
||||
|
||||
Also, now that we have a `docs` branch, we can keep the
|
||||
[http://docs.docker.io](http://docs.docker.io) docs up to date with any
|
||||
bugs found between `docker` code releases.
|
||||
[http://docs.docker.io](http://docs.docker.io) docs up to date with any bugs
|
||||
found between Docker code releases.
|
||||
|
||||
**Warning**: When *reading* the docs, the
|
||||
[http://beta-docs.docker.io](http://beta-docs.docker.io) documentation
|
||||
may include features not yet part of any official docker release. The
|
||||
`beta-docs` site should be used only for understanding bleeding-edge
|
||||
development and `docs.docker.io` (which points to the `docs`
|
||||
branch`) should be used for the latest official release.
|
||||
[http://beta-docs.docker.io](http://beta-docs.docker.io) documentation may
|
||||
include features not yet part of any official Docker release. The `beta-docs`
|
||||
site should be used only for understanding bleeding-edge development and
|
||||
`docs.docker.io` (which points to the `docs` branch) should be used for the
|
||||
latest official release.
|
||||
|
||||
## Contributing
|
||||
|
||||
|
@@ -41,59 +39,61 @@ branch) should be used for the latest official release.
|
|||
|
||||
## Getting Started
|
||||
|
||||
Docker documentation builds are done in a Docker container, which
|
||||
installs all the required tools, adds the local `docs/` directory and
|
||||
builds the HTML docs. It then starts a HTTP server on port 8000 so that
|
||||
you can connect and see your changes.
|
||||
Docker documentation builds are done in a Docker container, which installs all
|
||||
the required tools, adds the local `docs/` directory and builds the HTML docs.
|
||||
It then starts an HTTP server on port 8000 so that you can connect and see your
|
||||
changes.
|
||||
|
||||
In the root of the `docker` source directory:
|
||||
|
||||
make docs
|
||||
|
||||
If you have any issues you need to debug, you can use `make docs-shell` and
|
||||
then run `mkdocs serve`
|
||||
If you have any issues you need to debug, you can use `make docs-shell` and then
|
||||
run `mkdocs serve`
|
||||
|
||||
## Style guide
|
||||
|
||||
The documentation is written with paragraphs wrapped at 80-column lines to make
|
||||
it easier for terminal use.
|
||||
|
||||
### Examples
|
||||
|
||||
When writing examples give the user hints by making them resemble what
|
||||
they see in their shell:
|
||||
When writing examples give the user hints by making them resemble what they see
|
||||
in their shell:
|
||||
|
||||
- Indent shell examples by 4 spaces so they get rendered as code.
|
||||
- Start typed commands with `$ ` (dollar space), so that they are easily
|
||||
differentiated from program output.
|
||||
differentiated from program output.
|
||||
- Program output has no prefix.
|
||||
- Comments begin with `# ` (hash space).
|
||||
- In-container shell commands begin with `$$ ` (dollar dollar space).
|
||||
|
||||
### Images
|
||||
|
||||
When you need to add images, try to make them as small as possible
|
||||
(e.g. as gifs). Usually images should go in the same directory as the
|
||||
`.md` file which references them, or in a subdirectory if one already
|
||||
exists.
|
||||
When you need to add images, try to make them as small as possible (e.g. as
|
||||
gifs). Usually images should go in the same directory as the `.md` file which
|
||||
references them, or in a subdirectory if one already exists.
|
||||
|
||||
## Working using GitHub's file editor
|
||||
|
||||
Alternatively, for small changes and typos you might want to use
|
||||
GitHub's built in file editor. It allows you to preview your changes
|
||||
right on-line (though there can be some differences between GitHub
|
||||
Markdown and [MkDocs Markdown](http://www.mkdocs.org/user-guide/writing-your-docs/)).
|
||||
Just be careful not to create many commits. And you must still
|
||||
[sign your work!](../CONTRIBUTING.md#sign-your-work)
|
||||
Alternatively, for small changes and typos you might want to use GitHub's built
|
||||
in file editor. It allows you to preview your changes right on-line (though
|
||||
there can be some differences between GitHub Markdown and [MkDocs
|
||||
Markdown](http://www.mkdocs.org/user-guide/writing-your-docs/)). Just be
|
||||
careful not to create many commits. And you must still [sign your
|
||||
work!](../CONTRIBUTING.md#sign-your-work)
|
||||
|
||||
## Publishing Documentation
|
||||
|
||||
To publish a copy of the documentation you need a `docs/awsconfig`
|
||||
file containing AWS settings to deploy to. The release script will
|
||||
To publish a copy of the documentation you need a `docs/awsconfig` file
|
||||
containing AWS settings to deploy to. The release script will create an s3
|
||||
bucket if needed, and will then push the files to it.
|
||||
|
||||
[profile dowideit-docs]
|
||||
aws_access_key_id = IHOIUAHSIDH234rwf....
|
||||
aws_secret_access_key = OIUYSADJHLKUHQWIUHE......
|
||||
region = ap-southeast-2
|
||||
[profile dowideit-docs]
|
||||
aws_access_key_id = IHOIUAHSIDH234rwf....
aws_secret_access_key = OIUYSADJHLKUHQWIUHE......
region = ap-southeast-2
|
||||
|
||||
The `profile` name must be the same as the name of the bucket you are
|
||||
deploying to - which you call from the `docker` directory:
|
||||
The `profile` name must be the same as the name of the bucket you are deploying
|
||||
to - which you call from the `docker` directory:
|
||||
|
||||
make AWS_S3_BUCKET=dowideit-docs docs-release
|
||||
|
||||
|
|
|
@@ -28,15 +28,14 @@ pages:
|
|||
- ['index.md', 'About', 'Docker']
|
||||
- ['introduction/index.md', '**HIDDEN**']
|
||||
- ['introduction/understanding-docker.md', 'About', 'Understanding Docker']
|
||||
- ['introduction/technology.md', 'About', 'The Technology']
|
||||
- ['introduction/working-with-docker.md', 'About', 'Working with Docker']
|
||||
- ['introduction/get-docker.md', 'About', 'Get Docker']
|
||||
|
||||
# Installation:
|
||||
- ['installation/index.md', '**HIDDEN**']
|
||||
- ['installation/mac.md', 'Installation', 'Mac OS X']
|
||||
- ['installation/ubuntulinux.md', 'Installation', 'Ubuntu']
|
||||
- ['installation/rhel.md', 'Installation', 'Red Hat Enterprise Linux']
|
||||
- ['installation/debian.md', 'Installation', 'Debian']
|
||||
- ['installation/gentoolinux.md', 'Installation', 'Gentoo']
|
||||
- ['installation/google.md', 'Installation', 'Google Cloud Platform']
|
||||
- ['installation/rackspace.md', 'Installation', 'Rackspace Cloud']
|
||||
|
@@ -57,7 +56,7 @@ pages:
|
|||
- ['examples/hello_world.md', 'Examples', 'Hello World']
|
||||
- ['examples/nodejs_web_app.md', 'Examples', 'Node.js web application']
|
||||
- ['examples/python_web_app.md', 'Examples', 'Python web application']
|
||||
- ['examples/mongodb.md', 'Examples', 'MongoDB service']
|
||||
- ['examples/mongodb.md', 'Examples', 'Dockerizing MongoDB']
|
||||
- ['examples/running_redis_service.md', 'Examples', 'Redis service']
|
||||
- ['examples/postgresql_service.md', 'Examples', 'PostgreSQL service']
|
||||
- ['examples/running_riak_service.md', 'Examples', 'Running a Riak service']
|
||||
|
@@ -94,6 +93,7 @@ pages:
|
|||
- ['reference/commandline/index.md', '**HIDDEN**']
|
||||
- ['reference/commandline/cli.md', 'Reference', 'Command line']
|
||||
- ['reference/builder.md', 'Reference', 'Dockerfile']
|
||||
- ['faq.md', 'Reference', 'FAQ']
|
||||
- ['reference/run.md', 'Reference', 'Run Reference']
|
||||
- ['articles/index.md', '**HIDDEN**']
|
||||
- ['articles/runmetrics.md', 'Reference', 'Runtime metrics']
|
||||
|
|
|
@@ -19,7 +19,7 @@ EOF
|
|||
[ "$AWS_S3_BUCKET" ] || usage
|
||||
|
||||
#VERSION=$(cat VERSION)
|
||||
BUCKET=$AWS_S3_BUCKET
|
||||
export BUCKET=$AWS_S3_BUCKET
|
||||
|
||||
export AWS_CONFIG_FILE=$(pwd)/awsconfig
|
||||
[ -e "$AWS_CONFIG_FILE" ] || usage
|
||||
|
@@ -37,7 +37,10 @@ setup_s3() {
|
|||
# Make the bucket accessible through website endpoints.
|
||||
echo "make $BUCKET accessible as a website"
|
||||
#aws s3 website s3://$BUCKET --index-document index.html --error-document jsearch/index.html
|
||||
s3conf=$(cat s3_website.json)
|
||||
s3conf=$(cat s3_website.json | envsubst)
|
||||
echo
|
||||
echo $s3conf
|
||||
echo
|
||||
aws s3api put-bucket-website --bucket $BUCKET --website-configuration "$s3conf"
|
||||
}
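
The `envsubst` call above substitutes exported variables such as `$BUCKET` into the `s3_website.json` template before it is handed to `aws s3api`. A hedged Go sketch of the same idea, using `os.ExpandEnv` (the file name comes from the script; everything else is illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        tmpl, err := os.ReadFile("s3_website.json")
        if err != nil {
            panic(err)
        }
        // os.ExpandEnv replaces $VAR and ${VAR} references with the
        // values of the corresponding environment variables.
        fmt.Print(os.ExpandEnv(string(tmpl)))
    }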
|
||||
|
||||
|
@@ -54,7 +57,7 @@ upload_current_documentation() {
|
|||
echo " to $dst"
|
||||
echo
|
||||
#s3cmd --recursive --follow-symlinks --preserve --acl-public sync "$src" "$dst"
|
||||
aws s3 sync --acl public-read --exclude "*.rej" --exclude "*.rst" --exclude "*.orig" --exclude "*.py" "$src" "$dst"
|
||||
aws s3 sync --cache-control "max-age=3600" --acl public-read --exclude "*.rej" --exclude "*.rst" --exclude "*.orig" --exclude "*.py" "$src" "$dst"
|
||||
}
|
||||
|
||||
setup_s3
|
||||
|
|
|
@@ -6,12 +6,12 @@
|
|||
"Suffix": "index.html"
|
||||
},
|
||||
"RoutingRules": [
|
||||
{ "Condition": { "KeyPrefixEquals": "en/latest/" }, "Redirect": { "ReplaceKeyPrefixWith": "" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "en/master/" }, "Redirect": { "ReplaceKeyPrefixWith": "" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "en/v0.6.3/" }, "Redirect": { "ReplaceKeyPrefixWith": "" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "jsearch/index.html" }, "Redirect": { "ReplaceKeyPrefixWith": "jsearch/" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "index/" }, "Redirect": { "ReplaceKeyPrefixWith": "docker-io/" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "reference/api/index_api/" }, "Redirect": { "ReplaceKeyPrefixWith": "reference/api/docker-io_api/" } }
|
||||
{ "Condition": { "KeyPrefixEquals": "en/latest/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "en/master/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "en/v0.6.3/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "jsearch/index.html" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "jsearch/" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "index/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "docker-io/" } },
|
||||
{ "Condition": { "KeyPrefixEquals": "reference/api/index_api/" }, "Redirect": { "HostName": "$BUCKET", "ReplaceKeyPrefixWith": "reference/api/docker-io_api/" } }
|
||||
]
|
||||
}
|
||||
|
||||
|
|
|
@@ -50,7 +50,7 @@ For Docker containers using cgroups, the container name will be the full
|
|||
ID or long ID of the container. If a container shows up as ae836c95b4c3
|
||||
in `docker ps`, its long ID might be something like
|
||||
`ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`. You can
|
||||
look it up with `docker inspect` or `docker ps -notrunc`.
|
||||
look it up with `docker inspect` or `docker ps --no-trunc`.
|
||||
|
||||
Putting everything together to look at the memory metrics for a Docker
|
||||
container, take a look at `/sys/fs/cgroup/memory/lxc/<longid>/`.
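
As a hedged illustration of reading one of those metrics programmatically (the path follows the cgroup layout named above; `memory.usage_in_bytes` is a standard cgroup v1 control file):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        longID := os.Args[1] // the container's full ID
        p := filepath.Join("/sys/fs/cgroup/memory/lxc", longID,
            "memory.usage_in_bytes")
        b, err := os.ReadFile(p)
        if err != nil {
            panic(err)
        }
        fmt.Println("memory.usage_in_bytes =", strings.TrimSpace(string(b)))
    }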
|
||||
|
@@ -310,8 +310,8 @@ layer; you will also have to add traffic going through the userland
|
|||
proxy.
|
||||
|
||||
Then, you will need to check those counters on a regular basis. If you
|
||||
happen to use `collectd`, there is a nice plugin to
|
||||
automate iptables counters collection.
|
||||
happen to use `collectd`, there is a [nice plugin](https://collectd.org/wiki/index.php/Plugin:IPTables)
|
||||
to automate iptables counters collection.
|
||||
|
||||
### Interface-level counters
|
||||
|
||||
|
|
|
@@ -7,20 +7,25 @@ page_keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker.io,
|
|||
## Trusted Builds
|
||||
|
||||
*Trusted Builds* is a special feature allowing you to specify a source
|
||||
repository with a *Dockerfile* to be built by the Docker build clusters. The
|
||||
system will clone your repository and build the Dockerfile using the repository
|
||||
as the context. The resulting image will then be uploaded to the registry and
|
||||
marked as a `Trusted Build`.
|
||||
repository with a `Dockerfile` to be built by the
|
||||
[Docker.io](https://index.docker.io) build clusters. The system will
|
||||
clone your repository and build the `Dockerfile` using the repository as
|
||||
the context. The resulting image will then be uploaded to the registry
|
||||
and marked as a *Trusted Build*.
|
||||
|
||||
Trusted Builds have a number of advantages. For example, users of *your* Trusted
|
||||
Build can be certain that the resulting image was built exactly how it claims
|
||||
to be.
|
||||
|
||||
Furthermore, the Dockerfile will be available to anyone browsing your repository
|
||||
Furthermore, the `Dockerfile` will be available to anyone browsing your repository
|
||||
on the registry. Another advantage of the Trusted Builds feature is the automated
|
||||
builds. This makes sure that your repository is always up to date.
|
||||
|
||||
### Linking with a GitHub account
|
||||
Trusted builds are supported for both public and private repositories on
|
||||
both [GitHub](http://github.com) and
|
||||
[BitBucket](https://bitbucket.org/).
|
||||
|
||||
### Setting up Trusted Builds with GitHub
|
||||
|
||||
In order to set up a Trusted Build, you need to first link your [Docker.io](
|
||||
https://index.docker.io) account with a GitHub one. This will allow the registry
|
||||
|
@@ -30,23 +35,28 @@ to see your repositories.
|
|||
> https://index.docker.io) needs to setup a GitHub service hook. Although nothing
|
||||
> else is done with your account, this is how GitHub manages permissions, sorry!
|
||||
|
||||
### Creating a Trusted Build
|
||||
Click on the [Trusted Builds tab](https://index.docker.io/builds/) to
|
||||
get started and then select [+ Add
|
||||
New](https://index.docker.io/builds/add/).
|
||||
|
||||
Select the [GitHub
|
||||
service](https://index.docker.io/associate/github/).
|
||||
|
||||
Then follow the instructions to authorize and link your GitHub account
|
||||
to Docker.io.
|
||||
|
||||
#### Creating a Trusted Build
|
||||
|
||||
You can [create a Trusted Build](https://index.docker.io/builds/github/select/)
|
||||
from any of your public GitHub repositories with a Dockerfile.
|
||||
from any of your public or private GitHub repositories with a `Dockerfile`.
|
||||
|
||||
> **Note:** To have more than
|
||||
> one Docker image from the same GitHub repository, you will need to set up one
|
||||
> Trusted Build per Dockerfile, each using a different image name. This rule
|
||||
> applies to building multiple branches on the same GitHub repository as well.
|
||||
|
||||
### GitHub organizations
|
||||
#### GitHub organizations
|
||||
|
||||
GitHub organizations appear once your membership to that organization is
|
||||
made public on GitHub. To verify, you can look at the members tab for your
|
||||
organization on GitHub.
|
||||
|
||||
### GitHub service hooks
|
||||
#### GitHub service hooks
|
||||
|
||||
You can follow the steps below to configure the GitHub service hooks for your
|
||||
Trusted Build:
|
||||
|
@@ -74,9 +84,32 @@ Trusted Build:
|
|||
</tbody>
|
||||
</table>
|
||||
|
||||
### Setting up Trusted Builds with BitBucket
|
||||
|
||||
In order to set up a Trusted Build, you need to first link your
|
||||
[Docker.io]( https://index.docker.io) account with a BitBucket one. This
|
||||
will allow the registry to see your repositories.
|
||||
|
||||
Click on the [Trusted Builds tab](https://index.docker.io/builds/) to
|
||||
get started and then select [+ Add
|
||||
New](https://index.docker.io/builds/add/).
|
||||
|
||||
Select the [BitBucket
|
||||
service](https://index.docker.io/associate/bitbucket/).
|
||||
|
||||
Then follow the instructions to authorize and link your BitBucket account
|
||||
to Docker.io.
|
||||
|
||||
#### Creating a Trusted Build
|
||||
|
||||
You can [create a Trusted
|
||||
Build](https://index.docker.io/builds/bitbucket/select/)
|
||||
from any of your public or private BitBucket repositories with a
|
||||
`Dockerfile`.
|
||||
|
||||
### The Dockerfile and Trusted Builds
|
||||
|
||||
During the build process, we copy the contents of your Dockerfile. We also
|
||||
During the build process, we copy the contents of your `Dockerfile`. We also
|
||||
add it to the [Docker.io](https://index.docker.io) for the Docker community
|
||||
to see on the repository page.
|
||||
|
||||
|
@@ -89,14 +122,18 @@ repository's full description.
|
|||
> If you change the full description after a build, it will be
|
||||
> rewritten the next time the Trusted Build is built. To make changes,
|
||||
> modify the README.md from the Git repository. We will look for a README.md
|
||||
> in the same directory as your Dockerfile.
|
||||
> in the same directory as your `Dockerfile`.
|
||||
|
||||
### Build triggers
|
||||
|
||||
If you need another way to trigger your Trusted Builds outside of GitHub, you
|
||||
can setup a build trigger. When you turn on the build trigger for a Trusted
|
||||
Build, it will give you a URL to which you can send POST requests. This will
|
||||
trigger the Trusted Build process, which is similar to GitHub webhooks.
|
||||
If you need another way to trigger your Trusted Builds outside of GitHub
|
||||
or BitBucket, you can set up a build trigger. When you turn on the build
|
||||
trigger for a Trusted Build, it will give you a URL to which you can
|
||||
send POST requests. This will trigger the Trusted Build process, which
|
||||
is similar to GitHub webhooks.
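
Triggering a build is then a plain HTTP POST. A minimal sketch in Go — the URL below is a placeholder, not a real endpoint; use the trigger URL shown on your build's Settings tab:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Placeholder URL -- substitute your own trigger endpoint.
        resp, err := http.Post("https://example.invalid/trigger",
            "application/json", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("trigger responded with", resp.Status)
    }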
|
||||
|
||||
Build Triggers are available under the Settings tab of each Trusted
|
||||
Build.
|
||||
|
||||
> **Note:**
|
||||
> You can only trigger one build at a time and no more than one
|
||||
|
@@ -105,6 +142,52 @@ trigger the Trusted Build process, which is similar to GitHub webhooks.
|
|||
> You can find the logs of last 10 triggers on the settings page to verify
|
||||
> if everything is working correctly.
|
||||
|
||||
### Webhooks
|
||||
|
||||
Also available for Trusted Builds are Webhooks. Webhooks can be called
|
||||
after a successful repository push is made.
|
||||
|
||||
The webhook call will generate an HTTP POST with the following JSON
|
||||
payload:
|
||||
|
||||
```
|
||||
{
|
||||
"push_data":{
|
||||
"pushed_at":1385141110,
|
||||
"images":[
|
||||
"imagehash1",
|
||||
"imagehash2",
|
||||
"imagehash3"
|
||||
],
|
||||
"pusher":"username"
|
||||
},
|
||||
"repository":{
|
||||
"status":"Active",
|
||||
"description":"my docker repo that does cool things",
|
||||
"is_trusted":false,
|
||||
"full_description":"This is my full description",
|
||||
"repo_url":"https://index.docker.io/u/username/reponame/",
|
||||
"owner":"username",
|
||||
"is_official":false,
|
||||
"is_private":false,
|
||||
"name":"reponame",
|
||||
"namespace":"username",
|
||||
"star_count":1,
|
||||
"comment_count":1,
|
||||
"date_created":1370174400,
|
||||
"dockerfile":"my full dockerfile is listed here",
|
||||
"repo_name":"username/reponame"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Webhooks are available under the Settings tab of each Trusted
|
||||
Build.
|
||||
|
||||
> **Note:** If you want to test your webhook out then we recommend using
|
||||
> a tool like [requestb.in](http://requestb.in/).
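
Since the payload is plain JSON, any HTTP endpoint can consume it. A minimal sketch of a receiver in Go — the field names are taken from the sample payload above, while the `/webhook` path and port are hypothetical:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // Subset of the sample payload above; extend as needed.
    type webhookPayload struct {
        PushData struct {
            PushedAt int64    `json:"pushed_at"`
            Images   []string `json:"images"`
            Pusher   string   `json:"pusher"`
        } `json:"push_data"`
        Repository struct {
            RepoName string `json:"repo_name"`
            RepoURL  string `json:"repo_url"`
        } `json:"repository"`
    }

    func main() {
        http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
            var p webhookPayload
            if err := json.NewDecoder(r.Body).Decode(&p); err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            log.Printf("%s pushed %s", p.PushData.Pusher, p.Repository.RepoName)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }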
|
||||
|
||||
|
||||
### Repository links
|
||||
|
||||
Repository links are a way to associate one Trusted Build with another. If one
|
||||
|
|
|
@@ -28,7 +28,7 @@ We're assuming your Docker host is reachable at `localhost`. If not,
|
|||
replace `localhost` with the public IP of your Docker host.
|
||||
|
||||
$ HOST=localhost
|
||||
$ URL="http://$HOST:$(sudo docker port $COUCH1 5984 | grep -Po '\d+$')/_utils/"
|
||||
$ URL="http://$HOST:$(sudo docker port $COUCH1 5984 | grep -o '[1-9][0-9]*$')/_utils/"
|
||||
$ echo "Navigate to $URL in your browser, and use the couch interface to add data"
|
||||
|
||||
## Create second database
|
||||
|
@@ -40,7 +40,7 @@ This time, we're requesting shared access to `$COUCH1`'s volumes.
|
|||
## Browse data on the second database
|
||||
|
||||
$ HOST=localhost
|
||||
$ URL="http://$HOST:$(sudo docker port $COUCH2 5984 | grep -Po '\d+$')/_utils/"
|
||||
$ URL="http://$HOST:$(sudo docker port $COUCH2 5984 | grep -o '[1-9][0-9]*$')/_utils/"
|
||||
$ echo "Navigate to $URL in your browser. You should see the same data as in the first database"'!'
|
||||
|
||||
Congratulations, you are now running two Couchdb containers, completely
|
||||
|
|
|
@@ -1,89 +1,164 @@
|
|||
page_title: Building a Docker Image with MongoDB
|
||||
page_description: How to build a Docker image with MongoDB pre-installed
|
||||
page_keywords: docker, example, package installation, networking, mongodb
|
||||
page_title: Dockerizing MongoDB
|
||||
page_description: Creating a Docker image with MongoDB pre-installed using a Dockerfile and sharing the image on Docker.io
|
||||
page_keywords: docker, dockerize, dockerizing, article, example, docker.io, platform, package, installation, networking, mongodb, containers, images, image, sharing, dockerfile, build, auto-building, virtualization, framework
|
||||
|
||||
# Building an Image with MongoDB
|
||||
# Dockerizing MongoDB
|
||||
|
||||
> **Note**:
|
||||
## Introduction
|
||||
|
||||
In this example, we are going to learn how to build a Docker image
|
||||
with MongoDB pre-installed.
|
||||
We'll also see how to `push` that image to the [Docker.io registry](
|
||||
https://index.docker.io) and share it with others!
|
||||
|
||||
Using Docker and containers for deploying [MongoDB](https://www.mongodb.org/)
|
||||
instances will bring several benefits, such as:
|
||||
|
||||
- Easy to maintain, highly configurable MongoDB instances;
|
||||
- Ready to run and start working within milliseconds;
|
||||
- Based on globally accessible and shareable images.
|
||||
|
||||
> **Note:**
|
||||
>
|
||||
> - This example assumes you have Docker running in daemon mode. For
|
||||
> more information please see [*Check your Docker
|
||||
> install*](../hello_world/#running-examples).
|
||||
> - **If you don't like sudo** then see [*Giving non-root
|
||||
> access*](/installation/binaries/#dockergroup)
|
||||
> This example assumes you have Docker running in daemon mode. To verify,
|
||||
> try running `sudo docker info`.
|
||||
> For more information, please see: [*Check your Docker installation*](
|
||||
> /examples/hello_world/#running-examples).
|
||||
|
||||
The goal of this example is to show how you can build your own Docker
|
||||
images with MongoDB pre-installed. We will do that by constructing a
|
||||
Dockerfile that downloads a base image, adds an
|
||||
apt source and installs the database software on Ubuntu.
|
||||
> **Note:**
|
||||
>
|
||||
> If you do **_not_** like `sudo`, you might want to check out:
|
||||
> [*Giving non-root access*](installation/binaries/#giving-non-root-access).
|
||||
|
||||
## Creating a Dockerfile
|
||||
## Creating a Dockerfile for MongoDB
|
||||
|
||||
Create an empty file called Dockerfile:
|
||||
Let's create our `Dockerfile` and start building it:
|
||||
|
||||
$ touch Dockerfile
|
||||
$ nano Dockerfile
|
||||
|
||||
Next, define the parent image you want to use to build your own image on
|
||||
top of. Here, we'll use [Ubuntu](https://index.docker.io/_/ubuntu/)
|
||||
(tag: `latest`) available on the [docker
|
||||
index](http://index.docker.io):
|
||||
Although optional, it is handy to have comments at the beginning of a
|
||||
`Dockerfile` explaining its purpose:
|
||||
|
||||
FROM ubuntu:latest
|
||||
# Dockerizing MongoDB: Dockerfile for building MongoDB images
|
||||
# Based on ubuntu:latest, installs MongoDB following the instructions from:
|
||||
# http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
|
||||
|
||||
Since we want to be running the latest version of MongoDB we'll need to
|
||||
add the 10gen repo to our apt sources list.
|
||||
> **Tip:** `Dockerfile`s are flexible. However, they need to follow a certain
|
||||
> format. The first item to be defined is the name of an image, which becomes
|
||||
> the *parent* of your *Dockerized MongoDB* image.
|
||||
|
||||
# Add 10gen official apt source to the sources list
|
||||
We will build our image using the latest version of Ubuntu from the
|
||||
[Docker.io Ubuntu](https://index.docker.io/_/ubuntu/) repository.
|
||||
|
||||
# Format: FROM repository[:version]
|
||||
FROM ubuntu:latest
|
||||
|
||||
Continuing, we will declare the `MAINTAINER` of the `Dockerfile`:
|
||||
|
||||
# Format: MAINTAINER Name <email@addr.ess>
|
||||
MAINTAINER M.Y. Name <myname@addr.ess>
|
||||
|
||||
> **Note:** Although Ubuntu systems have MongoDB packages, they are likely to
|
||||
> be outdated. Therefore in this example, we will use the official MongoDB
|
||||
> packages.
|
||||
|
||||
We will begin by importing the MongoDB public GPG key. We will also create
|
||||
a MongoDB repository file for the package manager.
|
||||
|
||||
# Installation:
|
||||
# Import MongoDB public GPG key AND create a MongoDB list file
|
||||
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
|
||||
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
|
||||
|
||||
Then, we don't want Ubuntu to complain about init not being available so
|
||||
we'll divert `/sbin/initctl` to
|
||||
`/bin/true` so it thinks everything is working.
|
||||
After this initial preparation we can update our packages and install MongoDB.
|
||||
|
||||
# Hack for initctl not being available in Ubuntu
|
||||
RUN dpkg-divert --local --rename --add /sbin/initctl
|
||||
RUN ln -s /bin/true /sbin/initctl
|
||||
|
||||
Afterwards we'll be able to update our apt repositories and install
|
||||
MongoDB
|
||||
|
||||
# Install MongoDB
|
||||
# Update apt-get sources AND install MongoDB
|
||||
RUN apt-get update
|
||||
RUN apt-get install mongodb-10gen
|
||||
RUN apt-get install -y -q mongodb-org
|
||||
|
||||
To run MongoDB we'll have to create the default data directory (because
|
||||
we want it to run without needing to provide a special configuration
|
||||
file)
|
||||
> **Tip:** You can install a specific version of MongoDB by using a list
|
||||
> of required packages with versions, e.g.:
|
||||
>
|
||||
> RUN apt-get install -y -q mongodb-org=2.6.1 mongodb-org-server=2.6.1 mongodb-org-shell=2.6.1 mongodb-org-mongos=2.6.1 mongodb-org-tools=2.6.1
|
||||
|
||||
MongoDB requires a data directory. Let's create it as the final step of our
|
||||
installation instructions.
|
||||
|
||||
# Create the MongoDB data directory
|
||||
RUN mkdir -p /data/db
|
||||
|
||||
Finally, we'll expose the standard port that MongoDB runs on, 27107, as
|
||||
well as define an `ENTRYPOINT` instruction for the
|
||||
container.
|
||||
Lastly we set the `ENTRYPOINT` which will tell Docker to run `mongod` inside
|
||||
the containers launched from our MongoDB image. And for ports, we will use
|
||||
the `EXPOSE` instruction.
|
||||
|
||||
# Expose port 27017 from the container to the host
|
||||
EXPOSE 27017
|
||||
ENTRYPOINT ["usr/bin/mongod"]
|
||||
|
||||
Now, lets build the image which will go through the
|
||||
Dockerfile we made and run all of the commands.
|
||||
# Set /usr/bin/mongod as the dockerized entry-point application
|
||||
ENTRYPOINT /usr/bin/mongod
|
||||
|
||||
$ sudo docker build -t <yourname>/mongodb .
|
||||
Now save the file and let's build our image.
|
||||
|
||||
Now you should be able to run `mongod` as a daemon
|
||||
and be able to connect on the local port!
|
||||
> **Note:**
|
||||
>
|
||||
> The full version of this `Dockerfile` can be found
|
||||
> [here](/examples/mongodb/Dockerfile).
|
||||
|
||||
# Regular style
|
||||
$ MONGO_ID=$(sudo docker run -d <yourname>/mongodb)
|
||||
## Building the MongoDB Docker image
|
||||
|
||||
# Lean and mean
|
||||
$ MONGO_ID=$(sudo docker run -d <yourname>/mongodb --noprealloc --smallfiles)
|
||||
With our `Dockerfile`, we can now build the MongoDB image using Docker. Unless
|
||||
experimenting, it is always a good practice to tag Docker images by passing the
|
||||
`--tag` option to the `docker build` command.
|
||||
|
||||
# Check the logs out
|
||||
$ sudo docker logs $MONGO_ID
|
||||
# Format: sudo docker build --tag/-t <user-name>/<repository> .
|
||||
# Example:
|
||||
$ sudo docker build --tag my/repo .
|
||||
|
||||
# Connect and play around
|
||||
$ mongo --port <port you get from `docker ps`>
|
||||
Once this command is issued, Docker will go through the `Dockerfile` and build
|
||||
the image. The final image will be tagged `my/repo`.
|
||||
|
||||
Sweet!
|
||||
## Pushing the MongoDB image to Docker.io
|
||||
|
||||
All Docker image repositories can be hosted and shared on
|
||||
[Docker.io](https://index.docker.io) with the `docker push` command. For this,
|
||||
you need to be logged in.
|
||||
|
||||
# Log-in
|
||||
$ sudo docker login
|
||||
Username:
|
||||
..
|
||||
|
||||
# Push the image
|
||||
# Format: sudo docker push <user-name>/<repository>
|
||||
$ sudo docker push my/repo
|
||||
The push refers to a repository [my/repo] (len: 1)
|
||||
Sending image list
|
||||
Pushing repository my/repo (1 tags)
|
||||
..
|
||||
|
||||
## Using the MongoDB image
|
||||
|
||||
Using the MongoDB image we created, we can run one or more MongoDB instances
|
||||
as daemon process(es).
|
||||
|
||||
# Basic way
|
||||
# Usage: sudo docker run --name <name for container> -d <user-name>/<repository>
|
||||
$ sudo docker run --name mongo_instance_001 -d my/repo
|
||||
|
||||
# Dockerized MongoDB, lean and mean!
|
||||
# Usage: sudo docker run --name <name for container> -d <user-name>/<repository> --noprealloc --smallfiles
|
||||
$ sudo docker run --name mongo_instance_001 -d my/repo --noprealloc --smallfiles
|
||||
|
||||
# Checking out the logs of a MongoDB container
|
||||
# Usage: sudo docker logs <name for container>
|
||||
$ sudo docker logs mongo_instance_001
|
||||
|
||||
# Playing with MongoDB
|
||||
# Usage: mongo --port <port you get from `docker ps`>
|
||||
$ mongo --port 12345
|
||||
|
||||
## Learn more
|
||||
|
||||
- [Linking containers](/use/working_with_links_names/)
|
||||
- [Cross-host linking containers](/use/ambassador_pattern_linking/)
|
||||
- [Creating a Trusted Build](/docker-io/builds/#trusted-builds)
|
||||
|
|
24
docs/sources/examples/mongodb/Dockerfile
Normal file
|
@@ -0,0 +1,24 @@
|
|||
# Dockerizing MongoDB: Dockerfile for building MongoDB images
|
||||
# Based on ubuntu:latest, installs MongoDB following the instructions from:
|
||||
# http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
|
||||
|
||||
FROM ubuntu:latest
|
||||
MAINTAINER Docker
|
||||
|
||||
# Installation:
|
||||
# Import MongoDB public GPG key AND create a MongoDB list file
|
||||
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
|
||||
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
|
||||
|
||||
# Update apt-get sources AND install MongoDB
|
||||
RUN apt-get update
|
||||
RUN apt-get install -y -q mongodb-org
|
||||
|
||||
# Create the MongoDB data directory
|
||||
RUN mkdir -p /data/db
|
||||
|
||||
# Expose port 27017 from the container to the host
|
||||
EXPOSE 27017
|
||||
|
||||
# Set /usr/bin/mongod as the dockerized entry-point application
|
||||
ENTRYPOINT /usr/bin/mongod
|
|
@@ -84,7 +84,7 @@ Build an image from the Dockerfile and assign it a name.
|
|||
|
||||
And run the PostgreSQL server container (in the foreground):
|
||||
|
||||
$ sudo docker run -rm -P -name pg_test eg_postgresql
|
||||
$ sudo docker run --rm -P --name pg_test eg_postgresql
|
||||
|
||||
There are 2 ways to connect to the PostgreSQL server. We can use [*Link
|
||||
Containers*](/use/working_with_links_names/#working-with-links-names),
|
||||
|
@@ -101,7 +101,7 @@ Containers can be linked to another container's ports directly using
|
|||
`docker run`. This will set a number of environment
|
||||
variables that can then be used to connect:
|
||||
|
||||
$ sudo docker run -rm -t -i -link pg_test:pg eg_postgresql bash
|
||||
$ sudo docker run --rm -t -i --link pg_test:pg eg_postgresql bash
|
||||
|
||||
postgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password
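
Any process in the linked container can read those variables. A hedged Go sketch of building a connection string from them — the variable names are the same ones used in the `psql` invocation above; the URL format is illustrative:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Set by Docker in containers started with --link pg_test:pg.
        addr := os.Getenv("PG_PORT_5432_TCP_ADDR")
        port := os.Getenv("PG_PORT_5432_TCP_PORT")
        fmt.Printf("postgres://docker@%s:%s/docker\n", addr, port)
    }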
|
||||
|
||||
|
@@ -143,7 +143,7 @@ prompt, you can create a table and populate it.
|
|||
You can use the defined volumes to inspect the PostgreSQL log files and
|
||||
to backup your configuration and data:
|
||||
|
||||
$ docker run -rm --volumes-from pg_test -t -i busybox sh
|
||||
$ docker run --rm --volumes-from pg_test -t -i busybox sh
|
||||
|
||||
/ # ls
|
||||
bin etc lib linuxrc mnt proc run sys usr
|
||||
|
|
|
@@ -51,7 +51,7 @@ the `$URL` variable. The container is given a name
|
|||
While this example is simple, you could run any number of interactive
|
||||
commands, try things out, and then exit when you're done.
|
||||
|
||||
$ sudo docker run -i -t -name pybuilder_run shykes/pybuilder bash
|
||||
$ sudo docker run -i -t --name pybuilder_run shykes/pybuilder bash
|
||||
|
||||
$$ URL=http://github.com/shykes/helloflask/archive/master.tar.gz
|
||||
$$ /usr/local/bin/buildapp $URL
|
||||
|
|
|
@@ -35,12 +35,12 @@ quick access to a test container.
|
|||
|
||||
Build the image using:
|
||||
|
||||
$ sudo docker build -rm -t eg_sshd .
|
||||
$ sudo docker build --rm -t eg_sshd .
|
||||
|
||||
Then run it. You can then use `docker port` to find
|
||||
out what host port the container's port 22 is mapped to:
|
||||
|
||||
$ sudo docker run -d -P -name test_sshd eg_sshd
|
||||
$ sudo docker run -d -P --name test_sshd eg_sshd
|
||||
$ sudo docker port test_sshd 22
|
||||
0.0.0.0:49154
|
||||
|
||||
|
|
|
@@ -30,7 +30,7 @@ install and manage both an SSH daemon and an Apache daemon.
|
|||
Let's start by creating a basic `Dockerfile` for our
|
||||
new image.
|
||||
|
||||
FROM ubuntu:latest
|
||||
FROM ubuntu:13.04
|
||||
MAINTAINER examples@docker.io
|
||||
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
|
||||
RUN apt-get update
|
||||
|
|
|
@@ -1,82 +1,99 @@
|
|||
page_title: About Docker
|
||||
page_description: Docker introduction home page
|
||||
page_description: Introduction to Docker.
|
||||
page_keywords: docker, introduction, documentation, about, technology, understanding, Dockerfile
|
||||
|
||||
# About Docker
|
||||
|
||||
*Secure And Portable Containers Made Easy*
|
||||
**Develop, Ship and Run Any Application, Anywhere**
|
||||
|
||||
## Introduction
|
||||
|
||||
[**Docker**](https://www.docker.io) is a container based virtualization
|
||||
framework. Unlike traditional virtualization Docker is fast, lightweight
|
||||
and easy to use. Docker allows you to create containers holding
|
||||
all the dependencies for an application. Each container is kept isolated
|
||||
from any other, and nothing gets shared.
|
||||
[**Docker**](https://www.docker.io) is a platform for developers and
|
||||
sysadmins to develop, ship, and run applications. Docker consists of:
|
||||
|
||||
## Docker highlights
|
||||
* The Docker Engine - our lightweight and powerful open source container
|
||||
virtualization technology combined with a workflow to help you build
|
||||
and containerize your applications.
|
||||
* [Docker.io](https://index.docker.io) - our SAAS service that helps you
|
||||
share and manage your applications stacks.
|
||||
|
||||
- **Containers provide sand-boxing:**
|
||||
Applications run securely without outside access.
|
||||
- **Docker allows simple portability:**
|
||||
Containers are directories, they can be zipped and transported.
|
||||
- **It all works fast:**
|
||||
Starting a container is a very fast single process.
|
||||
- **Docker is easy on the system resources (unlike VMs):**
|
||||
No more than what each application needs.
|
||||
- **Agnostic in its _essence_:**
|
||||
Free of framework, language or platform dependencies.
|
||||
Docker enables applications to be quickly assembled from components and
|
||||
eliminates the friction when shipping code. We want to help you get code
|
||||
from your desktop, tested and deployed into production as fast as
|
||||
possible.
|
||||
|
||||
And most importantly:
|
||||
## Why Docker?
|
||||
|
||||
- **Docker reduces complexity:**
|
||||
Docker accepts commands *in plain English*, e.g. `docker run [..]`.
|
||||
- **Faster delivery of your applications**
|
||||
* We want to help your environment work better. Docker containers,
|
||||
and the work flow that comes with them, helps your developers,
|
||||
sysadmins, QA folks, and release engineers work together to get code
|
||||
into production and doing something useful. We've created a standard
|
||||
container format that allows developers to care about their applications
|
||||
inside containers and sysadmins and operators to care about running the
|
||||
container. This creates a separation of duties that makes managing and
|
||||
deploying code much easier and much more streamlined.
|
||||
* We make it easy to build new containers, enable rapid iteration of
|
||||
your applications and increase the visibility of changes. This
|
||||
helps everyone in your organization understand how an application works
|
||||
and how it is built.
|
||||
* Docker containers are lightweight and fast! Containers have
|
||||
sub-second launch times! With containers you can reduce the cycle
|
||||
time in development, testing and deployment.
|
||||
|
||||
- **Deploy and scale more easily**
|
||||
* Docker containers run (almost!) everywhere. You can deploy your
|
||||
containers on desktops, physical servers, virtual machines, into
|
||||
data centers and to public and private clouds.
|
||||
* As Docker runs on so many platforms it makes it easy to move your
|
||||
applications around. You can easily move an application from a
|
||||
testing environment into the cloud and back whenever you need.
|
||||
* The lightweight containers Docker creates also make scaling up and
|
||||
down really fast and easy. If you need more containers you can
|
||||
quickly launch them and then shut them down when you don't need them
|
||||
anymore.
|
||||
|
||||
- **Get higher density and run more workloads**
|
||||
* Docker containers don't need a hypervisor so you can pack more of
|
||||
them onto your hosts. This means you get more value out of every
|
||||
server and can potentially reduce the money you spend on equipment and
|
||||
licenses!
|
||||
|
||||
- **Faster deployment makes for easier management**
|
||||
* As Docker speeds up your workflow it makes it easier to make lots
|
||||
of little changes instead of huge, big bang updates. Smaller
|
||||
changes mean smaller risks and mean more uptime!
|
||||
|
||||
## About this guide
|
||||
|
||||
In this introduction we will take you on a tour and show you what
|
||||
makes Docker tick.
|
||||
First we'll show you [what makes Docker tick in our Understanding Docker
|
||||
section](introduction/understanding-docker.md):
|
||||
|
||||
On the [**first page**](introduction/understanding-docker.md), which is
|
||||
**_informative_**:
|
||||
|
||||
- You will find information on Docker;
|
||||
- And discover Docker's features.
|
||||
- We will also compare Docker to virtual machines;
|
||||
- You will see how Docker works at a high level;
|
||||
- The architecture of Docker;
|
||||
- Discover Docker's features;
|
||||
- See how Docker compares to virtual machines;
|
||||
- And see some common use cases.
|
||||
|
||||
> [Click here to go to Understanding Docker](introduction/understanding-docker.md).
|
||||
> [Click here to go to the Understanding
|
||||
> Docker section](introduction/understanding-docker.md).
|
||||
|
||||
The [**second page**](introduction/technology.md) has **_technical_** information on:
|
||||
Next we get [**practical** with the Working with Docker
|
||||
section](introduction/working-with-docker.md) and you can learn about:
|
||||
|
||||
- The architecture of Docker;
|
||||
- The underlying technology, and;
|
||||
- *How* Docker works.
|
||||
- Docker on the command line;
|
||||
- Get introduced to your first Docker commands;
|
||||
- Get to know your way around the basics of Docker operation.
|
||||
|
||||
> [Click here to go to Understanding the Technology](introduction/technology.md).
|
||||
> [Click here to go to the Working with
|
||||
> Docker section](introduction/working-with-docker.md).
|
||||
|
||||
On the [**third page**](introduction/working-with-docker.md) we get **_practical_**.
|
||||
There you can:
|
||||
|
||||
- Learn about Docker's components (i.e. Containers, Images and the
|
||||
Dockerfile);
|
||||
- And get started working with them straight away.
|
||||
|
||||
> [Click here to go to Working with Docker](introduction/working-with-docker.md).
|
||||
|
||||
Finally, on the [**fourth**](introduction/get-docker.md) page, we go **_hands on_**
|
||||
and see:
|
||||
|
||||
- The installation instructions, and;
|
||||
- How Docker makes some hard problems much, much easier.
|
||||
|
||||
> [Click here to go to Get Docker](introduction/get-docker.md).
|
||||
If you want to see how to install Docker you can jump to the
|
||||
[installation](/installation/#installation) section.
|
||||
|
||||
> **Note**:
|
||||
> We know how valuable your time is. Therefore, the documentation is prepared
|
||||
> in a way to allow anyone to start from any section need. Although we strongly
|
||||
> recommend that you visit [Understanding Docker](
|
||||
> introduction/understanding-docker.md) to see how Docker is different, if you
|
||||
> already have some knowledge and want to quickly get started with Docker,
|
||||
> don't hesitate to jump to [Working with Docker](
|
||||
> introduction/working-with-docker.md).
|
||||
> We know how valuable your time is, so if you want to get started
|
||||
> with Docker straight away don't hesitate to jump to [Working with
|
||||
> Docker](introduction/working-with-docker.md). For a fuller
|
||||
> understanding of Docker though we do recommend you read [Understanding
|
||||
> Docker](introduction/understanding-docker.md).
|
||||
|
|
|
@@ -12,6 +12,7 @@ techniques for installing Docker all the time.
|
|||
- [Ubuntu](ubuntulinux/)
|
||||
- [Red Hat Enterprise Linux](rhel/)
|
||||
- [Fedora](fedora/)
|
||||
- [Debian](debian/)
|
||||
- [Arch Linux](archlinux/)
|
||||
- [CRUX Linux](cruxlinux/)
|
||||
- [Gentoo](gentoolinux/)
|
||||
|
@@ -22,4 +23,4 @@ techniques for installing Docker all the time.
|
|||
- [Amazon EC2](amazon/)
|
||||
- [Rackspace Cloud](rackspace/)
|
||||
- [Google Cloud Platform](google/)
|
||||
- [Binaries](binaries/)
|
||||
- [Binaries](binaries/)
|
||||
|
|
|
@@ -17,50 +17,24 @@ page_keywords: crux linux, virtualization, Docker, documentation, installation
|
|||
> some binaries to be updated and published.
|
||||
|
||||
Installing on CRUX Linux can be handled via the ports from [James
|
||||
Mills](http://prologic.shortcircuit.net.au/):
|
||||
Mills](http://prologic.shortcircuit.net.au/), which are included in the
|
||||
official [contrib](http://crux.nu/portdb/?a=repo&q=contrib) ports:
|
||||
|
||||
- [docker](https://bitbucket.org/prologic/ports/src/tip/docker/)
|
||||
- [docker-bin](https://bitbucket.org/prologic/ports/src/tip/docker-bin/)
|
||||
- [docker-git](https://bitbucket.org/prologic/ports/src/tip/docker-git/)
|
||||
- docker
|
||||
- docker-bin
|
||||
|
||||
The `docker` port will install the latest tagged
|
||||
version of Docker. The `docker-bin` port will
|
||||
install the latest tagged versin of Docker from upstream built binaries.
|
||||
The `docker-git` package will build from the current
|
||||
master branch.
|
||||
install the latest tagged version of Docker from upstream built binaries.
|
||||
|
||||
## Installation
|
||||
|
||||
For the time being (*until the CRUX Docker port(s) get into the official
|
||||
contrib repository*) you will need to install [James
|
||||
Mills`](https://bitbucket.org/prologic/ports) ports repository. You can
|
||||
do so via:
|
||||
Assuming you have contrib enabled, update your ports tree and install docker (*as root*):
|
||||
|
||||
Download the `httpup` file to
|
||||
`/etc/ports/`:
|
||||
# prt-get depinst docker
|
||||
|
||||
$ curl -q -o - http://crux.nu/portdb/?a=getup&q=prologic > /etc/ports/prologic.httpup
|
||||
You can install `docker-bin` instead if you wish to avoid compilation time.
|
||||
|
||||
Add `prtdir /usr/ports/prologic` to
|
||||
`/etc/prt-get.conf`:
|
||||
|
||||
$ vim /etc/prt-get.conf
|
||||
|
||||
# or:
|
||||
$ echo "prtdir /usr/ports/prologic" >> /etc/prt-get.conf
|
||||
|
||||
Update ports and prt-get cache:
|
||||
|
||||
$ ports -u
|
||||
$ prt-get cache
|
||||
|
||||
To install (*and its dependencies*):
|
||||
|
||||
$ prt-get depinst docker
|
||||
|
||||
Use `docker-bin` for the upstream binary or
|
||||
`docker-git` to build and install from the master
|
||||
branch from git.
|
||||
|
||||
## Kernel Requirements
|
||||
|
||||
|
@@ -68,24 +42,34 @@ To have a working **CRUX+Docker** Host you must ensure your Kernel has
|
|||
the necessary modules enabled for LXC containers to function correctly
|
||||
and Docker Daemon to work properly.
|
||||
|
||||
Please read the `README.rst`:
|
||||
Please read the `README`:
|
||||
|
||||
$ prt-get readme docker
|
||||
|
||||
There is a `test_kernel_config.sh` script in the
|
||||
above ports which you can use to test your Kernel configuration:
|
||||
The `docker` and `docker-bin` ports install the `contrib/check-config.sh`
|
||||
script provided by the Docker contributors for checking your kernel
|
||||
configuration as a suitable Docker Host.
|
||||
|
||||
$ cd /usr/ports/prologic/docker
|
||||
$ ./test_kernel_config.sh /usr/src/linux/.config
|
||||
$ /usr/share/docker/check-config.sh
|
||||
|
||||
## Starting Docker
|
||||
|
||||
There is a rc script created for Docker. To start the Docker service:
|
||||
There is an rc script created for Docker. To start the Docker service (*as root*):
|
||||
|
||||
$ sudo su -
|
||||
$ /etc/rc.d/docker start
|
||||
# /etc/rc.d/docker start
|
||||
|
||||
To start on system boot:
|
||||
|
||||
- Edit `/etc/rc.conf`
|
||||
- Put `docker` into the `SERVICES=(...)` array after `net`.
|
||||
|
||||
## Issues
|
||||
|
||||
If you have any issues please file a bug with the
|
||||
[CRUX Bug Tracker](http://crux.nu/bugs/).
|
||||
|
||||
## Support
|
||||
|
||||
For support contact the [CRUX Mailing List](http://crux.nu/Main/MailingLists)
|
||||
or join CRUX's [IRC Channels](http://crux.nu/Main/IrcChannels) on the
|
||||
[FreeNode](http://freenode.net/) IRC Network.
|
||||
|
|
78
docs/sources/installation/debian.md
Normal file
|
@@ -0,0 +1,78 @@
|
|||
page_title: Installation on Debian
|
||||
page_description: Instructions for installing Docker on Debian
|
||||
page_keywords: Docker, Docker documentation, installation, debian
|
||||
|
||||
# Debian
|
||||
|
||||
> **Note**:
|
||||
> Docker is still under heavy development! We don't recommend using it in
|
||||
> production yet, but we're getting closer with each release. Please see
|
||||
> our blog post, [Getting to Docker 1.0](
|
||||
> http://blog.docker.io/2013/08/getting-to-docker-1-0/)
|
||||
|
||||
Docker is supported on the following versions of Debian:
|
||||
|
||||
- [*Debian 8.0 Jessie (64-bit)*](#debian-jessie-8-64-bit)
|
||||
|
||||
## Debian Jessie 8.0 (64-bit)
|
||||
|
||||
Debian 8 comes with a 3.14.0 Linux kernel, and a `docker.io` package which
|
||||
installs all its prerequisites from Debian's repository.
|
||||
|
||||
> **Note**:
|
||||
> Debian contains a much older KDE3/GNOME2 package called ``docker``, so the
|
||||
> package and the executable are called ``docker.io``.
|
||||
|
||||
### Installation
|
||||
|
||||
To install the latest Debian package (may not be the latest Docker release):
|
||||
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install docker.io
|
||||
$ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
|
||||
$ sudo sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker.io
|
||||
|
||||
To verify that everything has worked as expected:
|
||||
|
||||
$ sudo docker run -i -t ubuntu /bin/bash
|
||||
|
||||
Which should download the `ubuntu` image, and then start `bash` in a container.
|
||||
|
||||
> **Note**:
|
||||
> If you want to enable memory and swap accounting see
|
||||
> [this](/installation/ubuntulinux/#memory-and-swap-accounting).
|
||||
|
||||
### Giving non-root access
|
||||
|
||||
The `docker` daemon always runs as the `root` user, and since Docker
|
||||
version 0.5.2, the `docker` daemon binds to a Unix socket instead of a
|
||||
TCP port. By default that Unix socket is owned by the user `root`, and
|
||||
so, by default, you can access it with `sudo`.
|
||||
|
||||
Starting in version 0.5.3, if you (or your Docker installer) create a
|
||||
Unix group called `docker` and add users to it, then the `docker` daemon
|
||||
will make the ownership of the Unix socket read/writable by the `docker`
|
||||
group when the daemon starts. The `docker` daemon must always run as the
|
||||
root user, but if you run the `docker` client as a user in the `docker`
|
||||
group then you don't need to add `sudo` to all the client commands. From
|
||||
Docker 0.9.0 you can use the `-G` flag to specify an alternative group.
|
||||
|
||||
> **Warning**:
|
||||
> The `docker` group (or the group specified with the `-G` flag) is
|
||||
> `root`-equivalent; see [*Docker Daemon Attack Surface*](
|
||||
> /articles/security/#dockersecurity-daemon) details.
|
||||
|
||||
**Example:**
|
||||
|
||||
# Add the docker group if it doesn't already exist.
|
||||
$ sudo groupadd docker
|
||||
|
||||
# Add the connected user "${USER}" to the docker group.
|
||||
# Change the user name to match your preferred user.
|
||||
# You may have to logout and log back in again for
|
||||
# this to take effect.
|
||||
$ sudo gpasswd -a ${USER} docker
|
||||
|
||||
# Restart the Docker daemon.
|
||||
$ sudo service docker restart
|
||||
|
|
@@ -32,7 +32,7 @@ it. To proceed with `docker-io` installation on Fedora 19, please remove
|
|||
|
||||
$ sudo yum -y remove docker
|
||||
|
||||
For Fedora 20 and later, the `wmdocker` package will
|
||||
For Fedora 21 and later, the `wmdocker` package will
|
||||
provide the same functionality as `docker` and will
|
||||
also not conflict with `docker-io`.
|
||||
|
||||
|
|
Some files were not shown because too many files have changed in this diff.