# Compare commits

260 commits

| SHA1 |
|---|
| 301bfbdd21 |
| 5213a0a67e |
| 164ab2cfc9 |
| fc438e4171 |
| c4b72004fd |
| b74db4d8e3 |
| ad2a9f3d1e |
| 782e218be6 |
| 5d402c1c27 |
| 76f6c611f1 |
| 9aa451ec5f |
| 98638feaae |
| 0d8b909281 |
| f646c041b8 |
| 683700619d |
| 650ddc1f81 |
| af3b1fbcbc |
| 785e38d57b |
| cc68a963e0 |
| a6bbdf347a |
| 89b2a14932 |
| 5df3c258a1 |
| d3bbe11b99 |
| b6766eb676 |
| 25ccb74f4f |
| e85cc97d65 |
| b0e27887f2 |
| fa6685ca85 |
| 6ccc50d04e |
| d544b3ee99 |
| a54af42edb |
| 7abc60a93b |
| 3bdc7244a8 |
| fa29ecbceb |
| c70c8f1ed9 |
| 4efcbf5784 |
| 9114864ace |
| 0626fa4555 |
| d41cd45c3d |
| a9f9a79d5c |
| 08f1f62f41 |
| e47de0ba4d |
| e3007c51a5 |
| 4971a81f75 |
| b42d833cd8 |
| 281f0fa53b |
| 36ada7d158 |
| 307195b3e1 |
| d1024638f9 |
| 9396307501 |
| 54a58f9886 |
| ab595882e3 |
| fcd432d110 |
| 77efb507b3 |
| c23ad97de5 |
| b28e6b7eda |
| 785665203d |
| 36a62de41f |
| e7d0711142 |
| 6e916fca02 |
| bfe58ad819 |
| 8d485949d6 |
| 7e83ae34dc |
| ef76dd0761 |
| b706ee90ca |
| 2f4b69229a |
| b34165ae37 |
| cef1a10f5e |
| 2c531b4fb7 |
| 8fc5559841 |
| ea8b5e07b6 |
| 3cf8c53515 |
| 15af7564cf |
| 340c9a4619 |
| 3b2ed5b2df |
| 0b5c960bb0 |
| 14a2985f37 |
| a1d16b4557 |
| 290e0ea54c |
| 5901eda4f7 |
| c2bea5ba26 |
| 4703824f00 |
| 4bf3f74126 |
| 5b1136ba8d |
| f264a73afd |
| fc4f927588 |
| 84314e09ab |
| 895569df9d |
| 21223f8873 |
| ac9b38b60d |
| 53b4fd1d81 |
| ae4c7b1493 |
| 71c0acf1ca |
| 97a5055198 |
| 2d532738b5 |
| 3eaabe0257 |
| b9f11557af |
| ce9bc253f6 |
| 6bbe5722aa |
| cde2df6db9 |
| d9cf30d7de |
| 9af185e3d0 |
| 8d2798c37e |
| 4858230a07 |
| 4dc5990d75 |
| 365d80b3e1 |
| 2535db8678 |
| 54edfc41c6 |
| 4a6f2274be |
| db08f19e36 |
| af370ff997 |
| 645836f250 |
| 3d85e51ef4 |
| 5618fbf18c |
| bf312aca9c |
| 4172802d68 |
| fb06ddf4db |
| d126e2fb72 |
| 3130169eb2 |
| 260a835cb4 |
| d42b3f6765 |
| 134990dd07 |
| aa7da70459 |
| 8ff913e45f |
| 642c4a7814 |
| 3522bdfc99 |
| 490c26524c |
| 7ae170ddab |
| b4f0d68843 |
| cbd50baf8b |
| 0b25417cce |
| eb405d2b73 |
| 133773a4d0 |
| 9cdfce68f8 |
| 716b5b2547 |
| 52b8adea4d |
| b1b86e9166 |
| 35d6def3aa |
| 2153d9ec9d |
| b45be307a3 |
| f75a21ef32 |
| dae909fb3e |
| 99589731ac |
| fd018754e2 |
| a573ab1f81 |
| c774c390b1 |
| 896f8b337e |
| 40ff845220 |
| 60be8487c1 |
| 5e5e07e106 |
| 9e4c6c75f5 |
| 5d1b0aecd0 |
| f685fe1a99 |
| 9e62a2aad2 |
| 460806241c |
| 651cace5ee |
| e941f698da |
| cc5c9013d9 |
| c17ee39d12 |
| eeb30821ea |
| 38c206f97b |
| 346c5297b0 |
| e66633c39e |
| 2f69842afa |
| c5d179891f |
| 4d7d1736bd |
| a5b5bdbbb4 |
| 9e3bfd5864 |
| bab77d4991 |
| 4e1ff10d60 |
| 2936442f9d |
| c5769cf53b |
| b84d18ec21 |
| 2e92a84fa8 |
| deb08a1012 |
| 4b21fdc96a |
| 1818ca9d75 |
| 9db0bd88f5 |
| b98f05b4f4 |
| ed2fcd9a2a |
| 1e67fdf3e4 |
| 2a60e5cac6 |
| c319887dbb |
| 3fd08cc5e6 |
| cd062fd3b3 |
| 55186eae32 |
| 76215b3268 |
| f97f3e98fc |
| 7500c8cc72 |
| c3eed8430c |
| 073d7841b4 |
| 8b0179c771 |
| 85d1517184 |
| b59dced332 |
| 89ede3ae23 |
| 8facb73a8f |
| c4fa814ecd |
| e9279d57f7 |
| 01c531a72e |
| a17e61c020 |
| 19b22712c0 |
| e6629d4c10 |
| 0c598f34b6 |
| e799da7e6a |
| 50552642ca |
| 474631498c |
| 08ccfd36e1 |
| c3b3f8201a |
| 81a3a72727 |
| 70c594508f |
| 1fa9574e2b |
| 4a59dc5a41 |
| c4b33d5334 |
| 197d61d01a |
| abe8c11e36 |
| 08d09e7733 |
| e23f622d38 |
| 73f7f515be |
| 501b0c387d |
| cf1d012fda |
| ad37aac45b |
| f66010ad31 |
| 34fca93daf |
| 048db1da22 |
| e768fc8468 |
| 31755449e1 |
| fef9c5b432 |
| 519ac1252c |
| 89276c679e |
| 5a71ca6739 |
| 92c9bab6ab |
| f04334ea04 |
| ea799625bd |
| 3ef31215f4 |
| 4b03e857de |
| c5e8051c81 |
| 6d324b4192 |
| 060330bf46 |
| 413155df6e |
| 48ce060e8c |
| 6106313b20 |
| ae4f265053 |
| 4fc85b47fc |
| 3e890411bc |
| 1987d6e5df |
| e4995d1517 |
| 6558158dc3 |
| c985e2b84b |
| 6be088a3eb |
| 8c390f0987 |
| d8ba21d07d |
| b9d6c87592 |
| 03238022c8 |
| 1f8ea55c3d |
| 6fa49df0d9 |
| 32a5308237 |
| 76489af40f |
| fad79467dd |
| e651c1b2b9 |
| b6f3c16ddc |
264 changed files with 5717 additions and 10406 deletions
## .mailmap (82 changed lines)

`@@ -93,7 +93,8 @@ Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨>`

    Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com>
    Sven Dowideit <SvenDowideit@home.org.au> <sven@t440s.home.gateway>
    <alexl@redhat.com> <alexander.larsson@gmail.com>
    Alexandr Morozov <lk4d4math@gmail.com>
    Alexander Morozov <lk4d4@docker.com> <lk4d4math@gmail.com>
    Alexander Morozov <lk4d4@docker.com>
    <git.nivoc@neverbox.com> <kuehnle@online.de>
    O.S. Tezer <ostezer@gmail.com>
    <ostezer@gmail.com> <ostezer@users.noreply.github.com>

`@@ -106,7 +107,9 @@ Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>`

    Sridhar Ratnakumar <sridharr@activestate.com>
    Sridhar Ratnakumar <sridharr@activestate.com> <github@srid.name>
    Liang-Chi Hsieh <viirya@gmail.com>
    Aleksa Sarai <cyphar@cyphar.com>
    Aleksa Sarai <asarai@suse.de>
    Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
    Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
    Will Weaver <monkey@buildingbananas.com>
    Timothy Hobbs <timothyhobbs@seznam.cz>
    Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>

`@@ -117,24 +120,27 @@ Nathan LeClaire <nathan.leclaire@docker.com> <nathanleclaire@gmail.com>`

    <marc@marc-abramowitz.com> <msabramo@gmail.com>
    Matthew Heon <mheon@redhat.com> <mheon@mheonlaptop.redhat.com>
    <bernat@luffy.cx> <vincent@bernat.im>
    <bernat@luffy.cx> <Vincent.Bernat@exoscale.ch>
    <p@pwaller.net> <peter@scraperwiki.com>
    <andrew.weiss@outlook.com> <andrew.weiss@microsoft.com>
    Francisco Carriedo <fcarriedo@gmail.com>
    <julienbordellier@gmail.com> <git@julienbordellier.com>
    <ahmetb@microsoft.com> <ahmetalpbalkan@gmail.com>
    <lk4d4@docker.com> <lk4d4math@gmail.com>
    <arnaud.porterie@docker.com> <icecrime@gmail.com>
    <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
    Brian Goff <cpuguy83@gmail.com>
    <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
    <ewindisch@docker.com> <eric@windisch.us>
    <eric@windisch.us> <ewindisch@docker.com>
    <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
    Hollie Teal <hollie@docker.com>
    <hollie@docker.com> <hollie.teal@docker.com>
    <hollie@docker.com> <hollietealok@users.noreply.github.com>
    <huu@prismskylabs.com> <whoshuu@gmail.com>
    Jessica Frazelle <jess@docker.com> Jessie Frazelle <jfrazelle@users.noreply.github.com>
    <jess@docker.com> <jfrazelle@users.noreply.github.com>
    Jessica Frazelle <jess@mesosphere.com>
    Jessica Frazelle <jess@mesosphere.com> <jfrazelle@users.noreply.github.com>
    Jessica Frazelle <jess@mesosphere.com> <acidburn@docker.com>
    Jessica Frazelle <jess@mesosphere.com> <jess@docker.com>
    Jessica Frazelle <jess@mesosphere.com> <princess@docker.com>
    <konrad.wilhelm.kleine@gmail.com> <kwk@users.noreply.github.com>
    <tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
    <estesp@linux.vnet.ibm.com> <estesp@gmail.com>

`@@ -142,6 +148,8 @@ Jessica Frazelle <jess@docker.com> Jessie Frazelle <jfrazelle@users.noreply.gith`

    Thomas LEVEIL <thomasleveil@gmail.com> Thomas LÉVEIL <thomasleveil@users.noreply.github.com>
    <oi@truffles.me.uk> <timruffles@googlemail.com>
    <Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
    Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com>
    Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@redhat.com>
    Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
    Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@linux.com>
    Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>

`@@ -151,8 +159,9 @@ Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>`

    Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
    Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
    Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com>
    <jess@docker.com> <princess@docker.com>
    John Howard (VM) <John.Howard@microsoft.com> John Howard <jhoward@microsoft.com>
    John Howard (VM) <John.Howard@microsoft.com>
    John Howard (VM) <John.Howard@microsoft.com> <john.howard@microsoft.com>
    John Howard (VM) <John.Howard@microsoft.com> <jhoward@microsoft.com>
    Madhu Venugopal <madhu@socketplane.io> <madhu@docker.com>
    Mary Anthony <mary.anthony@docker.com> <mary@docker.com>
    Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com>

`@@ -169,3 +178,60 @@ bin liu <liubin0329@users.noreply.github.com> <liubin0329@gmail.com>`

    John Howard (VM) <John.Howard@microsoft.com> jhowardmsft <jhoward@microsoft.com>
    Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com>
    Tangi COLIN <tangicolin@gmail.com> tangicolin <tangicolin@gmail.com>
    Allen Sun <allen.sun@daocloud.io>
    Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
    <aanm90@gmail.com> <martins@noironetworks.com>
    Anuj Bahuguna <anujbahuguna.dev@gmail.com>
    Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
    Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
    Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
    Chander G <chandergovind@gmail.com>
    Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
    Ying Li <cyli@twistedmatrix.com>
    Daehyeok Mun <daehyeok@gmail.com> <daehyeok@daehyeok-ui-MacBook-Air.local>
    <dqminh@cloudflare.com> <dqminh89@gmail.com>
    Daniel, Dao Quang Minh <dqminh@cloudflare.com>
    Daniel Nephin <dnephin@docker.com> <dnephin@gmail.com>
    Dave Tucker <dt@docker.com> <dave@dtucker.co.uk>
    Doug Tangren <d.tangren@gmail.com>
    Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
    Ben Golub <ben.golub@dotcloud.com>
    Harold Cooper <hrldcpr@gmail.com>
    hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
    Josh Hawn <josh.hawn@docker.com> <jlhawn@berkeley.edu>
    Justin Cormack <justin.cormack@docker.com>
    <justin.cormack@docker.com> <justin.cormack@unikernel.com>
    <justin.cormack@docker.com> <justin@specialbusservice.com>
    Kamil Domański <kamil@domanski.co>
    Lei Jitang <leijitang@huawei.com>
    <leijitang@huawei.com> <leijitang@gmail.com>
    Linus Heckemann <lheckemann@twig-world.com>
    <lheckemann@twig-world.com> <anonymouse2048@gmail.com>
    Lynda O'Leary <lyndaoleary29@gmail.com>
    <lyndaoleary29@gmail.com> <lyndaoleary@hotmail.com>
    Marianna Tessel <mtesselh@gmail.com>
    Michael Huettermann <michael@huettermann.net>
    Moysés Borges <moysesb@gmail.com>
    <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
    Nigel Poulton <nigelpoulton@hotmail.com>
    Qiang Huang <h.huangqiang@huawei.com>
    <h.huangqiang@huawei.com> <qhuang@10.0.2.15>
    Boaz Shuster <ripcurld.github@gmail.com>
    Shuwei Hao <haosw@cn.ibm.com>
    <haosw@cn.ibm.com> <haoshuwei24@gmail.com>
    Soshi Katsuta <soshi.katsuta@gmail.com>
    <soshi.katsuta@gmail.com> <katsuta_soshi@cyberagent.co.jp>
    Stefan Berger <stefanb@linux.vnet.ibm.com>
    <stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com>
    Stephen Day <stephen.day@docker.com>
    <stephen.day@docker.com> <stevvooe@users.noreply.github.com>
    Toli Kuznets <toli@docker.com>
    Tristan Carel <tristan@cogniteev.com>
    <tristan@cogniteev.com> <tristan.carel@gmail.com>
    Vincent Demeester <vincent@sbr.pm>
    <vincent@sbr.pm> <vincent+github@demeester.fr>
    Vishnu Kannan <vishnuk@google.com>
    xlgao-zju <xlgao@zju.edu.cn> xlgao <xlgao@zju.edu.cn>
    yuchangchun <yuchangchun1@huawei.com> y00277921 <yuchangchun1@huawei.com>
    <zij@case.edu> <zjaffee@us.ibm.com>
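Each `.mailmap` entry above maps a commit-time identity to a canonical one: a lone `Name <email>` canonicalizes the name for that email, while `Name <canonical> <commit-email>` (or `<canonical> <commit-email>`) redirects an old email to the canonical identity. A minimal sketch of that lookup, for illustration only (real git also matches on name+email pairs and has more entry shapes; `parse_mailmap` and `canonicalize` are hypothetical helpers, not git's API):

```python
import re

def parse_mailmap(text):
    """Build an email -> (canonical_name, canonical_email) map.

    Simplified sketch: handles the two most common entry shapes,
    'Proper Name <canonical@example.com>' and
    'Proper Name <canonical@example.com> <commit@example.com>'.
    """
    mapping = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # '#' starts a comment
        if not line:
            continue
        emails = re.findall(r'<([^>]+)>', line)
        name = line.split('<', 1)[0].strip() or None
        if len(emails) == 2:
            # keyed by the old commit email, pointing at the canonical pair
            mapping[emails[1].lower()] = (name, emails[0])
        elif len(emails) == 1:
            mapping[emails[0].lower()] = (name, emails[0])
    return mapping

def canonicalize(mapping, name, email):
    """Rewrite (name, email) the way `git shortlog` would."""
    new_name, new_email = mapping.get(email.lower(), (None, email))
    return (new_name or name, new_email)
```

With the Brian Goff entries above, `canonicalize(mm, "anon", "bgoff@cpuguy83-mbp.home")` rewrites only the email, while a lookup on `cpuguy83@gmail.com` also fixes the name.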
## CHANGELOG.md (181 changed lines)

`@@ -2,9 +2,164 @@`

Items starting with `DEPRECATE` are important deprecation notices. For more
information on the list of deprecated flags and APIs please have a look at
https://docs.docker.com/engine/deprecated/ (formerly https://docs.docker.com/misc/deprecated/) where target removal dates can also be found.

## 1.11.0 (2016-04-13)

**IMPORTANT**: With Docker 1.11, a Linux Docker installation is now made of 4 binaries (`docker`, [`docker-containerd`](https://github.com/docker/containerd), [`docker-containerd-shim`](https://github.com/docker/containerd) and [`docker-runc`](https://github.com/opencontainers/runc)). If you have scripts relying on docker being a single static binary, please make sure to update them. Interaction with the daemon otherwise stays the same; the use of the other binaries should be transparent. A Windows Docker installation remains a single binary, `docker.exe`.
### Builder

- Fix a bug where Docker would not use the correct uid/gid when processing the `WORKDIR` command ([#21033](https://github.com/docker/docker/pull/21033))
- Fix a bug where copy operations with userns would not use the proper uid/gid ([#20782](https://github.com/docker/docker/pull/20782), [#21162](https://github.com/docker/docker/pull/21162))
### Client

* Usage of the `:` separator for security options has been deprecated. `=` should be used instead ([#21232](https://github.com/docker/docker/pull/21232))
+ The client user agent is now passed to the registry on `pull`, `build`, `push`, `login` and `search` operations ([#21306](https://github.com/docker/docker/pull/21306), [#21373](https://github.com/docker/docker/pull/21373))
* Allow setting the Domainname and Hostname separately through the API ([#20200](https://github.com/docker/docker/pull/20200))
* `docker info` will now warn users if it cannot detect the kernel version or the operating system ([#21128](https://github.com/docker/docker/pull/21128))
- Fix an issue where `docker stats --no-stream` output could be all 0s ([#20803](https://github.com/docker/docker/pull/20803))
- Fix a bug where some newly started containers would not appear in a running `docker stats` command ([#20792](https://github.com/docker/docker/pull/20792))
* Post processing is no longer enabled for linux-cgo terminals ([#20587](https://github.com/docker/docker/pull/20587))
- Values to `--hostname` are now refused if they do not comply with [RFC 1123](https://tools.ietf.org/html/rfc1123) ([#20566](https://github.com/docker/docker/pull/20566))
+ Docker learned how to use a SOCKS proxy ([#20366](https://github.com/docker/docker/pull/20366), [#18373](https://github.com/docker/docker/pull/18373))
+ Docker now supports external credential stores ([#20107](https://github.com/docker/docker/pull/20107))
* `docker ps` now supports displaying the list of volumes mounted inside a container ([#20017](https://github.com/docker/docker/pull/20017))
* `docker info` now also reports Docker's root directory location ([#19986](https://github.com/docker/docker/pull/19986))
- Docker now prohibits logging in with an empty username (spaces are trimmed) ([#19806](https://github.com/docker/docker/pull/19806))
* Docker events attributes are now sorted by key ([#19761](https://github.com/docker/docker/pull/19761))
* `docker ps` no longer shows exported ports for stopped containers ([#19483](https://github.com/docker/docker/pull/19483))
- Docker now cleans up after itself if a save/export command fails ([#17849](https://github.com/docker/docker/pull/17849))
* `docker load` learned how to display a progress bar ([#17329](https://github.com/docker/docker/pull/17329), [#20078](https://github.com/docker/docker/pull/20078))
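The `--hostname` check mentioned in the Client section boils down to the RFC 1123 host name grammar: dot-separated labels of letters, digits and hyphens, no leading or trailing hyphen, at most 63 characters per label and 255 overall. A sketch of such a validator (an illustrative helper, not Docker's actual code):

```python
import re

# One DNS label: alphanumeric at the edges, hyphens allowed inside, max 63 chars.
_LABEL = re.compile(r'^[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$')

def is_rfc1123_hostname(hostname):
    """Return True if `hostname` satisfies the RFC 1123 host name grammar."""
    if not hostname or len(hostname) > 255:
        return False
    return all(_LABEL.match(label) for label in hostname.split('.'))
```

Underscores, empty labels and over-long labels all fail this check, which matches the behavior the changelog entry describes.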
### Distribution

- Fix a panic that occurred when pulling an image with 0 layers ([#21222](https://github.com/docker/docker/pull/21222))
- Fix a panic that could occur on error while pushing to a registry with a misconfigured token service ([#21212](https://github.com/docker/docker/pull/21212))
+ All first-level delegation roles are now signed when doing a trusted push ([#21046](https://github.com/docker/docker/pull/21046))
+ OAuth support for registries was added ([#20970](https://github.com/docker/docker/pull/20970))
* `docker login` now handles tokens using the implementation found in [docker/distribution](https://github.com/docker/distribution) ([#20832](https://github.com/docker/docker/pull/20832))
* `docker login` will no longer prompt for an email ([#20565](https://github.com/docker/docker/pull/20565))
* Docker will now fall back to registry V1 if no basic auth credentials are available ([#20241](https://github.com/docker/docker/pull/20241))
* Docker will now try to resume a layer download where it left off after a network error/timeout ([#19840](https://github.com/docker/docker/pull/19840))
- Fix generated manifest mediaType when pushing cross-repository ([#19509](https://github.com/docker/docker/pull/19509))
- Fix Docker requesting additional push credentials when pulling an image if Content Trust is enabled ([#20382](https://github.com/docker/docker/pull/20382))
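Resuming a layer download after a network error, as described above, amounts to remembering how many bytes were already written and re-requesting only the remainder (with HTTP, via a `Range: bytes=<offset>-` header). A toy sketch of the bookkeeping, with illustrative names rather than Docker's internals:

```python
def resume_download(fetch_range, sink, total_size, max_attempts=3):
    """Download `total_size` bytes via `fetch_range(offset)`, resuming on errors.

    `fetch_range(offset)` yields chunks starting at `offset` (think: an HTTP
    request carrying 'Range: bytes=<offset>-'); `sink` is a bytearray.
    """
    offset = 0
    for _ in range(max_attempts):
        try:
            for chunk in fetch_range(offset):
                sink.extend(chunk)
                offset += len(chunk)
            if offset == total_size:
                return sink
        except IOError:
            # Retry from the current offset instead of restarting at byte 0.
            continue
    raise IOError("download failed after %d attempts" % max_attempts)
```

The key point is that the retry loop carries `offset` forward, so a mid-transfer failure does not discard the bytes already received.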
### Logging

- Fix a race in the journald log driver ([#21311](https://github.com/docker/docker/pull/21311))
* The Docker syslog driver now uses the RFC 5424 format when emitting logs ([#20121](https://github.com/docker/docker/pull/20121))
* The Docker GELF log driver now allows specifying the compression algorithm and level via the `gelf-compression-type` and `gelf-compression-level` options ([#19831](https://github.com/docker/docker/pull/19831))
* The Docker daemon learned to output uncolorized logs via the `--raw-logs` option ([#19794](https://github.com/docker/docker/pull/19794))
+ Docker, on the Windows platform, now includes an ETW (Event Tracing for Windows) logging driver named `etwlogs` ([#19689](https://github.com/docker/docker/pull/19689))
* The journald log driver learned how to handle tags ([#19564](https://github.com/docker/docker/pull/19564))
+ The fluentd log driver learned the following options: `fluentd-address`, `fluentd-buffer-limit`, `fluentd-retry-wait`, `fluentd-max-retries` and `fluentd-async-connect` ([#19439](https://github.com/docker/docker/pull/19439))
+ Docker learned to send logs to Google Cloud via the new `gcplogs` logging driver ([#18766](https://github.com/docker/docker/pull/18766))
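RFC 5424, which the syslog driver now emits, frames every message as `<PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG`, where `PRI = facility * 8 + severity`. A minimal formatter sketch (not the driver's actual code; it leaves MSGID and structured data as the nil value `-`):

```python
def rfc5424_line(facility, severity, timestamp, hostname, app, procid, msg):
    """Format one RFC 5424 syslog line; MSGID and STRUCTURED-DATA are '-'."""
    pri = facility * 8 + severity  # e.g. facility 1 (user), severity 6 (info) -> 14
    return "<%d>1 %s %s %s %s - - %s" % (pri, timestamp, hostname, app, procid, msg)
```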
### Misc

+ When saving linked images together with `docker save`, a subsequent `docker load` will correctly restore their parent/child relationship ([#21385](https://github.com/docker/docker/pull/21385))
+ Support for building the Docker CLI for OpenBSD was added ([#21325](https://github.com/docker/docker/pull/21325))
+ Labels can now be applied at network, volume and image creation ([#21270](https://github.com/docker/docker/pull/21270))
* The `dockremap` user is now created as a system user ([#21266](https://github.com/docker/docker/pull/21266))
- Fix a few response body leaks ([#21258](https://github.com/docker/docker/pull/21258))
- Docker, when run as a service with systemd, will now properly manage its processes' cgroups ([#20633](https://github.com/docker/docker/pull/20633))
* `docker info` now reports the value of cgroup KernelMemory or emits a warning if it is not supported ([#20863](https://github.com/docker/docker/pull/20863))
* `docker info` now also reports the cgroup driver in use ([#20388](https://github.com/docker/docker/pull/20388))
* Docker completion is now available on PowerShell ([#19894](https://github.com/docker/docker/pull/19894))
* `dockerinit` is no more ([#19490](https://github.com/docker/docker/pull/19490), [#19851](https://github.com/docker/docker/pull/19851))
+ Support for building Docker on arm64 was added ([#19013](https://github.com/docker/docker/pull/19013))
+ Experimental support for building docker.exe in a native Windows Docker installation ([#18348](https://github.com/docker/docker/pull/18348))
### Networking

- Fix panic if a node is forcibly removed from the cluster ([#21671](https://github.com/docker/docker/pull/21671))
- Fix "error creating vxlan interface" when starting a container in a Swarm cluster ([#21671](https://github.com/docker/docker/pull/21671))
* `docker network inspect` will now report all endpoints whether they have an active container or not ([#21160](https://github.com/docker/docker/pull/21160))
+ Experimental support for the MacVlan and IPVlan network drivers has been added ([#21122](https://github.com/docker/docker/pull/21122))
* Output of `docker network ls` is now sorted by network name ([#20383](https://github.com/docker/docker/pull/20383))
- Fix a bug where Docker would allow a network to be created with the reserved `default` name ([#19431](https://github.com/docker/docker/pull/19431))
* `docker network inspect` returns whether a network is internal or not ([#19357](https://github.com/docker/docker/pull/19357))
+ Control IPv6 via an explicit option when creating a network (`docker network create --ipv6`). This shows up as a new `EnableIPv6` field in `docker network inspect` ([#17513](https://github.com/docker/docker/pull/17513))
* Support for AAAA records (i.e. IPv6 service discovery) in the embedded DNS server ([#21396](https://github.com/docker/docker/pull/21396))
- Fix to not forward docker domain IPv6 queries to external servers ([#21396](https://github.com/docker/docker/pull/21396))
* Multiple A/AAAA records from the embedded DNS server for DNS round robin ([#21019](https://github.com/docker/docker/pull/21019))
- Fix endpoint count inconsistency after an ungraceful daemon restart ([#21261](https://github.com/docker/docker/pull/21261))
- Move the ownership of exposed ports and port-mapping options from Endpoint to Sandbox ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed a bug which prevented docker reload when the host is configured with ipv6.disable=1 ([#21019](https://github.com/docker/docker/pull/21019))
- Added inbuilt nil IPAM driver ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed a bug in iptables.Exists() logic ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed a veth interface leak when using overlay networks ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed a bug which prevented docker reload after a network delete during shutdown ([#20214](https://github.com/docker/docker/pull/20214))
- Make sure iptables chains are recreated on firewalld reload ([#20419](https://github.com/docker/docker/pull/20419))
- Allow passing the global datastore during config reload ([#20419](https://github.com/docker/docker/pull/20419))
- For anonymous containers, use the alias name for IP-to-name mapping, i.e. the DNS PTR record ([#21019](https://github.com/docker/docker/pull/21019))
- Fix a panic when deleting an entry from the /etc/hosts file ([#21019](https://github.com/docker/docker/pull/21019))
- Source the forwarded DNS queries from the container's network namespace ([#21019](https://github.com/docker/docker/pull/21019))
- Fix to retain the network internal mode config for bridge networks on daemon reload ([#21780](https://github.com/docker/docker/pull/21780))
- Fix to retain IPAM driver option configs on daemon reload ([#21914](https://github.com/docker/docker/pull/21914))
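DNS round robin, as served by the embedded DNS server above, simply means returning the full record set but rotating which address comes first on each query, so clients spread connections across replicas. A simplified sketch of that rotation (illustrative, not libnetwork's code):

```python
from collections import deque

class RoundRobinRecords:
    """Rotate A/AAAA records so successive answers lead with a different address."""

    def __init__(self, addresses):
        self._records = deque(addresses)

    def answer(self):
        result = list(self._records)
        self._records.rotate(-1)  # next query starts with the following record
        return result
```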
### Plugins

- Fix a file descriptor leak that would occur every time plugins were enumerated ([#20686](https://github.com/docker/docker/pull/20686))
- Fix an issue where the Authz plugin would corrupt the payload body when faced with a large amount of data ([#20602](https://github.com/docker/docker/pull/20602))
### Runtime

- Fix a panic that could occur when cleaning up after a container started with invalid parameters ([#21716](https://github.com/docker/docker/pull/21716))
- Fix a race with event timers stopping early ([#21692](https://github.com/docker/docker/pull/21692))
- Fix race conditions in the layer store, potentially corrupting the map and crashing the process ([#21677](https://github.com/docker/docker/pull/21677))
- Un-deprecate auto-creation of host directories for mounts. This feature was marked deprecated in Docker 1.9, but was decided to be too much of a backward-incompatible change, so it was decided to keep the feature ([#21666](https://github.com/docker/docker/pull/21666))
+ It is now possible for containers to share the NET and IPC namespaces when `userns` is enabled ([#21383](https://github.com/docker/docker/pull/21383))
+ `docker inspect <image-id>` will now expose the rootfs layers ([#21370](https://github.com/docker/docker/pull/21370))
+ Docker on Windows gained a minimal `top` implementation ([#21354](https://github.com/docker/docker/pull/21354))
* Docker learned to report the faulty exe when a container cannot be started due to its condition ([#21345](https://github.com/docker/docker/pull/21345))
* Docker with device mapper will now refuse to run if `udev sync` is not available ([#21097](https://github.com/docker/docker/pull/21097))
- Fix a bug where Docker would not validate the config file upon configuration reload ([#21089](https://github.com/docker/docker/pull/21089))
- Fix a hang that would happen on attach if the initial start failed ([#21048](https://github.com/docker/docker/pull/21048))
- Fix an issue where registry service options in the daemon configuration file were not properly taken into account ([#21045](https://github.com/docker/docker/pull/21045))
- Fix a race between the exec and resize operations ([#21022](https://github.com/docker/docker/pull/21022))
- Fix an issue where nanoseconds were not correctly taken into account when filtering Docker events ([#21013](https://github.com/docker/docker/pull/21013))
- Fix the handling of Docker commands when passed a 64-byte id ([#21002](https://github.com/docker/docker/pull/21002))
* Docker will now return a `204` (i.e. http.StatusNoContent) code when it successfully deletes a network ([#20977](https://github.com/docker/docker/pull/20977))
- Fix a bug where the daemon would wait indefinitely in case the process it was about to kill had already exited on its own ([#20967](https://github.com/docker/docker/pull/20967))
* The devmapper driver learned the `dm.min_free_space` option. If the mapped device's free space reaches the passed value, new device creation will be prohibited ([#20786](https://github.com/docker/docker/pull/20786))
+ Docker can now prevent processes in containers from gaining new privileges via the `--security-opt=no-new-privileges` flag ([#20727](https://github.com/docker/docker/pull/20727))
- Starting a container with the `--device` option will now correctly resolve symlinks ([#20684](https://github.com/docker/docker/pull/20684))
+ Docker now relies on [`containerd`](https://github.com/docker/containerd) and [`runc`](https://github.com/opencontainers/runc) to spawn containers ([#20662](https://github.com/docker/docker/pull/20662))
- Fix docker configuration reloading to only alter values present in the given config file ([#20604](https://github.com/docker/docker/pull/20604))
+ Docker now allows setting a container hostname via the `--hostname` flag when `--net=host` ([#20177](https://github.com/docker/docker/pull/20177))
+ Docker now allows executing privileged containers while running with `--userns-remap` if both `--privileged` and the new `--userns=host` flag are specified ([#20111](https://github.com/docker/docker/pull/20111))
- Fix Docker not cleaning up old containers correctly upon restarting after a crash ([#19679](https://github.com/docker/docker/pull/19679))
* Docker will now error out if it doesn't recognize a configuration key within the config file ([#19517](https://github.com/docker/docker/pull/19517))
- Fix container loading, on daemon startup, when containers depend on a plugin running within a container ([#19500](https://github.com/docker/docker/pull/19500))
* `docker update` learned how to change a container's restart policy ([#19116](https://github.com/docker/docker/pull/19116))
* `docker inspect` now also returns a new `State` field containing the container state in a human-readable way, i.e. one of `created`, `restarting`, `running`, `paused`, `exited` or `dead` ([#18966](https://github.com/docker/docker/pull/18966))
+ Docker learned to limit the number of active pids (i.e. processes) within a container via the `--pids-limit` flag. NOTE: This requires `CGROUP_PIDS=y` in the kernel configuration ([#18697](https://github.com/docker/docker/pull/18697))
- `docker load` now has a `--quiet` option to suppress the load output ([#20078](https://github.com/docker/docker/pull/20078))
- Fix a bug in neighbor discovery for IPv6 peers ([#20842](https://github.com/docker/docker/pull/20842))
- Fix a panic during cleanup if a container was started with invalid options ([#21802](https://github.com/docker/docker/pull/21802))
- Fix a situation where a container could not be stopped if the terminal was closed ([#21840](https://github.com/docker/docker/pull/21840))
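The `dm.min_free_space` option above can be thought of as a simple gate: before creating a new thin device, compare the pool's remaining space against the configured minimum and refuse when it is below the threshold. An illustrative check, not devmapper's actual implementation:

```python
def may_create_device(used_blocks, total_blocks, min_free_percent):
    """Return True if pool free space is at or above the configured minimum."""
    if total_blocks == 0:
        return False  # no pool, nothing to allocate from
    free_percent = 100.0 * (total_blocks - used_blocks) / total_blocks
    return free_percent >= min_free_percent
```

With `dm.min_free_space=10%`, a pool that is 85% full still allows device creation, while a pool that is 95% full does not.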
### Security

* Objects with the `pcp_pmcd_t` selinux type were given management access to `/var/lib/docker(/.*)?` ([#21370](https://github.com/docker/docker/pull/21370))
* `restart_syscall`, `copy_file_range` and `mlock2` joined the list of allowed calls in the default seccomp profile ([#21117](https://github.com/docker/docker/pull/21117), [#21262](https://github.com/docker/docker/pull/21262))
* `send`, `recv` and `x32` were added to the list of allowed syscalls and archs in the default seccomp profile ([#19432](https://github.com/docker/docker/pull/19432))
* Docker Content Trust now requests the server to perform snapshot signing ([#21046](https://github.com/docker/docker/pull/21046))
* Support for using YubiKeys for Content Trust signing has been moved out of experimental ([#21591](https://github.com/docker/docker/pull/21591))
### Volumes

* Output of `docker volume ls` is now sorted by volume name ([#20389](https://github.com/docker/docker/pull/20389))
* Local volumes can now accept options similar to the unix `mount` tool ([#20262](https://github.com/docker/docker/pull/20262))
- Fix an issue where a one-letter directory name could not be used as the source for volumes ([#21106](https://github.com/docker/docker/pull/21106))
+ `docker run -v` now accepts a new flag, `nocopy`. This tells the runtime not to copy the container path content into the volume (which is the default behavior) ([#21223](https://github.com/docker/docker/pull/21223))
## 1.10.3 (2016-03-10)

### Runtime
@ -25,9 +180,9 @@ be found.
### Security

- Fix linux32 emulation to fail during docker build [#20672](https://github.com/docker/docker/pull/20672)
  It was due to the `personality` syscall being blocked by the default seccomp profile.
- Fix Oracle XE 10g failing to start in a container [#20981](https://github.com/docker/docker/pull/20981)
  It was due to the `ipc` syscall being blocked by the default seccomp profile.
- Fix user namespaces not working on Linux From Scratch [#20685](https://github.com/docker/docker/pull/20685)
- Fix issue preventing the daemon from starting if userns is enabled and the `subuid` or `subgid` files contain comments [#20725](https://github.com/docker/docker/pull/20725)
@ -113,7 +268,7 @@ Engine 1.10 migrator can be found on Docker Hub: https://hub.docker.com/r/docker
+ Add `--tmpfs` flag to `docker run` to create a tmpfs mount in a container [#13587](https://github.com/docker/docker/pull/13587)
+ Add `--format` flag to `docker images` command [#17692](https://github.com/docker/docker/pull/17692)
+ Allow setting daemon configuration in a file and hot-reloading it with the `SIGHUP` signal [#18587](https://github.com/docker/docker/pull/18587)
+ Updated docker events to include more meta-data and event types [#18888](https://github.com/docker/docker/pull/18888)
  This change is backward compatible in the API, but not on the CLI.
+ Add `--blkio-weight-device` flag to `docker run` [#13959](https://github.com/docker/docker/pull/13959)
+ Add `--device-read-bps` and `--device-write-bps` flags to `docker run` [#14466](https://github.com/docker/docker/pull/14466)
@ -148,18 +303,18 @@ Engine 1.10 migrator can be found on Docker Hub: https://hub.docker.com/r/docker
+ Add support for custom seccomp profiles in `--security-opt` [#17989](https://github.com/docker/docker/pull/17989)
+ Add default seccomp profile [#18780](https://github.com/docker/docker/pull/18780)
+ Add `--authorization-plugin` flag to `daemon` to customize ACLs [#15365](https://github.com/docker/docker/pull/15365)
+ Docker Content Trust now supports the ability to read and write user delegations [#18887](https://github.com/docker/docker/pull/18887)
  This is an optional, opt-in feature that requires the explicit use of the Notary command-line utility in order to be enabled.
  Enabling delegation support in a specific repository will break the ability of Docker 1.9 and 1.8 to pull from that repository, if content trust is enabled.
* Allow SELinux to run in a container when using the BTRFS storage driver [#16452](https://github.com/docker/docker/pull/16452)

### Distribution

* Use content-addressable storage for images and layers [#17924](https://github.com/docker/docker/pull/17924)
  Note that a migration is performed the first time docker is run; it can take a significant amount of time depending on the number of images and containers present.
  Images no longer depend on the parent chain but contain a list of layer references.
  `docker load`/`docker save` tarballs now also contain content-addressable image configurations.
  For more information: https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration
* Add support for the new [manifest format ("schema2")](https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-2.md) [#18785](https://github.com/docker/docker/pull/18785)
* Lots of improvements for push and pull: better performance, retries on failed downloads, cancelling on client disconnect [#18353](https://github.com/docker/docker/pull/18353), [#18418](https://github.com/docker/docker/pull/18418), [#19109](https://github.com/docker/docker/pull/19109)
* Limit v1 protocol fallbacks [#18590](https://github.com/docker/docker/pull/18590)
@ -201,8 +356,8 @@ Engine 1.10 migrator can be found on Docker Hub: https://hub.docker.com/r/docker
### Volumes

+ Add support to set the mount propagation mode for a volume [#17034](https://github.com/docker/docker/pull/17034)
* Add `ls` and `inspect` endpoints to the volume plugin API [#16534](https://github.com/docker/docker/pull/16534)
  Existing plugins need to make use of these new APIs to satisfy users' expectations.
  For that, please use the new MIME type `application/vnd.docker.plugins.v1.2+json` [#19549](https://github.com/docker/docker/pull/19549)
- Fix data not being copied to named volumes [#19175](https://github.com/docker/docker/pull/19175)
- Fix issues preventing volume drivers from being containerized [#19500](https://github.com/docker/docker/pull/19500)
@ -1531,7 +1686,7 @@ With the ongoing changes to the networking and execution subsystems of docker te
+ Add -rm to docker run for removing a container on exit
- Remove error messages which are not actually errors
- Fix `docker rm` with volumes
- Fix some error cases where an HTTP body might not be closed
- Fix panic with wrong dockercfg file
- Fix the attach behavior with -i
* Record termination time in state.
@ -152,9 +152,9 @@ However, there might be a way to implement that feature *on top of* Docker.
<a href="https://groups.google.com/forum/#!forum/docker-user" target="_blank">Docker-user</a>
is for people using Docker containers.
The <a href="https://groups.google.com/forum/#!forum/docker-dev" target="_blank">docker-dev</a>
group is for contributors and other people contributing to the Docker project.
You can join them without a Google account by sending an email to
<a href="mailto:docker-dev+subscribe@googlegroups.com">docker-dev+subscribe@googlegroups.com</a>.
After receiving the join-request message, you can simply reply to that to confirm the subscription.
</td>
</tr>
Dockerfile (31 changes)
@ -33,14 +33,20 @@ RUN echo deb http://ppa.launchpad.net/zfs-native/stable/ubuntu trusty main > /et
# add llvm repo
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 6084F3CF814B57C1CF12EFD515CF4D18AF4F7421 \
	|| apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 6084F3CF814B57C1CF12EFD515CF4D18AF4F7421
RUN echo deb http://llvm.org/apt/jessie/ llvm-toolchain-jessie-3.8 main > /etc/apt/sources.list.d/llvm.list

# allow replacing httpredir mirror
ARG APT_MIRROR=httpredir.debian.org
RUN sed -i s/httpredir.debian.org/$APT_MIRROR/g /etc/apt/sources.list

# Packaged dependencies
RUN apt-get update && apt-get install -y \
	apparmor \
	apt-utils \
	aufs-tools \
	automake \
	bash-completion \
	bsdmainutils \
	btrfs-tools \
	build-essential \
	clang-3.8 \
@ -64,12 +70,13 @@ RUN apt-get update && apt-get install -y \
	python-mock \
	python-pip \
	python-websocket \
	s3cmd=1.5.0* \
	ubuntu-zfs \
	xfsprogs \
	libzfs-dev \
	tar \
	zip \
	--no-install-recommends \
	&& pip install awscli==1.10.15 \
	&& ln -snf /usr/bin/clang-3.8 /usr/local/bin/clang \
	&& ln -snf /usr/bin/clang++-3.8 /usr/local/bin/clang++
@ -96,7 +103,7 @@ RUN set -x \
	&& export OSXCROSS_PATH="/osxcross" \
	&& git clone https://github.com/tpoechtrager/osxcross.git $OSXCROSS_PATH \
	&& ( cd $OSXCROSS_PATH && git checkout -q $OSX_CROSS_COMMIT) \
	&& curl -sSL https://s3.dockerproject.org/darwin/v2/${OSX_SDK}.tar.xz -o "${OSXCROSS_PATH}/tarballs/${OSX_SDK}.tar.xz" \
	&& UNATTENDED=yes OSX_VERSION_MIN=10.6 ${OSXCROSS_PATH}/build.sh
ENV PATH /osxcross/target/bin:$PATH
@ -119,7 +126,7 @@ RUN set -x \
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
ENV GO_VERSION 1.5.4
RUN curl -fsSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" \
	| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH
@ -170,12 +177,13 @@ RUN set -x \
# Install notary server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
	&& export GO15VENDOREXPERIMENT=1 \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary github.com/docker/notary/cmd/notary \
	&& rm -rf "$GOPATH"
@ -186,13 +194,6 @@ RUN git clone https://github.com/docker/docker-py.git /docker-py \
	&& git checkout -q $DOCKER_PY_COMMIT \
	&& pip install -r test-requirements.txt

# Set user.email so crosbymichael's in-container merge commits go smoothly
RUN git config --global user.email 'docker-dummy@example.com'
@ -247,7 +248,7 @@ RUN set -x \
	&& rm -rf "$GOPATH"

# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
@ -257,7 +258,7 @@ RUN set -x \
	&& cp runc /usr/local/bin/docker-runc

# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
@ -96,7 +96,7 @@ RUN set -x \
# We don't have official binary tarballs for ARM64, either for Go or bootstrap,
# so we use the official armv6 released binaries as a GOROOT_BOOTSTRAP, and
# build Go from source code.
ENV GO_VERSION 1.5.4
RUN mkdir /usr/src/go && curl -fsSL https://storage.googleapis.com/golang/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
	&& cd /usr/src/go/src \
	&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash
@ -119,12 +119,13 @@ RUN set -x \
# Install notary server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
	&& export GO15VENDOREXPERIMENT=1 \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary github.com/docker/notary/cmd/notary \
	&& rm -rf "$GOPATH"
@ -135,13 +136,6 @@ RUN git clone https://github.com/docker/docker-py.git /docker-py \
	&& git checkout -q $DOCKER_PY_COMMIT \
	&& pip install -r test-requirements.txt

# Set user.email so crosbymichael's in-container merge commits go smoothly
RUN git config --global user.email 'docker-dummy@example.com'
@ -187,7 +181,7 @@ RUN set -x \
	&& rm -rf "$GOPATH"

# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
@ -197,7 +191,7 @@ RUN set -x \
	&& cp runc /usr/local/bin/docker-runc

# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
@ -65,8 +65,10 @@ RUN cd /usr/local/lvm2 \
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL

# Install Go
# TODO Update to 1.5.4 once available, or build from source, as these builds
# are marked "end of life", see http://dave.cheney.net/unofficial-arm-tarballs
ENV GO_VERSION 1.5.3
RUN curl -fsSL "http://dave.cheney.net/paste/go${GO_VERSION}.linux-arm.tar.gz" \
	| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor
@ -128,12 +130,13 @@ RUN set -x \
# Install notary server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
	&& export GO15VENDOREXPERIMENT=1 \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary github.com/docker/notary/cmd/notary \
	&& rm -rf "$GOPATH"
@ -197,7 +200,7 @@ RUN set -x \
	&& rm -rf "$GOPATH"

# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
@ -207,7 +210,7 @@ RUN set -x \
	&& cp runc /usr/local/bin/docker-runc

# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
@ -74,7 +74,7 @@ WORKDIR /go/src/github.com/docker/docker
ENV DOCKER_BUILDTAGS apparmor seccomp selinux

# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
@ -84,7 +84,7 @@ RUN set -x \
	&& cp runc /usr/local/bin/docker-runc

# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
@ -74,10 +74,10 @@ RUN cd /usr/local/lvm2 \
# TODO install Go, using gccgo as GOROOT_BOOTSTRAP (Go 1.5+ supports ppc64le properly)
# possibly a ppc64le/golang image?

## BUILD GOLANG
ENV GO_VERSION 1.5.4
ENV GO_DOWNLOAD_URL https://golang.org/dl/go${GO_VERSION}.src.tar.gz
ENV GO_DOWNLOAD_SHA256 002acabce7ddc140d0d55891f9d4fcfbdd806b9332fb8b110c91bc91afb0bc93
ENV GOROOT_BOOTSTRAP /usr/local

RUN curl -fsSL "$GO_DOWNLOAD_URL" -o golang.tar.gz \
@ -129,12 +129,13 @@ RUN set -x \
# Install notary and notary-server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
	&& export GO15VENDOREXPERIMENT=1 \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary github.com/docker/notary/cmd/notary \
	&& rm -rf "$GOPATH"
@ -198,17 +199,17 @@ RUN set -x \
	&& rm -rf "$GOPATH"

# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
	&& cd "$GOPATH/src/github.com/opencontainers/runc" \
	&& git checkout -q "$RUNC_COMMIT" \
	&& make static BUILDTAGS="apparmor selinux" \
	&& cp runc /usr/local/bin/docker-runc

# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
@ -110,11 +110,12 @@ RUN set -x \
# Install notary server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
	&& export GO15VENDOREXPERIMENT=1 \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
	&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
		go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
	&& rm -rf "$GOPATH"

# Get the "docker-py" source so we can run their integration tests
@ -177,7 +178,7 @@ RUN set -x \
	&& rm -rf "$GOPATH"

# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
@ -187,7 +188,7 @@ RUN set -x \
	&& cp runc /usr/local/bin/docker-runc

# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
@ -30,7 +30,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
	&& rm -rf /var/lib/apt/lists/*

# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
@ -40,7 +40,7 @@ RUN set -x \
	&& cp runc /usr/local/bin/docker-runc

# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
	&& export GOPATH="$(mktemp -d)" \
	&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
@ -38,9 +38,9 @@
FROM windowsservercore

# Environment variable notes:
# - GO_VERSION must be consistent with the 'Dockerfile' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=1.5.4 \
    GIT_LOCATION=https://github.com/git-for-windows/git/releases/download/v2.7.2.windows.1/Git-2.7.2-64-bit.exe \
    RSRC_COMMIT=ba14da1f827188454a4591717fff29999010887f \
    GOPATH=C:/go;C:/go/src/github.com/docker/docker/vendor \
@ -63,7 +63,7 @@ RUN \
	Download-File %GIT_LOCATION% gitsetup.exe; \
	\
	Write-Host INFO: Downloading go...; \
	Download-File https://storage.googleapis.com/golang/go%GO_VERSION%.windows-amd64.msi go.msi; \
	\
	Write-Host INFO: Downloading compiler 1 of 3...; \
	Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/gcc.zip gcc.zip; \
Makefile (3 changes)
@ -69,10 +69,11 @@ bundles:
cross: build
	$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary binary cross

win: build
	$(DOCKER_RUN_DOCKER) hack/make.sh win

tgz: build
	$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary binary cross tgz

deb: build
	$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary build-deb
VERSION (2 changes)
@ -1 +1 @@
1.11.0
@ -41,7 +41,7 @@ func (cli *DockerCli) CmdLoad(args ...string) error {
	}
	defer response.Body.Close()

	if response.Body != nil && response.JSON {
		return jsonmessage.DisplayJSONMessagesStream(response.Body, cli.out, cli.outFd, cli.isTerminalOut, nil)
	}
@ -252,7 +252,7 @@ func (cli *DockerCli) trustedReference(ref reference.NamedTagged) (reference.Can
	// Resolve the Auth config relevant for this server
	authConfig := cli.resolveAuthConfig(repoInfo.Index)

	notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig, "pull")
	if err != nil {
		fmt.Fprintf(cli.out, "Error establishing connection to trust repository: %s\n", err)
		return nil, err
@ -269,7 +269,17 @@ func (s *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter,
		return err
	}
	quiet := httputils.BoolValueOrDefault(r, "quiet", true)

	if !quiet {
		w.Header().Set("Content-Type", "application/json")

		output := ioutils.NewWriteFlusher(w)
		defer output.Close()
		if err := s.backend.LoadImage(r.Body, output, quiet); err != nil {
			output.Write(streamformatter.NewJSONStreamFormatter().FormatError(err))
		}
		return nil
	}
	return s.backend.LoadImage(r.Body, w, quiet)
}
@ -55,11 +55,10 @@ func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *
		return err
	}

	var timeout <-chan time.Time
	if until > 0 || untilNano > 0 {
		dur := time.Unix(until, untilNano).Sub(time.Now())
		timeout = time.NewTimer(dur).C
	}

	ef, err := filters.FromParam(r.Form.Get("filters"))
@ -99,7 +98,7 @@ func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *
			if err := enc.Encode(jev); err != nil {
				return err
			}
		case <-timeout:
			return nil
		case <-closeNotify:
			logrus.Debug("Client disconnected, stop sending events")
@ -236,6 +236,17 @@ func (b *Builder) build(config *types.ImageBuildOptions, context builder.Context
		}
		return "", err
	}

	// Commit the layer when there is only one child in
	// the Dockerfile, i.e. only the `FROM` tag and
	// build labels; otherwise, the new image won't be
	// labeled properly.
	// Commit here, so the ID of the final image is reported
	// properly.
	if len(b.dockerfile.Children) == 1 && len(b.options.Labels) > 0 {
		b.commit("", b.runConfig.Cmd, "")
	}

	shortImgID = stringid.TruncateID(b.image)
	fmt.Fprintf(b.Stdout, " ---> %s\n", shortImgID)
	if b.options.Remove {
@ -413,7 +413,20 @@ func (b *Builder) processImageFrom(img builder.Image) error {
	b.image = img.ImageID()

	if img.RunConfig() != nil {
		imgConfig := *img.RunConfig()
		// inherit runConfig labels from the current
		// state if they've been set already.
		// Ensures that images with only a FROM
		// get the labels populated properly.
		if b.runConfig.Labels != nil {
			if imgConfig.Labels == nil {
				imgConfig.Labels = make(map[string]string)
			}
			for k, v := range b.runConfig.Labels {
				imgConfig.Labels[k] = v
			}
		}
		b.runConfig = &imgConfig
	}
@ -909,6 +909,7 @@ func (container *Container) FullHostname() string {
func (container *Container) RestartManager(reset bool) restartmanager.RestartManager {
	if reset {
		container.RestartCount = 0
		container.restartManager = nil
	}
	if container.restartManager == nil {
		container.restartManager = restartmanager.New(container.HostConfig.RestartPolicy)
@ -6,10 +6,11 @@ FROM debian:jessie
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev pkg-config libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

ENV AUTO_GOPATH 1

ENV DOCKER_BUILDTAGS apparmor selinux
ENV RUNC_BUILDTAGS apparmor selinux
@ -6,10 +6,11 @@ FROM debian:stretch
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev libsqlite3-dev pkg-config libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

ENV AUTO_GOPATH 1

ENV DOCKER_BUILDTAGS apparmor seccomp selinux
ENV RUNC_BUILDTAGS apparmor seccomp selinux
@ -7,10 +7,11 @@ FROM debian:wheezy-backports
RUN apt-get update && apt-get install -y -t wheezy-backports btrfs-tools --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y apparmor bash-completion build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev pkg-config --no-install-recommends && rm -rf /var/lib/apt/lists/*

ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

ENV AUTO_GOPATH 1

ENV DOCKER_BUILDTAGS apparmor selinux
ENV RUNC_BUILDTAGS apparmor selinux
@@ -42,6 +42,7 @@ for version in "${versions[@]}"; do
 echo >> "$version/Dockerfile"
 
 extraBuildTags=
+runcBuildTags=
 
 # this list is sorted alphabetically; please keep it that way
 packages=(

@@ -64,7 +65,7 @@ for version in "${versions[@]}"; do
 # packaging for "sd-journal.h" and libraries varies
 case "$suite" in
 precise|wheezy) ;;
-sid|stretch|wily) packages+=( libsystemd-dev );;
+sid|stretch|wily|xenial) packages+=( libsystemd-dev );;
 *) packages+=( libsystemd-journal-dev );;
 esac
 

@@ -73,9 +74,11 @@ for version in "${versions[@]}"; do
 case "$suite" in
 precise|wheezy|jessie|trusty)
 packages=( "${packages[@]/libseccomp-dev}" )
+runcBuildTags="apparmor selinux"
 ;;
 *)
 extraBuildTags+=' seccomp'
+runcBuildTags="apparmor seccomp selinux"
 ;;
 esac
 

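The `packages=( "${packages[@]/libseccomp-dev}" )` line in the hunk above drops the seccomp dev package by pattern substitution. One subtlety worth knowing, shown here as a standalone sketch with example package names: the substitution blanks the matching element rather than deleting it, and only a later unquoted expansion makes the empty element disappear.

```shell
#!/bin/bash
# Sketch of the "${array[@]/pattern}" removal pattern used in generate.sh.
# Package names here are illustrative only.
packages=( btrfs-progs-devel libseccomp-dev sqlite-devel )

# The matching element is replaced by an empty string, NOT removed:
packages=( "${packages[@]/libseccomp-dev}" )
echo "count=${#packages[@]}"      # still 3 elements; one is now empty

# An unquoted re-expansion collapses the empty element away:
filtered=( ${packages[@]} )
echo "count=${#filtered[@]}"      # 2
```

This is harmless in the generator because the package list is ultimately joined into a single RUN line, where an empty word simply vanishes.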
@@ -124,4 +127,5 @@ for version in "${versions[@]}"; do
 buildTags=$( echo "apparmor selinux $extraBuildTags" | xargs -n1 | sort -n | tr '\n' ' ' | sed -e 's/[[:space:]]*$//' )
 
 echo "ENV DOCKER_BUILDTAGS $buildTags" >> "$version/Dockerfile"
+echo "ENV RUNC_BUILDTAGS $runcBuildTags" >> "$version/Dockerfile"
 done

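The `buildTags` pipeline above normalizes the tag list: `xargs -n1` puts one word per line, `sort` orders them, `tr` rejoins them with spaces, and `sed` strips the trailing space. Run standalone on a sample tag string (a sketch; assumes GNU coreutils, where `sort -n` falls back to byte order for equal numeric keys) it behaves like this:

```shell
#!/bin/bash
# The normalization pipeline from generate.sh, applied to a sample tag list.
extraBuildTags=' seccomp'
buildTags=$( echo "apparmor selinux $extraBuildTags" | xargs -n1 | sort -n | tr '\n' ' ' | sed -e 's/[[:space:]]*$//' )
echo "$buildTags"   # apparmor seccomp selinux
```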
@@ -6,10 +6,11 @@ FROM ubuntu:precise
 
 RUN apt-get update && apt-get install -y apparmor bash-completion build-essential curl ca-certificates debhelper dh-apparmor git libapparmor-dev libltdl-dev libsqlite3-dev pkg-config --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS apparmor exclude_graphdriver_btrfs exclude_graphdriver_devicemapper selinux
+ENV RUNC_BUILDTAGS apparmor selinux

@@ -6,10 +6,11 @@ FROM ubuntu:trusty
 
 RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev pkg-config libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS apparmor selinux
+ENV RUNC_BUILDTAGS apparmor selinux

@@ -6,10 +6,11 @@ FROM ubuntu:wily
 
 RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev libsqlite3-dev pkg-config libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS apparmor seccomp selinux
+ENV RUNC_BUILDTAGS apparmor seccomp selinux

contrib/builder/deb/amd64/ubuntu-xenial/Dockerfile (Normal file, 16 lines)

@@ -0,0 +1,16 @@
+#
+# THIS FILE IS AUTOGENERATED; SEE "contrib/builder/deb/amd64/generate.sh"!
+#
+
+FROM ubuntu:xenial
+
+RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev libsqlite3-dev pkg-config libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
+
+ENV GO_VERSION 1.5.4
+RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
+ENV PATH $PATH:/usr/local/go/bin
+
+ENV AUTO_GOPATH 1
+
+ENV DOCKER_BUILDTAGS apparmor seccomp selinux
+ENV RUNC_BUILDTAGS apparmor seccomp selinux

@@ -6,13 +6,14 @@ FROM centos:7
 
 RUN yum groupinstall -y "Development Tools"
 RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
-RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS selinux
+ENV RUNC_BUILDTAGS selinux
 

@@ -5,13 +5,14 @@
 FROM fedora:22
 
 RUN dnf install -y @development-tools fedora-packager
-RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar
+RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS seccomp selinux
+ENV RUNC_BUILDTAGS seccomp selinux
 

@@ -5,13 +5,14 @@
 FROM fedora:23
 
 RUN dnf install -y @development-tools fedora-packager
-RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar
+RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS seccomp selinux
+ENV RUNC_BUILDTAGS seccomp selinux
 

@@ -39,6 +39,7 @@ for version in "${versions[@]}"; do
 echo >> "$version/Dockerfile"
 
 extraBuildTags=
+runcBuildTags=
 
 case "$from" in
 centos:*)

@@ -77,6 +78,7 @@ for version in "${versions[@]}"; do
 sqlite-devel # for "sqlite3.h"
 systemd-devel # for "sd-journal.h" and libraries
 tar # older versions of dev-tools do not have tar
+git # required for containerd and runc clone
 )
 
 case "$from" in

@@ -98,9 +100,11 @@ for version in "${versions[@]}"; do
 case "$from" in
 opensuse:*|oraclelinux:*|centos:7)
 packages=( "${packages[@]/libseccomp-devel}" )
+runcBuildTags="selinux"
 ;;
 *)
 extraBuildTags+=' seccomp'
+runcBuildTags="seccomp selinux"
 ;;
 esac
 

@@ -148,6 +152,7 @@ for version in "${versions[@]}"; do
 buildTags=$( echo "selinux $extraBuildTags" | xargs -n1 | sort -n | tr '\n' ' ' | sed -e 's/[[:space:]]*$//' )
 
 echo "ENV DOCKER_BUILDTAGS $buildTags" >> "$version/Dockerfile"
+echo "ENV RUNC_BUILDTAGS $runcBuildTags" >> "$version/Dockerfile"
 echo >> "$version/Dockerfile"
 
 case "$from" in

@@ -5,13 +5,14 @@
 FROM opensuse:13.2
 
 RUN zypper --non-interactive install ca-certificates* curl gzip rpm-build
-RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar systemd-rpm-macros
+RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git systemd-rpm-macros
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS selinux
+ENV RUNC_BUILDTAGS selinux
 

@@ -5,18 +5,19 @@
 FROM oraclelinux:6
 
 RUN yum groupinstall -y "Development Tools"
-RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel tar
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel tar git
 
 RUN yum install -y yum-utils && curl -o /etc/yum.repos.d/public-yum-ol6.repo http://yum.oracle.com/public-yum-ol6.repo && yum-config-manager -q --enable ol6_UEKR4
 RUN yum install -y kernel-uek-devel-4.1.12-32.el6uek
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS selinux
+ENV RUNC_BUILDTAGS selinux
 
 ENV CGO_CPPFLAGS -D__EXPORTED_HEADERS__ \
 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/arch/x86/include/generated/uapi \

@@ -5,13 +5,14 @@
 FROM oraclelinux:7
 
 RUN yum groupinstall -y "Development Tools"
-RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar
+RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git
 
-ENV GO_VERSION 1.5.3
+ENV GO_VERSION 1.5.4
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
 ENV PATH $PATH:/usr/local/go/bin
 
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS selinux
+ENV RUNC_BUILDTAGS selinux
 

@@ -410,12 +410,12 @@ __docker_complete_log_drivers() {
 __docker_complete_log_options() {
 # see docs/reference/logging/index.md
 local awslogs_options="awslogs-region awslogs-group awslogs-stream"
-local fluentd_options="env fluentd-address labels tag"
+local fluentd_options="env fluentd-address fluentd-async-connect fluentd-buffer-limit fluentd-retry-wait fluentd-max-retries labels tag"
 local gcplogs_options="env gcp-log-cmd gcp-project labels"
 local gelf_options="env gelf-address gelf-compression-level gelf-compression-type labels tag"
 local journald_options="env labels tag"
 local json_file_options="env labels max-file max-size"
-local syslog_options="syslog-address syslog-tls-ca-cert syslog-tls-cert syslog-tls-key syslog-tls-skip-verify syslog-facility tag"
+local syslog_options="syslog-address syslog-format syslog-tls-ca-cert syslog-tls-cert syslog-tls-key syslog-tls-skip-verify syslog-facility tag"
 local splunk_options="env labels splunk-caname splunk-capath splunk-index splunk-insecureskipverify splunk-source splunk-sourcetype splunk-token splunk-url tag"
 
 local all_options="$fluentd_options $gcplogs_options $gelf_options $journald_options $json_file_options $syslog_options $splunk_options"

@@ -459,6 +459,10 @@ __docker_complete_log_options() {
 __docker_complete_log_driver_options() {
 local key=$(__docker_map_key_of_current_option '--log-opt')
 case "$key" in
+fluentd-async-connect)
+COMPREPLY=( $( compgen -W "false true" -- "${cur##*=}" ) )
+return
+;;
 gelf-address)
 COMPREPLY=( $( compgen -W "udp" -S "://" -- "${cur##*=}" ) )
 __docker_nospace

@@ -503,6 +507,10 @@ __docker_complete_log_driver_options() {
 " -- "${cur##*=}" ) )
 return
 ;;
+syslog-format)
+COMPREPLY=( $( compgen -W "rfc3164 rfc5424" -- "${cur##*=}" ) )
+return
+;;
 syslog-tls-@(ca-cert|cert|key))
 _filedir
 return

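The `${cur##*=}` expansion used throughout these completion cases strips the longest prefix ending in `=`, so only the value currently being typed is matched against the candidate words. A minimal standalone illustration:

```shell
#!/bin/bash
# How "${cur##*=}" isolates the value after the final "=".
cur="--log-opt=syslog-format=rfc5"
value="${cur##*=}"   # longest match of *= is removed, leaving the last value
echo "$value"        # rfc5
```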
@@ -611,7 +619,7 @@ _docker_attach() {
 
 case "$cur" in
 -*)
-COMPREPLY=( $( compgen -W "--detach-keys --help --no-stdin --sig-proxy" -- "$cur" ) )
+COMPREPLY=( $( compgen -W "--detach-keys --help --no-stdin --sig-proxy=false" -- "$cur" ) )
 ;;
 *)
 local counter=$(__docker_pos_first_nonflag '--detach-keys')

@@ -633,6 +641,7 @@ _docker_build() {
 --cpu-quota
 --file -f
 --isolation
+--label
 --memory -m
 --memory-swap
 --shm-size

@@ -697,7 +706,7 @@ _docker_commit() {
 
 case "$cur" in
 -*)
-COMPREPLY=( $( compgen -W "--author -a --change -c --help --message -m --pause -p" -- "$cur" ) )
+COMPREPLY=( $( compgen -W "--author -a --change -c --help --message -m --pause=false -p=false" -- "$cur" ) )
 ;;
 *)
 local counter=$(__docker_pos_first_nonflag '--author|-a|--change|-c|--message|-m')

@@ -789,6 +798,7 @@ _docker_daemon() {
 --cluster-advertise
 --cluster-store
 --cluster-store-opt
+--containerd
 --default-gateway
 --default-gateway-v6
 --default-ulimit

@@ -865,7 +875,7 @@ _docker_daemon() {
 __docker_complete_log_drivers
 return
 ;;
---pidfile|-p|--tlscacert|--tlscert|--tlskey)
+--containerd|--pidfile|-p|--tlscacert|--tlscert|--tlskey)
 _filedir
 return
 ;;

@@ -881,6 +891,7 @@ _docker_daemon() {
 dm.fs
 dm.loopdatasize
 dm.loopmetadatasize
+dm.min_free_space
 dm.mkfsarg
 dm.mountopt
 dm.override_udev_sync_check

@@ -1071,7 +1082,7 @@ _docker_help() {
 _docker_history() {
 case "$cur" in
 -*)
-COMPREPLY=( $( compgen -W "--help --no-trunc --quiet -q" -- "$cur" ) )
+COMPREPLY=( $( compgen -W "--help --human=false -H=false --no-trunc --quiet -q" -- "$cur" ) )
 ;;
 *)
 local counter=$(__docker_pos_first_nonflag)

@@ -1211,7 +1222,7 @@ _docker_load() {
 
 case "$cur" in
 -*)
-COMPREPLY=( $( compgen -W "--help --input -i" -- "$cur" ) )
+COMPREPLY=( $( compgen -W "--help --input -i --quiet -q" -- "$cur" ) )
 ;;
 esac
 }

@@ -1320,11 +1331,14 @@ _docker_network_create() {
 COMPREPLY=( $(compgen -W "$plugins" -- "$cur") )
 return
 ;;
+--label)
+return
+;;
 esac
 
 case "$cur" in
 -*)
-COMPREPLY=( $( compgen -W "--aux-address --driver -d --gateway --help --internal --ip-range --ipam-driver --ipam-opt --ipv6 --opt -o --subnet" -- "$cur" ) )
+COMPREPLY=( $( compgen -W "--aux-address --driver -d --gateway --help --internal --ip-range --ipam-driver --ipam-opt --ipv6 --label --opt -o --subnet" -- "$cur" ) )
 ;;
 esac
 }

@@ -1476,6 +1490,11 @@ _docker_ps() {
 COMPREPLY=( $( compgen -W "created dead exited paused restarting running" -- "${cur##*=}" ) )
 return
 ;;
+volume)
+cur="${cur##*=}"
+__docker_complete_volumes
+return
+;;
 esac
 
 case "$prev" in

@@ -1483,7 +1502,7 @@ _docker_ps() {
 __docker_complete_containers_all
 ;;
 --filter|-f)
-COMPREPLY=( $( compgen -S = -W "ancestor exited id label name status" -- "$cur" ) )
+COMPREPLY=( $( compgen -S = -W "ancestor exited id label name status volume" -- "$cur" ) )
 __docker_nospace
 return
 ;;

@@ -1654,6 +1673,7 @@ _docker_run() {
 --tmpfs
 --ulimit
 --user -u
+--userns
 --uts
 --volume-driver
 --volumes-from

@@ -1690,6 +1710,24 @@ _docker_run() {
 __docker_complete_log_driver_options && return
 __docker_complete_restart && return
 
+local key=$(__docker_map_key_of_current_option '--security-opt')
+case "$key" in
+label)
+[[ $cur == *: ]] && return
+COMPREPLY=( $( compgen -W "user: role: type: level: disable" -- "${cur##*=}") )
+if [ "${COMPREPLY[*]}" != "disable" ] ; then
+__docker_nospace
+fi
+return
+;;
+seccomp)
+local cur=${cur##*=}
+_filedir
+COMPREPLY+=( $( compgen -W "unconfined" -- "$cur" ) )
+return
+;;
+esac
+
 case "$prev" in
 --add-host)
 case "$cur" in

@@ -1787,32 +1825,20 @@ _docker_run() {
 return
 ;;
 --security-opt)
-case "$cur" in
-label=*:*)
-;;
-label=*)
-local cur=${cur##*=}
-COMPREPLY=( $( compgen -W "user: role: type: level: disable" -- "$cur") )
-if [ "${COMPREPLY[*]}" != "disable" ] ; then
-__docker_nospace
-fi
-;;
-seccomp=*)
-local cur=${cur##*=}
-_filedir
-COMPREPLY+=( $( compgen -W "unconfined" -- "$cur" ) )
-;;
-*)
-COMPREPLY=( $( compgen -W "label apparmor seccomp" -S ":" -- "$cur") )
-__docker_nospace
-;;
-esac
+COMPREPLY=( $( compgen -W "apparmor= label= no-new-privileges seccomp=" -- "$cur") )
+if [ "${COMPREPLY[*]}" != "no-new-privileges" ] ; then
+__docker_nospace
+fi
 return
 ;;
 --user|-u)
 __docker_complete_user_group
 return
 ;;
+--userns)
+COMPREPLY=( $( compgen -W "host" -- "$cur" ) )
+return
+;;
 --volume-driver)
 __docker_complete_plugins Volume
 return

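The rewritten `--security-opt` branch leans on `compgen -W`, which filters a fixed word list against the prefix already typed; `__docker_nospace` is then skipped only when the completed word takes no value. A standalone sketch of the filtering step (requires bash, since `compgen` is a bash builtin):

```shell
#!/bin/bash
# compgen -W filters a candidate word list against the current prefix.
cur="se"
COMPREPLY=( $( compgen -W "apparmor= label= no-new-privileges seccomp=" -- "$cur" ) )
echo "${COMPREPLY[@]}"   # seccomp=
```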
@@ -2015,14 +2041,14 @@ _docker_volume_create() {
 __docker_complete_plugins Volume
 return
 ;;
---name|--opt|-o)
+--label|--name|--opt|-o)
 return
 ;;
 esac
 
 case "$cur" in
 -*)
-COMPREPLY=( $( compgen -W "--driver -d --help --name --opt -o" -- "$cur" ) )
+COMPREPLY=( $( compgen -W "--driver -d --help --label --name --opt -o" -- "$cur" ) )
 ;;
 esac
 }

@@ -50,11 +50,11 @@ __docker_arguments() {
 __docker_get_containers() {
 [[ $PREFIX = -* ]] && return 1
 integer ret=1
-local kind
-declare -a running stopped lines args
+local kind type line s
+declare -a running stopped lines args names
 
-kind=$1
-shift
+kind=$1; shift
+type=$1; shift
 [[ $kind = (stopped|all) ]] && args=($args -a)
 
 lines=(${(f)"$(_call_program commands docker $docker_options ps --no-trunc $args)"})

@@ -73,39 +73,40 @@ __docker_get_containers() {
 lines=(${lines[2,-1]})
 
 # Container ID
-local line
-local s
-for line in $lines; do
-s="${${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}[0,12]}"
-s="$s:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}"
-s="$s, ${${${line[${begin[IMAGE]},${end[IMAGE]}]}/:/\\:}%% ##}"
-if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then
-stopped=($stopped $s)
-else
-running=($running $s)
-fi
-done
+if [[ $type = (ids|all) ]]; then
+for line in $lines; do
+s="${${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}[0,12]}"
+s="$s:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}"
+s="$s, ${${${line[${begin[IMAGE]},${end[IMAGE]}]}/:/\\:}%% ##}"
+if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then
+stopped=($stopped $s)
+else
+running=($running $s)
+fi
+done
+fi
 
 # Names: we only display the one without slash. All other names
 # are generated and may clutter the completion. However, with
 # Swarm, all names may be prefixed by the swarm node name.
-local -a names
-for line in $lines; do
-names=(${(ps:,:)${${line[${begin[NAMES]},${end[NAMES]}]}%% *}})
-# First step: find a common prefix and strip it (swarm node case)
-(( ${#${(u)names%%/*}} == 1 )) && names=${names#${names[1]%%/*}/}
-# Second step: only keep the first name without a /
-s=${${names:#*/*}[1]}
-# If no name, well give up.
-(( $#s != 0 )) || continue
-s="$s:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}"
-s="$s, ${${${line[${begin[IMAGE]},${end[IMAGE]}]}/:/\\:}%% ##}"
-if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then
-stopped=($stopped $s)
-else
-running=($running $s)
-fi
-done
+if [[ $type = (names|all) ]]; then
+for line in $lines; do
+names=(${(ps:,:)${${line[${begin[NAMES]},${end[NAMES]}]}%% *}})
+# First step: find a common prefix and strip it (swarm node case)
+(( ${#${(u)names%%/*}} == 1 )) && names=${names#${names[1]%%/*}/}
+# Second step: only keep the first name without a /
+s=${${names:#*/*}[1]}
+# If no name, well give up.
+(( $#s != 0 )) || continue
+s="$s:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}"
+s="$s, ${${${line[${begin[IMAGE]},${end[IMAGE]}]}/:/\\:}%% ##}"
+if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then
+stopped=($stopped $s)
+else
+running=($running $s)
+fi
+done
+fi
 
 [[ $kind = (running|all) ]] && _describe -t containers-running "running containers" running "$@" && ret=0
 [[ $kind = (stopped|all) ]] && _describe -t containers-stopped "stopped containers" stopped "$@" && ret=0

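The densest part of the hunk above is the zsh one-liner `(( ${#${(u)names%%/*}} == 1 )) && names=${names#${names[1]%%/*}/}`, which strips a common `node/` prefix when every container name shares one (the Swarm case). An equivalent sketch in plain bash, with hypothetical names, may make the logic easier to follow:

```shell
#!/bin/bash
# Bash sketch of the swarm-node prefix strip done in zsh above.
names=( "node1/web" "node1/db" )

prefix="${names[0]%%/*}/"        # candidate common prefix, e.g. "node1/"
all_same=true
for n in "${names[@]}"; do
  [[ $n == "$prefix"* ]] || all_same=false
done

# Only strip when every name carries the same node prefix
if $all_same; then
  names=( "${names[@]#"$prefix"}" )
fi
echo "${names[@]}"   # web db
```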
@@ -114,17 +115,27 @@ __docker_get_containers() {
 
 __docker_stoppedcontainers() {
 [[ $PREFIX = -* ]] && return 1
-__docker_get_containers stopped "$@"
+__docker_get_containers stopped all "$@"
 }
 
 __docker_runningcontainers() {
 [[ $PREFIX = -* ]] && return 1
-__docker_get_containers running "$@"
+__docker_get_containers running all "$@"
 }
 
 __docker_containers() {
 [[ $PREFIX = -* ]] && return 1
-__docker_get_containers all "$@"
+__docker_get_containers all all "$@"
 }
 
+__docker_containers_ids() {
+[[ $PREFIX = -* ]] && return 1
+__docker_get_containers all ids "$@"
+}
+
+__docker_containers_names() {
+[[ $PREFIX = -* ]] && return 1
+__docker_get_containers all names "$@"
+}
+
 __docker_images() {

@@ -200,12 +211,12 @@ __docker_get_log_options() {
 local -a awslogs_options fluentd_options gelf_options journald_options json_file_options syslog_options splunk_options
 
 awslogs_options=("awslogs-region" "awslogs-group" "awslogs-stream")
-fluentd_options=("env" "fluentd-address" "labels" "tag")
+fluentd_options=("env" "fluentd-address" "fluentd-async-connect" "fluentd-buffer-limit" "fluentd-retry-wait" "fluentd-max-retries" "labels" "tag")
 gcplogs_options=("env" "gcp-log-cmd" "gcp-project" "labels")
-gelf_options=("env" "gelf-address" "labels" "tag")
-journald_options=("env" "labels")
+gelf_options=("env" "gelf-address" "gelf-compression-level" "gelf-compression-type" "labels" "tag")
+journald_options=("env" "labels" "tag")
 json_file_options=("env" "labels" "max-file" "max-size")
-syslog_options=("syslog-address" "syslog-tls-ca-cert" "syslog-tls-cert" "syslog-tls-key" "syslog-tls-skip-verify" "syslog-facility" "tag")
+syslog_options=("syslog-address" "syslog-format" "syslog-tls-ca-cert" "syslog-tls-cert" "syslog-tls-key" "syslog-tls-skip-verify" "syslog-facility" "tag")
 splunk_options=("env" "labels" "splunk-caname" "splunk-capath" "splunk-index" "splunk-insecureskipverify" "splunk-source" "splunk-sourcetype" "splunk-token" "splunk-url" "tag")
 
 [[ $log_driver = (awslogs|all) ]] && _describe -t awslogs-options "awslogs options" awslogs_options "$@" && ret=0

@@ -225,7 +236,15 @@ __docker_log_options() {
 integer ret=1
 
 if compset -P '*='; then
-_message 'value' && ret=0
+case "${${words[-1]%=*}#*=}" in
+(syslog-format)
+syslog_format_opts=('rfc3164' 'rfc5424')
+_describe -t syslog-format-opts "Syslog format Options" syslog_format_opts && ret=0
+;;
+*)
+_message 'value' && ret=0
+;;
+esac
 else
 __docker_get_log_options -qS "=" && ret=0
 fi

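The new dispatch keys off `${${words[-1]%=*}#*=}`: first trim from the last `=` to discard the value being typed, then trim through the first `=` to discard the option name. The same two trims work with bash's parameter expansions:

```shell
#!/bin/bash
# The two-step key extraction used above, reproduced with bash expansions.
word="--log-opt=syslog-format=rfc5424"
key="${word%=*}"    # drop the trailing value: --log-opt=syslog-format
key="${key#*=}"     # drop the option name:   syslog-format
echo "$key"
```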
@@ -244,6 +263,43 @@ __docker_complete_detach_keys() {
 _describe -t detach_keys-ctrl "'ctrl-' + 'a-z @ [ \\\\ ] ^ _'" ctrl_keys -qS "," && ret=0
 }
 
+__docker_complete_ps_filters() {
+[[ $PREFIX = -* ]] && return 1
+integer ret=1
+
+if compset -P '*='; then
+case "${${words[-1]%=*}#*=}" in
+(ancestor)
+__docker_images && ret=0
+;;
+(before|since)
+__docker_containers && ret=0
+;;
+(id)
+__docker_containers_ids && ret=0
+;;
+(name)
+__docker_containers_names && ret=0
+;;
+(status)
+status_opts=('created' 'dead' 'exited' 'paused' 'restarting' 'running')
+_describe -t status-filter-opts "Status Filter Options" status_opts && ret=0
+;;
+(volume)
+__docker_volumes && ret=0
+;;
+*)
+_message 'value' && ret=0
+;;
+esac
+else
+opts=('ancestor' 'before' 'exited' 'id' 'label' 'name' 'since' 'status' 'volume')
+_describe -t filter-opts "Filter Options" opts -qS "=" && ret=0
+fi
+
+return ret
+}
+
 __docker_networks() {
 [[ $PREFIX = -* ]] && return 1
 integer ret=1

@@ -335,6 +391,7 @@ __docker_network_subcommand() {
 "($help)--ipam-driver=[IP Address Management Driver]:driver:(default)" \
 "($help)*--ipam-opt=[Custom IPAM plugin options]:opt=value: " \
 "($help)--ipv6[Enable IPv6 networking]" \
+"($help)*--label=[Set metadata on a network]:label=value: " \
 "($help)*"{-o=,--opt=}"[Driver specific options]:opt=value: " \
 "($help)*--subnet=[Subnet in CIDR format that represents a network segment]:IP/mask: " \
 "($help -)1:Network Name: " && ret=0

@@ -425,6 +482,7 @@ __docker_volume_subcommand() {
 _arguments $(__docker_arguments) \
 $opts_help \
 "($help -d --driver)"{-d=,--driver=}"[Volume driver name]:Driver name:(local)" \
+"($help)*--label=[Set metadata for a volume]:label=value: " \
 "($help)--name=[Volume name]" \
 "($help)*"{-o=,--opt=}"[Driver specific options]:Driver option: " && ret=0
 ;;

@@ -489,6 +547,7 @@ __docker_subcommand() {
 "($help)--isolation=[Container isolation technology]:isolation:(default hyperv process)"
 "($help)*--shm-size=[Size of '/dev/shm' (format is '<number><unit>')]:shm size: "
 "($help)*--ulimit=[ulimit options]:ulimit: "
+"($help)--userns=[Container user namespace]:user namespace:(host)"
 )
 opts_build_create_run_update=(
 "($help)--cpu-shares=[CPU shares (relative weight)]:CPU shares:(0 10 100 200 500 800 1000)"

@@ -526,7 +585,7 @@ __docker_subcommand() {
 "($help)--ipc=[IPC namespace to use]:IPC namespace: "
 "($help)*--link=[Add link to another container]:link:->link"
 "($help)*"{-l=,--label=}"[Container metadata]:label: "
-"($help)--log-driver=[Default driver for container logs]:Logging driver:(json-file syslog journald gelf fluentd awslogs splunk none)"
+"($help)--log-driver=[Default driver for container logs]:Logging driver:(awslogs etwlogs fluentd gcplogs gelf journald json-file none splunk syslog)"
 "($help)*--log-opt=[Log driver specific options]:log driver options:__docker_log_options"
 "($help)--mac-address=[Container MAC address]:MAC address: "
 "($help)--name=[Container name]:name: "

@@ -540,7 +599,6 @@ __docker_subcommand() {
 "($help)--pid=[PID namespace to use]:PID: "
 "($help)--privileged[Give extended privileges to this container]"
 "($help)--read-only[Mount the container's root filesystem as read only]"
-"($help)--restart=[Restart policy]:restart policy:(no on-failure always unless-stopped)"
 "($help)*--security-opt=[Security options]:security option: "
 "($help -t --tty)"{-t,--tty}"[Allocate a pseudo-tty]"
 "($help -u --user)"{-u=,--user=}"[Username or UID]:user:_users"

@@ -554,6 +612,7 @@ __docker_subcommand() {
 "($help)--blkio-weight=[Block IO (relative weight), between 10 and 1000]:Block IO weight:(10 100 500 1000)"
 "($help)--kernel-memory=[Kernel memory limit in bytes]:Memory limit: "
 "($help)--memory-reservation=[Memory soft limit]:Memory limit: "
+"($help)--restart=[Restart policy]:restart policy:(no on-failure always unless-stopped)"
 )
 opts_attach_exec_run_start=(
 "($help)--detach-keys=[Escape key sequence used to detach a container]:sequence:__docker_complete_detach_keys"

@@ -576,6 +635,7 @@ __docker_subcommand() {
 "($help)*--build-arg[Build-time variables]:<varname>=<value>: " \
 "($help -f --file)"{-f=,--file=}"[Name of the Dockerfile]:Dockerfile:_files" \
 "($help)--force-rm[Always remove intermediate containers]" \
+"($help)*--label=[Set metadata for an image]:label=value: " \
 "($help)--no-cache[Do not use cache when building the image]" \
 "($help)--pull[Attempt to pull a newer version of the image]" \
 "($help -q --quiet)"{-q,--quiet}"[Suppress verbose build output]" \

@@ -639,6 +699,7 @@ __docker_subcommand() {
 "($help -b --bridge)"{-b=,--bridge=}"[Attach containers to a network bridge]:bridge:_net_interfaces" \
 "($help)--bip=[Network bridge IP]:IP address: " \
 "($help)--cgroup-parent=[Parent cgroup for all containers]:cgroup: " \
+"($help)--containerd=[Path to containerd socket]:socket:_files -g \"*.sock\"" \
 "($help -D --debug)"{-D,--debug}"[Enable debug mode]" \
 "($help)--default-gateway[Container default gateway IPv4 address]:IPv4 address: " \
 "($help)--default-gateway-v6[Container default gateway IPv6 address]:IPv6 address: " \

@@ -666,7 +727,7 @@ __docker_subcommand() {
 "($help)--ipv6[Enable IPv6 networking]" \
 "($help -l --log-level)"{-l=,--log-level=}"[Logging level]:level:(debug info warn error fatal)" \
 "($help)*--label=[Key=value labels]:label: " \
-"($help)--log-driver=[Default driver for container logs]:Logging driver:(json-file syslog journald gelf fluentd awslogs splunk none)" \
+"($help)--log-driver=[Default driver for container logs]:Logging driver:(awslogs etwlogs fluentd gcplogs gelf journald json-file none splunk syslog)" \
 "($help)*--log-opt=[Log driver specific options]:log driver options:__docker_log_options" \
 "($help)--mtu=[Network MTU]:mtu:(0 576 1420 1500 9000)" \
 "($help -p --pidfile)"{-p=,--pidfile=}"[Path to use for daemon PID file]:PID file:_files" \

@ -676,9 +737,9 @@ __docker_subcommand() {
|
|||
"($help)--selinux-enabled[Enable selinux support]" \
|
||||
"($help)*--storage-opt=[Storage driver options]:storage driver options: " \
|
||||
"($help)--tls[Use TLS]" \
|
||||
"($help)--tlscacert=[Trust certs signed only by this CA]:PEM file:_files -g "*.(pem|crt)"" \
|
||||
"($help)--tlscert=[Path to TLS certificate file]:PEM file:_files -g "*.(pem|crt)"" \
|
||||
"($help)--tlskey=[Path to TLS key file]:Key file:_files -g "*.(pem|key)"" \
|
||||
"($help)--tlscacert=[Trust certs signed only by this CA]:PEM file:_files -g \"*.(pem|crt)\"" \
|
||||
"($help)--tlscert=[Path to TLS certificate file]:PEM file:_files -g \"*.(pem|crt)\"" \
|
||||
"($help)--tlskey=[Path to TLS key file]:Key file:_files -g \"*.(pem|key)\"" \
|
||||
"($help)--tlsverify[Use TLS and verify the remote]" \
|
||||
"($help)--userns-remap=[User/Group setting for user namespaces]:user\:group:->users-groups" \
|
||||
"($help)--userland-proxy[Use userland proxy for loopback traffic]" && ret=0
|
||||
|
@ -810,7 +871,8 @@ __docker_subcommand() {
|
|||
(load)
|
||||
_arguments $(__docker_arguments) \
|
||||
$opts_help \
|
||||
"($help -i --input)"{-i=,--input=}"[Read from tar archive file]:archive file:_files -g "*.((tar|TAR)(.gz|.GZ|.Z|.bz2|.lzma|.xz|)|(tbz|tgz|txz))(-.)"" && ret=0
|
||||
"($help -i --input)"{-i=,--input=}"[Read from tar archive file]:archive file:_files -g \"*.((tar|TAR)(.gz|.GZ|.Z|.bz2|.lzma|.xz|)|(tbz|tgz|txz))(-.)\"" \
|
||||
"($help -q --quiet)"{-q,--quiet}"[Suppress the load output]" && ret=0
|
||||
;;
|
||||
(login)
|
||||
_arguments $(__docker_arguments) \
|
||||
|
@ -866,7 +928,7 @@ __docker_subcommand() {
|
|||
$opts_help \
|
||||
"($help -a --all)"{-a,--all}"[Show all containers]" \
|
||||
"($help)--before=[Show only container created before...]:containers:__docker_containers" \
|
||||
"($help)*"{-f=,--filter=}"[Filter values]:filter: " \
|
||||
"($help)*"{-f=,--filter=}"[Filter values]:filter:->filter-options" \
|
||||
"($help)--format[Pretty-print containers using a Go template]:format: " \
|
||||
"($help -l --latest)"{-l,--latest}"[Show only the latest created container]" \
|
||||
"($help)-n[Show n last created containers, include non-running one]:n:(1 5 10 25 50)" \
|
||||
|
@ -874,16 +936,24 @@ __docker_subcommand() {
|
|||
"($help -q --quiet)"{-q,--quiet}"[Only show numeric IDs]" \
|
||||
"($help -s --size)"{-s,--size}"[Display total file sizes]" \
|
||||
"($help)--since=[Show only containers created since...]:containers:__docker_containers" && ret=0
|
||||
|
||||
case $state in
|
||||
(filter-options)
|
||||
__docker_complete_ps_filters && ret=0
|
||||
;;
|
||||
esac
|
||||
;;
|
||||
(pull)
|
||||
_arguments $(__docker_arguments) \
|
||||
$opts_help \
|
||||
"($help -a --all-tags)"{-a,--all-tags}"[Download all tagged images]" \
|
||||
"($help)--disable-content-trust[Skip image verification]" \
|
||||
"($help -):name:__docker_search" && ret=0
|
||||
;;
|
||||
(push)
|
||||
_arguments $(__docker_arguments) \
|
||||
$opts_help \
|
||||
"($help)--disable-content-trust[Skip image signing]" \
|
||||
"($help -): :__docker_images" && ret=0
|
||||
;;
|
||||
(rename)
|
||||
contrib/docker-device-tool/README.md (new file, 14 lines)
@@ -0,0 +1,14 @@
Docker device tool for devicemapper storage driver backend
===================

The ./contrib/docker-device-tool directory contains a tool to manipulate the devicemapper thin-pool.

Compile
========

$ make shell
## inside build container
$ go build contrib/docker-device-tool/device_tool.go

# if the devicemapper version is old and compilation fails, compile with the `libdm_no_deferred_remove` tag
$ go build -tags libdm_no_deferred_remove contrib/docker-device-tool/device_tool.go
@@ -222,6 +222,7 @@ func (daemon *Daemon) exportContainerRw(container *container.Container) (archive
archive, err := container.RWLayer.TarStream()
if err != nil {
daemon.Unmount(container) // logging is already handled in the `Unmount` function
return nil, err
}
return ioutils.NewReadCloserWrapper(archive, func() error {
@@ -82,7 +82,7 @@ func (config *Config) InstallFlags(cmd *flag.FlagSet, usageFn func(string) strin
cmd.StringVar(&config.CorsHeaders, []string{"-api-cors-header"}, "", usageFn("Set CORS headers in the remote API"))
cmd.StringVar(&config.CgroupParent, []string{"-cgroup-parent"}, "", usageFn("Set parent cgroup for all containers"))
cmd.StringVar(&config.RemappedRoot, []string{"-userns-remap"}, "", usageFn("User/Group setting for user namespaces"))
cmd.StringVar(&config.ContainerdAddr, []string{"-containerd"}, "", usageFn("Path to containerD socket"))
cmd.StringVar(&config.ContainerdAddr, []string{"-containerd"}, "", usageFn("Path to containerd socket"))

config.attachExperimentalFlags(cmd, usageFn)
}
@@ -720,7 +720,7 @@ func (daemon *Daemon) releaseNetwork(container *container.Container) {
sb, err := daemon.netController.SandboxByID(sid)
if err != nil {
logrus.Errorf("error locating sandbox id %s: %v", sid, err)
logrus.Warnf("error locating sandbox id %s: %v", sid, err)
return
}
@@ -295,7 +295,18 @@ func specDevice(d *configs.Device) specs.Device {
}
}

func getDevicesFromPath(deviceMapping containertypes.DeviceMapping) (devs []specs.Device, err error) {
func specDeviceCgroup(d *configs.Device) specs.DeviceCgroup {
t := string(d.Type)
return specs.DeviceCgroup{
Allow: true,
Type: &t,
Major: &d.Major,
Minor: &d.Minor,
Access: &d.Permissions,
}
}

func getDevicesFromPath(deviceMapping containertypes.DeviceMapping) (devs []specs.Device, devPermissions []specs.DeviceCgroup, err error) {
resolvedPathOnHost := deviceMapping.PathOnHost

// check if it is a symbolic link
@@ -309,7 +320,7 @@ func getDevicesFromPath(deviceMapping containertypes.DeviceMapping) (devs []spec
// if there was no error, return the device
if err == nil {
device.Path = deviceMapping.PathInContainer
return append(devs, specDevice(device)), nil
return append(devs, specDevice(device)), append(devPermissions, specDeviceCgroup(device)), nil
}

// if the device is not a device node
@@ -330,6 +341,7 @@ func getDevicesFromPath(deviceMapping containertypes.DeviceMapping) (devs []spec
// add the device to userSpecified devices
childDevice.Path = strings.Replace(dpath, resolvedPathOnHost, deviceMapping.PathInContainer, 1)
devs = append(devs, specDevice(childDevice))
devPermissions = append(devPermissions, specDeviceCgroup(childDevice))

return nil
})
@@ -337,10 +349,10 @@ func getDevicesFromPath(deviceMapping containertypes.DeviceMapping) (devs []spec
}

if len(devs) > 0 {
return devs, nil
return devs, devPermissions, nil
}

return devs, fmt.Errorf("error gathering device information while adding custom device %q: %s", deviceMapping.PathOnHost, err)
return devs, devPermissions, fmt.Errorf("error gathering device information while adding custom device %q: %s", deviceMapping.PathOnHost, err)
}

func mergeDevices(defaultDevices, userDevices []*configs.Device) []*configs.Device {
@@ -293,6 +293,11 @@ func (daemon *Daemon) restore() error {
go func(c *container.Container) {
defer wg.Done()
if c.IsRunning() || c.IsPaused() {
// Fix activityCount such that graph mounts can be unmounted later
if err := daemon.layerStore.ReinitRWLayer(c.RWLayer); err != nil {
logrus.Errorf("Failed to ReinitRWLayer for %s due to %s", c.ID, err)
return
}
if err := daemon.containerd.Restore(c.ID, libcontainerd.WithRestartManager(c.RestartManager(true))); err != nil {
logrus.Errorf("Failed to restore with containerd: %q", err)
return
@@ -304,10 +309,6 @@ func (daemon *Daemon) restore() error {
mapLock.Lock()
restartContainers[c] = make(chan struct{})
mapLock.Unlock()
} else if !c.IsRunning() && !c.IsPaused() {
if mountid, err := daemon.layerStore.GetMountID(c.ID); err == nil {
daemon.cleanupMountsByID(mountid)
}
}

// if c.hostConfig.Links is nil (not just empty), then it is using the old sqlite links and needs to be migrated
@@ -5,7 +5,7 @@ import (
"fmt"
"io"
"os"
"path/filepath"
"regexp"
"strings"

"github.com/Sirupsen/logrus"
@@ -28,91 +28,53 @@ func (daemon *Daemon) cleanupMountsFromReaderByID(reader io.Reader, id string, u
return nil
}
var errors []string
mountRoot := ""
shmSuffix := "/" + id + "/shm"
mergedSuffix := "/" + id + "/merged"
regexps := getCleanPatterns(id)
sc := bufio.NewScanner(reader)
for sc.Scan() {
line := sc.Text()
fields := strings.Fields(line)
if strings.HasPrefix(fields[4], daemon.root) {
logrus.Debugf("Mount base: %v", fields[4])
mnt := fields[4]
if strings.HasSuffix(mnt, shmSuffix) || strings.HasSuffix(mnt, mergedSuffix) {
logrus.Debugf("Unmounting %v", mnt)
if err := unmount(mnt); err != nil {
logrus.Error(err)
errors = append(errors, err.Error())
if fields := strings.Fields(sc.Text()); len(fields) >= 4 {
if mnt := fields[4]; strings.HasPrefix(mnt, daemon.root) {
for _, p := range regexps {
if p.MatchString(mnt) {
if err := unmount(mnt); err != nil {
logrus.Error(err)
errors = append(errors, err.Error())
}
}
}
} else if mountBase := filepath.Base(mnt); mountBase == id {
mountRoot = mnt
}
}
}

if mountRoot != "" {
logrus.Debugf("Unmounting %v", mountRoot)
if err := unmount(mountRoot); err != nil {
logrus.Error(err)
errors = append(errors, err.Error())
}
}

if err := sc.Err(); err != nil {
return err
}

if len(errors) > 0 {
return fmt.Errorf("Error cleaningup mounts:\n%v", strings.Join(errors, "\n"))
return fmt.Errorf("Error cleaning up mounts:\n%v", strings.Join(errors, "\n"))
}

logrus.Debugf("Cleaning up old container shm/mqueue/rootfs mounts: done.")
logrus.Debugf("Cleaning up old mountid %v: done.", id)
return nil
}

// cleanupMounts umounts shm/mqueue mounts for old containers
func (daemon *Daemon) cleanupMounts() error {
logrus.Debugf("Cleaning up old container shm/mqueue/rootfs mounts: start.")
f, err := os.Open("/proc/self/mountinfo")
if err != nil {
return err
}
defer f.Close()

return daemon.cleanupMountsFromReader(f, mount.Unmount)
return daemon.cleanupMountsByID("")
}

func (daemon *Daemon) cleanupMountsFromReader(reader io.Reader, unmount func(target string) error) error {
if daemon.root == "" {
return nil
func getCleanPatterns(id string) (regexps []*regexp.Regexp) {
var patterns []string
if id == "" {
id = "[0-9a-f]{64}"
patterns = append(patterns, "containers/"+id+"/shm")
}
sc := bufio.NewScanner(reader)
var errors []string
for sc.Scan() {
line := sc.Text()
fields := strings.Fields(line)
if strings.HasPrefix(fields[4], daemon.root) {
logrus.Debugf("Mount base: %v", fields[4])
mnt := fields[4]
mountBase := filepath.Base(mnt)
if mountBase == "shm" || mountBase == "merged" {
logrus.Debugf("Unmounting %v", mnt)
if err := unmount(mnt); err != nil {
logrus.Error(err)
errors = append(errors, err.Error())
}
}
patterns = append(patterns, "aufs/mnt/"+id+"$", "overlay/"+id+"/merged$", "zfs/graph/"+id+"$")
for _, p := range patterns {
r, err := regexp.Compile(p)
if err == nil {
regexps = append(regexps, r)
}
}

if err := sc.Err(); err != nil {
return err
}

if len(errors) > 0 {
return fmt.Errorf("Error cleaningup mounts:\n%v", strings.Join(errors, "\n"))
}

logrus.Debugf("Cleaning up old container shm/mqueue/rootfs mounts: done.")
return nil
return
}
@@ -59,7 +59,7 @@ func TestCleanupMounts(t *testing.T) {
return nil
}

d.cleanupMountsFromReader(strings.NewReader(mountsFixture), unmount)
d.cleanupMountsFromReaderByID(strings.NewReader(mountsFixture), "", unmount)

if unmounted != 1 {
t.Fatalf("Expected to unmount the shm (and the shm only)")
@@ -97,7 +97,7 @@ func TestNotCleanupMounts(t *testing.T) {
return nil
}
mountInfo := `234 232 0:59 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k`
d.cleanupMountsFromReader(strings.NewReader(mountInfo), unmount)
d.cleanupMountsFromReaderByID(strings.NewReader(mountInfo), "", unmount)
if unmounted {
t.Fatalf("Expected not to clean up /dev/shm")
}
@@ -130,7 +130,7 @@ func getBlkioWeightDevices(config containertypes.Resources) ([]specs.WeightDevic
weight := weightDevice.Weight
d := specs.WeightDevice{Weight: &weight}
d.Major = int64(stat.Rdev / 256)
d.Major = int64(stat.Rdev % 256)
d.Minor = int64(stat.Rdev % 256)
blkioWeightDevices = append(blkioWeightDevices, d)
}
@@ -187,7 +187,7 @@ func getBlkioReadIOpsDevices(config containertypes.Resources) ([]specs.ThrottleD
rate := iopsDevice.Rate
d := specs.ThrottleDevice{Rate: &rate}
d.Major = int64(stat.Rdev / 256)
d.Major = int64(stat.Rdev % 256)
d.Minor = int64(stat.Rdev % 256)
blkioReadIOpsDevice = append(blkioReadIOpsDevice, d)
}
@@ -205,7 +205,7 @@ func getBlkioWriteIOpsDevices(config containertypes.Resources) ([]specs.Throttle
rate := iopsDevice.Rate
d := specs.ThrottleDevice{Rate: &rate}
d.Major = int64(stat.Rdev / 256)
d.Major = int64(stat.Rdev % 256)
d.Minor = int64(stat.Rdev % 256)
blkioWriteIOpsDevice = append(blkioWriteIOpsDevice, d)
}
@@ -223,7 +223,7 @@ func getBlkioReadBpsDevices(config containertypes.Resources) ([]specs.ThrottleDe
rate := bpsDevice.Rate
d := specs.ThrottleDevice{Rate: &rate}
d.Major = int64(stat.Rdev / 256)
d.Major = int64(stat.Rdev % 256)
d.Minor = int64(stat.Rdev % 256)
blkioReadBpsDevice = append(blkioReadBpsDevice, d)
}
@@ -241,7 +241,7 @@ func getBlkioWriteBpsDevices(config containertypes.Resources) ([]specs.ThrottleD
rate := bpsDevice.Rate
d := specs.ThrottleDevice{Rate: &rate}
d.Major = int64(stat.Rdev / 256)
d.Major = int64(stat.Rdev % 256)
d.Minor = int64(stat.Rdev % 256)
blkioWriteBpsDevice = append(blkioWriteBpsDevice, d)
}
}
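The blkio hunks above derive a device's major/minor numbers from `stat.Rdev` using the classic Linux encoding: major = rdev / 256, minor = rdev % 256 (adequate for small traditional device numbers; large modern `dev_t` values use a wider encoding). A minimal sketch of that arithmetic, not the daemon's actual code path:

```go
package main

import "fmt"

// majorMinor splits a classic Linux device number into its major and minor
// parts, matching the stat.Rdev arithmetic in the diff above.
func majorMinor(rdev uint64) (major, minor int64) {
	return int64(rdev / 256), int64(rdev % 256)
}

func main() {
	// 8*256 + 1 is the classic encoding of major 8, minor 1
	// (e.g. /dev/sda1 on Linux); illustrative only.
	maj, min := majorMinor(8*256 + 1)
	fmt.Println(maj, min)
}
```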
@@ -466,28 +466,36 @@ func verifyContainerResources(resources *containertypes.Resources, sysInfo *sysi
func (daemon *Daemon) getCgroupDriver() string {
cgroupDriver := cgroupFsDriver

// No other cgroup drivers are supported at the moment. Warn the
// user if they tried to set one other than cgroupfs
for _, option := range daemon.configStore.ExecOptions {
if UsingSystemd(daemon.configStore) {
cgroupDriver = cgroupSystemdDriver
}
return cgroupDriver
}

// getCD gets the raw value of the native.cgroupdriver option, if set.
func getCD(config *Config) string {
for _, option := range config.ExecOptions {
key, val, err := parsers.ParseKeyValueOpt(option)
if err != nil || !strings.EqualFold(key, "native.cgroupdriver") {
continue
}
if val != cgroupFsDriver {
logrus.Warnf("cgroupdriver '%s' is not supported", val)
}
return val
}

return cgroupDriver
return ""
}

func usingSystemd(config *Config) bool {
// No support for systemd cgroup atm
return false
// VerifyCgroupDriver validates native.cgroupdriver
func VerifyCgroupDriver(config *Config) error {
cd := getCD(config)
if cd == "" || cd == cgroupFsDriver || cd == cgroupSystemdDriver {
return nil
}
return fmt.Errorf("native.cgroupdriver option %s not supported", cd)
}

func (daemon *Daemon) usingSystemd() bool {
return daemon.getCgroupDriver() == cgroupSystemdDriver
// UsingSystemd returns true if cli option includes native.cgroupdriver=systemd
func UsingSystemd(config *Config) bool {
return getCD(config) == cgroupSystemdDriver
}
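The `getCD`/`UsingSystemd` helpers above scan the daemon's exec options for a `native.cgroupdriver=...` entry. A minimal sketch of that key=value parsing; the real code uses `parsers.ParseKeyValueOpt`, inlined here with `strings.SplitN`, and the sample options are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// cgroupDriverOpt returns the raw value of the native.cgroupdriver exec
// option, or "" when it is not set, mirroring getCD in the diff above.
func cgroupDriverOpt(execOptions []string) string {
	for _, opt := range execOptions {
		parts := strings.SplitN(opt, "=", 2)
		if len(parts) != 2 || !strings.EqualFold(strings.TrimSpace(parts[0]), "native.cgroupdriver") {
			continue
		}
		return strings.TrimSpace(parts[1])
	}
	return ""
}

func main() {
	opts := []string{"native.cgroupdriver=systemd"}
	// Equivalent of UsingSystemd: compare the parsed value to "systemd".
	fmt.Println(cgroupDriverOpt(opts) == "systemd")
}
```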
// verifyPlatformContainerSettings performs platform-specific validation of the
@@ -533,7 +541,7 @@ func verifyPlatformContainerSettings(daemon *Daemon, hostConfig *containertypes.
return warnings, fmt.Errorf("Cannot use the --read-only option when user namespaces are enabled")
}
}
if hostConfig.CgroupParent != "" && daemon.usingSystemd() {
if hostConfig.CgroupParent != "" && UsingSystemd(daemon.configStore) {
// CgroupParent for systemd cgroup should be named as "xxx.slice"
if len(hostConfig.CgroupParent) <= 6 || !strings.HasSuffix(hostConfig.CgroupParent, ".slice") {
return warnings, fmt.Errorf("cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"")
@@ -554,7 +562,10 @@ func verifyDaemonSettings(config *Config) error {
if !config.bridgeConfig.EnableIPTables && config.bridgeConfig.EnableIPMasq {
config.bridgeConfig.EnableIPMasq = false
}
if config.CgroupParent != "" && usingSystemd(config) {
if err := VerifyCgroupDriver(config); err != nil {
return err
}
if config.CgroupParent != "" && UsingSystemd(config) {
if len(config.CgroupParent) <= 6 || !strings.HasSuffix(config.CgroupParent, ".slice") {
return fmt.Errorf("cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"")
}
@@ -1081,6 +1092,11 @@ func (daemon *Daemon) stats(c *container.Container) (*types.StatsJSON, error) {
MaxUsage: mem.MaxUsage,
Stats: cgs.MemoryStats.Stats,
Failcnt: mem.Failcnt,
Limit: mem.Limit,
}
// if the container does not set memory limit, use the machineMemory
if mem.Limit > daemon.statsCollector.machineMemory && daemon.statsCollector.machineMemory > 0 {
s.MemoryStats.Limit = daemon.statsCollector.machineMemory
}
if cgs.PidsStats != nil {
s.PidsStats = types.PidsStats{
@@ -102,7 +102,7 @@ func (daemon *Daemon) cleanupContainer(container *container.Container, forceRemo
// Save container state to disk. So that if error happens before
// container meta file got removed from disk, then a restart of
// docker should not make a dead container alive.
if err := container.ToDiskLocking(); err != nil {
if err := container.ToDiskLocking(); err != nil && !os.IsNotExist(err) {
logrus.Errorf("Error saving dying container to disk: %v", err)
}
@@ -123,10 +123,14 @@ func (daemon *Daemon) cleanupContainer(container *container.Container, forceRemo
return fmt.Errorf("Unable to remove filesystem for %v: %v", container.ID, err)
}

metadata, err := daemon.layerStore.ReleaseRWLayer(container.RWLayer)
layer.LogReleaseMetadata(metadata)
if err != nil && err != layer.ErrMountDoesNotExist {
return fmt.Errorf("Driver %s failed to remove root filesystem %s: %s", daemon.GraphDriverName(), container.ID, err)
// When container creation fails and `RWLayer` has not been created yet, we
// do not call `ReleaseRWLayer`
if container.RWLayer != nil {
metadata, err := daemon.layerStore.ReleaseRWLayer(container.RWLayer)
layer.LogReleaseMetadata(metadata)
if err != nil && err != layer.ErrMountDoesNotExist {
return fmt.Errorf("Driver %s failed to remove root filesystem %s: %s", daemon.GraphDriverName(), container.ID, err)
}
}

return nil
@@ -29,6 +29,7 @@ import (
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"sync"
"syscall"
@@ -64,21 +65,14 @@ func init() {
graphdriver.Register("aufs", Init)
}

type data struct {
referenceCount int
path string
}

// Driver contains information about the filesystem mounted.
// root of the filesystem
// sync.Mutex to protect against concurrent modifications
// active maps mount id to the count
type Driver struct {
root string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
sync.Mutex // Protects concurrent modification to active
active map[string]*data
sync.Mutex
root string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
pathCacheLock sync.Mutex
pathCache map[string]string
}
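The reworked `Driver` struct above replaces the reference-counted `active` map with a mutex-guarded `pathCache`. A minimal sketch of that pattern, assuming a hypothetical root value; field and helper names follow the diff:

```go
package main

import (
	"fmt"
	"path"
	"sync"
)

// driver sketches the pathCache scheme from the diff: a map guarded by its
// own mutex, consulted before falling back to computing the mountpoint from
// the driver root.
type driver struct {
	root          string
	pathCacheLock sync.Mutex
	pathCache     map[string]string
}

func (d *driver) mountpoint(id string) string {
	d.pathCacheLock.Lock()
	defer d.pathCacheLock.Unlock()
	if m, ok := d.pathCache[id]; ok {
		return m // cached path wins
	}
	m := path.Join(d.root, "mnt", id)
	d.pathCache[id] = m
	return m
}

func main() {
	d := &driver{root: "/var/lib/docker/aufs", pathCache: map[string]string{}}
	fmt.Println(d.mountpoint("abc123"))
}
```

Caching the path rather than a reference count lets Get/Put/Remove agree on a mountpoint without tracking how many callers hold it.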
// Init returns a new AUFS driver.
@@ -111,10 +105,10 @@ func Init(root string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
}

a := &Driver{
root: root,
active: make(map[string]*data),
uidMaps: uidMaps,
gidMaps: gidMaps,
root: root,
uidMaps: uidMaps,
gidMaps: gidMaps,
pathCache: make(map[string]string),
}

rootUID, rootGID, err := idtools.GetRootUIDGID(uidMaps, gidMaps)
@@ -228,9 +222,7 @@ func (a *Driver) Create(id, parent, mountLabel string) error {
}
}
}
a.Lock()
a.active[id] = &data{}
a.Unlock()

return nil
}
@@ -259,108 +251,91 @@ func (a *Driver) createDirsFor(id string) error {

// Remove will unmount and remove the given id.
func (a *Driver) Remove(id string) error {
// Protect the a.active from concurrent access
a.Lock()
defer a.Unlock()

m := a.active[id]
if m != nil {
if m.referenceCount > 0 {
return nil
}
// Make sure the dir is umounted first
if err := a.unmount(m); err != nil {
return err
}
a.pathCacheLock.Lock()
mountpoint, exists := a.pathCache[id]
a.pathCacheLock.Unlock()
if !exists {
mountpoint = a.getMountpoint(id)
}
tmpDirs := []string{
"mnt",
"diff",
if err := a.unmount(mountpoint); err != nil {
// no need to return here, we can still try to remove since the `Rename` will fail below if still mounted
logrus.Debugf("aufs: error while unmounting %s: %v", mountpoint, err)
}

// Atomically remove each directory in turn by first moving it out of the
// way (so that docker doesn't find it anymore) before doing removal of
// the whole tree.
for _, p := range tmpDirs {
realPath := path.Join(a.rootPath(), p, id)
tmpPath := path.Join(a.rootPath(), p, fmt.Sprintf("%s-removing", id))
if err := os.Rename(realPath, tmpPath); err != nil && !os.IsNotExist(err) {
return err
}
defer os.RemoveAll(tmpPath)
tmpMntPath := path.Join(a.mntPath(), fmt.Sprintf("%s-removing", id))
if err := os.Rename(mountpoint, tmpMntPath); err != nil && !os.IsNotExist(err) {
return err
}
defer os.RemoveAll(tmpMntPath)

tmpDiffpath := path.Join(a.diffPath(), fmt.Sprintf("%s-removing", id))
if err := os.Rename(a.getDiffPath(id), tmpDiffpath); err != nil && !os.IsNotExist(err) {
return err
}
defer os.RemoveAll(tmpDiffpath)

// Remove the layers file for the id
if err := os.Remove(path.Join(a.rootPath(), "layers", id)); err != nil && !os.IsNotExist(err) {
return err
}
if m != nil {
delete(a.active, id)
}

a.pathCacheLock.Lock()
delete(a.pathCache, id)
a.pathCacheLock.Unlock()
return nil
}
// Get returns the rootfs path for the id.
// This will mount the dir at it's given path
func (a *Driver) Get(id, mountLabel string) (string, error) {
// Protect the a.active from concurrent access
a.Lock()
defer a.Unlock()

m := a.active[id]
if m == nil {
m = &data{}
a.active[id] = m
}

parents, err := a.getParentLayerPaths(id)
if err != nil && !os.IsNotExist(err) {
return "", err
}

// If a dir does not have a parent ( no layers )do not try to mount
// just return the diff path to the data
m.path = path.Join(a.rootPath(), "diff", id)
if len(parents) > 0 {
m.path = path.Join(a.rootPath(), "mnt", id)
if m.referenceCount == 0 {
if err := a.mount(id, m, mountLabel, parents); err != nil {
return "", err
}
a.pathCacheLock.Lock()
m, exists := a.pathCache[id]
a.pathCacheLock.Unlock()

if !exists {
m = a.getDiffPath(id)
if len(parents) > 0 {
m = a.getMountpoint(id)
}
}
m.referenceCount++
return m.path, nil

// If a dir does not have a parent ( no layers )do not try to mount
// just return the diff path to the data
if len(parents) > 0 {
if err := a.mount(id, m, mountLabel, parents); err != nil {
return "", err
}
}

a.pathCacheLock.Lock()
a.pathCache[id] = m
a.pathCacheLock.Unlock()
return m, nil
}

// Put unmounts and updates list of active mounts.
func (a *Driver) Put(id string) error {
// Protect the a.active from concurrent access
a.Lock()
defer a.Unlock()
a.pathCacheLock.Lock()
m, exists := a.pathCache[id]
if !exists {
m = a.getMountpoint(id)
a.pathCache[id] = m
}
a.pathCacheLock.Unlock()

m := a.active[id]
if m == nil {
// but it might be still here
if a.Exists(id) {
path := path.Join(a.rootPath(), "mnt", id)
err := Unmount(path)
if err != nil {
logrus.Debugf("Failed to unmount %s aufs: %v", id, err)
}
}
return nil
err := a.unmount(m)
if err != nil {
logrus.Debugf("Failed to unmount %s aufs: %v", id, err)
}
if count := m.referenceCount; count > 1 {
m.referenceCount = count - 1
} else {
ids, _ := getParentIds(a.rootPath(), id)
// We only mounted if there are any parents
if ids != nil && len(ids) > 0 {
a.unmount(m)
}
delete(a.active, id)
}
return nil
return err
}
// Diff produces an archive of the changes between the specified
@@ -443,16 +418,16 @@ func (a *Driver) getParentLayerPaths(id string) ([]string, error) {
return layers, nil
}

func (a *Driver) mount(id string, m *data, mountLabel string, layers []string) error {
func (a *Driver) mount(id string, target string, mountLabel string, layers []string) error {
a.Lock()
defer a.Unlock()

// If the id is mounted or we get an error return
if mounted, err := a.mounted(m); err != nil || mounted {
if mounted, err := a.mounted(target); err != nil || mounted {
return err
}

var (
target = m.path
rw = path.Join(a.rootPath(), "diff", id)
)
rw := a.getDiffPath(id)

if err := a.aufsMount(layers, rw, target, mountLabel); err != nil {
return fmt.Errorf("error creating aufs mount to %s: %v", target, err)
@@ -460,26 +435,42 @@ func (a *Driver) mount(id string, m *data, mountLabel string, layers []string) e
return nil
}

func (a *Driver) unmount(m *data) error {
if mounted, err := a.mounted(m); err != nil || !mounted {
func (a *Driver) unmount(mountPath string) error {
a.Lock()
defer a.Unlock()

if mounted, err := a.mounted(mountPath); err != nil || !mounted {
return err
}
return Unmount(m.path)
if err := Unmount(mountPath); err != nil {
return err
}
return nil
}

func (a *Driver) mounted(m *data) (bool, error) {
var buf syscall.Statfs_t
if err := syscall.Statfs(m.path, &buf); err != nil {
return false, nil
}
return graphdriver.FsMagic(buf.Type) == graphdriver.FsMagicAufs, nil
func (a *Driver) mounted(mountpoint string) (bool, error) {
return graphdriver.Mounted(graphdriver.FsMagicAufs, mountpoint)
}

// Cleanup aufs and unmount all mountpoints
func (a *Driver) Cleanup() error {
for id, m := range a.active {
var dirs []string
if err := filepath.Walk(a.mntPath(), func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if !info.IsDir() {
return nil
}
dirs = append(dirs, path)
return nil
}); err != nil {
return err
}

for _, m := range dirs {
if err := a.unmount(m); err != nil {
logrus.Errorf("Unmounting %s: %s", stringid.TruncateID(id), err)
logrus.Debugf("aufs error unmounting %s: %s", stringid.TruncateID(m), err)
}
}
return mountpk.Unmount(a.root)
@@ -200,7 +200,7 @@ func TestMountedFalseResponse(t *testing.T) {
t.Fatal(err)
}

response, err := d.mounted(d.active["1"])
response, err := d.mounted(d.getDiffPath("1"))
if err != nil {
t.Fatal(err)
}
@@ -227,7 +227,7 @@ func TestMountedTrueReponse(t *testing.T) {
t.Fatal(err)
}

response, err := d.mounted(d.active["2"])
response, err := d.mounted(d.pathCache["2"])
if err != nil {
t.Fatal(err)
}
@@ -293,7 +293,7 @@ func TestRemoveMountedDir(t *testing.T) {
t.Fatal("mntPath should not be empty string")
}

mounted, err := d.mounted(d.active["2"])
mounted, err := d.mounted(d.pathCache["2"])
if err != nil {
t.Fatal(err)
}
@@ -46,3 +46,19 @@ func getParentIds(root, id string) ([]string, error) {
}
return out, s.Err()
}

func (a *Driver) getMountpoint(id string) string {
return path.Join(a.mntPath(), id)
}

func (a *Driver) mntPath() string {
return path.Join(a.rootPath(), "mnt")
}

func (a *Driver) getDiffPath(id string) string {
return path.Join(a.diffPath(), id)
}

func (a *Driver) diffPath() string {
return path.Join(a.rootPath(), "diff")
}
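The helpers above encode the aufs on-disk layout: `mnt/<id>` is the mounted union, `diff/<id>` the layer's private files. A standalone sketch (with `rootPath` as an illustrative stand-in for `a.rootPath()`):

```go
package main

import (
	"fmt"
	"path"
)

// rootPath is a stand-in for a.rootPath(); the value is illustrative.
const rootPath = "/var/lib/docker/aufs"

// getMountpoint and getDiffPath mirror the layout of the helpers above.
func getMountpoint(id string) string { return path.Join(rootPath, "mnt", id) }
func getDiffPath(id string) string   { return path.Join(rootPath, "diff", id) }

func main() {
	fmt.Println(getMountpoint("abc123")) // /var/lib/docker/aufs/mnt/abc123
	fmt.Println(getDiffPath("abc123"))   // /var/lib/docker/aufs/diff/abc123
}
```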
@@ -7,6 +7,10 @@ package btrfs
#include <dirent.h>
#include <btrfs/ioctl.h>
#include <btrfs/ctree.h>

static void set_name_btrfs_ioctl_vol_args_v2(struct btrfs_ioctl_vol_args_v2* btrfs_struct, const char* value) {
snprintf(btrfs_struct->name, BTRFS_SUBVOL_NAME_MAX, "%s", value);
}
*/
import "C"

@@ -159,9 +163,10 @@ func subvolSnapshot(src, dest, name string) error {

var args C.struct_btrfs_ioctl_vol_args_v2
args.fd = C.__s64(getDirFd(srcDir))
for i, c := range []byte(name) {
args.name[i] = C.char(c)
}

var cs = C.CString(name)
C.set_name_btrfs_ioctl_vol_args_v2(&args, cs)
C.free(unsafe.Pointer(cs))

_, _, errno := syscall.Syscall(syscall.SYS_IOCTL, getDirFd(destDir), C.BTRFS_IOC_SNAP_CREATE_V2,
uintptr(unsafe.Pointer(&args)))
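The change above replaces a per-byte copy loop, which neither bounds-checks nor NUL-terminates, with an `snprintf`-based C helper that does both. A pure-Go sketch of the same bounded, always-terminated copy (`nameMax` is an illustrative stand-in for `BTRFS_SUBVOL_NAME_MAX`):

```go
package main

import "fmt"

// nameMax stands in for BTRFS_SUBVOL_NAME_MAX from the btrfs headers.
const nameMax = 16

// setName copies value into a fixed-size buffer the way the snprintf
// helper above does: truncate if needed and always NUL-terminate,
// which the old per-byte loop did not guarantee.
func setName(buf []byte, value string) {
	n := copy(buf, value)
	if n >= len(buf) {
		n = len(buf) - 1
	}
	buf[n] = 0
}

func main() {
	buf := make([]byte, nameMax)
	setName(buf, "a-very-long-snapshot-name")
	fmt.Printf("%q\n", string(buf[:nameMax-1])) // truncated, NUL reserved
}
```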
@@ -22,6 +22,7 @@ import (
"github.com/Sirupsen/logrus"

"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/pkg/devicemapper"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/loopback"

@@ -69,9 +70,6 @@ type devInfo struct {
Deleted bool `json:"deleted"`
devices *DeviceSet

mountCount int
mountPath string

// The global DeviceSet lock guarantees that we serialize all
// the calls to libdevmapper (which is not threadsafe), but we
// sometimes release that lock while sleeping. In that case

@@ -1659,7 +1657,12 @@ func (devices *DeviceSet) initDevmapper(doInit bool) error {

// https://github.com/docker/docker/issues/4036
if supported := devicemapper.UdevSetSyncSupport(true); !supported {
logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a dynamic binary to use devicemapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
if dockerversion.IAmStatic == "true" {
logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a dynamic binary to use devicemapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
} else {
logrus.Errorf("devmapper: Udev sync is not supported. This will lead to data loss and unexpected behavior. Install a more recent version of libdevmapper or select a different storage driver. For more information, see https://docs.docker.com/engine/reference/commandline/daemon/#daemon-storage-driver-option")
}

if !devices.overrideUdevSyncCheck {
return graphdriver.ErrNotSupported
}

@@ -1991,13 +1994,6 @@ func (devices *DeviceSet) DeleteDevice(hash string, syncDelete bool) error {
devices.Lock()
defer devices.Unlock()

// If mountcount is not zero, that means devices is still in use
// or has not been Put() properly. Fail device deletion.

if info.mountCount != 0 {
return fmt.Errorf("devmapper: Can't delete device %v as it is still mounted. mntCount=%v", info.Hash, info.mountCount)
}

return devices.deleteDevice(info, syncDelete)
}

@@ -2116,13 +2112,11 @@ func (devices *DeviceSet) cancelDeferredRemoval(info *devInfo) error {
}

// Shutdown shuts down the device by unmounting the root.
func (devices *DeviceSet) Shutdown() error {
func (devices *DeviceSet) Shutdown(home string) error {
logrus.Debugf("devmapper: [deviceset %s] Shutdown()", devices.devicePrefix)
logrus.Debugf("devmapper: Shutting down DeviceSet: %s", devices.root)
defer logrus.Debugf("devmapper: [deviceset %s] Shutdown() END", devices.devicePrefix)

var devs []*devInfo

// Stop deletion worker. This should start delivering new events to
// ticker channel. That means no new instance of cleanupDeletedDevice()
// will run after this call. If one instance is already running at
@@ -2139,30 +2133,46 @@ func (devices *DeviceSet) Shutdown() error {
// metadata. Hence save this early before trying to deactivate devices.
devices.saveDeviceSetMetaData()

for _, info := range devices.Devices {
devs = append(devs, info)
// ignore the error since it's just a best effort to not try to unmount something that's mounted
mounts, _ := mount.GetMounts()
mounted := make(map[string]bool, len(mounts))
for _, mnt := range mounts {
mounted[mnt.Mountpoint] = true
}
devices.Unlock()

for _, info := range devs {
info.lock.Lock()
if info.mountCount > 0 {
if err := filepath.Walk(path.Join(home, "mnt"), func(p string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if !info.IsDir() {
return nil
}

if mounted[p] {
// We use MNT_DETACH here in case it is still busy in some running
// container. This means it'll go away from the global scope directly,
// and the device will be released when that container dies.
if err := syscall.Unmount(info.mountPath, syscall.MNT_DETACH); err != nil {
logrus.Debugf("devmapper: Shutdown unmounting %s, error: %s", info.mountPath, err)
if err := syscall.Unmount(p, syscall.MNT_DETACH); err != nil {
logrus.Debugf("devmapper: Shutdown unmounting %s, error: %s", p, err)
}

devices.Lock()
if err := devices.deactivateDevice(info); err != nil {
logrus.Debugf("devmapper: Shutdown deactivate %s , error: %s", info.Hash, err)
}
devices.Unlock()
}
info.lock.Unlock()

if devInfo, err := devices.lookupDevice(path.Base(p)); err != nil {
logrus.Debugf("devmapper: Shutdown lookup device %s, error: %s", path.Base(p), err)
} else {
if err := devices.deactivateDevice(devInfo); err != nil {
logrus.Debugf("devmapper: Shutdown deactivate %s , error: %s", devInfo.Hash, err)
}
}

return nil
}); err != nil && !os.IsNotExist(err) {
devices.Unlock()
return err
}

devices.Unlock()

info, _ := devices.lookupDeviceWithLock("")
if info != nil {
info.lock.Lock()
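The reworked Shutdown above reads the mount table once, builds a set of mountpoints, and checks each candidate path against that set instead of trusting a per-device mount count. A self-contained sketch of that set (`mountInfo` is an illustrative stand-in for `mount.Info` from docker's mount package):

```go
package main

import "fmt"

// mountInfo stands in for mount.Info; only Mountpoint matters here.
type mountInfo struct{ Mountpoint string }

// mountedSet mirrors the Shutdown change above: one pass over the
// mount table produces a set, so path membership checks are O(1).
func mountedSet(mounts []mountInfo) map[string]bool {
	mounted := make(map[string]bool, len(mounts))
	for _, mnt := range mounts {
		mounted[mnt.Mountpoint] = true
	}
	return mounted
}

func main() {
	set := mountedSet([]mountInfo{
		{"/var/lib/docker/devicemapper/mnt/abc"},
	})
	fmt.Println(set["/var/lib/docker/devicemapper/mnt/abc"]) // true
	fmt.Println(set["/var/lib/docker/devicemapper/mnt/gone"]) // false
}
```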
@@ -2202,15 +2212,6 @@ func (devices *DeviceSet) MountDevice(hash, path, mountLabel string) error {
devices.Lock()
defer devices.Unlock()

if info.mountCount > 0 {
if path != info.mountPath {
return fmt.Errorf("devmapper: Trying to mount devmapper device in multiple places (%s, %s)", info.mountPath, path)
}

info.mountCount++
return nil
}

if err := devices.activateDeviceIfNeeded(info, false); err != nil {
return fmt.Errorf("devmapper: Error activating devmapper device for '%s': %s", hash, err)
}

@@ -2234,9 +2235,6 @@ func (devices *DeviceSet) MountDevice(hash, path, mountLabel string) error {
return fmt.Errorf("devmapper: Error mounting '%s' on '%s': %s", info.DevName(), path, err)
}

info.mountCount = 1
info.mountPath = path

return nil
}

@@ -2256,20 +2254,6 @@ func (devices *DeviceSet) UnmountDevice(hash, mountPath string) error {
devices.Lock()
defer devices.Unlock()

// If there are running containers when daemon crashes, during daemon
// restarting, it will kill running containers and will finally call
// Put() without calling Get(). So info.MountCount may become negative.
// if info.mountCount goes negative, we do the unmount and assign
// it to 0.

info.mountCount--
if info.mountCount > 0 {
return nil
} else if info.mountCount < 0 {
logrus.Warnf("devmapper: Mount count of device went negative. Put() called without matching Get(). Resetting count to 0")
info.mountCount = 0
}

logrus.Debugf("devmapper: Unmount(%s)", mountPath)
if err := syscall.Unmount(mountPath, syscall.MNT_DETACH); err != nil {
return err

@@ -2280,8 +2264,6 @@ func (devices *DeviceSet) UnmountDevice(hash, mountPath string) error {
return err
}

info.mountPath = ""

return nil
}
@@ -108,7 +108,7 @@ func (d *Driver) GetMetadata(id string) (map[string]string, error) {

// Cleanup unmounts a device.
func (d *Driver) Cleanup() error {
err := d.DeviceSet.Shutdown()
err := d.DeviceSet.Shutdown(d.home)

if err2 := mount.Unmount(d.home); err == nil {
err = err2
@@ -1,8 +1,19 @@
package graphdriver

import "syscall"

var (
// Slice of drivers that should be used in an order
priority = []string{
"zfs",
}
)

// Mounted checks if the given path is mounted as the fs type
func Mounted(fsType FsMagic, mountPath string) (bool, error) {
var buf syscall.Statfs_t
if err := syscall.Statfs(mountPath, &buf); err != nil {
return false, err
}
return FsMagic(buf.Type) == fsType, nil
}
@@ -42,6 +42,8 @@ const (
FsMagicXfs = FsMagic(0x58465342)
// FsMagicZfs filesystem id for Zfs
FsMagicZfs = FsMagic(0x2fc12fc1)
// FsMagicOverlay filesystem id for overlay
FsMagicOverlay = FsMagic(0x794C7630)
)

var (

@@ -86,3 +88,12 @@ func GetFSMagic(rootpath string) (FsMagic, error) {
}
return FsMagic(buf.Type), nil
}

// Mounted checks if the given path is mounted as the fs type
func Mounted(fsType FsMagic, mountPath string) (bool, error) {
var buf syscall.Statfs_t
if err := syscall.Statfs(mountPath, &buf); err != nil {
return false, err
}
return FsMagic(buf.Type) == fsType, nil
}
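The `Mounted` helper added above works by comparing the filesystem magic number that `statfs` reports for a path against the expected one. A runnable, Linux-only sketch using the same constants:

```go
package main

import (
	"fmt"
	"syscall"
)

// FsMagic and the zfs magic number are copied from the constants above.
type FsMagic uint64

const FsMagicZfs = FsMagic(0x2fc12fc1)

// mounted re-states graphdriver.Mounted for illustration: statfs the
// path and compare the reported filesystem magic number.
func mounted(fsType FsMagic, mountPath string) (bool, error) {
	var buf syscall.Statfs_t
	if err := syscall.Statfs(mountPath, &buf); err != nil {
		return false, err
	}
	return FsMagic(buf.Type) == fsType, nil
}

func main() {
	ok, err := mounted(FsMagicZfs, "/")
	fmt.Println(ok, err) // true only when / itself is on zfs
}
```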
@@ -88,21 +88,13 @@ func (d *naiveDiffDriverWithApply) ApplyDiff(id, parent string, diff archive.Rea
// of that. This means all child images share file (but not directory)
// data with the parent.

// ActiveMount contains information about the count, path and whether is mounted or not.
// This information is part of the Driver, that contains list of active mounts that are part of this overlay.
type ActiveMount struct {
count int
path string
mounted bool
}

// Driver contains information about the home directory and the list of active mounts that are created using this driver.
type Driver struct {
home string
sync.Mutex // Protects concurrent modification to active
active map[string]*ActiveMount
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
home string
pathCacheLock sync.Mutex
pathCache map[string]string
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
}

var backingFs = "<unknown>"

@@ -151,10 +143,10 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
}

d := &Driver{
home: home,
active: make(map[string]*ActiveMount),
uidMaps: uidMaps,
gidMaps: gidMaps,
home: home,
pathCache: make(map[string]string),
uidMaps: uidMaps,
gidMaps: gidMaps,
}

return NaiveDiffDriverWithApply(d, uidMaps, gidMaps), nil
@@ -325,23 +317,14 @@ func (d *Driver) Remove(id string) error {
if err := os.RemoveAll(d.dir(id)); err != nil && !os.IsNotExist(err) {
return err
}
d.pathCacheLock.Lock()
delete(d.pathCache, id)
d.pathCacheLock.Unlock()
return nil
}

// Get creates and mounts the required file system for the given id and returns the mount path.
func (d *Driver) Get(id string, mountLabel string) (string, error) {
// Protect the d.active from concurrent access
d.Lock()
defer d.Unlock()

mount := d.active[id]
if mount != nil {
mount.count++
return mount.path, nil
}

mount = &ActiveMount{count: 1}

dir := d.dir(id)
if _, err := os.Stat(dir); err != nil {
return "", err

@@ -350,9 +333,10 @@ func (d *Driver) Get(id string, mountLabel string) (string, error) {
// If id has a root, just return it
rootDir := path.Join(dir, "root")
if _, err := os.Stat(rootDir); err == nil {
mount.path = rootDir
d.active[id] = mount
return mount.path, nil
d.pathCacheLock.Lock()
d.pathCache[id] = rootDir
d.pathCacheLock.Unlock()
return rootDir, nil
}

lowerID, err := ioutil.ReadFile(path.Join(dir, "lower-id"))

@@ -365,6 +349,16 @@ func (d *Driver) Get(id string, mountLabel string) (string, error) {
mergedDir := path.Join(dir, "merged")

opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", lowerDir, upperDir, workDir)

// if it's mounted already, just return
mounted, err := d.mounted(mergedDir)
if err != nil {
return "", err
}
if mounted {
return mergedDir, nil
}

if err := syscall.Mount("overlay", mergedDir, "overlay", 0, label.FormatMountLabel(opts, mountLabel)); err != nil {
return "", fmt.Errorf("error creating overlay mount to %s: %v", mergedDir, err)
}
@@ -378,42 +372,38 @@ func (d *Driver) Get(id string, mountLabel string) (string, error) {
if err := os.Chown(path.Join(workDir, "work"), rootUID, rootGID); err != nil {
return "", err
}
mount.path = mergedDir
mount.mounted = true
d.active[id] = mount

return mount.path, nil
d.pathCacheLock.Lock()
d.pathCache[id] = mergedDir
d.pathCacheLock.Unlock()

return mergedDir, nil
}

func (d *Driver) mounted(dir string) (bool, error) {
return graphdriver.Mounted(graphdriver.FsMagicOverlay, dir)
}

// Put unmounts the mount path created for the give id.
func (d *Driver) Put(id string) error {
// Protect the d.active from concurrent access
d.Lock()
defer d.Unlock()
d.pathCacheLock.Lock()
mountpoint, exists := d.pathCache[id]
d.pathCacheLock.Unlock()

mount := d.active[id]
if mount == nil {
if !exists {
logrus.Debugf("Put on a non-mounted device %s", id)
// but it might be still here
if d.Exists(id) {
mergedDir := path.Join(d.dir(id), "merged")
err := syscall.Unmount(mergedDir, 0)
if err != nil {
logrus.Debugf("Failed to unmount %s overlay: %v", id, err)
}
mountpoint = path.Join(d.dir(id), "merged")
}
return nil

d.pathCacheLock.Lock()
d.pathCache[id] = mountpoint
d.pathCacheLock.Unlock()
}

mount.count--
if mount.count > 0 {
return nil
}

defer delete(d.active, id)
if mount.mounted {
err := syscall.Unmount(mount.path, 0)
if err != nil {
if mounted, err := d.mounted(mountpoint); mounted || err != nil {
if err = syscall.Unmount(mountpoint, 0); err != nil {
logrus.Debugf("Failed to unmount %s overlay: %v", id, err)
}
return err
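The overlay changes above drop the refcounted `ActiveMount` records in favor of a plain id-to-mountpoint map with its own lock, letting the kernel's mount state (via `mounted`) be the source of truth. A minimal sketch of that cache pattern (`driver`, `cachePut`, and `cacheGet` are illustrative names, not driver API):

```go
package main

import (
	"fmt"
	"sync"
)

// driver sketches the new bookkeeping: an id -> mountpoint map
// guarded by a dedicated mutex, with no per-mount refcount.
type driver struct {
	pathCacheLock sync.Mutex
	pathCache     map[string]string
}

func (d *driver) cachePut(id, dir string) {
	d.pathCacheLock.Lock()
	d.pathCache[id] = dir
	d.pathCacheLock.Unlock()
}

func (d *driver) cacheGet(id string) (string, bool) {
	d.pathCacheLock.Lock()
	defer d.pathCacheLock.Unlock()
	dir, ok := d.pathCache[id]
	return dir, ok
}

func main() {
	d := &driver{pathCache: make(map[string]string)}
	d.cachePut("id1", "/mnt/id1/merged")
	dir, ok := d.cacheGet("id1")
	fmt.Println(dir, ok) // /mnt/id1/merged true
}
```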
@@ -13,7 +13,6 @@ import (
"path"
"path/filepath"
"strings"
"sync"
"syscall"
"time"

@@ -47,10 +46,6 @@ const (
type Driver struct {
// info stores the shim driver information
info hcsshim.DriverInfo
// Mutex protects concurrent modification to active
sync.Mutex
// active stores references to the activated layers
active map[string]int
}

var _ graphdriver.DiffGetterDriver = &Driver{}

@@ -63,7 +58,6 @@ func InitFilter(home string, options []string, uidMaps, gidMaps []idtools.IDMap)
HomeDir: home,
Flavour: filterDriver,
},
active: make(map[string]int),
}
return d, nil
}

@@ -76,7 +70,6 @@ func InitDiff(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (
HomeDir: home,
Flavour: diffDriver,
},
active: make(map[string]int),
}
return d, nil
}

@@ -189,9 +182,6 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
logrus.Debugf("WindowsGraphDriver Get() id %s mountLabel %s", id, mountLabel)
var dir string

d.Lock()
defer d.Unlock()

rID, err := d.resolveID(id)
if err != nil {
return "", err

@@ -203,16 +193,14 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
return "", err
}

if d.active[rID] == 0 {
if err := hcsshim.ActivateLayer(d.info, rID); err != nil {
return "", err
}
if err := hcsshim.PrepareLayer(d.info, rID, layerChain); err != nil {
if err2 := hcsshim.DeactivateLayer(d.info, rID); err2 != nil {
logrus.Warnf("Failed to Deactivate %s: %s", id, err)
}
return "", err
if err := hcsshim.ActivateLayer(d.info, rID); err != nil {
return "", err
}
if err := hcsshim.PrepareLayer(d.info, rID, layerChain); err != nil {
if err2 := hcsshim.DeactivateLayer(d.info, rID); err2 != nil {
logrus.Warnf("Failed to Deactivate %s: %s", id, err)
}
return "", err
}

mountPath, err := hcsshim.GetLayerMountPath(d.info, rID)

@@ -223,8 +211,6 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
return "", err
}

d.active[rID]++

// If the layer has a mount path, use that. Otherwise, use the
// folder path.
if mountPath != "" {

@@ -245,22 +231,10 @@ func (d *Driver) Put(id string) error {
return err
}

d.Lock()
defer d.Unlock()

if d.active[rID] > 1 {
d.active[rID]--
} else if d.active[rID] == 1 {
if err := hcsshim.UnprepareLayer(d.info, rID); err != nil {
return err
}
if err := hcsshim.DeactivateLayer(d.info, rID); err != nil {
return err
}
delete(d.active, rID)
if err := hcsshim.UnprepareLayer(d.info, rID); err != nil {
return err
}

return nil
return hcsshim.DeactivateLayer(d.info, rID)
}

// Cleanup ensures the information the driver stores is properly removed.
@@ -270,62 +244,40 @@ func (d *Driver) Cleanup() error {

// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
// The layer should be mounted when calling this function
func (d *Driver) Diff(id, parent string) (_ archive.Archive, err error) {
rID, err := d.resolveID(id)
if err != nil {
return
}

// Getting the layer paths must be done outside of the lock.
layerChain, err := d.getLayerChain(rID)
if err != nil {
return
}

var undo func()

d.Lock()

// To support export, a layer must be activated but not prepared.
if d.info.Flavour == filterDriver {
if d.active[rID] == 0 {
if err = hcsshim.ActivateLayer(d.info, rID); err != nil {
d.Unlock()
return
}
undo = func() {
if err := hcsshim.DeactivateLayer(d.info, rID); err != nil {
logrus.Warnf("Failed to Deactivate %s: %s", rID, err)
}
}
} else {
if err = hcsshim.UnprepareLayer(d.info, rID); err != nil {
d.Unlock()
return
}
undo = func() {
if err := hcsshim.PrepareLayer(d.info, rID, layerChain); err != nil {
logrus.Warnf("Failed to re-PrepareLayer %s: %s", rID, err)
}
}
}
// this is assuming that the layer is unmounted
if err := hcsshim.UnprepareLayer(d.info, rID); err != nil {
return nil, err
}

d.Unlock()
defer func() {
if err := hcsshim.PrepareLayer(d.info, rID, layerChain); err != nil {
logrus.Warnf("Failed to Deactivate %s: %s", rID, err)
}
}()

arch, err := d.exportLayer(rID, layerChain)
if err != nil {
undo()
return
}
return ioutils.NewReadCloserWrapper(arch, func() error {
defer undo()
return arch.Close()
}), nil
}

// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
// The layer should be mounted when calling this function
func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
rID, err := d.resolveID(id)
if err != nil {

@@ -336,31 +288,15 @@ func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
return nil, err
}

d.Lock()
if d.info.Flavour == filterDriver {
if d.active[rID] == 0 {
if err = hcsshim.ActivateLayer(d.info, rID); err != nil {
d.Unlock()
return nil, err
}
defer func() {
if err := hcsshim.DeactivateLayer(d.info, rID); err != nil {
logrus.Warnf("Failed to Deactivate %s: %s", rID, err)
}
}()
} else {
if err = hcsshim.UnprepareLayer(d.info, rID); err != nil {
d.Unlock()
return nil, err
}
defer func() {
if err := hcsshim.PrepareLayer(d.info, rID, parentChain); err != nil {
logrus.Warnf("Failed to re-PrepareLayer %s: %s", rID, err)
}
}()
}
// this is assuming that the layer is unmounted
if err := hcsshim.UnprepareLayer(d.info, rID); err != nil {
return nil, err
}
d.Unlock()
defer func() {
if err := hcsshim.PrepareLayer(d.info, rID, parentChain); err != nil {
logrus.Warnf("Failed to Deactivate %s: %s", rID, err)
}
}()

r, err := hcsshim.NewLayerReader(d.info, id, parentChain)
if err != nil {

@@ -391,6 +327,7 @@ func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
// ApplyDiff extracts the changeset from the given diff into the
// layer with the specified id and parent, returning the size of the
// new layer in bytes.
// The layer should not be mounted when calling this function
func (d *Driver) ApplyDiff(id, parent string, diff archive.Reader) (size int64, err error) {
rPId, err := d.resolveID(parent)
if err != nil {
@@ -22,12 +22,6 @@ import (
"github.com/opencontainers/runc/libcontainer/label"
)

type activeMount struct {
count int
path string
mounted bool
}

type zfsOptions struct {
fsName string
mountPath string

@@ -109,7 +103,6 @@ func Init(base string, opt []string, uidMaps, gidMaps []idtools.IDMap) (graphdri
dataset: rootDataset,
options: options,
filesystemsCache: filesystemsCache,
active: make(map[string]*activeMount),
uidMaps: uidMaps,
gidMaps: gidMaps,
}

@@ -166,7 +159,6 @@ type Driver struct {
options zfsOptions
sync.Mutex // protects filesystem cache against concurrent access
filesystemsCache map[string]bool
active map[string]*activeMount
uidMaps []idtools.IDMap
gidMaps []idtools.IDMap
}

@@ -302,17 +294,6 @@ func (d *Driver) Remove(id string) error {

// Get returns the mountpoint for the given id after creating the target directories if necessary.
func (d *Driver) Get(id, mountLabel string) (string, error) {
d.Lock()
defer d.Unlock()

mnt := d.active[id]
if mnt != nil {
mnt.count++
return mnt.path, nil
}

mnt = &activeMount{count: 1}

mountpoint := d.mountPath(id)
filesystem := d.zfsPath(id)
options := label.FormatMountLabel("", mountLabel)

@@ -335,48 +316,29 @@ func (d *Driver) Get(id, mountLabel string) (string, error) {
if err := os.Chown(mountpoint, rootUID, rootGID); err != nil {
return "", fmt.Errorf("error modifying zfs mountpoint (%s) directory ownership: %v", mountpoint, err)
}
mnt.path = mountpoint
mnt.mounted = true
d.active[id] = mnt

return mountpoint, nil
}

// Put removes the existing mountpoint for the given id if it exists.
func (d *Driver) Put(id string) error {
d.Lock()
defer d.Unlock()

mnt := d.active[id]
if mnt == nil {
logrus.Debugf("[zfs] Put on a non-mounted device %s", id)
// but it might be still here
if d.Exists(id) {
err := mount.Unmount(d.mountPath(id))
if err != nil {
logrus.Debugf("[zfs] Failed to unmount %s zfs fs: %v", id, err)
}
}
return nil
mountpoint := d.mountPath(id)
mounted, err := graphdriver.Mounted(graphdriver.FsMagicZfs, mountpoint)
if err != nil || !mounted {
return err
}

mnt.count--
if mnt.count > 0 {
return nil
}
logrus.Debugf(`[zfs] unmount("%s")`, mountpoint)

defer delete(d.active, id)
if mnt.mounted {
logrus.Debugf(`[zfs] unmount("%s")`, mnt.path)

if err := mount.Unmount(mnt.path); err != nil {
return fmt.Errorf("error unmounting to %s: %v", mnt.path, err)
}
if err := mount.Unmount(mountpoint); err != nil {
return fmt.Errorf("error unmounting to %s: %v", mountpoint, err)
}
return nil
}

// Exists checks to see if the cache entry exists for the given id.
func (d *Driver) Exists(id string) bool {
d.Lock()
defer d.Unlock()
return d.filesystemsCache[d.zfsPath(id)] == true
}
@@ -3,6 +3,7 @@ package daemon
import (
"fmt"
"runtime"
"strings"
"syscall"
"time"

@@ -81,7 +82,14 @@ func (daemon *Daemon) killWithSignal(container *container.Container, sig int) er
}

if err := daemon.kill(container, sig); err != nil {
return fmt.Errorf("Cannot kill container %s: %s", container.ID, err)
err = fmt.Errorf("Cannot kill container %s: %s", container.ID, err)
// if container or process not exists, ignore the error
if strings.Contains(err.Error(), "container not found") ||
strings.Contains(err.Error(), "no such process") {
logrus.Warnf("%s", err.Error())
} else {
return err
}
}

attributes := map[string]string{
@@ -63,11 +63,11 @@ package journald
// fds[0].events = POLLHUP;
// fds[1].fd = sd_journal_get_fd(j);
// if (fds[1].fd < 0) {
// return -1;
// return fds[1].fd;
// }
// jevents = sd_journal_get_events(j);
// if (jevents < 0) {
// return -1;
// return jevents;
// }
// fds[1].events = jevents;
// sd_journal_get_timeout(j, &when);

@@ -81,7 +81,7 @@ package journald
// i = poll(fds, 2, timeout);
// if ((i == -1) && (errno != EINTR)) {
//  /* An unexpected error. */
//  return -1;
//  return (errno != 0) ? -errno : -EINTR;
// }
// if (fds[0].revents & POLLHUP) {
//  /* The close notification pipe was closed. */

@@ -101,6 +101,7 @@ import (
"time"
"unsafe"

"github.com/Sirupsen/logrus"
"github.com/coreos/go-systemd/journal"
"github.com/docker/docker/daemon/logger"
)

@@ -177,9 +178,18 @@ func (s *journald) followJournal(logWatcher *logger.LogWatcher, config logger.Re
s.readers.readers[logWatcher] = logWatcher
s.readers.mu.Unlock()
go func() {
// Keep copying journal data out until we're notified to stop.
for C.wait_for_data_or_close(j, pfd[0]) == 1 {
// Keep copying journal data out until we're notified to stop
// or we hit an error.
status := C.wait_for_data_or_close(j, pfd[0])
for status == 1 {
cursor = s.drainJournal(logWatcher, config, j, cursor)
status = C.wait_for_data_or_close(j, pfd[0])
}
if status < 0 {
cerrstr := C.strerror(C.int(-status))
errstr := C.GoString(cerrstr)
fmtstr := "error %q while attempting to follow journal for container %q"
logrus.Errorf(fmtstr, errstr, s.vars["CONTAINER_ID_FULL"])
}
// Clean up.
C.close(pfd[0])

@@ -293,14 +303,21 @@ func (s *journald) readLogs(logWatcher *logger.LogWatcher, config logger.ReadCon
}
cursor = s.drainJournal(logWatcher, config, j, "")
if config.Follow {
// Create a pipe that we can poll at the same time as the journald descriptor.
if C.pipe(&pipes[0]) == C.int(-1) {
logWatcher.Err <- fmt.Errorf("error opening journald close notification pipe")
// Allocate a descriptor for following the journal, if we'll
// need one. Do it here so that we can report if it fails.
if fd := C.sd_journal_get_fd(j); fd < C.int(0) {
logWatcher.Err <- fmt.Errorf("error opening journald follow descriptor: %q", C.GoString(C.strerror(-fd)))
} else {
s.followJournal(logWatcher, config, j, pipes, cursor)
// Let followJournal handle freeing the journal context
// object and closing the channel.
following = true
// Create a pipe that we can poll at the same time as
// the journald descriptor.
if C.pipe(&pipes[0]) == C.int(-1) {
logWatcher.Err <- fmt.Errorf("error opening journald close notification pipe")
} else {
s.followJournal(logWatcher, config, j, pipes, cursor)
// Let followJournal handle freeing the journal context
// object and closing the channel.
following = true
}
}
}
return
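The journald change above stops collapsing every failure to `-1`: the C helper now returns errno-style negative values, and the Go follow loop drains while the helper reports `1`, then logs any negative status. A cgo-free sketch of that loop shape, with `wait` and `drain` as stand-ins for the C calls:

```go
package main

import "fmt"

// follow mirrors the revised loop above: drain while the wait helper
// returns 1, then surface a negative status as an errno-style value.
func follow(wait func() int, drain func()) int {
	status := wait()
	for status == 1 {
		drain()
		status = wait()
	}
	return status
}

func main() {
	calls, drained := 0, 0
	wait := func() int {
		calls++
		if calls < 3 {
			return 1 // data available
		}
		return -5 // e.g. an -errno reported by the C helper
	}
	status := follow(wait, func() { drained++ })
	fmt.Println(status, drained) // -5 2
}
```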
@@ -77,6 +77,7 @@ func (daemon *Daemon) StateChanged(id string, e libcontainerd.StateInfo) error {
c.Reset(false)
return err
}
daemon.LogContainerEvent(c, "start")
case libcontainerd.StatePause:
c.Paused = true
daemon.LogContainerEvent(c, "pause")
@ -8,6 +8,7 @@ import (
|
|||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/container"
|
||||
"github.com/docker/docker/daemon/caps"
|
||||
"github.com/docker/docker/libcontainerd"
|
||||
|
@@ -78,6 +79,7 @@ func setResources(s *specs.Spec, r containertypes.Resources) error {
 func setDevices(s *specs.Spec, c *container.Container) error {
 	// Build lists of devices allowed and created within the container.
 	var devs []specs.Device
+	devPermissions := s.Linux.Resources.Devices
 	if c.HostConfig.Privileged {
 		hostDevices, err := devices.HostDevices()
 		if err != nil {
@@ -86,18 +88,26 @@ func setDevices(s *specs.Spec, c *container.Container) error {
 		for _, d := range hostDevices {
 			devs = append(devs, specDevice(d))
 		}
+		rwm := "rwm"
+		devPermissions = []specs.DeviceCgroup{
+			{
+				Allow:  true,
+				Access: &rwm,
+			},
+		}
 	} else {
 		for _, deviceMapping := range c.HostConfig.Devices {
-			d, err := getDevicesFromPath(deviceMapping)
+			d, dPermissions, err := getDevicesFromPath(deviceMapping)
 			if err != nil {
 				return err
 			}

 			devs = append(devs, d...)
+			devPermissions = append(devPermissions, dPermissions...)
 		}
 	}

 	s.Linux.Devices = append(s.Linux.Devices, devs...)
+	s.Linux.Resources.Devices = devPermissions
 	return nil
 }
@@ -526,6 +536,8 @@ func setMounts(daemon *Daemon, s *specs.Spec, c *container.Container, mounts []c
 			}
 		}
 	}
+	s.Linux.ReadonlyPaths = nil
+	s.Linux.MaskedPaths = nil
 }

 // TODO: until a kernel/mount solution exists for handling remount in a user namespace,
@@ -574,16 +586,24 @@ func (daemon *Daemon) createSpec(c *container.Container) (*libcontainerd.Spec, e
 	}

 	var cgroupsPath string
+	scopePrefix := "docker"
+	parent := "/docker"
+	useSystemd := UsingSystemd(daemon.configStore)
+	if useSystemd {
+		parent = "system.slice"
+	}
+
 	if c.HostConfig.CgroupParent != "" {
-		cgroupsPath = filepath.Join(c.HostConfig.CgroupParent, c.ID)
+		parent = c.HostConfig.CgroupParent
+	} else if daemon.configStore.CgroupParent != "" {
+		parent = daemon.configStore.CgroupParent
+	}
+
+	if useSystemd {
+		cgroupsPath = parent + ":" + scopePrefix + ":" + c.ID
+		logrus.Debugf("createSpec: cgroupsPath: %s", cgroupsPath)
 	} else {
-		defaultCgroupParent := "/docker"
-		if daemon.configStore.CgroupParent != "" {
-			defaultCgroupParent = daemon.configStore.CgroupParent
-		} else if daemon.usingSystemd() {
-			defaultCgroupParent = "system.slice"
-		}
-		cgroupsPath = filepath.Join(defaultCgroupParent, c.ID)
+		cgroupsPath = filepath.Join(parent, c.ID)
 	}
 	s.Linux.CgroupsPath = &cgroupsPath
@@ -642,10 +662,10 @@ func (daemon *Daemon) createSpec(c *container.Container) (*libcontainerd.Spec, e

 	if apparmor.IsEnabled() {
 		appArmorProfile := "docker-default"
-		if c.HostConfig.Privileged {
-			appArmorProfile = "unconfined"
-		} else if len(c.AppArmorProfile) > 0 {
+		if len(c.AppArmorProfile) > 0 {
 			appArmorProfile = c.AppArmorProfile
+		} else if c.HostConfig.Privileged {
+			appArmorProfile = "unconfined"
 		}
 		s.Process.ApparmorProfile = appArmorProfile
 	}
@@ -131,7 +131,6 @@ func (daemon *Daemon) containerStart(container *container.Container) (err error)
 		return err
 	}

-	defer daemon.LogContainerEvent(container, "start") // this is logged even on error
 	if err := daemon.containerd.Create(container.ID, *spec, libcontainerd.WithRestartManager(container.RestartManager(true))); err != nil {
 		// if we receive an internal error from the initial start of a container then lets
 		// return it instead of entering the restart loop
@@ -149,6 +148,9 @@ func (daemon *Daemon) containerStart(container *container.Container) (err error)
 		}

 		container.Reset(false)
+
+		// start event is logged even on error
+		daemon.LogContainerEvent(container, "start")
 		return err
 	}

@@ -174,8 +176,10 @@ func (daemon *Daemon) Cleanup(container *container.Container) {
 		daemon.unregisterExecCommand(container, eConfig)
 	}

-	if err := container.UnmountVolumes(false, daemon.LogVolumeEvent); err != nil {
-		logrus.Warnf("%s cleanup: Failed to umount volumes: %v", container.ID, err)
+	if container.BaseFS != "" {
+		if err := container.UnmountVolumes(false, daemon.LogVolumeEvent); err != nil {
+			logrus.Warnf("%s cleanup: Failed to umount volumes: %v", container.ID, err)
+		}
 	}
 	container.CancelAttachContext()
 }
@@ -14,6 +14,7 @@ import (
 	"github.com/Sirupsen/logrus"
 	"github.com/docker/docker/container"
 	"github.com/docker/docker/pkg/pubsub"
+	sysinfo "github.com/docker/docker/pkg/system"
 	"github.com/docker/engine-api/types"
 	"github.com/opencontainers/runc/libcontainer/system"
 )
@@ -35,6 +36,11 @@ func (daemon *Daemon) newStatsCollector(interval time.Duration) *statsCollector
 		clockTicksPerSecond: uint64(system.GetClockTicks()),
 		bufReader:           bufio.NewReaderSize(nil, 128),
 	}
+	meminfo, err := sysinfo.ReadMemInfo()
+	if err == nil && meminfo.MemTotal > 0 {
+		s.machineMemory = uint64(meminfo.MemTotal)
+	}
+
 	go s.run()
 	return s
 }
@@ -47,6 +53,7 @@ type statsCollector struct {
 	clockTicksPerSecond uint64
 	publishers          map[*container.Container]*pubsub.Publisher
 	bufReader           *bufio.Reader
+	machineMemory       uint64
 }

 // collect registers the container with the collector and adds it to
@@ -330,7 +330,20 @@ func (ld *v1LayerDescriptor) Download(ctx context.Context, progressOutput progre
 	logrus.Debugf("Downloaded %s to tempfile %s", ld.ID(), ld.tmpFile.Name())

 	ld.tmpFile.Seek(0, 0)
-	return ld.tmpFile, ld.layerSize, nil
+
+	// hand off the temporary file to the download manager, so it will only
+	// be closed once
+	tmpFile := ld.tmpFile
+	ld.tmpFile = nil
+
+	return ioutils.NewReadCloserWrapper(tmpFile, func() error {
+		tmpFile.Close()
+		err := os.RemoveAll(tmpFile.Name())
+		if err != nil {
+			logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name())
+		}
+		return err
+	}), ld.layerSize, nil
 }

 func (ld *v1LayerDescriptor) Close() {
@@ -278,7 +278,19 @@ func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progre
 		ld.verifier = nil
 		return nil, 0, xfer.DoNotRetry{Err: err}
 	}
-	return tmpFile, size, nil
+
+	// hand off the temporary file to the download manager, so it will only
+	// be closed once
+	ld.tmpFile = nil
+
+	return ioutils.NewReadCloserWrapper(tmpFile, func() error {
+		tmpFile.Close()
+		err := os.RemoveAll(tmpFile.Name())
+		if err != nil {
+			logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name())
+		}
+		return err
+	}), size, nil
 }

 func (ld *v2LayerDescriptor) Close() {
@@ -121,6 +121,10 @@ func (ls *mockLayerStore) GetMountID(string) (string, error) {
 	return "", errors.New("not implemented")
 }

+func (ls *mockLayerStore) ReinitRWLayer(layer.RWLayer) error {
+	return errors.New("not implemented")
+}
+
 func (ls *mockLayerStore) Cleanup() error {
 	return nil
 }
@@ -74,5 +74,9 @@ func (cli *DaemonCli) getPlatformRemoteOptions() []libcontainerd.RemoteOption {
 	} else {
 		opts = append(opts, libcontainerd.WithStartDaemon(true))
 	}
+	if daemon.UsingSystemd(cli.Config) {
+		args := []string{"--systemd-cgroup=true"}
+		opts = append(opts, libcontainerd.WithRuntimeArgs(args))
+	}
 	return opts
 }
@@ -1,17 +1,8 @@
-FROM docs/base:latest
+FROM docs/base:oss
 MAINTAINER Mary Anthony <mary@docker.com> (@moxiegirl)

-RUN svn checkout https://github.com/docker/compose/trunk/docs /docs/content/compose
-RUN svn checkout https://github.com/docker/swarm/trunk/docs /docs/content/swarm
-RUN svn checkout https://github.com/docker/machine/trunk/docs /docs/content/machine
-RUN svn checkout https://github.com/docker/distribution/trunk/docs /docs/content/registry
-RUN svn checkout https://github.com/docker/notary/trunk/docs /docs/content/notary
-RUN svn checkout https://github.com/docker/kitematic/trunk/docs /docs/content/kitematic
-RUN svn checkout https://github.com/docker/toolbox/trunk/docs /docs/content/toolbox
-RUN svn checkout https://github.com/docker/opensource/trunk/docs /docs/content/opensource
-
 ENV PROJECT=engine
 # To get the git info for this repo
 COPY . /src

 RUN rm -r /docs/content/$PROJECT/
 COPY . /docs/content/$PROJECT/
@@ -24,9 +24,8 @@ HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER
 HUGO_BIND_IP=0.0.0.0

 GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
-DOCKER_IMAGE := docker$(if $(GIT_BRANCH),:$(GIT_BRANCH))
-DOCKER_DOCS_IMAGE := docs-base$(if $(GIT_BRANCH),:$(GIT_BRANCH))
-
+GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
+DOCKER_DOCS_IMAGE := docker-docs$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))

 DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE
@@ -1,150 +0,0 @@
-<!--[metadata]>
-+++
-aliases = ["/engine/articles/cfengine/"]
-title = "Process management with CFEngine"
-description = "Managing containerized processes with CFEngine"
-keywords = ["cfengine, process, management, usage, docker, documentation"]
-[menu.main]
-parent = "engine_admin"
-+++
-<![end-metadata]-->
-
-# Process management with CFEngine
-
-Create Docker containers with managed processes.
-
-Docker monitors one process in each running container and the container
-lives or dies with that process. By introducing CFEngine inside Docker
-containers, we can alleviate a few of the issues that may arise:
-
- - It is possible to easily start multiple processes within a
-   container, all of which will be managed automatically, with the
-   normal `docker run` command.
- - If a managed process dies or crashes, CFEngine will start it again
-   within 1 minute.
- - The container itself will live as long as the CFEngine scheduling
-   daemon (cf-execd) lives. With CFEngine, we are able to decouple the
-   life of the container from the uptime of the service it provides.
-
-## How it works
-
-CFEngine, together with the cfe-docker integration policies, are
-installed as part of the Dockerfile. This builds CFEngine into our
-Docker image.
-
-The Dockerfile's `ENTRYPOINT` takes an arbitrary
-amount of commands (with any desired arguments) as parameters. When we
-run the Docker container these parameters get written to CFEngine
-policies and CFEngine takes over to ensure that the desired processes
-are running in the container.
-
-CFEngine scans the process table for the `basename` of the commands given
-to the `ENTRYPOINT` and runs the command to start the process if the `basename`
-is not found. For example, if we start the container with
-`docker run "/path/to/my/application parameters"`, CFEngine will look for a
-process named `application` and run the command. If an entry for `application`
-is not found in the process table at any point in time, CFEngine will execute
-`/path/to/my/application parameters` to start the application once again. The
-check on the process table happens every minute.
-
-Note that it is therefore important that the command to start your
-application leaves a process with the basename of the command. This can
-be made more flexible by making some minor adjustments to the CFEngine
-policies, if desired.
-
-## Usage
-
-This example assumes you have Docker installed and working. We will
-install and manage `apache2` and `sshd`
-in a single container.
-
-There are three steps:
-
-1. Install CFEngine into the container.
-2. Copy the CFEngine Docker process management policy into the
-   containerized CFEngine installation.
-3. Start your application processes as part of the `docker run` command.
-
-### Building the image
-
-The first two steps can be done as part of a Dockerfile, as follows.
-
-    FROM ubuntu
-    MAINTAINER Eystein Måløy Stenberg <eytein.stenberg@gmail.com>
-
-    RUN apt-get update && apt-get install -y wget lsb-release unzip ca-certificates
-
-    # install latest CFEngine
-    RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -
-    RUN echo "deb http://cfengine.com/pub/apt $(lsb_release -cs) main" > /etc/apt/sources.list.d/cfengine-community.list
-    RUN apt-get update && apt-get install -y cfengine-community
-
-    # install cfe-docker process management policy
-    RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/
-    RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/
-    RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/
-    RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip
-
-    # apache2 and openssh are just for testing purposes, install your own apps here
-    RUN apt-get update && apt-get install -y openssh-server apache2
-    RUN mkdir -p /var/run/sshd
-    RUN echo "root:password" | chpasswd  # need a password for ssh
-
-    ENTRYPOINT ["/var/cfengine/bin/docker_processes_run.sh"]
-
-By saving this file as Dockerfile to a working directory, you can then build
-your image with the docker build command, e.g.,
-`docker build -t managed_image`.
-
-### Testing the container
-
-Start the container with `apache2` and `sshd` running and managed, forwarding
-a port to our SSH instance:
-
-    $ docker run -p 127.0.0.1:222:22 -d managed_image "/usr/sbin/sshd" "/etc/init.d/apache2 start"
-
-We now clearly see one of the benefits of the cfe-docker integration: it
-allows to start several processes as part of a normal `docker run` command.
-
-We can now log in to our new container and see that both `apache2` and `sshd`
-are running. We have set the root password to "password" in the Dockerfile
-above and can use that to log in with ssh:
-
-    ssh -p222 root@127.0.0.1
-
-    ps -ef
-    UID        PID  PPID  C STIME TTY          TIME CMD
-    root         1     0  0 07:48 ?        00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start
-    root        18     1  0 07:48 ?        00:00:00 /var/cfengine/bin/cf-execd -F
-    root        20     1  0 07:48 ?        00:00:00 /usr/sbin/sshd
-    root        32     1  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
-    www-data    34    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
-    www-data    35    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
-    www-data    36    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
-    root        93    20  0 07:48 ?        00:00:00 sshd: root@pts/0
-    root       105    93  0 07:48 pts/0    00:00:00 -bash
-    root       112   105  0 07:49 pts/0    00:00:00 ps -ef
-
-If we stop apache2, it will be started again within a minute by
-CFEngine.
-
-    service apache2 status
-     Apache2 is running (pid 32).
-    service apache2 stop
-     * Stopping web server apache2 ... waiting    [ OK ]
-    service apache2 status
-     Apache2 is NOT running.
-    # ... wait up to 1 minute...
-    service apache2 status
-     Apache2 is running (pid 173).
-
-## Adapting to your applications
-
-To make sure your applications get managed in the same manner, there are
-just two things you need to adjust from the above example:
-
- - In the Dockerfile used above, install your applications instead of
-   `apache2` and `sshd`.
- - When you start the container with `docker run`,
-   specify the command line arguments to your applications rather than
-   `apache2` and `sshd`.
@@ -48,7 +48,7 @@ Some of the daemon's options are:
 | `--tls=false` | Enable or disable TLS. By default, this is false. |


-Here is a an example of running the `docker` daemon with configuration options:
+Here is an example of running the `docker` daemon with configuration options:

     $ docker daemon -D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376
@@ -5,7 +5,6 @@ description = "Describes how to use the etwlogs logging driver."
 keywords = ["ETW, docker, logging, driver"]
 [menu.main]
 parent = "smn_logging"
-weight=2
 +++
 <![end-metadata]-->
@@ -6,7 +6,6 @@ description = "Describes how to use the fluentd logging driver."
 keywords = ["Fluentd, docker, logging, driver"]
 [menu.main]
 parent = "smn_logging"
-weight=2
 +++
 <![end-metadata]-->

@@ -90,7 +89,7 @@ and [its documents](http://docs.fluentd.org/).
 To use this logging driver, start the `fluentd` daemon on a host. We recommend
 that you use [the Fluentd docker
 image](https://hub.docker.com/r/fluent/fluentd/). This image is
-especially useful if you want to aggregate multiple container logs on a each
+especially useful if you want to aggregate multiple container logs on each
 host then, later, transfer the logs to another Fluentd node to create an
 aggregate store.
@@ -5,7 +5,6 @@ description = "Describes how to use the Google Cloud Logging driver."
 keywords = ["gcplogs, google, docker, logging, driver"]
 [menu.main]
 parent = "smn_logging"
-weight = 2
 +++
 <![end-metadata]-->
@@ -1,12 +1,11 @@
 <!--[metadata]>
 +++
 aliases = ["/engine/reference/logging/journald/"]
-title = "journald logging driver"
+title = "Journald logging driver"
 description = "Describes how to use the fluentd logging driver."
-keywords = ["Fluentd, docker, logging, driver"]
+keywords = ["Journald, docker, logging, driver"]
 [menu.main]
 parent = "smn_logging"
 weight = 2
 +++
 <![end-metadata]-->
@@ -6,7 +6,7 @@ description = "Describes how to format tags for."
 keywords = ["docker, logging, driver, syslog, Fluentd, gelf, journald"]
 [menu.main]
 parent = "smn_logging"
-weight = 1
+weight = -1
 +++
 <![end-metadata]-->
@@ -6,7 +6,7 @@ description = "Configure logging driver."
 keywords = ["docker, logging, driver, Fluentd"]
 [menu.main]
 parent = "smn_logging"
-weight=-1
+weight=-99
 +++
 <![end-metadata]-->

@@ -15,10 +15,13 @@ weight=-1

 The container can have a different logging driver than the Docker daemon. Use
 the `--log-driver=VALUE` with the `docker run` command to configure the
-container's logging driver. The following options are supported:
+container's logging driver. If the `--log-driver` option is not set, docker
+uses the default (`json-file`) logging driver. The following options are
+supported:

-| `none` | Disables any logging for the container. `docker logs` won't be available with this driver. |
+| Driver | Description |
+|-------------|-------------------------------------------------------------------------------------------------------------------------------|
+| `none` | Disables any logging for the container. `docker logs` won't be available with this driver. |
 | `json-file` | Default logging driver for Docker. Writes JSON messages to file. |
 | `syslog` | Syslog logging driver for Docker. Writes log messages to syslog. |
 | `journald` | Journald logging driver for Docker. Writes log messages to `journald`. |
@@ -32,40 +35,58 @@ container's logging driver. The following options are supported:
 The `docker logs` command is available only for the `json-file` and `journald`
 logging drivers.

-The `labels` and `env` options add additional attributes for use with logging drivers that accept them. Each option takes a comma-separated list of keys. If there is collision between `label` and `env` keys, the value of the `env` takes precedence.
+The `labels` and `env` options add additional attributes for use with logging
+drivers that accept them. Each option takes a comma-separated list of keys. If
+there is collision between `label` and `env` keys, the value of the `env` takes
+precedence.

-To use attributes, specify them when you start the Docker daemon.
+To use attributes, specify them when you start the Docker daemon. For example,
+to manually start the daemon with the `json-file` driver, and include additional
+attributes in the output, run the following command:

-```
-docker daemon --log-driver=json-file --log-opt labels=foo --log-opt env=foo,fizz
+```bash
+$ docker daemon \
+    --log-driver=json-file \
+    --log-opt labels=foo \
+    --log-opt env=foo,fizz
 ```

-Then, run a container and specify values for the `labels` or `env`. For example, you might use this:
+Then, run a container and specify values for the `labels` or `env`. For
+example, you might use this:

-```
-docker run --label foo=bar -e fizz=buzz -d -P training/webapp python app.py
+```bash
+$ docker run -dit --label foo=bar -e fizz=buzz alpine sh
 ```

 This adds additional fields to the log depending on the driver, e.g. for
 `json-file` that looks like:

-    "attrs":{"fizz":"buzz","foo":"bar"}
+```json
+"attrs":{"fizz":"buzz","foo":"bar"}
+```


 ## json-file options

 The following logging options are supported for the `json-file` logging driver:

-    --log-opt max-size=[0-9+][k|m|g]
-    --log-opt max-file=[0-9+]
-    --log-opt labels=label1,label2
-    --log-opt env=env1,env2
+```bash
+--log-opt max-size=[0-9+][k|m|g]
+--log-opt max-file=[0-9+]
+--log-opt labels=label1,label2
+--log-opt env=env1,env2
+```

-Logs that reach `max-size` are rolled over. You can set the size in kilobytes(k), megabytes(m), or gigabytes(g). eg `--log-opt max-size=50m`. If `max-size` is not set, then logs are not rolled over.
+Logs that reach `max-size` are rolled over. You can set the size in
+kilobytes (k), megabytes (m), or gigabytes (g), e.g. `--log-opt max-size=50m`. If
+`max-size` is not set, then logs are not rolled over.

-`max-file` specifies the maximum number of files that a log is rolled over before being discarded. eg `--log-opt max-file=100`. If `max-size` is not set, then `max-file` is not honored.
+`max-file` specifies the maximum number of files that a log is rolled over
+before being discarded, e.g. `--log-opt max-file=100`. If `max-size` is not set,
+then `max-file` is not honored.

-If `max-size` and `max-file` are set, `docker logs` only returns the log lines from the newest log file.
+If `max-size` and `max-file` are set, `docker logs` only returns the log lines
+from the newest log file.
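As an illustrative combination of these rotation options (the image and the size values here are arbitrary placeholders, not taken from this documentation), a container could be started like this:

```bash
$ docker run -dit \
    --log-driver=json-file \
    --log-opt max-size=10m \
    --log-opt max-file=5 \
    alpine sh
```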


 ## syslog options

@@ -82,17 +103,20 @@ The following logging options are supported for the `syslog` logging driver:
     --log-opt tag="mailer"
     --log-opt syslog-format=[rfc5424|rfc3164]

-`syslog-address` specifies the remote syslog server address where the driver connects to.
-If not specified it defaults to the local unix socket of the running system.
-If transport is either `tcp` or `udp` and `port` is not specified it defaults to `514`
-The following example shows how to have the `syslog` driver connect to a `syslog`
-remote server at `192.168.0.42` on port `123`
+`syslog-address` specifies the remote syslog server address where the driver
+connects to. If not specified, it defaults to the local unix socket of the
+running system. If the transport is either `tcp` or `udp` and `port` is not
+specified, it defaults to `514`. The following example shows how to have the
+`syslog` driver connect to a `syslog` remote server at `192.168.0.42` on port
+`123`:

-    $ docker run --log-driver=syslog --log-opt syslog-address=tcp://192.168.0.42:123
+```bash
+$ docker run --log-driver=syslog --log-opt syslog-address=tcp://192.168.0.42:123
+```

-The `syslog-facility` option configures the syslog facility. By default, the system uses the
-`daemon` value. To override this behavior, you can provide an integer of 0 to 23 or any of
-the following named facilities:
+The `syslog-facility` option configures the syslog facility. By default, the
+system uses the `daemon` value. To override this behavior, you can provide an
+integer of 0 to 23 or any of the following named facilities:

 * `kern`
 * `user`
@@ -116,18 +140,19 @@ the following named facilities:
 * `local7`

 `syslog-tls-ca-cert` specifies the absolute path to the trust certificates
-signed by the CA. This option is ignored if the address protocol is not `tcp+tls`.
+signed by the CA. This option is ignored if the address protocol is not
+`tcp+tls`.

-`syslog-tls-cert` specifies the absolute path to the TLS certificate file.
+`syslog-tls-cert` specifies the absolute path to the TLS certificate file. This
+option is ignored if the address protocol is not `tcp+tls`.

+`syslog-tls-key` specifies the absolute path to the TLS key file. This option
+is ignored if the address protocol is not `tcp+tls`.

+`syslog-tls-skip-verify` configures the TLS verification. This verification is
+enabled by default, but it can be overridden by setting this option to `true`.
+This option is ignored if the address protocol is not `tcp+tls`.

-`syslog-tls-key` specifies the absolute path to the TLS key file.
-This option is ignored if the address protocol is not `tcp+tls`.

-`syslog-tls-skip-verify` configures the TLS verification.
-This verification is enabled by default, but it can be overriden by setting
-this option to `true`. This option is ignored if the address protocol is not `tcp+tls`.
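As an illustrative sketch of combining these TLS options (the server address and the certificate path are placeholders, not values from this documentation), a hypothetical invocation might look like:

```bash
$ docker run -dit \
    --log-driver=syslog \
    --log-opt syslog-address=tcp+tls://192.168.0.42:6514 \
    --log-opt syslog-tls-ca-cert=/etc/ca-certificates/custom/ca.pem \
    alpine sh
```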

 By default, Docker uses the first 12 characters of the container ID to tag log messages.
 Refer to the [log tag option documentation](log_tags.md) for customizing
 the log tag format.

@@ -137,34 +162,40 @@ If not specified it defaults to the local unix syslog format without hostname sp
 Specify rfc3164 to perform logging in RFC-3164 compatible format. Specify rfc5424 to perform
 logging in RFC-5424 compatible format.


 ## journald options

-The `journald` logging driver stores the container id in the journal's `CONTAINER_ID` field. For detailed information on
-working with this logging driver, see [the journald logging driver](journald.md)
-reference documentation.
+The `journald` logging driver stores the container id in the journal's
+`CONTAINER_ID` field. For detailed information on working with this logging
+driver, see [the journald logging driver](journald.md) reference documentation.
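As a hypothetical end-to-end sketch (the container name is a placeholder, and the `CONTAINER_NAME` journal field used for filtering is an assumption about the journal metadata, alongside the `CONTAINER_ID` field mentioned above), a container's messages can then be read back on the host with `journalctl`:

```bash
$ docker run -d --log-driver=journald --name web alpine ping 127.0.0.1
$ journalctl CONTAINER_NAME=web
```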

-## gelf options
+## GELF options

 The GELF logging driver supports the following options:

-    --log-opt gelf-address=udp://host:port
-    --log-opt tag="database"
-    --log-opt labels=label1,label2
-    --log-opt env=env1,env2
-    --log-opt gelf-compression-type=gzip
-    --log-opt gelf-compression-level=1
+```bash
+--log-opt gelf-address=udp://host:port
+--log-opt tag="database"
+--log-opt labels=label1,label2
+--log-opt env=env1,env2
+--log-opt gelf-compression-type=gzip
+--log-opt gelf-compression-level=1
+```

 The `gelf-address` option specifies the remote GELF server address that the
-driver connects to. Currently, only `udp` is supported as the transport and you must
-specify a `port` value. The following example shows how to connect the `gelf`
-driver to a GELF remote server at `192.168.0.42` on port `12201`
+driver connects to. Currently, only `udp` is supported as the transport and you
+must specify a `port` value. The following example shows how to connect the
+`gelf` driver to a GELF remote server at `192.168.0.42` on port `12201`:

-    $ docker run --log-driver=gelf --log-opt gelf-address=udp://192.168.0.42:12201
+```bash
+$ docker run -dit \
+    --log-driver=gelf \
+    --log-opt gelf-address=udp://192.168.0.42:12201 \
+    alpine sh
+```

-By default, Docker uses the first 12 characters of the container ID to tag log messages.
-Refer to the [log tag option documentation](log_tags.md) for customizing
-the log tag format.
+By default, Docker uses the first 12 characters of the container ID to tag log
+messages. Refer to the [log tag option documentation](log_tags.md) for
+customizing the log tag format.

 The `labels` and `env` options are supported by the gelf logging
 driver. It adds additional key on the `extra` fields, prefixed by an

@@ -179,14 +210,15 @@ The `gelf-compression-type` option can be used to change how the GELF driver
 compresses each log message. The accepted values are `gzip`, `zlib` and `none`.
 `gzip` is chosen by default.

-The `gelf-compression-level` option can be used to change the level of compresssion
-when `gzip` or `zlib` is selected as `gelf-compression-type`. Accepted value
-must be from from -1 to 9 (BestCompression). Higher levels typically
-run slower but compress more. Default value is 1 (BestSpeed).
+The `gelf-compression-level` option can be used to change the level of
+compression when `gzip` or `zlib` is selected as `gelf-compression-type`.
+Accepted values must be from -1 to 9 (BestCompression). Higher levels
+typically run slower but compress more. Default value is 1 (BestSpeed).
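The compression options could, for instance, be combined with the address option as follows (the server address is a placeholder):

```bash
$ docker run -dit \
    --log-driver=gelf \
    --log-opt gelf-address=udp://192.168.0.42:12201 \
    --log-opt gelf-compression-type=zlib \
    --log-opt gelf-compression-level=6 \
    alpine sh
```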

-## fluentd options
+## Fluentd options

-You can use the `--log-opt NAME=VALUE` flag to specify these additional Fluentd logging driver options.
+You can use the `--log-opt NAME=VALUE` flag to specify these additional Fluentd
+logging driver options.

 - `fluentd-address`: specify `host:port` to connect [localhost:24224]
 - `tag`: specify tag for `fluentd` message

@@ -197,7 +229,13 @@ You can use the `--log-opt NAME=VALUE` flag to specify these additional Fluentd

 For example, to specify both additional options:

-`docker run --log-driver=fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag=docker.{{.Name}}`
+```bash
+$ docker run -dit \
+    --log-driver=fluentd \
+    --log-opt fluentd-address=localhost:24224 \
+    --log-opt tag="docker.{{.Name}}" \
+    alpine sh
+```

 If a container cannot connect to the Fluentd daemon on the specified address and
 `fluentd-async-connect` is not enabled, the container stops immediately.
@ -205,42 +243,51 @@ For detailed information on working with this logging driver,

see [the fluentd logging driver](fluentd.md)

## Amazon CloudWatch Logs options

The Amazon CloudWatch Logs logging driver supports the following options:

```bash
--log-opt awslogs-region=<aws_region>
--log-opt awslogs-group=<log_group_name>
--log-opt awslogs-stream=<log_stream_name>
```

For detailed information on working with this logging driver, see [the awslogs
logging driver](awslogs.md) reference documentation.

## Splunk options

The Splunk logging driver requires the following options:

```bash
--log-opt splunk-token=<splunk_http_event_collector_token>
--log-opt splunk-url=https://your_splunk_instance:8088
```

For detailed information about working with this logging driver, see the
[Splunk logging driver](splunk.md) reference documentation.

## ETW logging driver options

The etwlogs logging driver does not require any options to be specified. This
logging driver forwards each log message as an ETW event. An ETW listener
can then be created to listen for these events.

The ETW logging driver is only available on Windows. For detailed information
on working with this logging driver, see [the ETW logging driver](etwlogs.md)
reference documentation.

## Google Cloud Logging options

The Google Cloud Logging driver supports the following options:

```bash
--log-opt gcp-project=<gcp_project>
--log-opt labels=<label1>,<label2>
--log-opt env=<envvar1>,<envvar2>
--log-opt log-cmd=true
```

For detailed information about working with this logging driver, see the
[Google Cloud Logging driver](gcplogs.md) reference documentation.

@ -6,7 +6,6 @@ description = "Describes how to use the Splunk logging driver."

keywords = ["splunk, docker, logging, driver"]
[menu.main]
parent = "smn_logging"
weight = 2
+++
<![end-metadata]-->

@ -33,7 +33,7 @@ more details about the `docker stats` command.

## Control groups

Linux Containers rely on
[control groups](https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt)
which not only track groups of processes, but also expose metrics about
CPU, memory, and block I/O usage. You can access those metrics and
obtain network usage metrics as well. This is relevant for "pure" LXC

@ -256,7 +256,7 @@ compatibility reasons.

Block I/O is accounted in the `blkio` controller.
Different metrics are scattered across different files. While you can
find in-depth details in the
[blkio-controller](https://www.kernel.org/doc/Documentation/cgroup-v1/blkio-controller.txt)
file in the kernel documentation, here is a short list of the most
relevant ones:

@ -33,15 +33,19 @@ If you want Docker to start at boot, you should also:

There are a number of ways to configure the daemon flags and environment variables
for your Docker daemon.

The recommended way is to use a systemd drop-in file (as described in
the <a target="_blank"
href="https://www.freedesktop.org/software/systemd/man/systemd.unit.html">systemd.unit</a>
documentation). These are local files named `<something>.conf` in the
`/etc/systemd/system/docker.service.d` directory. This could also be
`/etc/systemd/system/docker.service`, which also works for overriding
the defaults from `/lib/systemd/system/docker.service`.

However, if you had previously used a package which had an
`EnvironmentFile` (often pointing to `/etc/sysconfig/docker`) then for
backwards compatibility, you drop a file with a `.conf` extension into
the `/etc/systemd/system/docker.service.d` directory including the
following:

    [Service]
    EnvironmentFile=-/etc/sysconfig/docker
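As a sketch, such a drop-in can be generated from the shell. The file is written to the current directory purely for illustration; a real drop-in lives under `/etc/systemd/system/docker.service.d/` and writing there normally requires root:

```shell
# Illustrative only: a real drop-in directory is
# /etc/systemd/system/docker.service.d/.
dropin_dir="./docker.service.d"
mkdir -p "$dropin_dir"

# Any file name works as long as it ends in .conf.
cat > "$dropin_dir/env.conf" <<'EOF'
[Service]
EnvironmentFile=-/etc/sysconfig/docker
EOF

# On a real system you would then reload systemd and restart Docker:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
cat "$dropin_dir/env.conf"
```

The leading `-` in `EnvironmentFile=-...` tells systemd not to fail if the file is absent.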
@ -56,14 +60,14 @@ directory including the following:

To check if the `docker.service` uses an `EnvironmentFile`:

    $ systemctl show docker | grep EnvironmentFile
    EnvironmentFile=-/etc/sysconfig/docker (ignore_errors=yes)

Alternatively, find out where the service file is located:

    $ systemctl show --property=FragmentPath docker
    FragmentPath=/usr/lib/systemd/system/docker.service
    $ grep EnvironmentFile /usr/lib/systemd/system/docker.service
    EnvironmentFile=-/etc/sysconfig/docker

You can customize the Docker daemon options using override files as explained in the
@ -119,7 +123,7 @@ If you fail to specify an empty configuration, Docker reports an error such as:

This example overrides the default `docker.service` file.

If you are behind an HTTP proxy server, for example in corporate settings,
you will need to add this configuration in the Docker systemd service file.

First, create a systemd drop-in directory for the docker service:

@ -143,7 +147,7 @@ Flush changes:

Verify that the configuration has been loaded:

    $ systemctl show --property=Environment docker
    Environment=HTTP_PROXY=http://proxy.example.com:80/
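The output of `systemctl show` is plain `Key=Value` text, so it is easy to post-process in the shell. This sketch strips the property name from a sample line matching the output above:

```shell
# Sample line as printed by `systemctl show --property=Environment docker`.
line='Environment=HTTP_PROXY=http://proxy.example.com:80/'

# Strip the property name to recover the variable assignment itself.
assignment=${line#Environment=}
echo "$assignment"
```

The same pattern works for any other property, such as `FragmentPath` shown earlier.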

Restart Docker:

@ -15,104 +15,140 @@ parent = "engine_admin"

> - **If you don't like sudo** then see [*Giving non-root
> access*](../installation/binaries.md#giving-non-root-access)

Traditionally a Docker container runs a single process when it is launched, for
example an Apache daemon or an SSH server daemon. Often though you want to run
more than one process in a container. There are a number of ways you can
achieve this ranging from using a simple Bash script as the value of your
container's `CMD` instruction to installing a process management tool.

In this example you're going to make use of the process management tool,
[Supervisor](http://supervisord.org/), to manage multiple processes in a
container. Using Supervisor allows you to better control, manage, and restart
the processes inside the container. To demonstrate this we're going to install
and manage both an SSH daemon and an Apache daemon.

## Creating a Dockerfile

Let's start by creating a basic `Dockerfile` for our new image.

```Dockerfile
FROM ubuntu:16.04
MAINTAINER examples@docker.com
```

## Installing Supervisor

You can now install the SSH and Apache daemons as well as Supervisor in the
container.

```Dockerfile
RUN apt-get update && apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
```

The first `RUN` instruction installs the `openssh-server`, `apache2` and
`supervisor` (which provides the Supervisor daemon) packages. The next `RUN`
instruction creates four new directories that are needed to run the SSH daemon
and Supervisor.

## Adding Supervisor's configuration file

Now let's add a configuration file for Supervisor. The default file is called
`supervisord.conf` and is located in `/etc/supervisor/conf.d/`.

```Dockerfile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
```

Let's see what is inside the `supervisord.conf` file.

```ini
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
```
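As a quick sanity check outside of Docker, you can generate this configuration from the shell and confirm that every `[program:x]` block is paired with a `command=` directive. The scratch file path is illustrative:

```shell
# Write the supervisord.conf shown above to a scratch location.
cat > ./supervisord.conf <<'EOF'
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
EOF

# Each [program:x] section should declare exactly one command= directive.
programs=$(grep -c '^\[program:' ./supervisord.conf)
commands=$(grep -c '^command=' ./supervisord.conf)
echo "programs: $programs, commands: $commands"
```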

The `supervisord.conf` configuration file contains directives that configure
Supervisor and the processes it manages. The first block `[supervisord]`
provides configuration for Supervisor itself. The `nodaemon` directive is used,
which tells Supervisor to run interactively rather than daemonize.

The next two blocks manage the services we wish to control. Each block controls
a separate process. The blocks contain a single directive, `command`, which
specifies what command to run to start each process.

## Exposing ports and running Supervisor

Now let's finish the `Dockerfile` by exposing some required ports and
specifying the `CMD` instruction to start Supervisor when our container
launches.

```Dockerfile
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
```

These instructions tell Docker that ports 22 and 80 are exposed by the
container and that the `/usr/bin/supervisord` binary should be executed when
the container launches.

## Building our image

Your completed Dockerfile now looks like this:

```Dockerfile
FROM ubuntu:16.04
MAINTAINER examples@docker.com

RUN apt-get update && apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
```

And your `supervisord.conf` file looks like this:

```ini
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
```

You can now build the image using this command:

```bash
$ docker build -t mysupervisord .
```

## Running your Supervisor container

Once you have built your image you can launch a container from it.

```bash
$ docker run -p 22 -p 80 -t -i mysupervisord
2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2013-11-25 18:53:22,342 INFO supervisord started with pid 1
2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6
2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7
...
```

You launched a new container interactively using the `docker run` command.
That container has run Supervisor and launched the SSH and Apache daemons with
it. We've specified the `-p` flag to expose ports 22 and 80. From here we can
now identify the exposed ports and connect to one or both of the SSH and Apache

@ -21,6 +21,11 @@ The following list of features are deprecated in Engine.

The `docker login` command is removing the ability to automatically register for an account with the target registry if the given username doesn't exist. Due to this change, the email flag is no longer required, and will be deprecated.

### Separator (`:`) of `--security-opt` flag on `docker run`
**Deprecated In Release: v1.11**

**Target For Removal In Release: v1.13**

The flag `--security-opt` doesn't use the colon separator (`:`) anymore to divide keys and values; it uses the equals symbol (`=`) for consistency with other similar flags, like `--storage-opt`.
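A hypothetical migration helper for scripts still using the old form could rewrite the separator with `sed`; the `label:user:...` value below is just an example of the old syntax:

```shell
# Old colon-separated form (deprecated): key and value joined by ':'.
old='--security-opt label:user:svirt_lxc_net_t'

# Rewrite only the first separator after the option key to '='.
new=$(printf '%s' "$old" | sed 's/^--security-opt \([a-z-]*\):/--security-opt \1=/')
echo "$new"
```

Only the separator between the key and its value changes; colons inside the value itself (as in SELinux labels) are untouched.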

### Ambiguous event fields in API

@ -79,15 +84,14 @@ Because of which, the driver specific log tag options `syslog-tag`, `gelf-tag` a

### LXC built-in exec driver
**Deprecated In Release: v1.8**

**Removed In Release: v1.10**

The built-in LXC execution driver, the lxc-conf flag, and API fields have been removed.

### Old Command Line Options
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**

**Removed In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**

The flags `-d` and `--daemon` are deprecated in favor of the `daemon` subcommand:

@ -133,17 +137,6 @@ The following double-dash options are deprecated and have no replacement:

    docker ps --before-id
    docker search --trusted

### Auto-creating missing host paths for bind mounts
**Deprecated in Release: v1.9**

**Target for Removal in Release: 1.11**

When creating a container with a bind-mounted volume (`docker run -v /host/path:/container/path`),
Docker was automatically creating the `/host/path` if it didn't already exist.

This auto-creation of the host path is deprecated and Docker will error out if
the path does not exist.

### Interacting with V1 registries

Version 1.9 adds a flag (`--disable-legacy-registry=false`) which prevents the docker daemon from `pull`, `push`, and `login` operations against v1 registries. Though disabled by default, this signals the intent to deprecate the v1 protocol.

@ -39,7 +39,7 @@ Starting Couchbase Server -- Web UI available at http://<ip>:8091

> Docker using Docker machine, you can obtain the IP address
> of the Docker host using `docker-machine ip <MACHINE-NAME>`.

The logs show that the Couchbase console can be accessed at `http://192.168.99.100:8091`. The default username is `Administrator` and the password is `password`.

## Configure Couchbase Docker container

@ -228,7 +228,7 @@ cbq> select * from `travel-sample` limit 1;

[Couchbase Web Console](http://developer.couchbase.com/documentation/server/4.1/admin/ui-intro.html) is a console that allows you to manage a Couchbase instance. It can be seen at:

`http://192.168.99.100:8091/`

Make sure to replace the IP address with the IP address of your Docker Machine or `localhost` if Docker is running locally.

@ -17,7 +17,6 @@ This section contains the following:

* [Dockerizing MongoDB](mongodb.md)
* [Dockerizing PostgreSQL](postgresql_service.md)
* [Dockerizing a CouchDB service](couchdb_data_volumes.md)
* [Dockerizing a Node.js web app](nodejs_web_app.md)
* [Dockerizing a Redis service](running_redis_service.md)
* [Dockerizing an apt-cacher-ng service](apt-cacher-ng.md)
* [Dockerizing applications: A 'Hello world'](../userguide/containers/dockerizing.md)

@ -1,199 +0,0 @@

<!--[metadata]>
+++
title = "Dockerizing a Node.js web app"
description = "Installing and running a Node.js app with Docker"
keywords = ["docker, example, package installation, node, centos"]
[menu.main]
parent = "engine_dockerize"
+++
<![end-metadata]-->

# Dockerizing a Node.js web app

> **Note**:
> - **If you don't like sudo** then see [*Giving non-root
> access*](../installation/binaries.md#giving-non-root-access)

The goal of this example is to show you how you can build your own
Docker images from a parent image using a `Dockerfile`. We will do that
by making a simple Node.js hello world web application running on
CentOS. You can get the full source code at
[https://github.com/enokd/docker-node-hello/](https://github.com/enokd/docker-node-hello/).

## Create Node.js app

First, create a directory `src` where all the files
will live. Then create a `package.json` file that
describes your app and its dependencies:

    {
      "name": "docker-centos-hello",
      "private": true,
      "version": "0.0.1",
      "description": "Node.js Hello world app on CentOS using docker",
      "author": "Daniel Gasienica <daniel@gasienica.ch>",
      "dependencies": {
        "express": "3.2.4"
      }
    }

Then, create an `index.js` file that defines a web
app using the [Express.js](http://expressjs.com/) framework:

    var express = require('express');

    // Constants
    var PORT = 8080;

    // App
    var app = express();
    app.get('/', function (req, res) {
      res.send('Hello world\n');
    });

    app.listen(PORT);
    console.log('Running on http://localhost:' + PORT);

In the next steps, we'll look at how you can run this app inside a
CentOS container using Docker. First, you'll need to build a Docker
image of your app.

## Creating a Dockerfile

Create an empty file called `Dockerfile`:

    touch Dockerfile

Open the `Dockerfile` in your favorite text editor.

Define the parent image you want to use to build your own image on
top of. Here, we'll use
[CentOS](https://hub.docker.com/_/centos/) (tag: `centos6`)
available on the [Docker Hub](https://hub.docker.com/):

    FROM centos:centos6

Since we're building a Node.js app, you'll have to install Node.js as
well as npm on your CentOS image. Node.js is required to run your app
and npm is required to install your app's dependencies defined in
`package.json`. To install the right package for CentOS, we'll use the
instructions from the [Node.js wiki](https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager#rhelcentosscientific-linux-6):

    # Enable Extra Packages for Enterprise Linux (EPEL) for CentOS
    RUN yum install -y epel-release
    # Install Node.js and npm
    RUN yum install -y nodejs npm

Install your app dependencies using the `npm` binary:

    # Install app dependencies
    COPY package.json /src/package.json
    RUN cd /src; npm install --production

To bundle your app's source code inside the Docker image, use the `COPY`
instruction:

    # Bundle app source
    COPY . /src

Your app binds to port `8080` so you'll use the `EXPOSE` instruction to have
it mapped by the `docker` daemon:

    EXPOSE 8080

Last but not least, define the command to run your app using `CMD` which
defines your runtime, i.e. `node`, and the path to our app, i.e. `src/index.js`
(see the step where we added the source to the container):

    CMD ["node", "/src/index.js"]

Your `Dockerfile` should now look like this:

    FROM centos:centos6

    # Enable Extra Packages for Enterprise Linux (EPEL) for CentOS
    RUN yum install -y epel-release
    # Install Node.js and npm
    RUN yum install -y nodejs npm

    # Install app dependencies
    COPY package.json /src/package.json
    RUN cd /src; npm install --production

    # Bundle app source
    COPY . /src

    EXPOSE 8080
    CMD ["node", "/src/index.js"]

## Building your image

Go to the directory that has your `Dockerfile` and run the following command
to build a Docker image. The `-t` flag lets you tag your image so it's easier
to find later using the `docker images` command:

    $ docker build -t <your username>/centos-node-hello .

Your image will now be listed by Docker:

    $ docker images

    # Example
    REPOSITORY                          TAG        ID              CREATED
    centos                              centos6    539c0211cd76    8 weeks ago
    <your username>/centos-node-hello   latest     d64d3505b0d2    2 hours ago

## Run the image

Running your image with `-d` runs the container in detached mode, leaving the
container running in the background. The `-p` flag redirects a public port to
a private port in the container. Run the image you previously built:

    $ docker run -p 49160:8080 -d <your username>/centos-node-hello

Print the output of your app:

    # Get container ID
    $ docker ps

    # Print app output
    $ docker logs <container id>

    # Example
    Running on http://localhost:8080

## Test

To test your app, get the port of your app that Docker mapped:

    $ docker ps

    # Example
    ID            IMAGE                                      COMMAND              ...   PORTS
    ecce33b30ebf  <your username>/centos-node-hello:latest   node /src/index.js   ...   49160->8080

In the example above, Docker mapped the `8080` port of the container to `49160`.
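The `PORTS` column uses the plain-text form `<public>-><private>`, so the public side can be pulled out with shell parameter expansion; the sample value mirrors the mapping shown above:

```shell
# Sample `docker ps` PORTS value (illustrative).
ports='49160->8080'

# Everything before the '->' is the public (host) port.
public=${ports%%->*}
echo "curl -i localhost:${public}"
```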

Now you can call your app using `curl` (install it if needed via
`sudo apt-get install curl`):

    $ curl -i localhost:49160

    HTTP/1.1 200 OK
    X-Powered-By: Express
    Content-Type: text/html; charset=utf-8
    Content-Length: 12
    Date: Sun, 02 Jun 2013 03:53:22 GMT
    Connection: keep-alive

    Hello world

If you use Docker Machine on OS X, the port is actually mapped to the Docker
host VM, and you should use the following command:

    $ curl $(docker-machine ip VM_NAME):49160

We hope this tutorial helped you get up and running with Node.js and
CentOS on Docker. You can get the full source code at
[https://github.com/enokd/docker-node-hello/](https://github.com/enokd/docker-node-hello/).

@ -1,7 +1,7 @@

<!--[metadata]>
+++
title = "Dockerizing a Redis service"
description = "Installing and running a redis service"
keywords = ["docker, example, package installation, networking, redis"]
[menu.main]
parent = "engine_dockerize"

@ -169,9 +169,10 @@ Responds with a list of Docker subsystems which this plugin implements.

After activation, the plugin will then be sent events from this subsystem.

Possible values are:

* [`authz`](plugins_authorization.md)
* [`NetworkDriver`](plugins_network.md)
* [`VolumeDriver`](plugins_volume.md)
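The activation response is a small JSON document listing the implemented subsystems, for example `{"Implements": ["VolumeDriver"]}`. A sketch of how a caller might check which subsystem a plugin advertises (the response value here is illustrative):

```shell
# Hypothetical /Plugin.Activate response advertising the volume subsystem.
response='{"Implements": ["VolumeDriver"]}'

# The daemon routes volume requests only to plugins that advertise VolumeDriver.
if echo "$response" | grep -q '"VolumeDriver"'; then
  echo "plugin handles volume requests"
fi
```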

## Plugin retries

@ -22,7 +22,7 @@ example, a [volume plugin](plugins_volume.md) might enable Docker

volumes to persist across multiple Docker hosts and a
[network plugin](plugins_network.md) might provide network plumbing.

Currently Docker supports authorization, volume and network driver plugins. In the future it
will support additional plugin types.

## Installing a plugin

@ -31,78 +31,48 @@ Follow the instructions in the plugin's documentation.

## Finding a plugin

The sections below provide an inexhaustive overview of available plugins.

<style>
#content tr td:first-child { white-space: nowrap;}
</style>

### Network plugins

Plugin | Description
------ | -----------
[Contiv Networking](https://github.com/contiv/netplugin) | An open source network plugin to provide infrastructure and security policies for a multi-tenant micro services deployment, while providing an integration to physical network for non-container workload. Contiv Networking implements the remote driver and IPAM APIs available in Docker 1.9 onwards.
[Kuryr Network Plugin](https://github.com/openstack/kuryr) | A network plugin developed as part of the OpenStack Kuryr project. It implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service, and includes an IPAM driver as well.
[Weave Network Plugin](https://www.weave.works/docs/net/latest/introducing-weave/) | A network plugin that creates a virtual network that connects your Docker containers across multiple hosts or clouds and enables automatic discovery of applications. Weave networks are resilient, partition tolerant, secure, and work in partially connected networks and other adverse environments, all configured with delightful simplicity.

### Volume plugins

Plugin | Description
------ | -----------
[Azure File Storage plugin](https://github.com/Azure/azurefile-dockervolumedriver) | Lets you mount Microsoft [Azure File Storage](https://azure.microsoft.com/blog/azure-file-storage-now-generally-available/) shares to Docker containers as volumes using the SMB 3.0 protocol. [Learn more](https://azure.microsoft.com/blog/persistent-docker-volumes-with-azure-file-storage/).
|
||||
[Blockbridge plugin](https://github.com/blockbridge/blockbridge-docker-volume) | A volume plugin that provides access to an extensible set of container-based persistent storage options. It supports single and multi-host Docker environments with features that include tenant isolation, automated provisioning, encryption, secure deletion, snapshots and QoS.
|
||||
[Contiv Volume Plugin](https://github.com/contiv/volplugin) | An open source volume plugin that provides multi-tenant, persistent, distributed storage with intent based consumption using ceph underneath.
|
||||
[Convoy plugin](https://github.com/rancher/convoy) | A volume plugin for a variety of storage back-ends including device mapper and NFS. It's a simple standalone executable written in Go and provides the framework to support vendor-specific extensions such as snapshots, backups and restore.
|
||||
[DRBD plugin](https://www.drbd.org/en/supported-projects/docker) | A volume plugin that provides highly available storage replicated by [DRBD](https://www.drbd.org). Data written to the docker volume is replicated in a cluster of DRBD nodes.
|
||||
[Flocker plugin](https://clusterhq.com/docker-plugin/) | A volume plugin that provides multi-host portable volumes for Docker, enabling you to run databases and other stateful containers and move them around across a cluster of machines.
|
||||
[gce-docker plugin](https://github.com/mcuadros/gce-docker) | A volume plugin able to attach, format and mount Google Compute [persistent-disks](https://cloud.google.com/compute/docs/disks/persistent-disks).
|
||||
[GlusterFS plugin](https://github.com/calavera/docker-volume-glusterfs) | A volume plugin that provides multi-host volumes management for Docker using GlusterFS.
|
||||
[Horcrux Volume Plugin](https://github.com/muthu-r/horcrux) | A volume plugin that allows on-demand, version controlled access to your data. Horcrux is an open-source plugin, written in Go, and supports SCP, [Minio](https://www.minio.io) and Amazon S3.
|
||||
[IPFS Volume Plugin](http://github.com/vdemeester/docker-volume-ipfs) | An open source volume plugin that allows using an [ipfs](https://ipfs.io/) filesystem as a volume.
|
||||
[Keywhiz plugin](https://github.com/calavera/docker-volume-keywhiz) | A plugin that provides credentials and secret management using Keywhiz as a central repository.
|
||||
[Local Persist Plugin](https://github.com/CWSpear/local-persist) | A volume plugin that extends the default `local` driver's functionality by allowing you specify a mountpoint anywhere on the host, which enables the files to *always persist*, even if the volume is removed via `docker volume rm`.
|
||||
[NetApp Plugin](https://github.com/NetApp/netappdvp) (nDVP) | A volume plugin that provides direct integration with the Docker ecosystem for the NetApp storage portfolio. The nDVP package supports the provisioning and management of storage resources from the storage platform to Docker hosts, with a robust framework for adding additional platforms in the future.
|
||||
[Netshare plugin](https://github.com/ContainX/docker-volume-netshare) | A volume plugin that provides volume management for NFS 3/4, AWS EFS and CIFS file systems.
|
||||
[OpenStorage Plugin](https://github.com/libopenstorage/openstorage) | A cluster-aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few.
|
||||
[Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) | A volume plugin that connects Docker to [Quobyte](http://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform.
|
||||
[REX-Ray plugin](https://github.com/emccode/rexray) | A volume plugin which is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.
|
||||
[VMware vSphere Storage Plugin](https://github.com/vmware/docker-volume-vsphere) | Docker Volume Driver for vSphere enables customers to address persistent storage requirements for Docker containers in vSphere environments.
|
||||
|
||||
* The [IPFS Volume Plugin](http://github.com/vdemeester/docker-volume-ipfs)
|
||||
is an open source volume plugin that allows using an
|
||||
[ipfs](https://ipfs.io/) filesystem as a volume.
|
||||
### Authorization plugins
|
||||
|
||||
* The [Keywhiz plugin](https://github.com/calavera/docker-volume-keywhiz) is
|
||||
a plugin that provides credentials and secret management using Keywhiz as
|
||||
a central repository.
|
||||
|
||||
* The [Netshare plugin](https://github.com/gondor/docker-volume-netshare) is a volume plugin
|
||||
that provides volume management for NFS 3/4, AWS EFS and CIFS file systems.
|
||||
|
||||
* The [OpenStorage Plugin](https://github.com/libopenstorage/openstorage) is a cluster aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few.
|
||||
|
||||
* The [Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) connects Docker to [Quobyte](http://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform.
|
||||
|
||||
* The [REX-Ray plugin](https://github.com/emccode/rexray) is a volume plugin
|
||||
which is written in Go and provides advanced storage functionality for many
|
||||
platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.
|
||||
|
||||
* The [Contiv Volume Plugin](https://github.com/contiv/volplugin) is an open
|
||||
source volume plugin that provides multi-tenant, persistent, distributed storage
|
||||
with intent based consumption using ceph underneath.
|
||||
|
||||
* The [Contiv Networking](https://github.com/contiv/netplugin) is an open source
|
||||
libnetwork plugin to provide infrastructure and security policies for a
|
||||
multi-tenant micro services deployment, while providing an integration to
|
||||
physical network for non-container workload. Contiv Networking implements the
|
||||
remote driver and IPAM APIs available in Docker 1.9 onwards.
|
||||
|
||||
* The [Weave Network Plugin](http://docs.weave.works/weave/latest_release/plugin.html)
|
||||
creates a virtual network that connects your Docker containers -
|
||||
across multiple hosts or clouds and enables automatic discovery of
|
||||
applications. Weave networks are resilient, partition tolerant,
|
||||
secure and work in partially connected networks, and other adverse
|
||||
environments - all configured with delightful simplicity.
|
||||
|
||||
* The [Kuryr Network Plugin](https://github.com/openstack/kuryr) is
|
||||
developed as part of the OpenStack Kuryr project and implements the
|
||||
Docker networking (libnetwork) remote driver API by utilizing
|
||||
Neutron, the OpenStack networking service. It includes an IPAM
|
||||
driver as well.
|
||||
|
||||
* The [Local Persist Plugin](https://github.com/CWSpear/local-persist)
|
||||
extends the default `local` driver's functionality by allowing you specify
|
||||
a mountpoint anywhere on the host, which enables the files to *always persist*,
|
||||
even if the volume is removed via `docker volume rm`.
|
||||
Plugin | Description
|
||||
------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
[Twistlock AuthZ Broker](https://github.com/twistlock/authz) | A basic extendable authorization plugin that runs directly on the host or inside a container. This plugin allows you to define user policies that it evaluates during authorization. Basic authorization is provided if Docker daemon is started with the --tlsverify flag (username is extracted from the certificate common name).
|
||||
|
||||
## Troubleshooting a plugin
@@ -149,10 +149,9 @@ should implement the following two methods:
"User": "The user identification",
"UserAuthNMethod": "The authentication method used",
"RequestMethod": "The HTTP method",
"RequestURI": "The HTTP request URI",
"RequestBody": "Byte array containing the raw HTTP request body",
"RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string"
}
```

@@ -174,10 +173,9 @@ should implement the following two methods:
"User": "The user identification",
"UserAuthNMethod": "The authentication method used",
"RequestMethod": "The HTTP method",
"RequestURI": "The HTTP request URI",
"RequestBody": "Byte array containing the raw HTTP request body",
"RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string",
"ResponseBody": "Byte array containing the raw HTTP response body",
"ResponseHeader": "Byte array containing the raw HTTP response header as a map[string][]string",
"ResponseStatusCode": "Response status code"

@@ -190,17 +188,10 @@ should implement the following two methods:
{
    "Allow": "Determined whether the user is allowed or not",
    "Msg": "The authorization message",
    "Err": "The error message if things go wrong"
}
```

### Request authorization

Each plugin must support two request authorization message formats: one from the daemon to the plugin, and one from the plugin to the daemon. The tables below detail the content expected in each message.

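To make the request and response message shapes concrete, here is a minimal sketch in Python of a plugin's decision logic. The deny-`exec` policy, the `authorize` name, and the example URIs are assumptions for illustration only; a real plugin receives the AuthZReq message as JSON over the plugin HTTP protocol and returns an AuthZRes message the same way.

```python
# Sketch of an AuthZ decision function over the AuthZReq/AuthZRes messages
# described above. The "deny exec" policy and the function name are
# illustrative assumptions, not part of the plugin API itself.

def authorize(req):
    """Return an AuthZRes-shaped dict for an AuthZReq-shaped dict."""
    uri = req.get("RequestURI", "")
    if "/exec" in uri:  # hypothetical policy: forbid `docker exec`
        return {"Allow": False, "Msg": "exec is not permitted", "Err": ""}
    return {"Allow": True, "Msg": "", "Err": ""}


# Example request message, shaped like the AuthZReq fields above:
decision = authorize({"User": "alice",
                      "RequestMethod": "POST",
                      "RequestURI": "/v1.23/containers/web/exec"})
```

A second plugin in a chain would receive the (optionally modified) response from the first, so keeping the decision function pure, as above, makes chaining straightforward.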
@@ -13,7 +13,7 @@ parent = "engine_extend"
Docker Engine network plugins enable Engine deployments to be extended to
support a wide range of networking technologies, such as VXLAN, IPVLAN, MACVLAN
or something completely different. Network driver plugins are supported via the
LibNetwork project. Each plugin is implemented as a "remote driver" for
LibNetwork, which shares plugin infrastructure with Engine. Effectively, network
driver plugins are activated in the same way as other plugins, and use the same
kind of protocol.

@@ -140,7 +140,8 @@ Docker needs reminding of the path to the volume on the host.
```

Respond with the path on the host filesystem where the volume has been made
available, and/or a string error if an error occurred. `Mountpoint` is optional;
however, the plugin may be queried again later if one is not provided.

### /VolumeDriver.Unmount

@@ -188,7 +189,8 @@ Get the volume info.
}
```

Respond with a string error if an error occurred. `Mountpoint` and `Status` are
optional.

### /VolumeDriver.List

@@ -213,4 +215,4 @@ Get the list of volumes registered with the plugin.
}
```

Respond with a string error if an error occurred. `Mountpoint` is optional.

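The response shapes for these volume driver endpoints can be sketched with a small in-process model. The in-memory volume table, names, and paths below are illustrative assumptions; a real driver returns these structures as JSON over its plugin socket.

```python
# Minimal sketch of the /VolumeDriver.Path and /VolumeDriver.List response
# shapes described above. The in-memory "volumes" store and all names/paths
# are made up for illustration and are not part of the protocol.

volumes = {"data": "/mnt/volumes/data"}  # hypothetical volume -> mountpoint

def volume_path(request):
    """Model a /VolumeDriver.Path reply for a {"Name": ...} request."""
    name = request["Name"]
    if name not in volumes:
        return {"Mountpoint": "", "Err": "no such volume: %s" % name}
    # Mountpoint is optional; returning it here avoids a later re-query.
    return {"Mountpoint": volumes[name], "Err": ""}

def volume_list(request):
    """Model a /VolumeDriver.List reply; Mountpoint per entry is optional."""
    return {"Volumes": [{"Name": n, "Mountpoint": m}
                        for n, m in sorted(volumes.items())],
            "Err": ""}
```

Note that every reply carries an `Err` field; an empty string signals success, which is why the happy paths above return `"Err": ""` explicitly.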
@@ -9,7 +9,7 @@ weight = 110
+++
<![end-metadata]-->

# Installation from binaries

**This instruction set is meant for hackers who want to try out Docker
on a variety of environments.**

@@ -85,90 +85,137 @@ exhibit unexpected behaviour.
> vendor for the system, and might break regulations and security
> policies in heavily regulated environments.

## Get the Docker Engine binaries

You can download either the latest release binaries or a specific version. To
get the list of stable release version numbers from GitHub, view the
`docker/docker` [releases page](https://github.com/docker/docker/releases).

> **Note**
>
> 1) You can get the MD5 and SHA256 hashes by appending `.md5` and `.sha256` to the URLs respectively.
>
> 2) You can get the compressed binaries by appending `.tgz` to the URLs.

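If you want to verify a download programmatically rather than by eye, the sketch below checks an archive against its published `.sha256` file. It assumes the checksum file uses the common `sha256sum` layout, with the digest as the first whitespace-separated field; the filenames are placeholders for whichever release you download.

```python
# Verify a downloaded archive against its published SHA256 checksum file.
# Filenames are placeholders; fetch docker-<version>.tgz and the matching
# .sha256 file first. The checksum-file layout (digest first) is an
# assumption based on the usual sha256sum format.
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Compute the hex SHA-256 digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify(archive, checksum_file):
    """Return True if the archive matches the digest in checksum_file."""
    with open(checksum_file) as f:
        expected = f.read().split()[0]
    return sha256_of(archive) == expected
```

Reading the file in chunks keeps memory use flat even for large release archives.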
### Get the Linux binaries

To download the latest version for Linux, use the
following URLs:

    https://get.docker.com/builds/Linux/i386/docker-latest.tgz

    https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz

To download a specific version for Linux, use the
following URL patterns:

    https://get.docker.com/builds/Linux/i386/docker-<version>.tgz

    https://get.docker.com/builds/Linux/x86_64/docker-<version>.tgz

For example:

    https://get.docker.com/builds/Linux/i386/docker-1.11.0.tgz

    https://get.docker.com/builds/Linux/x86_64/docker-1.11.0.tgz

> **Note** These instructions are for Docker Engine 1.11 and up. Engine 1.10 and
> under consists of a single binary, and instructions for those versions are
> different. To install version 1.10 or below, follow the instructions in the
> <a href="/v1.10/engine/installation/binaries/" target="_blank">1.10 documentation</a>.

#### Install the Linux binaries

After downloading, extract the archive, which puts the binaries in a
directory named `docker` in your current location.

```bash
$ tar -xvzf docker-latest.tgz

docker/
docker/docker-containerd-ctr
docker/docker
docker/docker-containerd
docker/docker-runc
docker/docker-containerd-shim
```

Engine requires these binaries to be installed in your host's `$PATH`.
For example, to install the binaries in `/usr/bin`:

```bash
$ mv docker/* /usr/bin/
```

> **Note**: If you already have Engine installed on your host, make sure you
> stop Engine before installing (`killall docker`), and install the binaries
> in the same location. You can find the location of the current installation
> with `dirname $(which docker)`.

#### Run the Engine daemon on Linux

You can manually start the Engine in daemon mode using:

```bash
$ sudo docker daemon &
```

The GitHub repository provides samples of init-scripts you can use to control
the daemon through a process manager, such as upstart or systemd. You can find
these scripts in the <a href="https://github.com/docker/docker/tree/master/contrib/init">
contrib directory</a>.

For additional information about running the Engine in daemon mode, refer to
the [daemon command](../reference/commandline/daemon.md) in the Engine command
line reference.

### Get the Mac OS X binary

The Mac OS X binary is only a client. You cannot use it to run the `docker`
daemon. To download the latest version for Mac OS X, use the following URL:

    https://get.docker.com/builds/Darwin/x86_64/docker-latest.tgz

To download a specific version for Mac OS X, use the
following URL pattern:

    https://get.docker.com/builds/Darwin/x86_64/docker-<version>.tgz

For example:

    https://get.docker.com/builds/Darwin/x86_64/docker-1.11.0.tgz

You can extract the downloaded archive either by double-clicking the downloaded
`.tgz` or on the command line, using `tar -xvzf docker-1.11.0.tgz`. The client
binary can be executed from any location on your filesystem.

### Get the Windows binary

You can only download the Windows binary for version `1.9.1` onwards.
Moreover, the 32-bit (`i386`) binary is only a client; you cannot use it to
run the `docker` daemon. The 64-bit binary (`x86_64`) is both a client and
daemon.

To download the latest version for Windows, use the following URLs:

    https://get.docker.com/builds/Windows/i386/docker-latest.zip

    https://get.docker.com/builds/Windows/x86_64/docker-latest.zip

To download a specific version for Windows, use the following URL pattern:

    https://get.docker.com/builds/Windows/i386/docker-<version>.zip

    https://get.docker.com/builds/Windows/x86_64/docker-<version>.zip

For example:

    https://get.docker.com/builds/Windows/i386/docker-1.11.0.zip

    https://get.docker.com/builds/Windows/x86_64/docker-1.11.0.zip

> **Note** These instructions are for Engine 1.11 and up. Instructions for older
> versions are slightly different. To install version 1.10 or below, follow the
> instructions in the <a href="/v1.10/engine/installation/binaries/" target="_blank">1.10 documentation</a>.

## Giving non-root access
@@ -188,21 +235,15 @@ need to add `sudo` to all the client commands.
> The *docker* group (or the group specified with `-G`) is root-equivalent;
> see [*Docker Daemon Attack Surface*](../security/security.md#docker-daemon-attack-surface) for details.

## Upgrade Docker Engine

To upgrade your manual installation of Docker Engine on Linux, first kill the docker
daemon:

    $ killall docker

Then follow the [regular installation steps](#get-the-linux-binaries).

## Next steps

Continue with the [User Guide](../userguide/index.md).

@@ -57,7 +57,7 @@ package manager.
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

@@ -16,7 +16,7 @@ Docker is supported on the following versions of Debian:

- [*Debian testing stretch (64-bit)*](#debian-wheezy-stable-7-x-64-bit)
- [*Debian 8.0 Jessie (64-bit)*](#debian-jessie-80-64-bit)
- [*Debian 7.7 Wheezy (64-bit)*](#debian-wheezy-stable-7-x-64-bit) (backports required)

>**Note**: If you previously installed Docker using `APT`, make sure you update
your `APT` sources to the new `APT` repository.

@@ -36,6 +36,26 @@ Docker is supported on the following versions of Debian:

    $ uname -r

Additionally, for users of Debian Wheezy, backports must be available. To enable backports in Wheezy:

1. Log into your machine and open a terminal with `sudo` or `root` privileges.

2. Open the `/etc/apt/sources.list.d/backports.list` file in your favorite editor.

    If the file doesn't exist, create it.

3. Remove any existing entries.

4. Add an entry for backports on Debian Wheezy.

    An example entry:

        deb http://http.debian.net/debian wheezy-backports main

5. Update package information:

        $ apt-get update

### Update your apt repository

Docker's `APT` repository contains Docker 1.7.1 and higher. To set `APT` to use

@@ -26,7 +26,7 @@ version, open a terminal and use `uname -r` to display your kernel version:

    $ uname -r
    3.19.5-100.fc21.x86_64

If your kernel is at an older version, you must update it.

Finally, it is recommended that you fully update your system. Please keep in
mind that your system should be fully patched to fix any potential kernel bugs. Any

@@ -186,7 +186,7 @@ You can uninstall the Docker software with `dnf`.

1. List the package you have installed.

        $ dnf list installed | grep docker
        docker-engine.x86_64     1.7.1-0.1.fc21     @/docker-engine-1.7.1-0.1.fc21.el7.x86_64

2. Remove the package.

@@ -29,13 +29,8 @@ btrfs storage engine on both Oracle Linux 6 and 7.
> follow the installation instructions provided in the
> [Oracle Linux documentation](https://docs.oracle.com/en/operating-systems/?tab=2).
>
> The installation instructions for Oracle Linux 6 and 7 can be found in [Chapter 2 of
> the Docker User's Guide](https://docs.oracle.com/cd/E52668_01/E75728/html/docker_install_upgrade.html)

1. Log into your machine as a user with `sudo` or `root` privileges.

@@ -14,6 +14,7 @@ weight = -6

Docker is supported on these Ubuntu operating systems:

- Ubuntu Xenial 16.04 (LTS)
- Ubuntu Wily 15.10
- Ubuntu Trusty 14.04 (LTS)
- Ubuntu Precise 12.04 (LTS)

@@ -85,6 +86,10 @@ packages from the new repository:

        deb https://apt.dockerproject.org/repo ubuntu-wily main

- Ubuntu Xenial 16.04 (LTS)

        deb https://apt.dockerproject.org/repo ubuntu-xenial main

> **Note**: Docker does not provide packages for all architectures. You can find
> nightly built binaries in https://master.dockerproject.org. To install docker on
> a multi-architecture system, add an `[arch=...]` clause to the entry. Refer to the

@@ -109,10 +114,11 @@ packages from the new repository:

### Prerequisites by Ubuntu Version

- Ubuntu Xenial 16.04 (LTS)
- Ubuntu Wily 15.10
- Ubuntu Trusty 14.04 (LTS)

For Ubuntu Trusty, Wily, and Xenial, it's recommended to install the
`linux-image-extra` kernel package. The `linux-image-extra` package
allows you to use the `aufs` storage driver.

@@ -385,7 +391,7 @@ To specify a DNS server for use by Docker:

5. Restart the Docker daemon.

        $ sudo service docker restart

@@ -85,7 +85,7 @@ Docker container using standard localhost addressing such as `localhost:8000` or

![Linux Architecture Diagram](images/linux_docker_host.svg)

In a Windows installation, the `docker` daemon is running inside a Linux virtual
machine. You use the Windows Docker client to talk to the Docker host VM. Your
Docker containers run inside this host.

Some files were not shown because too many files have changed in this diff.