After the addition of multi-host networking in Docker 1.9, the Docker Remote
API still returns only the network specified during creation of the
container in the “List Containers” (`/containers/json`) endpoint:
...
"HostConfig": {
"NetworkMode": "default"
},
The list of networks a container is attached to is only available from the
Get Container (`/containers/<id>/json`) endpoint.
This prevents applications that make use of multi-host networking from
being built on top of the Docker Remote API.
Therefore I added a simple `"NetworkSettings"` section to the
`/containers/json` endpoint. It is not identical to the NetworkSettings
returned by the Get Container (`/containers/<id>/json`) endpoint: it only
contains a single field, `"Networks"`, which is essentially the same
value shown in the inspect output of a container.
This change adds the following section to the `/containers/json` response:
"NetworkSettings": {
"Networks": {
"bridge": {
"EndpointID": "2cdc4edb1ded3631c81f57966563e...",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02"
}
}
}
This is of type `SummaryNetworkSettings`, a minimal version of
`api/types#NetworkSettings`.
Actually all I need are the network name and the IPAddress fields. If folks
find this addition too big, I can create a `SummaryEndpointSettings` type
as well, containing just the IPAddress field.
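For reference, a minimal Go sketch of what such a summary type could look
like (the field set is taken from the JSON above; the `EndpointSummary`
name is illustrative, not an upstream type):

    package types

    // EndpointSummary mirrors the per-network fields shown in the example
    // response above.
    type EndpointSummary struct {
        EndpointID          string
        Gateway             string
        IPAddress           string
        IPPrefixLen         int
        IPv6Gateway         string
        GlobalIPv6Address   string
        GlobalIPv6PrefixLen int
        MacAddress          string
    }

    // SummaryNetworkSettings carries only the Networks map, keyed by
    // network name (e.g. "bridge"), for the /containers/json response.
    type SummaryNetworkSettings struct {
        Networks map[string]*EndpointSummary
    }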
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Allow passing the mount propagation options shared, slave, or private as a
volume property.
For example:
docker run -ti -v /root/mnt-source:/root/mnt-dest:slave fedora bash
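The same bind spec should also be expressible through the Remote API's
`HostConfig.Binds`. A small Go sketch of an illustrative
POST /containers/create body (image and paths as in the example above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // host-path:container-path:propagation-mode, same format as the
        // -v flag above.
        body := map[string]interface{}{
            "Image": "fedora",
            "HostConfig": map[string]interface{}{
                "Binds": []string{"/root/mnt-source:/root/mnt-dest:slave"},
            },
        }
        out, _ := json.MarshalIndent(body, "", "  ")
        fmt.Println(string(out))
    }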
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This commit makes `docker network inspect` print container names, since
service discovery is based on the container name.
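A hedged sketch of the per-container entry in the inspect output after
this change (field names other than Name are assumptions for
illustration):

    package types

    // EndpointResource sketches one entry of the "Containers" map shown by
    // `docker network inspect`; Name is the container name used for
    // service discovery.
    type EndpointResource struct {
        Name        string
        EndpointID  string
        MacAddress  string
        IPv4Address string
        IPv6Address string
    }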
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
- So that they comply with the docker inspect convention, which allows
  camel case for JSON field names
Signed-off-by: Alessandro Boch <aboch@docker.com>
The --cluster-advertise daemon option is enhanced to support <interface-name>
in addition to <ip-address>, in order to make it automation friendly when
using docker-machine.
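For example (the store URL and port are placeholders):

    docker daemon --cluster-store=consul://192.168.99.100:8500 --cluster-advertise=eth0:2376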
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Introduced the --subnet, --ip-range and --gateway options in the docker
network command. Also, the user can allocate driver-specific IP addresses,
if any, using the --aux-address option.
Supports multiple subnets per network, and also sharing an IP range
across networks if the network driver and IPAM driver support it.
For example, the bridge driver doesn't support sharing the same IP range
across networks.
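For example (the subnet values and network name are illustrative):

    docker network create --subnet=192.168.10.0/24 --ip-range=192.168.10.128/25 --gateway=192.168.10.1 mynet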
Signed-off-by: Madhu Venugopal <madhu@docker.com>
* Moving Network Remote APIs out of experimental
* --net can now accept user created networks using network drivers/plugins
  (see the example after this list)
* Removed the experimental services concept and --default-network option
* Necessary backend changes to accommodate multiple networks per container
* Integration Tests
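For example (the network and image names are illustrative):

    docker network create -d bridge foo
    docker run -ti --net=foo busybox sh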
Signed-off-by: David Calavera <david.calavera@gmail.com>
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Use `pkg/discovery` to provide node discovery between daemon instances.
The functionality is driven by two different command-line flags: the
experimental `--cluster-store` (previously `--kv-store`) and
`--cluster-advertise`. It can be used in two ways by interested
components (an illustrative invocation follows the list):
1. Externally, by calling the `/info` API and examining the cluster store
field. The `pkg/discovery` package can then be used to hit the same
endpoint and watch for appearing or disappearing nodes. That is, for
example, the method that Swarm will use.
2. Internally, by using the `Daemon.discoveryWatcher` instance. That is,
for example, the method that libnetwork will use.
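Illustrative daemon invocation (the store URL and advertise address are
placeholders):

    docker daemon --cluster-store=consul://192.168.99.100:8500 --cluster-advertise=192.168.99.101:2376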
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Some structures use int for sizes and UNIX timestamps. On some
platforms, int is 32 bits, so this can lead to year 2038 issues and
overflows when dealing with large containers or layers.
Consistently use int64 to store sizes and UNIX timestamps in
api/types/types.go. Update related code accordingly (i.e.
strconv.FormatInt instead of strconv.Itoa).
Use int64 in the progressreader package to avoid integer overflow when
dealing with large quantities. Update related code accordingly.
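A small illustration of the pattern (values are arbitrary):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Sizes and UNIX timestamps held as int64 do not overflow on
        // platforms where int is 32 bits (large layers, post-2038 dates).
        var size int64 = 5 << 30     // 5 GiB; too large for a 32-bit int
        created := time.Now().Unix() // int64 seconds since the epoch
        fmt.Println(strconv.FormatInt(size, 10), strconv.FormatInt(created, 10))
    }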
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
[pkg/archive] Update archive/copy path handling
- Remove unused TarOptions.Name field.
- Add new TarOptions.RebaseNames field.
- Update some of the logic around path dir/base splitting.
- Update some of the logic behind archive entry name rebasing.
[api/types] Add LinkTarget field to PathStat
[daemon] Fix stat, archive, extract of symlinks
These operations *should* resolve symlinks that are in the path, but if the
resource itself is a symlink then it *should not* be resolved. This patch
puts this logic into a common function `resolvePath` which resolves symlinks
of the path's dir within the scope of the container rootfs but does not
resolve the final element of the path. Now archive, extract, and stat
operations will return symlinks if the path is indeed a symlink.
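A hedged sketch of the idea using only the standard library (the daemon's
actual `resolvePath` performs the resolution within the container's rootfs):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // resolvePathSketch resolves symlinks in the directory portion of the
    // path but never in the final element, so callers see the symlink
    // itself rather than its target.
    func resolvePathSketch(path string) (string, error) {
        dir, base := filepath.Split(filepath.Clean(path))
        resolvedDir, err := filepath.EvalSymlinks(dir)
        if err != nil {
            return "", err
        }
        return filepath.Join(resolvedDir, base), nil
    }

    func main() {
        resolved, err := resolvePathSketch("/var/log/syslog")
        fmt.Println(resolved, err)
    }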
[api/client] Update cp path handling
[docs/reference/api] Update description of stat
Add the linkTarget field to the header of the archive endpoint.
Remove the path field.
[integration-cli] Fix/Add cp symlink test cases
Copying a symlink should do just that: copy the symlink, NOT the target
of the symlink. Also, the resulting file from the copy should have the
name of the symlink, NOT the name of the target file.
Copying to a symlink should copy to the symlink target and not
modify the symlink itself.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
In 1.6.2 we were decoding the inspect API response into interface{}.
time.Time fields were JSON-encoded as RFC3339Nano in the response,
and when decoded into interface{} they were just strings, so the inspect
template treated them as plain strings.
From 1.7 we are decoding into types.ContainerJSON, and when the template
gets executed it now receives a time.Time which is formatted as
2015-07-22 05:02:38.091530369 +0000 UTC.
This patch brings back the old behavior by typing time.Time fields
as string, so they get formatted as they were encoded in JSON -- RFC3339Nano.
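A minimal illustration of the difference (the struct names are made up):

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    type asTime struct{ Created time.Time }
    type asString struct{ Created string }

    func main() {
        data := []byte(`{"Created":"2015-07-22T05:02:38.091530369Z"}`)

        var t asTime
        json.Unmarshal(data, &t)
        fmt.Println(t.Created) // 2015-07-22 05:02:38.091530369 +0000 UTC

        var s asString
        json.Unmarshal(data, &s)
        fmt.Println(s.Created) // 2015-07-22T05:02:38.091530369Z, as encoded
    }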
Signed-off-by: Antonio Murdaca <runcom@linux.com>