These tests were enabled by changing a config option on the CI machines
rather than through a patch, so let me disable them for now on ppc64le and
open another patch to enable them, where I can find out what the issues
with them are.
Signed-off-by: Christopher Jones <tophj@linux.vnet.ibm.com>
Since the update to Debian Stretch, this test fails. The reason is that the
test binary is dynamically linked and requires the i386 ld.so for loading
(which is apparently no longer installed by default):
> root@09d4b173c3dc:/go/src/github.com/docker/docker# file exit32-test
> exit32-test: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, BuildID[sha1]=a0d3d6cb59788453b983f65f8dc6ac52920147b6, stripped
> root@09d4b173c3dc:/go/src/github.com/docker/docker# ls -l /lib/ld-linux.so.2
> ls: cannot access '/lib/ld-linux.so.2': No such file or directory
To fix, just add -static.
Interestingly, ldd can't figure it out:
> root@a324f8edfcaa:/go/src/github.com/docker/docker# ldd exit32-test
> not a dynamic executable
Other tools (e.g. objdump) do show that it is a dynamic binary.
While at it, remove the extra "id" argument (a copy-paste error, I guess).
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
When running against a remote daemon, we cannot use the local
filesystem to determine configuration.
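For illustration, a minimal sketch of the kind of guard this implies; the helper name and the DOCKER_HOST heuristic are assumptions for the example, not the actual test code:
```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// daemonIsLocal guesses whether the daemon under test shares our filesystem.
// With a remote DOCKER_HOST (e.g. tcp://...), local files say nothing about
// the daemon's configuration, so it must be queried over the API instead.
func daemonIsLocal() bool {
	host := os.Getenv("DOCKER_HOST")
	return host == "" || strings.HasPrefix(host, "unix://")
}

func main() {
	if daemonIsLocal() {
		fmt.Println("local daemon: filesystem checks are fine")
	} else {
		fmt.Println("remote daemon: determine configuration via the API")
	}
}
```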
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
This reverts commit 7e3a596a63.
Unfortunately, it was pointed out in https://github.com/moby/moby/pull/29076#commitcomment-21831387
that the `socketcall` syscall takes a pointer to a struct, so it is not
possible to filter it with seccomp profiles. This means these socket syscalls
cannot be blocked: as we currently allow 32-bit syscalls, they can be invoked
through `socketcall` regardless. Users who wish to block them should use a
seccomp profile that blocks all 32-bit syscalls and then block just the
non-socketcall versions.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
Add some required command operators to the `cli` package, and update
some tests to use this package, in order to remove a few functions
from `docker_utils_test.go`
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Linux supports many obsolete address families, which are usually available
in common distro kernels but are less likely to have been properly audited
and may have security issues.
This blocks all socket families in the socket (and, where applicable,
socketcall) syscall except:
- AF_UNIX - Unix domain sockets
- AF_INET - IPv4
- AF_INET6 - IPv6
- AF_NETLINK - Netlink sockets for communicating with the kernel
- AF_PACKET - raw sockets, which are only allowed with CAP_NET_RAW
All other socket families are blocked, including AppleTalk (native, not
over IP), IPX (remember that!), VSOCK and HVSOCK, none of which should
generally be needed in containers.
Note that users can of course provide a profile per container or in the daemon
config if they have unusual use cases that require these.
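As a hedged illustration only (the real restriction lives in the generated seccomp profile, not in Go code like this), the allow-list amounts to something like the following, using the `AF_*` constants from `golang.org/x/sys/unix`:
```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// allowedSocketFamily mirrors the allow-list described above; AF_PACKET is
// additionally gated on CAP_NET_RAW in the profile itself.
func allowedSocketFamily(domain int) bool {
	switch domain {
	case unix.AF_UNIX, unix.AF_INET, unix.AF_INET6, unix.AF_NETLINK, unix.AF_PACKET:
		return true
	default:
		// AppleTalk, IPX, VSOCK, HVSOCK and the rest are rejected
		return false
	}
}

func main() {
	fmt.Println(allowedSocketFamily(unix.AF_INET))      // true
	fmt.Println(allowedSocketFamily(unix.AF_APPLETALK)) // false
}
```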
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
- Remove deprecated buildImage* functions
- Rename buildImageNew to buildImage
- Use *check.C in fakeContext* setup and in getIdByName
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
… to make sure it doesn't fail. It also introduces StartWithError,
StopWithError and RestartWithError for the cases where we care about the
error (and want the error to happen).
This removes the need to check for the error and makes the intent clearer:
I want a daemon with busybox loaded on it; if an error occurs it should fail
the test, but it is not the test code's responsibility to check that.
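A rough sketch of the pattern, assuming a simplified `Daemon` helper (not the actual test suite code):
```go
package daemon

import "gopkg.in/check.v1" // go-check, as used by the integration-cli tests

// Daemon stands in for the integration test daemon helper.
type Daemon struct{ /* ... */ }

// StartWithError launches the daemon and returns any startup error, for the
// tests that care about the error (or want it to happen).
func (d *Daemon) StartWithError(args ...string) error {
	// start the daemon process here; illustrative only
	return nil
}

// Start wraps StartWithError and fails the test itself on error, so a test
// that just needs a running daemon never has to check the error.
func (d *Daemon) Start(c *check.C, args ...string) {
	c.Assert(d.StartWithError(args...), check.IsNil)
}
```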
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This fix improves the error messages the daemon returns for `--cpus`.
When `docker run` is given `--cpus`, the value is translated into NanoCPUs
and passed to the daemon. NanoCPUs are not visible to the user, so an error
message expressed in terms of 'NanoCPU' may cause confusion.
This fix addresses the issue by returning the error in CPUs instead.
This fix fixes 28456.
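A small sketch of the conversion the message now performs; the helper and wording below are illustrative, not the daemon's exact error text:
```go
package main

import "fmt"

// cpusError turns the internal NanoCPUs value back into the CPUs number the
// user actually typed before complaining about it.
func cpusError(nanoCPUs int64, availableCPUs int) error {
	return fmt.Errorf("invalid CPUs value %.2f: only %d CPUs are available",
		float64(nanoCPUs)/1e9, availableCPUs)
}

func main() {
	fmt.Println(cpusError(3000000000, 2)) // e.g. --cpus 3 on a 2-CPU host
}
```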
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Until we can support the existing behaviour with `sudo`, disable ambient
capabilities in the runc build.
Add tests that a non-root user cannot use the default capabilities, and
that capabilities are working as expected.
Test for #27590.
Update runc.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
This fix tries to address the proposal raised in 27921 and adds a `--cpus`
flag to `docker run/create`.
Basically, `--cpus` allows the user to specify a (possibly fractional) number
of CPUs for the container to use. For example, on a 2-CPU system
`--cpus 1.5` means the container will take 75% (1.5/2) of the CPU share.
This fix adds a `NanoCPUs` field to `HostConfig`, since swarmkit already has
a concept of NanoCPUs for tasks. The `--cpus` flag translates the number into
the reused `NanoCPUs` representation to stay consistent.
This fix adds integration tests to cover the changes.
Related docs (`docker run` and the Remote API) have been updated.
This fix fixes 27921.
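A hedged sketch of the translation (the real flag parsing may differ, e.g. in how it handles precision):
```go
package main

import (
	"fmt"
	"strconv"
)

// parseCPUs converts the --cpus string into the NanoCPUs value carried in
// HostConfig: 1 CPU == 1e9 NanoCPUs, so "1.5" becomes 1500000000.
func parseCPUs(value string) (int64, error) {
	cpus, err := strconv.ParseFloat(value, 64)
	if err != nil {
		return 0, fmt.Errorf("%q is not a valid number of CPUs: %v", value, err)
	}
	return int64(cpus * 1e9), nil
}

func main() {
	nano, err := parseCPUs("1.5")
	if err != nil {
		panic(err)
	}
	fmt.Println(nano) // 1500000000, i.e. 75% of a 2-CPU machine
}
```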
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Linux kernel 4.3 and later support "ambient capabilities", which are the
only way to pass capabilities to containers running as a non-root uid.
Previously there was no useful way to grant capabilities to containers that
do not run as root.
Fix #8460
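For illustration, this is roughly what ambient capabilities look like at the OCI spec level, using the runtime-spec Go types (the chosen capability and UID are arbitrary examples):
```go
package main

import (
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	caps := []string{"CAP_NET_BIND_SERVICE"}
	// For a non-root UID, only capabilities raised in the ambient set
	// (Linux >= 4.3) survive execve without file capabilities.
	process := specs.Process{
		User: specs.User{UID: 1000, GID: 1000},
		Capabilities: &specs.LinuxCapabilities{
			Bounding:    caps,
			Permitted:   caps,
			Inheritable: caps,
			Effective:   caps,
			Ambient:     caps,
		},
	}
	fmt.Printf("%+v\n", *process.Capabilities)
}
```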
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
No substantial code change.
- Api --> API
- Cli --> CLI
- Http, Https --> HTTP, HTTPS
- Id --> ID
- Uid, Gid, Pid --> UID, GID, PID
- Ipam --> IPAM
- Tls --> TLS (TestDaemonNoTlsCliTlsVerifyWithEnv --> TestDaemonTLSVerifyIssue13964)
Didn't touch in this commit:
- Git: because it is officially "Git": https://git-scm.com/
- Tar: because it is officially "Tar": https://www.gnu.org/software/tar/
- Cpu, Nat, Mac, Ipc, Shm: to keep consistency with existing production code (not changeable, for compatibility)
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
This adds a small C binary for fighting zombies. It is mounted under
`/dev/init` and is prepended to the args specified by the user. You
enable it via a daemon flag, `dockerd --init`, as it is disabled by
default for backwards compatibility.
You can also override the daemon option, or specify this on a
per-container basis, with `docker run --init=true|false`.
You can test this by running a process like the one below as pid 1 in a
container and watching the extra zombie that appears in the container
while it is running.
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: fork a grandchild that exits immediately */
        pid = fork();
        if (pid == 0) {
            exit(0);
        }
        /* exit without reaping the grandchild, orphaning it onto pid 1 */
        sleep(3);
        exit(0);
    }
    /* nothing here calls wait(), so only an init process will reap the zombie */
    printf("got pid %d and exited\n", pid);
    sleep(20);
    return 0;
}
```
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
moves ensure-frozen-images to Go
moves ensure-syscall-test to Go
moves ensure-nnp-test to Go
moves ensure-httpserver to Go
Also makes some of the fixtures load only for the tests that require them.
This makes sure that fixtures that aren't needed for a test run such as
`make TESTFLAGS='-check.f Swarm' test-integration-cli`, like the syscall
tests, are not loaded.
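A minimal sketch of the lazy-loading idea, with an illustrative helper name (not the actual fixture code):
```go
package main

import (
	"fmt"
	"sync"
)

var syscallTestOnce sync.Once

// ensureSyscallTest builds the syscall-test fixture the first time a test
// that needs it runs; a run that never calls it (e.g. a Swarm-only filter)
// never pays the cost.
func ensureSyscallTest() {
	syscallTestOnce.Do(func() {
		fmt.Println("building syscall-test fixture...")
		// build and load the fixture images here
	})
}

func main() {
	ensureSyscallTest()
	ensureSyscallTest() // no-op on subsequent calls
}
```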
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Fix #24803, as this had been failing sometimes.
As the parallel tests are probably genuine failures, and had already been
cut down, I will re-create them specifically as a parallel execution test
with no seccomp, to make the cause clearer.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
On Ubuntu and Debian there is a sysctl that allows blocking
clone(CLONE_NEWUSER) for unprivileged users that do not have CAP_SYS_ADMIN,
via "sysctl kernel.unprivileged_userns_clone=0".
See: https://lists.ubuntu.com/archives/kernel-team/2016-January/067926.html
The DockerSuite.TestRunSeccompUnconfinedCloneUserns testcase fails if
"kernel.unprivileged_userns_clone" is set to 0:
    docker_cli_run_unix_test.go:1040:
        c.Fatalf("expected clone userns with --security-opt seccomp=unconfined to succeed, got %s: %v", out, err)
    ... Error: expected clone userns with --security-opt seccomp=unconfined
        to succeed, got clone failed: Operation not permitted
        : exit status 1
So add a check and skip the testcase if kernel.unprivileged_userns_clone is 0.
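A sketch of such a check, reading the sysctl through /proc (the real test helper may be structured differently):
```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// unprivilegedUsernsCloneEnabled reports whether unprivileged users may call
// clone(CLONE_NEWUSER). A missing file means the kernel does not carry the
// Ubuntu/Debian patch, so the sysctl imposes no restriction.
func unprivilegedUsernsCloneEnabled() bool {
	data, err := os.ReadFile("/proc/sys/kernel/unprivileged_userns_clone")
	if err != nil {
		return true
	}
	return strings.TrimSpace(string(data)) != "0"
}

func main() {
	if !unprivilegedUsernsCloneEnabled() {
		fmt.Println("SKIP: kernel.unprivileged_userns_clone is 0")
		return
	}
	fmt.Println("run TestRunSeccompUnconfinedCloneUserns")
}
```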
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
While testing #24510 I noticed that 32-bit syscalls were incorrectly being
blocked and we did not have a test for this, so this adds one.
This is only tested on amd64, as it is the only architecture that reliably
supports 32-bit code execution; others only do so sometimes.
There is no 32-bit libc in buildpack-deps, so we cannot easily build 32-bit
C code; instead, use the simplest possible assembly program, which just
calls the exit syscall.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
To implement seccomp for s390x the following changes are required:
1) seccomp_default: Add s390 compat mode
On s390x (64 bit) we can run s390 (32 bit) programs in 32 bit
compat mode. Therefore add this information to arches().
2) seccomp_default: Use the correct flags parameter for sys_clone on s390x
On s390x the second parameter of the clone system call is the flags
parameter; on all other architectures it is the first one.
See kernel code kernel/fork.c:
    #elif defined(CONFIG_CLONE_BACKWARDS2)
    SYSCALL_DEFINE5(clone, unsigned long, newsp, unsigned long, clone_flags,
                    int __user *, parent_tidptr,
So fix the docker default seccomp rule and check the second parameter on
s390/s390x (see the sketch after this list).
3) seccomp_default: Add s390 specific syscalls
For s390 we currently have three additional system calls that should
be added to the seccomp whitelist:
- On other architectures, PCI MMIO memory can be read and written from
unprivileged code. On s390 the corresponding instructions are privileged,
so dedicated system calls are needed for that purpose:
* s390_pci_mmio_write()
* s390_pci_mmio_read()
- Runtime instrumentation:
* s390_runtime_instr()
4) test_integration: Do not run seccomp default profile test on s390x
The generated profile that we check in is for amd64 and i386
architectures and does not work correctly on s390x.
See also: 75385dc216 ("Do not run the seccomp tests that use
default.json on non x86 architectures")
5) Dockerfile.s390x: Add "seccomp" to DOCKER_BUILDTAGS
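As referenced in item 2, a hedged sketch of how the architecture-dependent argument index for the clone-flags check could be chosen (the surrounding rule structure is omitted and the function name is illustrative):
```go
package main

import (
	"fmt"
	"runtime"
)

// cloneFlagsArgIndex returns which clone() argument carries the flags:
// s390x builds with CONFIG_CLONE_BACKWARDS2, so the flags are the second
// argument (index 1); on other architectures they are the first (index 0).
// The s390 32-bit compat mode uses the same convention.
func cloneFlagsArgIndex() uint {
	if runtime.GOARCH == "s390x" {
		return 1
	}
	return 0
}

func main() {
	fmt.Printf("the seccomp clone rule on %s should inspect arg index %d\n",
		runtime.GOARCH, cloneFlagsArgIndex())
}
```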
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
If we attach to a running container and the stream is closed afterwards, we
can never be sure whether the container stopped or we were detached. Adding
a new `detach` event type explicitly notifies the client that the container
has been detached, so the client knows there is no need to wait for its exit
code and can move on to the next step.
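A rough sketch of how a client could react to such an event; the event struct and action names below are placeholders rather than the exact API:
```go
package main

import "fmt"

// containerEvent is a placeholder for the events a client sees while
// attached to a container.
type containerEvent struct {
	Action string // e.g. "die" or "detach"
}

// waitAttached consumes events until the container exits or the client is
// detached; on "detach" there is no exit code to wait for.
func waitAttached(events <-chan containerEvent) {
	for ev := range events {
		switch ev.Action {
		case "die":
			fmt.Println("container exited: collect its exit code")
			return
		case "detach":
			fmt.Println("detached: move on without waiting for an exit code")
			return
		}
	}
}

func main() {
	ch := make(chan containerEvent, 1)
	ch <- containerEvent{Action: "detach"}
	close(ch)
	waitAttached(ch)
}
```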
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>