Linux supports many obsolete address families, which are usually available in
common distro kernels, but they are less likely to be properly audited and
may have security issues.
This blocks all socket families in the socket (and socketcall where applicable)
syscall except:
- AF_UNIX - Unix domain sockets
- AF_INET - IPv4
- AF_INET6 - IPv6
- AF_NETLINK - Netlink sockets for communicating with the kernel
- AF_PACKET - raw sockets, which are only allowed with CAP_NET_RAW
All other socket families are blocked, including AppleTalk (native, not over
IP), IPX (remember that!), and VSOCK and HVSOCK, none of which should
generally be needed in containers.
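For illustration only, the effect of this policy amounts to an allowlist like
the sketch below, written as plain Go rather than the seccomp profile JSON
that actually enforces it (the function name is hypothetical):
```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// allowedSocketFamily mirrors the policy described above: only these
// address families may be used, and AF_PACKET additionally requires
// CAP_NET_RAW.
func allowedSocketFamily(family int) bool {
	switch family {
	case unix.AF_UNIX, unix.AF_INET, unix.AF_INET6, unix.AF_NETLINK, unix.AF_PACKET:
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(allowedSocketFamily(unix.AF_INET))      // true: IPv4 is allowed
	fmt.Println(allowedSocketFamily(unix.AF_APPLETALK)) // false: obsolete family is blocked
}
```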
Note that users can of course provide a profile per container or in the daemon
config if they have unusual use cases that require these.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
- Remove deprecated buildImage* functions
- Rename buildImageNew to buildImage
- Use *check.C in fakeContext* setup and in getIdByName
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
… to make sure it doesn't fail. It also introduces StartWithError,
StopWithError and RestartWithError in case we care about the
error (and want the error to happen).
This removes the need to check for the error and makes the intent
clearer: I want a daemon with busybox loaded on it; if an error occurs
it should fail the test, but it's not the test code's responsibility
to check that.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This fix corrects the error messages for `--cpus` returned by the daemon.
When `docker run` takes `--cpus`, it translates the value into NanoCPUs
and passes it to the daemon. `NanoCPUs` is not visible to the user, and
the error message generated by the daemon used `NanoCPUs`, which may
cause some confusion.
This fix addresses the issue by returning the error in terms of CPUs instead.
This fix fixes 28456.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Until we can support the existing behaviour with `sudo`, disable
ambient capabilities in the runc build.
Add tests that a non-root user cannot use the default capabilities,
and that capabilities work as expected.
Test for #27590
Update runc.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
This fix tries to address the proposal raised in 27921 and adds a
`--cpus` flag for `docker run/create`.
Basically, `--cpus` allows the user to specify a (possibly fractional)
number of CPUs the container may use. For example, on a 2-CPU system
`--cpus 1.5` means the container will take 75% (1.5/2) of the CPU share.
This fix adds a `NanoCPUs` field to `HostConfig`, since swarmkit already
has a concept of NanoCPUs for tasks. The `--cpus` flag translates the
number into the reused `NanoCPUs` field to be consistent.
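As a minimal sketch (not the actual implementation), the `--cpus` value is
converted to `NanoCPUs`, and from there to the CFS quota/period pair the
kernel understands; the 100ms period below is the usual default and an
assumption of this sketch:
```go
package main

import (
	"fmt"
	"strconv"
)

// parseCPUs converts a --cpus value such as "1.5" into NanoCPUs
// (billionths of a CPU), the unit swarmkit already uses for tasks.
func parseCPUs(value string) (int64, error) {
	cpus, err := strconv.ParseFloat(value, 64)
	if err != nil {
		return 0, err
	}
	return int64(cpus * 1e9), nil
}

func main() {
	nano, _ := parseCPUs("1.5")
	// With the default CFS period of 100000us, 1.5 CPUs translates to a
	// quota of 150000us per period.
	period := int64(100000)
	quota := nano * period / int64(1e9)
	fmt.Printf("NanoCPUs=%d period=%dus quota=%dus\n", nano, period, quota)
}
```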
This fix adds integration tests to cover the changes.
Related docs (`docker run` and Remote APIs) have been updated.
This fix fixes 27921.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Linux kernel 4.3 and later support "ambient capabilities", which are the
only way to pass capabilities to containers running as a non-root uid.
Previously there was no useful way to grant capabilities to containers
not running as root.
Fix #8460
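As a minimal illustration of the mechanism (not Docker's or runc's code), a
process can raise a capability into its ambient set with prctl(2) so that it
survives execve() into a non-root program. This assumes the capability is
already in the permitted and inheritable sets; the constant values below are
copied from the kernel headers:
```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

const (
	prCapAmbient      = 47 // PR_CAP_AMBIENT (linux/prctl.h)
	prCapAmbientRaise = 2  // PR_CAP_AMBIENT_RAISE
	capNetBindService = 10 // CAP_NET_BIND_SERVICE (linux/capability.h)
)

func main() {
	// Raise CAP_NET_BIND_SERVICE into the ambient set so it is kept
	// across execve() into a non-root process.
	if err := unix.Prctl(prCapAmbient, prCapAmbientRaise, capNetBindService, 0, 0); err != nil {
		fmt.Println("raising ambient capability failed:", err)
		return
	}
	fmt.Println("ambient CAP_NET_BIND_SERVICE raised")
}
```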
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
No substantial code change.
- Api --> API
- Cli --> CLI
- Http, Https --> HTTP, HTTPS
- Id --> ID
- Uid,Gid,Pid --> UID,GID,PID
- Ipam --> IPAM
- Tls --> TLS (TestDaemonNoTlsCliTlsVerifyWithEnv --> TestDaemonTLSVerifyIssue13964)
Didn't touch in this commit:
- Git: because it is officially "Git": https://git-scm.com/
- Tar: because it is officially "Tar": https://www.gnu.org/software/tar/
- Cpu, Nat, Mac, Ipc, Shm: to keep consistency with existing production code (not changeable, for compatibility)
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
This adds a small C binary for fighting zombies. It is mounted under
`/dev/init` and is prepended to the args specified by the user. You
enable it via a daemon flag, `dockerd --init`, as it is disabled by
default for backwards compatibility.
You can also override the daemon option or specify this on a per
container basis with `docker run --init=true|false`.
You can test this by running a process like the one below as pid 1 in a
container and observing the extra zombie that appears in the container
while it is running.
```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    pid_t pid = fork();
    if (pid == 0) {
        /* First child: fork a grandchild that exits immediately, never
         * reap it, then exit; the zombie grandchild is reparented to
         * pid 1 in the container. */
        pid = fork();
        if (pid == 0) {
            exit(0);
        }
        sleep(3);
        exit(0);
    }
    printf("got pid %d and exited\n", pid);
    sleep(20);
    return 0;
}
```
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Moves ensure-frozen-images to Go.
Moves ensure-syscall-test to Go.
Moves ensure-nnp-test to Go.
Moves ensure-httpserver to Go.
Also makes some of the fixtures load only for the tests that require them.
This makes sure that fixtures that aren't needed for a particular test run
(for example, `make TESTFLAGS='-check.f Swarm' test-integration-cli`)
aren't loaded... like the syscall tests.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Fix #24803 as this had been failing sometimes.
As the parallel test failures are probably genuine, and the tests had
already been cut down, I will re-create them specifically as a parallel
execution test with no seccomp to make the cause clearer.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
On Ubuntu and Debian there is a sysctl which allows blocking
clone(CLONE_NEWUSER) via "sysctl kernel.unprivileged_userns_clone=0"
for unprivileged users that do not have CAP_SYS_ADMIN.
See: https://lists.ubuntu.com/archives/kernel-team/2016-January/067926.html
The DockerSuite.TestRunSeccompUnconfinedCloneUserns testcase fails if
"kernel.unprivileged_userns_clone" is set to 0:
docker_cli_run_unix_test.go:1040:
c.Fatalf("expected clone userns with --security-opt seccomp=unconfined
to succeed, got %s: %v", out, err)
... Error: expected clone userns with --security-opt seccomp=unconfined
to succeed, got clone failed: Operation not permitted
: exit status 1
So add a check and skip the testcase if kernel.unprivileged_userns_clone is 0.
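A minimal sketch of such a check, with a hypothetical helper name and an
illustrative skip path (the real change lives in the test requirements):
```go
package main

import (
	"fmt"
	"io/ioutil"
	"strings"
)

// unprivilegedUsernsCloneEnabled reports whether unprivileged user
// namespace cloning is allowed. If the Ubuntu/Debian-specific sysctl does
// not exist, there is no extra restriction and we treat it as enabled.
func unprivilegedUsernsCloneEnabled() bool {
	content, err := ioutil.ReadFile("/proc/sys/kernel/unprivileged_userns_clone")
	if err != nil {
		return true
	}
	return strings.TrimSpace(string(content)) != "0"
}

func main() {
	if !unprivilegedUsernsCloneEnabled() {
		fmt.Println("skipping: kernel.unprivileged_userns_clone is 0")
		return
	}
	fmt.Println("unprivileged user namespace cloning is allowed")
}
```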
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
While testing #24510 I noticed that 32 bit syscalls were incorrectly being
blocked and we did not have a test for this, so adding one.
This is only tested on amd64, as it is the only architecture that
reliably supports 32 bit code execution; others only do sometimes.
There is no 32 bit libc in buildpack-deps, so we cannot easily build
32 bit C code; instead we use the simplest assembly program, which
just calls the exit syscall.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
To implement seccomp for s390x the following changes are required:
1) seccomp_default: Add s390 compat mode
On s390x (64 bit) we can run s390 (32 bit) programs in 32 bit
compat mode. Therefore add this information to arches().
2) seccomp_default: Use correct flags parameter for sys_clone on s390x
On s390x the second parameter for the clone system call is the flags
parameter. On all other architectures it is the first one.
See kernel code kernel/fork.c:
#elif defined(CONFIG_CLONE_BACKWARDS2)
SYSCALL_DEFINE5(clone, unsigned long, newsp, unsigned long, clone_flags,
int __user *, parent_tidptr,
So fix the Docker default seccomp rule to check the second
parameter on s390/s390x (see the sketch after this list).
3) seccomp_default: Add s390 specific syscalls
For s390 we currently have three additional system calls that should
be added to the seccomp whitelist:
- Other architectures can read/write PCI MMIO memory from unprivileged
code. On s390 the equivalent instructions are privileged, and therefore
we need system calls for that purpose:
* s390_pci_mmio_write()
* s390_pci_mmio_read()
- Runtime instrumentation:
* s390_runtime_instr()
4) test_integration: Do not run seccomp default profile test on s390x
The generated profile that we check in is for amd64 and i386
architectures and does not work correctly on s390x.
See also: 75385dc216 ("Do not run the seccomp tests that use
default.json on non x86 architectures")
5) Dockerfile.s390x: Add "seccomp" to DOCKER_BUILDTAGS
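To illustrate point 2, here is a minimal sketch in plain Go (not the actual
profile code) of picking the syscall argument index that the clone seccomp
rule has to inspect:
```go
package main

import "fmt"

// cloneFlagsArgIndex returns which clone(2) argument carries the flags for
// a given architecture. Most architectures pass the flags first (index 0);
// s390/s390x build the kernel with CONFIG_CLONE_BACKWARDS2, so the flags
// are the second argument (index 1) and the seccomp rule must inspect
// that index instead.
func cloneFlagsArgIndex(arch string) uint {
	switch arch {
	case "s390", "s390x":
		return 1
	default:
		return 0
	}
}

func main() {
	for _, arch := range []string{"amd64", "s390x"} {
		fmt.Printf("%s: clone flags are syscall argument %d\n", arch, cloneFlagsArgIndex(arch))
	}
}
```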
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
If we attach to a running container and the stream is closed afterwards, we
can never be sure whether the container stopped or was detached. Adding a new
type of `detach` event can explicitly notify the client that the container is
detached, so the client will know that there is no need to wait for its exit
code and it can move on to the next step.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
The Seccomp tests ran 11 tests in parallel and this appears to be
hitting some sort of bug on CI. Splitting into two tests means that
I can no longer reproduce the failure on the slow laptop where I could
reproduce the failures before.
Obviously this does not fix the underlying issue, which I will
continue to investigate, but not having the tests failing a lot
before the freeze for 1.12 would be rather helpful.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
This fix tries to address the issue raised in #22420. When
`--tmpfs` is specified with `/tmp`, the default value is
`rw,nosuid,nodev,noexec,relatime,size=65536k`. When `--tmpfs`
is specified with `/tmp:rw`, then the value changes to
`rw,nosuid,nodev,noexec,relatime`.
The reason for this inconsistency is that docker adds the `size=65536k`
option only when the user provides no options at all.
This fix addresses the issue by always including `size=65536k` along
with `rw,nosuid,nodev,noexec,relatime`. If the user provides a different
value (e.g., `size=8192k`), it will override the default `size=65536k`
anyway, since the combined options are parsed and merged to remove any
duplicates.
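A minimal sketch of that merge behaviour, with a hypothetical helper whose
exact semantics are illustrative rather than the actual implementation:
```go
package main

import (
	"fmt"
	"strings"
)

const defaultTmpfsOptions = "rw,nosuid,nodev,noexec,relatime,size=65536k"

// mergeTmpfsOptions prepends the defaults and lets user-supplied options
// override them, keeping the last value seen for each option key.
func mergeTmpfsOptions(userOptions string) string {
	combined := defaultTmpfsOptions
	if userOptions != "" {
		combined += "," + userOptions
	}
	seen := map[string]string{}
	var order []string
	for _, opt := range strings.Split(combined, ",") {
		key := strings.SplitN(opt, "=", 2)[0]
		if _, ok := seen[key]; !ok {
			order = append(order, key)
		}
		seen[key] = opt // later occurrences (user options) win
	}
	var out []string
	for _, key := range order {
		out = append(out, seen[key])
	}
	return strings.Join(out, ",")
}

func main() {
	fmt.Println(mergeTmpfsOptions(""))           // default size=65536k applied
	fmt.Println(mergeTmpfsOptions("size=8192k")) // user-provided size overrides
}
```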
Additional test cases have been added to cover the changes
in this fix.
This fix fixes #22420.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
The generated profile that we check in is for amd64 and i386 architectures
and does not work correctly on arm as it is missing required syscalls,
and also specifies the architectures that are supported. It works on
ppc64le at the moment, but it is better to skip the test there as it is
likely to break in the future.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
Currently, using a custom detach key and entering an invalid sequence eats
part of the sequence, making it awkward and difficult to enter some key
sequences.
This fixes it by keeping the input read while checking whether it is the key
sequence or not, and "writing" it back when it turns out not to be the right
sequence, preserving the initial input.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This was not changed when the additional tests were added.
It may be the reason for occasional test failures.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
Currently the default seccomp profile is fixed. This changes it
so that it varies depending on the Linux capabilities selected with
the --cap-add and --cap-drop options. Without this, if a user adds
privileges, e.g. to allow ptrace with --cap-add sys_ptrace, they still
cannot actually use ptrace as it is still blocked by seccomp, so
they will probably disable seccomp or use --privileged. With this
change the syscalls that are needed for the capability are also
allowed by the seccomp profile based on the selected capabilities.
While this patch makes it easier to do things with, for example,
cap_sys_admin enabled, as it will now allow creating new namespaces
and the use of mount, it still allows less than --cap-add cap_sys_admin
--security-opt seccomp:unconfined would have previously. It is not
recommended that users run containers with cap_sys_admin as this does
give full access to the host machine.
It also cleans up some architecture specific system calls to be
only selected when needed.
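A minimal sketch of the capability-to-syscalls idea described above; the
mapping below lists only a couple of illustrative entries, while the real
generated profile covers many more capabilities and syscalls:
```go
package main

import (
	"fmt"
	"strings"
)

// syscallsForCapability maps a Linux capability to extra syscalls that the
// default seccomp profile allows when that capability is granted. The
// entries here are illustrative examples only.
var syscallsForCapability = map[string][]string{
	"CAP_SYS_PTRACE": {"ptrace", "process_vm_readv", "process_vm_writev"},
	"CAP_SYS_ADMIN":  {"mount", "umount2", "unshare", "setns"},
}

// extraSyscalls returns the additional syscalls to allow for the
// capabilities selected with --cap-add.
func extraSyscalls(capAdd []string) []string {
	var extra []string
	for _, c := range capAdd {
		extra = append(extra, syscallsForCapability[strings.ToUpper(c)]...)
	}
	return extra
}

func main() {
	fmt.Println(extraSyscalls([]string{"cap_sys_ptrace"}))
}
```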
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
Add the swapMemorySupport requirement to all tests related to the OOM
killer. The --memory option has the subtle side effect of defaulting
--memory-swap to double the value of --memory. The OOM killer doesn't
kick in until the container exhausts memory+swap, and so without the
memory swap cgroup the tests will time out due to swap being effectively
unlimited.
Document the default behavior of --memory-swap in the docker run man page.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
This fix tries to address the issue raised in #22271 where
relative symlinks don't work with --device argument.
Previously, the symlink handling in --device was implemented (#20684)
with `os.Readlink()`, which does not resolve the link if the
target is a relative path. In this fix, `filepath.EvalSymlinks()`
is used instead, which resolves relative paths correctly.
An additional test case has been added to the existing
`TestRunDeviceSymlink` to cover changes in this fix.
This fix is related to #13840 and #20684, #22271.
This fix fixes #22271.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
There was an error in the validation logic before: it should use the period
instead of the quota, and also check for negative numbers. Without that,
users would get `cpu.cfs_period_us: invalid argument`,
which is not good for users.
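A minimal sketch of the corrected check; the function name and message are
illustrative, and the 1000-1000000 microsecond range is the range the kernel
accepts for cpu.cfs_period_us:
```go
package main

import "fmt"

// validateCPUPeriod checks the --cpu-period value against the range the
// kernel accepts for cpu.cfs_period_us, rejecting negative values too.
func validateCPUPeriod(period int64) error {
	if period == 0 {
		return nil // not set, use the default
	}
	if period < 1000 || period > 1000000 {
		return fmt.Errorf("CPU cfs period cannot be less than 1ms (i.e. 1000) or larger than 1s (i.e. 1000000)")
	}
	return nil
}

func main() {
	fmt.Println(validateCPUPeriod(-1))     // rejected: negative value
	fmt.Println(validateCPUPeriod(100000)) // accepted: default 100ms period
}
```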
Signed-off-by: Kai Qiang Wu(Kennan) <wkqwu@cn.ibm.com>