Make sure that the cursor value returned by followJournal() is the last
of the values returned by its goroutine's calls to drainJournal() by
waiting for it, rather than returning a value that may be superseded by
another when we signal the goroutine to exit by closing a pipe.
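The pattern boils down to something like this sketch, where `drainJournal`,
`done`, and `cursorCh` are illustrative stand-ins rather than the driver's
real identifiers:
```
func followJournal(done <-chan struct{}, drainJournal func() string) string {
	cursorCh := make(chan string, 1)
	go func() {
		var cursor string
		for {
			cursor = drainJournal() // each call may advance the cursor
			select {
			case <-done: // the caller closed the pipe / asked us to exit
				cursorCh <- cursor // report the *last* cursor we produced
				return
			default:
			}
		}
	}()
	// Wait for the goroutine's final value instead of returning a cursor
	// that a later drainJournal() call could still supersede.
	return <-cursorCh
}
```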
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
This loop is never going to return, since the `err` var is only ever set
on the first iteration.
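A minimal illustration of the bug pattern, with `doSomething` standing in
for the real call:
```
err := doSomething()
for err == nil {
	doSomething() // BUG: return value discarded; err can never change
}

// The fix is to reassign err on every iteration so a failure ends the loop:
for err == nil {
	err = doSomething()
}
```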
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
RFC 5424 (https://tools.ietf.org/html/rfc5424#section-6.2) requires that
STRUCTURED-DATA be present, either as NILVALUE (-) or as one or more
SD-ELEMENT items. Because Docker doesn't ever create any SD-ELEMENT items,
the format should output the NILVALUE instead. This resolves parsing issues
in various RFC 5424-compliant syslog servers.
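For illustration only (the field values below are made up and the real
driver assembles the header differently), an RFC 5424 message with the
NILVALUE in the STRUCTURED-DATA position looks roughly like this:
```
import "fmt"

// The lone "-" is the NILVALUE standing in for STRUCTURED-DATA, which
// RFC 5424 requires when no SD-ELEMENT items are emitted.
func formatExample() string {
	return fmt.Sprintf("<%d>1 %s %s %s %s %s - %s",
		14,                     // PRI
		"2017-01-01T12:00:00Z", // TIMESTAMP
		"myhost",               // HOSTNAME
		"docker",               // APP-NAME
		"1234",                 // PROCID
		"abc123def456",         // MSGID (e.g. a container ID)
		"hello from the container")
}
```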
Signed-off-by: Mark Parker <godefroi@gmail.com>
This significantly reduces allocations and bytes used per log entry, and
also improves the time per log operation somewhat.
Each log driver, however, must put messages back in the pool once it is
finished with them.
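A minimal sketch of the pooling pattern, assuming a hypothetical `Message`
type rather than the logger's actual struct:
```
import "sync"

// Message stands in for the logger's per-entry struct.
type Message struct {
	Line      []byte
	Source    string
	Timestamp int64
}

var messagePool = sync.Pool{
	New: func() interface{} { return &Message{Line: make([]byte, 0, 256)} },
}

// NewMessage hands out a pooled message instead of allocating a new one.
func NewMessage() *Message {
	return messagePool.Get().(*Message)
}

// PutMessage must be called by each log driver once it is done with the
// message; otherwise the pool provides no benefit.
func PutMessage(m *Message) {
	m.Line = m.Line[:0]
	m.Source = ""
	m.Timestamp = 0
	messagePool.Put(m)
}
```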
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This allows the user to set a logging mode to "blocking" (default), or
"non-blocking", which uses the ring buffer as a proxy to the real log
driver.
This allows a container to never be blocked on stdio at the cost of
dropping log messages.
Introduces 2 new log-opts that work for all drivers: `log-mode` and
`log-size`. `log-mode` takes a value of "blocking" or "non-blocking".
I chose not to implement this as a bool since it is difficult to
determine whether the mode was set to false or just not set, which is
especially difficult when merging the default daemon config with the
container config.
`log-size` takes a size string, e.g. `2MB`, which sets the max size
of the ring buffer. When the max size is reached, it will start
dropping log messages.
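A rough sketch of the dropping behaviour, using a hypothetical `RingBuffer`
with a byte-size cap rather than the actual implementation:
```
import (
	"container/list"
	"sync"
)

// RingBuffer is an illustrative buffer that evicts the oldest messages
// once maxBytes is exceeded; the real ring logger is more involved.
type RingBuffer struct {
	mu       sync.Mutex
	queue    *list.List // of []byte
	size     int
	maxBytes int
}

func NewRingBuffer(maxBytes int) *RingBuffer {
	return &RingBuffer{queue: list.New(), maxBytes: maxBytes}
}

// Enqueue never blocks the producer: when the buffer is full it drops
// old entries, which is where log messages can be lost.
func (r *RingBuffer) Enqueue(msg []byte) {
	r.mu.Lock()
	defer r.mu.Unlock()
	for r.size+len(msg) > r.maxBytes && r.queue.Len() > 0 {
		dropped := r.queue.Remove(r.queue.Front()).([]byte)
		r.size -= len(dropped)
	}
	r.queue.PushBack(msg)
	r.size += len(msg)
}
```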
```
BenchmarkRingLoggerThroughputNoReceiver-8 2000000000 36.2 ns/op 856.35 MB/s 0 B/op 0 allocs/op
BenchmarkRingLoggerThroughputWithReceiverDelay0-8 300000000 156 ns/op 198.48 MB/s 32 B/op 0 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay1-8 2000000000 36.1 ns/op 857.80 MB/s 0 B/op 0 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay10-8 1000000000 36.2 ns/op 856.53 MB/s 0 B/op 0 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay50-8 2000000000 34.7 ns/op 894.65 MB/s 0 B/op 0 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay100-8 2000000000 35.1 ns/op 883.91 MB/s 0 B/op 0 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay300-8 1000000000 35.9 ns/op 863.90 MB/s 0 B/op 0 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay500-8 2000000000 35.8 ns/op 866.88 MB/s 0 B/op 0 allocs/op
```
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This fix tries to address the issue raised in 29344 where it was
not possible to create the log group for awslogs (CloudWatch) on demand.
The log group has to be created explicitly before the container is running.
This behavior is inconsistent with the AWS logs agent, where log groups
are always created as needed.
There were several concerns previously (see comments in 19617 and 29344):
1. There is a limit of 500 log groups per account per region, so the
quota might be exhausted if there is a typo or an incorrect region.
2. Logs are generated for every container, so CreateLogGroup (or,
equally, DescribeLogGroups) might be called every time, which is
redundant and potentially surprising.
3. CreateLogStream and CreateLogGroup are governed by different IAM
permissions.
This fix addresses the issue by adding `--log-opt awslogs-create-group`,
which defaults to `false`. It requires the user to explicitly request
that log groups be created as needed; a sketch of the behavior follows.
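A hedged sketch of the create-if-asked behavior with the aws-sdk-go
CloudWatch Logs client; the option plumbing and exact error handling in
the driver may differ:
```
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

// ensureLogGroup creates the log group only when the user opted in via
// awslogs-create-group; an already-existing group is not an error.
func ensureLogGroup(client *cloudwatchlogs.CloudWatchLogs, group string, createGroup bool) error {
	if !createGroup {
		return nil
	}
	_, err := client.CreateLogGroup(&cloudwatchlogs.CreateLogGroupInput{
		LogGroupName: aws.String(group),
	})
	if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceAlreadyExistsException" {
		return nil // someone created it first; that's fine
	}
	return err
}
```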
The related unit test has been updated, and tests have also been done
manually in AWS.
This fix fixes 29344.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
In the journald log driver, attempt to drain the journal 1 more time
after being told to stop following the log. Due to a possible race
condition, sometimes data is written to the journal at almost the same
time the log watch is closed, and depending on the order of operations,
sometimes you miss the last journal entry.
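The follow loop ends up looking roughly like this sketch (hypothetical
names, not the actual cgo code):
```
// followLoop keeps draining until told to stop, then drains one more time
// to pick up an entry written concurrently with the close.
func followLoop(stop <-chan struct{}, waitForEntries, drain func()) {
	for {
		select {
		case <-stop:
			drain() // one last pass for the late-arriving entry
			return
		default:
			drain()
			waitForEntries() // e.g. block on sd_journal_wait in the real driver
		}
	}
}
```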
Signed-off-by: Andy Goldstein <agoldste@redhat.com>
This commit addresses 2 issues:
1. in `tailfile()`, if the `logWatcher.Msg` channel somehow became full and the watcher was closed before space was made in it, we would get stuck there forever because we were not checking whether the watcher had been closed
2. when servicing `docker logs`, if the command was cancelled we were not closing the watcher (and hence not notifying it to stop copying data)
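The fix for the first issue boils down to selecting on the close channel
as well as the message channel; a sketch with assumed channel names
(`msgs`, `closed`):
```
// sendLine tries to deliver a log line but gives up if the watcher has
// been closed, instead of blocking forever on a full channel.
func sendLine(msgs chan<- []byte, closed <-chan struct{}, line []byte) bool {
	select {
	case msgs <- line:
		return true
	case <-closed:
		return false // consumer is gone; stop tailing
	}
}
```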
Signed-off-by: Kenfe-Mickael Laventure <mickael.laventure@gmail.com>
Fix #29344
If HOME is not set, the gcplogs logging driver will call os/user.Current()
via oauth2/google. However, in a static binary, os/user.Current() leads to
a segfault due to a glibc issue that won't be fixed in the short term
(golang/go#13470, https://sourceware.org/bugzilla/show_bug.cgi?id=19341).
So we forcibly set HOME to avoid the call to os/user.Current().
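A minimal sketch of the workaround; the fallback value here is an
assumption for illustration, not necessarily what the daemon uses:
```
import "os"

// ensureHome sets HOME to a fallback directory when it is unset, so that
// oauth2/google never has to resolve it through os/user.Current().
func ensureHome() {
	if os.Getenv("HOME") == "" {
		os.Setenv("HOME", "/root") // assumed fallback, for illustration only
	}
}
```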
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
The `docker logs` command performed a client-side check of whether the
container's logging driver was supported. Now that we allow the client
to connect to both "older" and "newer" daemon versions, this check is
best done daemon-side.
This patch removes the check on the client side and leaves validation to
the daemon, which should be the source of truth.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
We clean up the journald logger with these four changes.
1. Make field array static
2. Make function name more appropriate
3. Initialize the file descriptors only once
4. Avoid copying the journald cursor
Point 4 is the most significant change: instead of treating the journald
cursor like a Go string, we use it as a raw C.char pointer. That way we
avoid the copying done by the C.CString and C.GoString functions.
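A minimal cgo sketch of the idea, using a stand-in C function rather than
the real sd_journal API: keep the cursor as a *C.char and free it
explicitly, instead of round-tripping through C.GoString and C.CString.
```
package main

/*
#include <stdlib.h>
#include <string.h>

// Stand-in for a journald call that hands back a malloc'ed cursor string.
static char *get_cursor(void) { return strdup("s=abc123;i=42"); }
*/
import "C"
import "unsafe"

func main() {
	// Keep the cursor as a raw C pointer; no Go-string copy is made.
	cursor := C.get_cursor()
	defer C.free(unsafe.Pointer(cursor))

	// Pass `cursor` straight back into C calls that expect a char *.
	// Only convert with C.GoString(cursor) if Go code really needs the text.
}
```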
Signed-off-by: Silvan Jegen <s.jegen@gmail.com>
Fixes #27779
Currently `followLogs` can get into a deadlock if we receive an inotify
IN_MODIFY event while we are trying to close the `fileWatcher`. This is
due to the fact that closing the `fileWatcher` happens in the same
select block that consumes events from the `fileWatcher`. We are trying
to run `fileWatcher.Close`, which waits for an IN_IGNORE event to come in
over inotify to confirm the watch has been removed. But, because an
IN_MODIFY event has appeared after `Close` was entered but before the
IN_IGNORE, the broadcast never comes. The IN_MODIFY cannot be consumed,
as the events channel is unbuffered and the only `select` that reads
from it is busy waiting for the IN_IGNORE event.
To try to fix this race condition, I've moved the removal of the
`fileWatcher` out into a separate goroutine that waits for a close
signal, removes the watcher, and then notifies the earlier selects via
the close signal.
This introduces a `fileWatcher.Remove` in the final case, but if we try
to remove a watcher that does not exist it will just return an error
saying so. We are not checking the return value of `Remove`, so this
shouldn't cause any side effects.
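A rough sketch of the reorganised shutdown, with assumed names
(`watcher`, `closeRequest`, `closed`); the real `followLogs` has more
cases to handle:
```
// The removal of the watch happens in its own goroutine so the main
// select loop stays free to drain stray IN_MODIFY events while
// Close/Remove waits for inotify's IN_IGNORE confirmation.
func startWatcherCloser(watcher interface{ Remove(name string) error }, name string,
	closeRequest <-chan struct{}, closed chan<- struct{}) {
	go func() {
		<-closeRequest
		_ = watcher.Remove(name) // an "already removed" error is harmless here
		close(closed)
	}()
}
```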
Signed-off-by: Tom Booth <tombooth@gmail.com>
`golint` had the following issue when linting this file:
```
daemon/logger/jsonfilelog/read.go:116:10: should omit type io.Reader
from declaration of var rdr; it will be inferred from the right-hand
side
```
To keep it happy, the declaration is changed to an indirect assignment,
which maintains the same functionality.
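Roughly the kind of change involved (illustrative, not the exact lines
from read.go):
```
import "io"

func limited(f io.Reader) io.Reader {
	// Before, golint flags the explicit type as redundant:
	//     var rdr io.Reader = io.LimitReader(f, 1024)
	// After, declare and assign separately; behaviour is unchanged:
	var rdr io.Reader
	rdr = io.LimitReader(f, 1024)
	return rdr
}
```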
Signed-off-by: Tom Booth <tombooth@gmail.com>