This adds a `fluentd-async-reconnect-interval` log-opt for the fluentd logging driver.
When async is enabled, this option defines the interval (in ms) at which the connection
to the fluentd-address is re-established. This option is useful if the address
may resolve to one or more IP addresses, e.g. a Consul service address.
While the change in #42979 resolves the issue where a Docker container can get stuck
if the fluentd-address is unavailable, this option adds the additional benefit
that a new, healthy fluentd-address can be resolved, allowing logs to flow once again.
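As a rough illustration, a hedged sketch of how the option value could be validated
and converted to milliseconds for the fluent logger. The helper name, the Go duration
syntax, and the 100ms-10m bounds are assumptions of this sketch, not confirmed behavior:
```go
package main

import (
	"fmt"
	"time"
)

// parseReconnectInterval converts the fluentd-async-reconnect-interval
// log-opt into the millisecond value handed to the fluent logger.
// Bounds and name are illustrative assumptions for this sketch.
func parseReconnectInterval(v string) (int, error) {
	d, err := time.ParseDuration(v)
	if err != nil {
		return 0, fmt.Errorf("invalid fluentd-async-reconnect-interval: %w", err)
	}
	if d < 100*time.Millisecond || d > 10*time.Minute {
		return 0, fmt.Errorf("fluentd-async-reconnect-interval out of range: %s", d)
	}
	return int(d / time.Millisecond), nil
}

func main() {
	ms, err := parseReconnectInterval("10s")
	fmt.Println(ms, err) // 10000 <nil>
}
```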
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Conor Evans <coevans@tcd.ie>
Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Co-authored-by: Conor Evans <coevans@tcd.ie>
Before this change, if Decode() couldn't read a log record fully,
the subsequent invocation of Decode() would read the record's non-header part
as a header and cause a huge heap allocation.
This change prevents such a case by keeping an intermediate buffer in
the decoder struct.
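A minimal sketch of the idea. The field and method names are illustrative, not
the local driver's exact code:
```go
package logsketch

import (
	"encoding/binary"
	"io"
)

// decoder is a sketch of a length-prefixed record decoder that keeps
// its read state in the struct, so a record that could not be read
// fully on one call is resumed, not misparsed, on the next.
type decoder struct {
	rdr io.Reader
	buf []byte // bytes read so far but not yet returned as records
}

// Decode returns the next record (a 4-byte big-endian size header
// followed by the payload). Raw bytes accumulate in d.buf between
// calls, so leftover payload from a short read is never mistaken for
// a fresh header, and no huge buffer is allocated from a bogus size.
func (d *decoder) Decode() ([]byte, error) {
	const hdrLen = 4
	for {
		if len(d.buf) >= hdrLen {
			size := int(binary.BigEndian.Uint32(d.buf[:hdrLen]))
			if len(d.buf) >= hdrLen+size {
				rec := d.buf[hdrLen : hdrLen+size]
				d.buf = d.buf[hdrLen+size:]
				return rec, nil
			}
		}
		// Need more input before a full record is available.
		tmp := make([]byte, 4096)
		n, err := d.rdr.Read(tmp)
		d.buf = append(d.buf, tmp[:n]...)
		if n == 0 && err != nil {
			return nil, err
		}
	}
}
```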
Fixes#42125.
Signed-off-by: Kazuyoshi Kato <katokazu@amazon.com>
The flag ForceStopAsyncSend was added to the fluent logger lib in v1.5.0 (at
that time named AsyncStop) to tell the logger to abort sending logs
asynchronously as soon as possible, when its Close() method is called.
However this flag was broken because of the way the lib handled it
(basically, the lib could be stuck in a retry-connect loop without
checking this flag).
Since fluent logger lib v1.7.0, calling Close() (when ForceStopAsyncSend
is true) really stops all ongoing send/connect procedures,
wherever they are stuck.
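For context, a hedged sketch of how the flag is used with
github.com/fluent/fluent-logger-golang; host and port are placeholders:
```go
package main

import (
	"log"

	"github.com/fluent/fluent-logger-golang/fluent"
)

func main() {
	// Async plus ForceStopAsyncSend: Close() should abort pending sends
	// and connection retries instead of blocking until they drain.
	f, err := fluent.New(fluent.Config{
		FluentHost:         "127.0.0.1",
		FluentPort:         24224,
		Async:              true,
		ForceStopAsyncSend: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close() // returns promptly even if the endpoint is unreachable

	_ = f.Post("docker.logs", map[string]string{"message": "hello"})
}
```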
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Added an option 'awslogs-format' to allow specifying
a log format for the logs sent to CloudWatch from the awslogs driver.
For now, only the 'json/emf' format is supported.
If no option is provided, the log format header in the
request to CloudWatch will be omitted as before.
Signed-off-by: James Sanders <james3sanders@gmail.com>
The io/ioutil package has been deprecated in Go 1.16. This commit
replaces the existing io/ioutil functions with their new definitions in
io and os packages.
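The substitutions are mechanical; for example:
```go
package logsketch

import (
	"io"
	"os"
)

// readAllThenWrite shows the typical replacements: io.ReadAll replaces
// ioutil.ReadAll, and os.WriteFile replaces ioutil.WriteFile. Note that
// ioutil.ReadDir -> os.ReadDir differs slightly: it returns
// []fs.DirEntry instead of []fs.FileInfo.
func readAllThenWrite(r io.Reader, path string) error {
	data, err := io.ReadAll(r) // was: ioutil.ReadAll(r)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600) // was: ioutil.WriteFile(...)
}
```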
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
daemon/volumes_unix_test.go:228:13: SA4001: &*x will be simplified to x. It will not copy x. (staticcheck)
mp: &(*c.MountPoints["/jambolan"]), // copy the mountpoint, expect no changes
^
daemon/logger/local/local_test.go:214:22: SA4001: &*x will be simplified to x. It will not copy x. (staticcheck)
dst.PLogMetaData = &(*src.PLogMetaData)
^
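The usual fix for SA4001 is an explicit value copy through a temporary; a small
self-contained illustration (the type here is a stand-in, not the daemon's):
```go
package logsketch

type meta struct{ Last bool }

// staticcheck SA4001: &(*x) is simplified to x and does not copy the
// pointee. To get an independent copy, dereference into a new variable
// and take its address.
func copyMeta(src *meta) *meta {
	c := *src
	return &c
}
```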
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
daemon/logger/journald/read.go:128:3 comment on exported function `CErr` should be of the form `CErr ...`
daemon/logger/journald/read.go:131:36: unnecessary conversion (unconvert)
return C.GoString(C.strerror(C.int(-ret)))
^
daemon/logger/journald/read.go:380:2: S1023: redundant `return` statement (gosimple)
return
^
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Logging to daemon logs every time there's an error with a log driver can be
problematic since daemon logs can grow rapidly, potentially exhausting disk
space.
Instead, it's preferable to limit the rate at which log driver errors are allowed
to be written. By default, this limit is 333 entries per second.
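A hedged sketch of such a limiter using golang.org/x/time/rate; the daemon's
actual implementation and default burst may differ:
```go
package logsketch

import (
	"github.com/sirupsen/logrus"
	"golang.org/x/time/rate"
)

// errLimiter allows at most 333 log-driver error messages per second
// into the daemon log; excess messages are simply dropped.
var errLimiter = rate.NewLimiter(333, 333)

func logDriverError(driver string, err error) {
	if errLimiter.Allow() {
		logrus.WithError(err).WithField("driver", driver).
			Error("Failed to write log entry")
	}
}
```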
Signed-off-by: Angel Velazquez <angelcar@amazon.com>
The underlying Logger's Close() function can be called while the
run() goroutine is still writing to the driver. This causes the
fluentd-golang-logger to panic because it doesn't defensively check
whether the channel has been closed before writing to it.
It relies on the Docker daemon to keep the contract of not calling Log()
if Close() has already been called.
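A minimal sketch of the kind of defensive guard involved; illustrative names,
not the driver's exact code:
```go
package logsketch

import (
	"errors"
	"sync"
)

// guardedLogger shows the contract being relied on: Log must not race
// with Close. The mutex and closed flag make a late Log call fail
// cleanly instead of writing to a closed channel and panicking.
type guardedLogger struct {
	mu     sync.Mutex
	closed bool
	ch     chan []byte
}

func (l *guardedLogger) Log(msg []byte) error {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.closed {
		return errors.New("logger is closed")
	}
	l.ch <- msg // safe: Close cannot run concurrently while mu is held
	return nil
}

func (l *guardedLogger) Close() error {
	l.mu.Lock()
	defer l.mu.Unlock()
	if !l.closed {
		l.closed = true
		close(l.ch)
	}
	return nil
}
```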
Contributions by: James Johnston <james.johnston@thumbtack.com>
Nathan Wong <nathanw@thumbtack.com>
Signed-off-by: Anuj Varma <anujvarma@thumbtack.com>
Tonis mentioned that we can run into issues if there is more error
handling added here. This adds a custom reader implementation which is
like io.MultiReader, except it does not cache EOFs.
What got us into trouble in the first place is that `io.MultiReader` will
always return EOF once it has received an EOF, however the error
handling that we are going for needs to recover from an EOF, because the
underlying file can have more data appended to it after EOF.
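A minimal sketch of the idea; illustrative, not the exact loggerutils implementation:
```go
package logsketch

import "io"

// multiReader behaves like io.MultiReader except that it does not
// latch EOF: when the final reader reports EOF we return it, but a
// later Read retries that reader, since the underlying file may have
// had more data appended after the EOF.
type multiReader struct {
	readers []io.Reader
	i       int // index of the reader currently being drained
}

func (m *multiReader) Read(p []byte) (int, error) {
	if len(m.readers) == 0 {
		return 0, io.EOF
	}
	for {
		n, err := m.readers[m.i].Read(p)
		if err == io.EOF && m.i < len(m.readers)-1 {
			m.i++
			if n > 0 {
				return n, nil
			}
			continue // nothing read; fall through to the next reader
		}
		// EOF from the last reader is reported but not remembered, so
		// callers can retry after the file grows.
		return n, err
	}
}
```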
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
When the multireader hits EOF, we will always get EOF from it, so we
cannot store the multireader for later error handling, only for the
decoder.
Thanks @tobiasstadler for pointing this error out.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Added an option `awslogs-create-stream` to allow skipping log stream
creation for the awslogs log driver. The default value is still true to
keep the behavior consistent with previous releases.
Signed-off-by: Xia Wu <xwumzn@amazon.com>
Loggers that implement BufSize() (e.g. awslogs) use the method to
tell Copier about the maximum log line length. However loggerWithCache
and RingBuffer hide the method by wrapping loggers.
As a result, Copier uses its default 16KB limit, which breaks log
lines > 16KB even if the destinations can handle them.
This change implements BufSize() on loggerWithCache and RingBuffer to
make sure these logger wrappers don't hide the method on the underlying
loggers.
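A minimal sketch of the delegation; wrapper and constant names are illustrative:
```go
package logsketch

// Logger stands in for the daemon's logger interface in this sketch.
type Logger interface {
	Log(msg []byte) error
}

const defaultBufSize = 16 * 1024

// bufSizeWrapper shows the fix: a wrapping logger re-exposes the
// wrapped logger's BufSize() instead of hiding it, so Copier can size
// its line buffer to what the destination actually supports.
type bufSizeWrapper struct {
	Logger
}

func (w *bufSizeWrapper) BufSize() int {
	if bl, ok := w.Logger.(interface{ BufSize() int }); ok {
		return bl.BufSize()
	}
	return defaultBufSize
}
```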
Fixes #41794.
Signed-off-by: Kazuyoshi Kato <katokazu@amazon.com>
This fixes the case where log rotation fails on Windows while there are
clients reading container logs.
Evicts readers if there is an error during rotation and tries rotation again.
This is needed for Windows with this scenario:
1. `docker logs -f` is called
2. Log rotation occurs (log.txt -> log.txt.1, truncate and re-open
log.txt)
3. Log rotation occurs again (rm log.txt.1, log.txt -> log.txt.1)
On step 3, before this change, the log rotation will fail with `Access
is denied`.
In this case, what we have is a reader holding a file handle to the
primary log file. The log file is then rotated, but the reader still has
the handle open. `FILE_SHARE_DELETE` allows this to happen... but then
we try to do it again for the next rotation and it blows up.
So when it blows up we force all the readers to disconnect, close the
log file, and try rotation again, which will succeed based on the added
tests.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This makes sure, on Windows, that all files are opened with
FILE_SHARE_DELETE.
On non-Windows this just calls `os.Open()` as before.
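A hedged sketch of what the Windows side could look like using
golang.org/x/sys/windows; function and package names are illustrative,
not the daemon's exact code:
```go
//go:build windows

package logsketch

import (
	"os"

	"golang.org/x/sys/windows"
)

// open opens a file for reading with FILE_SHARE_DELETE so that the
// file can still be renamed or deleted (e.g. during log rotation)
// while a reader holds a handle to it.
func open(name string) (*os.File, error) {
	p, err := windows.UTF16PtrFromString(name)
	if err != nil {
		return nil, err
	}
	h, err := windows.CreateFile(
		p,
		windows.GENERIC_READ,
		windows.FILE_SHARE_READ|windows.FILE_SHARE_WRITE|windows.FILE_SHARE_DELETE,
		nil,
		windows.OPEN_EXISTING,
		windows.FILE_ATTRIBUTE_NORMAL,
		0,
	)
	if err != nil {
		return nil, &os.PathError{Op: "open", Path: name, Err: err}
	}
	return os.NewFile(uintptr(h), name), nil
}
```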
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
- Ignore some pointless errors (like not exist on remove)
- Consolidate error handling/logging
- Fix race condition reading last log timestamp in the compression
goroutine. This needs to be done while holding the write lock, which
is not (or may not be) locked while compressing a rotated log file.
- Remove some indentation and consolidate mutex unlocking in
`compressFile`
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
The cloud logging client should be closed when the log driver is closed. Otherwise dockerd will keep a gRPC connection to the logging endpoint open indefinitely.
This results in a slow leak of TCP sockets (one) and memory (~200 KB) any time a container using `--log-driver=gcplogs` terminates.
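A minimal sketch of the fix; the driver type and its fields are illustrative,
while the client type is cloud.google.com/go/logging.Client:
```go
package logsketch

import "cloud.google.com/go/logging"

// gcplogsSketch stands in for the gcplogs driver state in this sketch.
type gcplogsSketch struct {
	client *logging.Client
	logger *logging.Logger
}

// Close flushes buffered entries and closes the Cloud Logging client,
// releasing its gRPC connection instead of leaking one per container.
func (d *gcplogsSketch) Close() error {
	if err := d.logger.Flush(); err != nil {
		return err
	}
	return d.client.Close()
}
```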
Signed-off-by: Patrick Haas <patrickhaas@google.com>
Add all partial metadata available to journald logs, to allow easier reassembly of partial messages in downstream logging systems.
Fixes #41403.
Signed-off-by: Christian Becker <christian.becker@sixt.com>
The test was looking for the wrong file name.
Since compression happens asynchronously, sometimes the test would
succeed and sometimes fail.
This change makes sure to wait for the compressed version of the file
since we can't know when the compression is going to occur.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This prevents getting into a situation where a container log cannot make
progress because we tried to rotate a file, got an error, and now the
file is closed. The next time we try to write a log entry it will try
to rotate again, but fail because the file is already closed.
I wonder if there is more we can do to beef up this rotation logic.
Found this issue while investigating missing logs with errors in the
docker daemon logs like:
```
Failed to log message for json-file: error closing file: close <file>:
file already closed
```
I'm not sure why the original rotation failed since the data was no
longer available.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Before this change, the log decoder function provided by the log driver
to logfile would not be able to re-use buffers, causing unneeded
allocations and memory bloat for dockerd.
This change introduces an interface that allows the log driver to manage
its memory usage more effectively.
This only affects the json-file and local log drivers.
`json-file` is still not great, just because of how the json decoder in the
stdlib works.
`local` is significantly improved.
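The interface is roughly of this shape; a sketch, exact names may differ:
```go
package logsketch

import "io"

// Message stands in for the daemon's logger.Message type in this sketch.
type Message struct {
	Line      []byte
	Timestamp int64
}

// Decoder lets a log driver own the decode state (and therefore its
// buffers) across reads. Reset rebinds the decoder to a new reader,
// e.g. after a rotated file is reopened, without throwing the buffers
// away; Close releases them.
type Decoder interface {
	Reset(io.Reader)
	Decode() (*Message, error)
	Close()
}
```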
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Configuration over the API per container is intentionally left out for
the time being, but the default can be configured from the
daemon config.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit cbecf48bc352e680a5390a7ca9cff53098cd16d7)
Signed-off-by: Madhu Venugopal <madhu@docker.com>
This supplements any log driver which does not support reads with a
custom read implementation that uses a local file cache.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit d675e2bf2b75865915c7a4552e00802feeb0847f)
Signed-off-by: Madhu Venugopal <madhu@docker.com>
This adds a new `fluentd-request-ack` logging option for the Fluentd
logging driver. If enabled, the server will respond with an acknowledgement.
This option improves the reliability of the message transmission. This
change is not versioned, and affects all API versions.
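With github.com/fluent/fluent-logger-golang, the log-opt would map to the
library's acknowledgement option roughly as sketched below; endpoint values
are placeholders and the mapping is an assumption of this sketch:
```go
package logsketch

import "github.com/fluent/fluent-logger-golang/fluent"

// newAckLogger shows the option the fluentd-request-ack log-opt maps
// to: RequireAck makes each chunk wait for the server's ack, trading
// some throughput for delivery confirmation.
func newAckLogger() (*fluent.Fluent, error) {
	return fluent.New(fluent.Config{
		FluentHost: "127.0.0.1",
		FluentPort: 24224,
		RequireAck: true,
	})
}
```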
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This extracts parsing the driver's configuration to a
function, and uses the same function both when initializing
the driver, and when validating logging options.
Doing so allows validating whether the provided options are in
the correct format when calling `ValidateOpts`, instead
of resulting in an error when initializing the logging driver.
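A minimal sketch of the pattern; the option names and types here are illustrative,
not the driver's actual option set:
```go
package logsketch

import (
	"fmt"
	"strconv"
)

// config is an illustrative subset of parsed driver settings.
type config struct {
	tag       string
	batchSize int
}

// parseOptions is the single place raw log-opts are converted into
// typed configuration, so any malformed value produces an error here.
func parseOptions(opts map[string]string) (config, error) {
	cfg := config{batchSize: 1}
	for k, v := range opts {
		switch k {
		case "tag":
			cfg.tag = v
		case "batch-size":
			n, err := strconv.Atoi(v)
			if err != nil || n < 1 {
				return cfg, fmt.Errorf("invalid %s: %q", k, v)
			}
			cfg.batchSize = n
		default:
			return cfg, fmt.Errorf("unknown log opt %q", k)
		}
	}
	return cfg, nil
}

// Both the validator and the driver constructor call parseOptions, so
// bad values are rejected when the container is created instead of
// when the logging driver is initialized.
func ValidateLogOpts(opts map[string]string) error {
	_, err := parseOptions(opts)
	return err
}
```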
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>