Make the allocated buffers bigger to improve reusability and avoid
frequent reallocations.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The Message is not needed after it is marshalled, so there is no need to
hold it for the entire function scope.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Moved the buffer pools in json-file and local logging drivers to the
whole driver scope. It is more efficient to have a pool for the whole
driver rather than for each logger instance.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Marshalling log messages in the json-file and local drivers involved
serializing the message into a shared buffer. With recent changes that
allow Log to be called from multiple goroutines at the same time, this
caused a regression resulting in log corruption.
The solution is to use a sync.Pool to manage the buffers used for the
serialization. Also removed the MarshalFunc, which the driver had to
expose to the LogFile so that it could marshal the message. This is now
moved entirely to the driver.
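A minimal sketch of the pattern; the names and stub bodies here are
hypothetical, and the pool is package-scoped with a generous initial
capacity, per the commits above:

    import (
        "bytes"
        "sync"
    )

    type Message struct{ Line []byte } // stand-in for logger.Message

    // One pool for the whole driver package, shared by all loggers.
    var buffersPool = sync.Pool{
        New: func() interface{} {
            // Start with a generous capacity so most messages fit
            // without the buffer having to grow.
            return bytes.NewBuffer(make([]byte, 0, 1024))
        },
    }

    type driver struct{} // stand-in for the json-file/local logger

    func (d *driver) marshal(m *Message, buf *bytes.Buffer) error {
        _, err := buf.Write(m.Line) // driver-specific encoding goes here
        return err
    }

    func (d *driver) writeLog(b []byte) error { return nil } // append to file

    func (d *driver) Log(msg *Message) error {
        buf := buffersPool.Get().(*bytes.Buffer)
        defer func() {
            buf.Reset()
            buffersPool.Put(buf)
        }()
        if err := d.marshal(msg, buf); err != nil {
            return err
        }
        // msg itself is no longer needed once it has been marshalled.
        return d.writeLog(buf.Bytes())
    }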
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
File watches have been a source of complexity and unreliability in the
LogFile follow implementation, especially when combined with file
rotation. File change events can be unreliably delivered, especially on
Windows, and the polling fallback adds latency. Following across
rotations has never worked reliably on Windows. Without synchronization
between the log writer and readers, race conditions abound: readers can
read from the file while a log entry is only partially written, leading
to decode errors and necessitating retries.
In addition to the complexities stemming from file watches, the LogFile
follow implementation had complexity from needing to handle file
truncations, and (due to a now-fixed bug in the polling file watcher
implementation) evictions to unlock the log file so it could be rotated.
Log files are now always rotated, never truncated, so these situations
no longer need to be handled by the follow code.
Rewrite the LogFile follow implementation in terms of waiting until
LogFile notifies it that a new message has been written to the log file.
The LogFile informs the follower of the file offset of the last complete
write so that the follower knows not to read past that, preventing it
from attempting to decode partial messages and making retries
unnecessary. Synchronization between LogFile and its followers is used
at critical points to prevent missed notifications of writes and races
between file rotations and the follower opening files for read.
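A rough sketch of the notification mechanism; all names here are
assumptions, not the exact moby API:

    import "sync"

    // writeNotifier lets a follower sleep until the writer reports that
    // more complete bytes are available, and tells it how far it may read.
    type writeNotifier struct {
        mu      sync.Mutex
        offset  int64         // offset of the last complete write
        changed chan struct{} // closed and replaced on every write
    }

    func newWriteNotifier() *writeNotifier {
        return &writeNotifier{changed: make(chan struct{})}
    }

    func (n *writeNotifier) notify(newOffset int64) {
        n.mu.Lock()
        n.offset = newOffset
        close(n.changed)
        n.changed = make(chan struct{})
        n.mu.Unlock()
    }

    // wait blocks until the writer has written past 'have' and returns
    // the new safe upper bound for reads, so the follower never decodes
    // a partially written message.
    func (n *writeNotifier) wait(have int64) int64 {
        for {
            n.mu.Lock()
            off, ch := n.offset, n.changed
            n.mu.Unlock()
            if off > have {
                return off
            }
            <-ch
        }
    }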
Signed-off-by: Cory Snider <csnider@mirantis.com>
The refCounter used for sharing temporary decompressed log files and
tracking when the files can be deleted is keyed off the source file's
path. But the path of a log file is not stable: it is renamed on each
rotation. Consequently, when logging is configured with both rotation
and compression, multiple concurrent readers of a container's logs could
read logs out of order, see duplicates or decompress a log file which
has already been decompressed.
Replace refCounter with a new implementation, sharedTempFileConverter,
which is agnostic to the file path, keying off the source file's
identity instead. Additionally, sharedTempFileConverter handles the full
lifecycle of the temporary file, from creation to deletion. This is all
abstracted from the consumer: all the bookkeeping and cleanup is handled
behind the scenes when Close() is called on the returned reader value.
Only one file descriptor is used per temporary file, which is shared by
all readers.
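On Linux, for instance, a file's identity can be derived from its device
and inode numbers, which survive renames. A minimal sketch (the real
implementation may differ):

    import (
        "os"
        "syscall"
    )

    // fileID keys a file by (device, inode) rather than by path, so a
    // rotated (renamed) log file keeps the same key.
    type fileID struct {
        dev uint64
        ino uint64
    }

    func identity(f *os.File) (fileID, error) {
        var st syscall.Stat_t
        if err := syscall.Fstat(int(f.Fd()), &st); err != nil {
            return fileID{}, err
        }
        return fileID{dev: uint64(st.Dev), ino: st.Ino}, nil
    }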
A channel is used for concurrency control so that the lock can be
acquired inside a select statement. While not currently utilized, this
makes it possible to add support for cancellation to
sharedTempFileConverter in the future.
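The channel-as-mutex pattern looks roughly like this (a sketch, with a
made-up state type):

    type state struct{ refs int }

    func example() {
        st := make(chan state, 1)
        st <- state{} // the lock starts out free

        // Acquiring inside a select is what would make cancellation
        // possible: a ctx.Done() case could be added alongside.
        select {
        case s := <-st:
            s.refs++ // critical section: mutate the shared state
            st <- s  // release the lock by sending the state back
        }
    }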
Signed-off-by: Cory Snider <csnider@mirantis.com>
Whether or not the logger has been closed is a property of the logger,
and only of concern to its log reading implementation, not log watchers.
The loggers and their reader implementations can communicate as they see
fit. A single channel per logger, closed when the logger is closed, is
sufficient to broadcast the state to log readers, with no extra
bookkeeping or synchronization required.
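That is, the standard closed-channel broadcast, sketched below:

    type logger struct {
        closed chan struct{} // closed exactly once when the logger closes
    }

    func newLogger() *logger {
        return &logger{closed: make(chan struct{})}
    }

    func (l *logger) Close() { close(l.closed) }

    // Readers simply select on the channel; a closed channel is always
    // readable, so every current and future reader observes the state.
    func (l *logger) isClosed() bool {
        select {
        case <-l.closed:
            return true
        default:
            return false
        }
    }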
Signed-off-by: Cory Snider <csnider@mirantis.com>
The asynchronous startup of the log-reading goroutine made the
follow-tail tests nondeterministic. The Log calls in the tests which
were supposed to happen after the reader started reading would sometimes
execute before the reader, throwing off the counts. Tweak the ReadLogs
implementation so that the order of operations is deterministic.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Add an extensive test suite for validating the behavior of any
LogReader. Test the current LogFile-based implementations against it.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The jsonfilelog read benchmark was incorrectly reusing the same message
pointer in the producer loop. The message value would be reset after the
first call to jsonlogger.Log, resulting in all subsequent calls logging
a zero-valued message. This is not a representative workload for
benchmarking and throws off the throughput metric.
Reduce variation between benchmark runs by using a constant timestamp.
Write to the producer goroutine's error channel only on a non-nil error
to eliminate spurious synchronization between producer and consumer
goroutines external to the logger being benchmarked.
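A sketch of the fixed producer loop (names are assumptions):

    import "time"

    type Message struct { // stand-in for logger.Message
        Line      []byte
        Timestamp time.Time
    }

    func produce(n int, log func(*Message) error, errCh chan<- error) {
        ts := time.Date(2022, 1, 1, 0, 0, 0, 0, time.UTC) // constant timestamp
        for i := 0; i < n; i++ {
            // Allocate a fresh Message per iteration: the logger resets
            // its argument, so reusing one pointer logs zero values.
            msg := &Message{Line: []byte("hello"), Timestamp: ts}
            if err := log(msg); err != nil {
                errCh <- err // write only on a non-nil error
                return
            }
        }
    }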
Signed-off-by: Cory Snider <csnider@mirantis.com>
Some packages were using `logrus.Fatal()` in init functions, which logs
the error and (by default) calls `os.Exit(1)` after logging.
Given that logrus formatting and outputs have not yet been configured
during the initialization stage, it does not provide much benefit over a
plain `panic()`.
This patch replaces some instances of `logrus.Fatal()` with `panic()`,
which has the added benefit of not introducing logrus as a dependency in
some of these packages, and also produces a stack trace, which can help
locate the problem in the unlikely event that an `init()` fails.
Before this change, an error would look like:
$ dockerd
FATA[0000] something bad happened
After this change, the same error looks like:
$ dockerd
panic: something bad happened
goroutine 1 [running]:
github.com/docker/docker/daemon/logger/awslogs.init.0()
/go/src/github.com/docker/docker/daemon/logger/awslogs/cloudwatchlogs.go:128 +0x89
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The io/ioutil package has been deprecated in Go 1.16. This commit
replaces the existing io/ioutil functions with their new definitions in
the io and os packages.
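The replacements are mechanical:

    ioutil.ReadAll(r)          // -> io.ReadAll(r)
    ioutil.NopCloser(r)        // -> io.NopCloser(r)
    ioutil.Discard             // -> io.Discard
    ioutil.ReadFile(name)      // -> os.ReadFile(name)
    ioutil.WriteFile(n, b, m)  // -> os.WriteFile(n, b, m)
    ioutil.TempFile(dir, p)    // -> os.CreateTemp(dir, p)
    ioutil.TempDir(dir, p)     // -> os.MkdirTemp(dir, p)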
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Tonis mentioned that we can run into issues if more error handling is
added here. This adds a custom reader implementation which is like
io.MultiReader except it does not cache EOFs.
What got us into trouble in the first place is that `io.MultiReader`
always returns EOF once it has received an EOF, whereas the error
handling we are going for needs to recover from an EOF, because the
underlying file can have more data appended to it after EOF.
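A sketch of the idea (simplified; see the actual commit for the real
implementation):

    import "io"

    // multiReader is like io.MultiReader but does not latch EOF: when
    // the last reader reports EOF, a later Read will try it again,
    // because the underlying file may have grown in the meantime.
    type multiReader struct {
        readers []io.Reader
        idx     int
    }

    func (m *multiReader) Read(p []byte) (int, error) {
        for m.idx < len(m.readers) {
            n, err := m.readers[m.idx].Read(p)
            if err == io.EOF {
                if m.idx == len(m.readers)-1 {
                    // Surface EOF but stay on this reader so the
                    // caller can retry after the file grows.
                    return n, io.EOF
                }
                m.idx++ // an intermediate reader is exhausted; move on
                if n > 0 {
                    return n, nil
                }
                continue
            }
            return n, err
        }
        return 0, io.EOF
    }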
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
When the multireader hits EOF, we will always get EOF from it, so we
cannot keep the multireader around for later error handling, only for
the decoder.
Thanks @tobiasstadler for pointing this error out.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Before this change, the log decoder function provided by the log driver
to logfile would not be able to re-use buffers, causing unneeded
allocations and memory bloat for dockerd.
This change introduces an interface that allows the log driver to manage
its memory usage more effectively.
This only affects json-file and local log drivers.
`json-file` is still not great, simply because of how the json decoder
in the stdlib works.
`local` is significantly improved.
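The interface is along these lines (a sketch; the exact shape lives in
loggerutils):

    import "io"

    type Message struct{ Line []byte } // stand-in for logger.Message

    // Decoder lets the driver own decoder state, and hence its buffers:
    // one Decoder value is reused across reads instead of allocating on
    // every call.
    type Decoder interface {
        Reset(rdr io.Reader)       // point at a new stream, keep buffers
        Decode() (*Message, error) // next message, or io.EOF at end
        Close()                    // release resources (e.g. buffers)
    }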
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Fix warnings like this one:
> daemon/logger/jsonfilelog/jsonfilelog_test.go:191:3: SA5001: should check returned error before deferring file.Close() (staticcheck)
> defer file.Close()
> ^
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Format the source according to latest goimports.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When daemon.ContainerLogs() is called with options.follow=true
(as in "docker logs --follow"), the "loggerutils.followLogs()"
function never returns (even when the logs consumer is gone).
As a result, all the resources associated with it (including
an opened file descriptor for the log file being read, two FDs
for a pipe, and two FDs for inotify watch) are never released.
If this is repeated (such as by running "docker logs --follow"
and pressing Ctrl-C a few times), this results in DoS caused by
either hitting the limit of inotify watches, or the limit of
opened files. The only cure is daemon restart.
Apparently, what happens is:
1. logs producer (a container) is gone, calling (*LogWatcher).Close()
for all its readers (daemon/logger/jsonfilelog/jsonfilelog.go:175).
2. WatchClose() is properly handled by a dedicated goroutine in
followLogs(), cancelling the context.
3. Upon receiving ctx.Done(), the code in followLogs()
(daemon/logger/loggerutils/logfile.go#L626-L638) keeps sending
messages _synchronously_ (which is OK for now).
4. Logs consumer is gone (Ctrl-C is pressed on a terminal running
"docker logs --follow"). Method (*LogWatcher).Close() is properly
called (see daemon/logs.go:114). Since it was already called before,
due to once.Do() nothing happens (which is good, as otherwise it
would panic on closing a closed channel).
5. A goroutine (see item 3 above) keeps sending log messages
synchronously to the logWatcher.Msg channel. Since the
channel reader is gone, the channel send operation blocks forever,
and resource cleanup set up in defer statements at the beginning
of followLogs() never happens.
Alas, the fix is somewhat complicated:
1. Distinguish between close from logs producer and logs consumer.
To that effect,
- yet another channel is added to LogWatcher();
- {Watch,}Close() are renamed to {Watch,}ProducerGone();
- {Watch,}ConsumerGone() are added;
*NOTE* that the ProducerGone()/WatchProducerGone() pair is ONLY needed
in order to stop ContainerLogs(follow=true) when a container is stopped;
otherwise we're not interested in it. In other words, we're only
using it in followLogs().
2. Code that was doing (logWatcher*).Close() is modified to either call
ProducerGone() or ConsumerGone(), depending on the context.
3. Code that was waiting for WatchClose() is modified to wait for
either ConsumerGone() or ProducerGone(), or both, depending on the
context.
4. followLogs() is modified accordingly:
- context cancellation happens on WatchProducerGone(), and once it is
received the FileWatcher is closed and waitRead() returns errDone on
EOF (i.e. the log rotation handling logic is disabled);
- due to this, the code that was writing synchronously to logWatcher.Msg
can be (and is) removed, as the code above it handles this case;
- the function returns once ConsumerGone is received, freeing all the
resources -- this is the bugfix itself (see the sketch below).
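A minimal sketch of the two-channel signalling (field and method names
approximate the real code):

    import "sync"

    type Message struct{ Line []byte } // stand-in for logger.Message

    type LogWatcher struct {
        Msg          chan *Message
        producerOnce sync.Once
        producerGone chan struct{}
        consumerOnce sync.Once
        consumerGone chan struct{}
    }

    func NewLogWatcher() *LogWatcher {
        return &LogWatcher{
            Msg:          make(chan *Message),
            producerGone: make(chan struct{}),
            consumerGone: make(chan struct{}),
        }
    }

    // ProducerGone: the container stopped; no more messages will come.
    func (w *LogWatcher) ProducerGone() {
        w.producerOnce.Do(func() { close(w.producerGone) })
    }

    // ConsumerGone: the reader (e.g. "docker logs") went away.
    func (w *LogWatcher) ConsumerGone() {
        w.consumerOnce.Do(func() { close(w.consumerGone) })
    }

    // send never blocks forever on a vanished consumer, which was the
    // leak: followLogs returns (and its defers run) as soon as the
    // consumer is gone.
    func (w *LogWatcher) send(msg *Message) bool {
        select {
        case w.Msg <- msg:
            return true
        case <-w.consumerGone:
            return false
        }
    }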
While at it,
1. Let's also remove the ctx usage to simplify the code a bit.
It was introduced by commit a69a59ffc7 ("Decouple removing the
fileWatcher from reading") in order to fix a bug. The bug was actually
a deadlock in fsnotify, and the fix was just a workaround. Since then
the fsnotify bug has been fixed, and a new fsnotify was vendored in.
For more details, please see
https://github.com/moby/moby/pull/27782#issuecomment-416794490
2. Since `(*filePoller).Close()` is fixed to remove all the files
being watched, there is no need to explicitly call
fileWatcher.Remove(name) anymore, so get rid of the extra code.
Should fix https://github.com/moby/moby/issues/37391
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This makes it so consumers of `LogFile` pass in a function that
provides an io.Reader for the requested number of lines to tail.
This is also much more efficient when tailing a large number of lines.
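The tail-reader hook has roughly this shape (a sketch, not the exact
signature):

    import (
        "context"
        "io"
    )

    // SizeReaderAt is what the tail function gets to work with.
    type SizeReaderAt interface {
        io.ReaderAt
        Size() int64
    }

    // GetTailReaderFunc returns a reader positioned at the last n lines
    // of f, plus how many lines it actually found. Seeking backwards
    // from the end is far cheaper than scanning the whole file.
    type GetTailReaderFunc func(ctx context.Context, f SizeReaderAt, n int) (io.Reader, int, error)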
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Docker daemon has a 16K buffer for log messages. If a message length
exceeds 16K, it should be split by the logger and merged at the
endpoint.
This change adds `PartialLogMetaData` struct for enhanced partial support
- LastPartial (bool) : indicates if this is the last of all partials.
- ID (string) : unique 32 bit ID. ID is same across all partials.
- Ordinal (int starts at 1) : indicates the position of msg in the series of partials.
Also, the timestamps across all partials of a message are the same.
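Sketched as a struct (field names as described above):

    type PartialLogMetaData struct {
        LastPartial bool   // true on the final chunk of the series
        ID          string // same across all partials of one message
        Ordinal     int    // position in the series, starting at 1
    }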
Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>
This PR adds support for compressing log files.
I added a new option, compression, for the jsonfile log driver;
it allows the user to specify the compression algorithm used to
compress the log files. By default, the log files are not compressed.
At present, only 'gzip' is supported.
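An example of enabling it, assuming the option name the feature
eventually shipped with (`compress`), together with rotation:
$ dockerd --log-driver=json-file \
    --log-opt max-size=10m --log-opt max-file=5 \
    --log-opt compress=true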
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
'docker logs' can read from compressed files
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
Add Metadata to the gzip header, optimize 'readlog'
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
This fixes an issue where the container LogPath was empty when the
non-blocking logging mode was enabled. This change sets the LogPath on
the container as soon as the path is generated, instead of setting the
LogPath on a logger struct and then attempting to pull it off that
logger at a later point. That attempt to pull the LogPath off the logger
was error prone since it assumed that the logger would only ever be a
single type.
Prior to this change docker inspect returned an empty string for
LogPath. This caused issues with tools that rely on docker inspect
output to discover container logs, e.g. Kubernetes.
This commit also removes some LogPath methods that are now unnecessary
and are never invoked.
Signed-off-by: junzhe and mnussbaum <code@getbraintree.com>
This fix carries PR 34248: Added tag log option to json-logger.
It changes the implementation to use RawAttrs based on review feedback.
This fixes 19803 and closes 34248.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Fixes #19803
Updated the json-logger to utilize the common log option
'tag' that can define container/image information to include
as part of logging.
When the 'tag' log option is not included, there is no change
to the log content via the json-logger. When the 'tag' log option
is included, the tag will be parsed as a template and the result
will be stored within each log entry as the attribute 'tag'.
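For example (tag templates support fields such as {{.ID}}, {{.Name}}
and {{.ImageName}}):
$ docker run --log-driver=json-file \
    --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" alpine echo hi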
Update: Removing test added to integration_cli as those have been deprecated.
Update: Using proper test calls (require and assert) in jsonfilelog_test.go based on review.
Update: Added new unit test configs for logs with tag. Updated unit test error checking.
Update: Cleanup check in jsonlogbytes_test.go to match pending changes in PR #34946.
Update: Merging to correct conflicts from PR #34946.
Signed-off-by: bonczj <josh.bonczkowski@gmail.com>
This allows much of the read logic to be shared for other things,
especially for the new log driver proposed in
https://github.com/moby/moby/issues/33475
The only logic for reads in the json logger is around decoding log
messages, which gets passed into the log file object.
This also helps with implementing compression as it allows us to
simplify locking strategies.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Make the `*RotateFileWriter` specifically about writing
`logger.Message`s, which is what it is used for.
This allows for future changes where the log writer can cache details
about log entries such as (e.g.) the timestamps included in a particular
log file, which can be used to optimize reads.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Having a map per log entry seemed heavier than necessary. These
attributes end up being sorted and serialized, so storing them in a map
doesn't add anything (there's no random access element). In SwarmKit,
they originate as a slice, so there's an unnecessary conversion to a map
and back.
This also fixes the sort comparator, which used to inefficiently split
the string on each comparison.
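A sketch of the slice-of-pairs form:

    import "sort"

    // LogAttr replaces the per-message map: a slice of pairs sorted once
    // by key, comparing fields directly instead of splitting a
    // "key=value" string on every comparison.
    type LogAttr struct{ Key, Value string }

    func sortAttrs(attrs []LogAttr) {
        sort.Slice(attrs, func(i, j int) bool {
            return attrs[i].Key < attrs[j].Key
        })
    }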
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>