TestLogBlocking is intended to test that the Log method blocks by
default. It does this by mocking out the internals of the
awslogs.logStream and replacing one of its internal channels with one
that is controlled by the test. The call to Log occurs inside a
goroutine. Go may or may not schedule the goroutine immediately and the
blocking may or may not be observed outside the goroutine immediately
due to decisions made by the Go runtime. This change adds a small
timeout for test failure so that the Go runtime has the opportunity to
run the goroutine before the test fails.
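The resulting pattern looks roughly like this (a sketch; stream.messages
is the mocked internal channel from the test, and the 30ms timeout value
is illustrative):
```go
// Wait briefly instead of doing a single non-blocking read, so the Go
// runtime has a chance to schedule the goroutine that calls Log().
select {
case <-stream.messages:
	// ok: the goroutine ran and Log() delivered the message
case <-time.After(30 * time.Millisecond): // illustrative timeout
	t.Fatal("Expected to be able to read from stream.messages but was unable to")
}
```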
Signed-off-by: Samuel Karp <skarp@amazon.com>
(cherry picked from commit fd94bae0b8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This test frequently fails on Windows RS1 (mainly), so skip it
on Windows for now:
```
ok github.com/docker/docker/daemon/logger 0.525s coverage: 43.0% of statements
time="2019-09-09T20:37:35Z" level=info msg="Trying to get region from EC2 Metadata"
time="2019-09-09T20:37:36Z" level=info msg="Log stream already exists" errorCode=ResourceAlreadyExistsException logGroupName= logStreamName= message= origError="<nil>"
--- FAIL: TestLogBlocking (0.02s)
cloudwatchlogs_test.go:313: Expected to be able to read from stream.messages but was unable to
time="2019-09-09T20:37:36Z" level=error msg=Error
time="2019-09-09T20:37:36Z" level=error msg="Failed to put log events" errorCode=InvalidSequenceTokenException logGroupName=groupName logStreamName=streamName message="use token token" origError="<nil>"
time="2019-09-09T20:37:36Z" level=error msg="Failed to put log events" errorCode=DataAlreadyAcceptedException logGroupName=groupName logStreamName=streamName message="use token token" origError="<nil>"
time="2019-09-09T20:37:36Z" level=info msg="Data already accepted, ignoring error" errorCode=DataAlreadyAcceptedException logGroupName=groupName logStreamName=streamName message="use token token"
FAIL
coverage: 78.2% of statements
FAIL github.com/docker/docker/daemon/logger/awslogs 0.630s
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6c75c86240)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
In case jsonlogfile is used with max-file=1 and max-size set,
log rotation is not performed; instead, the log file is closed
and reopened with O_TRUNC.
This situation is not handled by the log reader in follow mode,
leading to the log reader being stuck forever.
This situation (file close/reopen) could be handled in waitRead(),
but the fsnotify library chose not to listen for or deliver this
event (IN_CLOSE_WRITE in inotify lingo).
So, we have to handle this by checking the file size upon receiving
io.EOF from the log reader, and comparing the size with the one received
earlier. In case the new size is less than the old one, the file was
truncated and we need to seek to its beginning.
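A minimal sketch of the check (names are illustrative, not the actual
logfile.go code):
```go
// On io.EOF, compare the current file size to the last size we saw;
// if it shrank, the file was truncated (closed and reopened with
// O_TRUNC), so seek back to the beginning.
fi, err := f.Stat()
if err != nil {
	return err
}
if fi.Size() < lastSize { // truncation detected
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		return err
	}
}
lastSize = fi.Size()
```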
Fixes #39235.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 9cd24ba605)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
1. Use "in-place" variables for if statements to limit their scope to
the respectful `if` block.
2. Report the error returned from sd_journal_* by using CErr().
3. Use errors.New() instead of fmt.Errorf().
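For illustration, the three items combine into a pattern like this
(a sketch; it assumes CErr() turns a negative sd_journal_* return code
into a message string):
```go
if rc := C.sd_journal_get_fd(j); rc < 0 { // rc is scoped to this if block
	return errors.New("error getting journal fd: " + CErr(rc))
}
```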
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 20a0e58a79)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
At first glance, `docker logs --tail 0` does not make sense,
as it is supposed to produce no output, but `tail -n 0` from GNU
coreutils works that way, plus there is even a test case
(`TestLogsTail` in integration-cli/docker_cli_logs_test.go).
Now, something like `docker logs --follow --tail 0` makes total
sense, so let's make it work.
(NOTE if --tail is not used, config.Tail is set to -1)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit dd4bfe30a8)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
If we take a long time to process log messages, and during that time
journal file rotation occurs, the journald client library will keep
those rotated files open until sd_journal_process() is called.
By periodically calling sd_journal_process() during the processing
loop we shrink the window of time a client instance has open file
descriptors for rotated (deleted) journal files.
This code is modelled after that of journalctl [1]; the above explanation
as well as the value of 1024 is taken from there.
[v2: fix CErr() argument]
[1] https://github.com/systemd/systemd/blob/dc16327c48d/src/journal/journalctl.c#L2676
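As a sketch, the loop described above looks something like this (error
handling and the read step are illustrative; the 1024 interval is taken
from journalctl):
```go
for i := 0; ; i++ {
	if i%1024 == 0 {
		// let the client library process queued journal events, so it
		// can close descriptors for rotated (deleted) journal files
		if rc := C.sd_journal_process(j); rc < 0 {
			return errors.New("error processing journal: " + CErr(rc))
		}
	}
	// ... read and forward the next journal entry ...
}
```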
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b73fb8fd5d)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
TL;DR: simplify the code, fix --follow hanging indefinitely
Do the following to simplify the followJournal() code:
1. Use Go-native select instead of C-native polling.
2. Use Watch{Producer,Consumer}Gone(), eliminating the need
to have journald.closed variable, and an extra goroutine.
3. Use sd_journal_wait(). In the words of its own man page:
> A synchronous alternative for using sd_journal_get_fd(),
> sd_journal_get_events(), sd_journal_get_timeout() and
> sd_journal_process() is sd_journal_wait().
Unfortunately, the logic is still not as simple as it
could be; the reason being, once the container has exited,
journald might still be writing some logs from its internal
buffers onto journal file(s), and there is no way to figure
out when it is done, so we cannot guarantee that all of it
has been read back. This bug can be reproduced with
something like
> $ ID=$(docker run -d busybox seq 1 150000); docker logs --follow $ID
> ...
> 128123
> $
(The last expected output line should be `150000`).
To avoid exiting from followJournal() early, add the
following logic: once the container is gone, keep trying
to drain the journal until there's no new data for at
least `waitTimeout` time period.
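As a sketch, the added logic looks something like this (drainJournal()
and waitInterval are illustrative names for the description above, not
the exact code):
```go
deadline := time.Now().Add(waitTimeout)
for {
	if drainJournal(j) { // hypothetical: true if any new entries were read
		deadline = time.Now().Add(waitTimeout) // got data; reset the timer
	} else if time.Now().After(deadline) {
		return // journal was quiet for waitTimeout; assume we got it all
	}
	C.sd_journal_wait(j, C.uint64_t(waitInterval)) // wait for new data or a timeout
}
```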
Should fix https://github.com/docker/for-linux/issues/575
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f091febc94)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
1. The journald client library initializes inotify watch(es)
during the first call to sd_journal_get_fd(), so it makes sense
to call it earlier in order not to lose any journal file rotation
events.
2. It only makes sense to call this if we're going to use it
later on -- so add a check for config.Follow.
3. Remove the redundant call to sd_journal_get_fd().
NOTE that any subsequent calls to sd_journal_get_fd() return
the same file descriptor, so there's no real need to save it
for later use in wait_for_data_cancelable().
Based on earlier patch by Nalin Dahyabhai <nalin@redhat.com>.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 981c01665b)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
In case the LogConsumer is gone, the code that sends the message can
get stuck forever. Wrap the send in a select statement, as all other
loggers do.
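The fix follows the usual pattern (a sketch; Msg and WatchConsumerGone()
are the LogWatcher channel and method mentioned elsewhere in this
series):
```go
select {
case logWatcher.Msg <- msg:
case <-logWatcher.WatchConsumerGone():
	return // the consumer is gone; stop sending
}
```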
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 79039720c8)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
In case the Tail=N parameter is requested, we need to show N lines.
It does not make sense to walk backwards one entry at a time when
we can jump back at once. Now, if Since=T is also provided, make
sure we haven't jumped too far (before T), and if we did, move
forward (a code sketch follows the before/after example below).
The primary motivation for this was to make the code simpler.
This also fixes a tiny bug in the "since" implementation.
Before this commit:
> $ docker logs -t --tail=6000 --since="2019-03-10T03:54:25.00" $ID | head
> 2019-03-10T03:54:24.999821000Z 95981
After:
> $ docker logs -t --tail=6000 --since="2019-03-10T03:54:25.00" $ID | head
> 2019-03-10T03:54:25.000013000Z 95982
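A sketch of the approach (sd_journal_previous_skip() is the journald
call for jumping back several entries at once; the helpers around it
are hypothetical):
```go
// Jump back n entries in one call rather than walking one by one.
if rc := C.sd_journal_previous_skip(j, C.uint64_t(n)); rc < 0 {
	return errors.New("error skipping back in journal: " + CErr(rc))
}
// If we jumped too far (before the Since=T cutoff), move forward to T.
if !since.IsZero() && entryTime(j).Before(since) { // entryTime: hypothetical
	seekToTime(j, since) // hypothetical
}
```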
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit ff3cd167ea)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Minor code simplification.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit e8f6166791)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Clean up a deferred function call in the journal reading logic.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
(cherry picked from commit 1ada3e85bf)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Adds the following validations to the syslog logger test (a sketch of
one such check follows the list):
1. Only supported options are valid.
2. The syslog-address log option has to be a valid URI.
3. If syslog-address points to a file, the file has to exist.
4. If syslog-address uses a udp/tcp scheme, default to port 514.
5. The syslog-facility log option has to be a valid facility.
6. The syslog-format log option has to be a valid format.
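A sketch of one such check (assuming the driver's ValidateLogOpt entry
point; the exact test helpers may differ):
```go
if err := ValidateLogOpt(map[string]string{
	"syslog-address": "this is not a valid URI",
}); err == nil {
	t.Fatal("expected an invalid syslog-address to be rejected")
}
```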
Signed-off-by: Joao Trindade <trindade.joao@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Add awslogs-force-flush-interval-seconds and awslogs-max-buffered-events
configurable options to the awslogs driver, replacing the hardcoded
values of 5 seconds and 4K, respectively.
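For example (illustrative values and image name):
```
docker run --log-driver=awslogs \
    --log-opt awslogs-force-flush-interval-seconds=10 \
    --log-opt awslogs-max-buffered-events=8192 \
    my-image
```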
Signed-off-by: Maximiliano Maccanti <maccanti@amazon.com>
This fix addresses the issue raised in 38258, where the current
RFC5424 syslog format does not zero-pad the time (trailing zeros
are removed). This fixes 38258.
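The underlying Go behavior, for illustration: a fractional-seconds
layout written with 9s drops trailing zeros, while one written with 0s
keeps a fixed width.
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2019, 1, 2, 3, 4, 5, 120000000, time.UTC)
	fmt.Println(t.Format("2006-01-02T15:04:05.999999Z07:00")) // trailing zeros dropped: ...05.12Z
	fmt.Println(t.Format("2006-01-02T15:04:05.000000Z07:00")) // zero-padded: ...05.120000Z
}
```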
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
The CloudWatch Logs API defines its limits in terms of bytes, but its
inputs in terms of UTF-8 encoded strings. Byte-sequences which are not
valid UTF-8 encodings are normalized to the Unicode replacement
character U+FFFD, which is a 3-byte sequence in UTF-8. This replacement
can cause the input to grow, exceeding the API limit and causing failed
API calls.
This commit adds logic for counting the effective byte length after
normalization and splitting input without splitting valid UTF-8
byte-sequences into two invalid byte-sequences.
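A sketch of the counting logic (not the actual awslogs code): every
invalid byte normalizes to U+FFFD, which encodes to 3 bytes.
```go
package main

import "unicode/utf8"

// effectiveLen returns the byte length of b after invalid sequences
// are normalized to the Unicode replacement character U+FFFD.
func effectiveLen(b []byte) int {
	n := 0
	for len(b) > 0 {
		r, size := utf8.DecodeRune(b)
		if r == utf8.RuneError && size == 1 {
			n += utf8.RuneLen(utf8.RuneError) // 3 bytes for U+FFFD
		} else {
			n += size
		}
		b = b[size:]
	}
	return n
}
```
Splitting then happens only at offsets where utf8.DecodeRune made
progress, so a valid sequence is never cut into two invalid ones.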
Fixes https://github.com/moby/moby/issues/37747
Signed-off-by: Samuel Karp <skarp@amazon.com>
As in other similar drivers (jsonlog, local), use a set
(i.e. `map[whatever]struct{}`), making the code simpler.
While at it, make sure we remove the reader from the set
after calling `ProducerGone()` on it.
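The pattern, for illustration (a sketch; w is one of the readers):
```go
readers := make(map[*logger.LogWatcher]struct{})
readers[w] = struct{}{} // add a reader to the set
// ...
if _, ok := readers[w]; ok {
	w.ProducerGone()
	delete(readers, w) // remove the reader after ProducerGone()
}
```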
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This should eliminate a bunch of new (go-1.11 related) validation
errors complaining that the code is not formatted with `gofmt -s`.
No functional change, just whitespace (i.e.
`git show --ignore-space-change` shows nothing).
Patch generated with:
> git ls-files | grep -v ^vendor/ | grep .go$ | xargs gofmt -s -w
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This should test that
- all the messages produced are delivered (i.e. not lost)
- followLogs() exits
Loosely based on the test having the same name by Brian Goff, see
https://gist.github.com/cpuguy83/e538793de18c762608358ee0eaddc197
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
When daemon.ContainerLogs() is called with options.follow=true
(as in "docker logs --follow"), the "loggerutils.followLogs()"
function never returns (even when the logs consumer is gone).
As a result, all the resources associated with it (including
an opened file descriptor for the log file being read, two FDs
for a pipe, and two FDs for inotify watch) are never released.
If this is repeated (such as by running "docker logs --follow"
and pressing Ctrl-C a few times), this results in DoS caused by
either hitting the limit of inotify watches, or the limit of
opened files. The only cure is daemon restart.
Apparently, what happens is:
1. logs producer (a container) is gone, calling (*LogWatcher).Close()
for all its readers (daemon/logger/jsonfilelog/jsonfilelog.go:175).
2. WatchClose() is properly handled by a dedicated goroutine in
followLogs(), cancelling the context.
3. Upon receiving ctx.Done(), the code in followLogs()
(daemon/logger/loggerutils/logfile.go#L626-L638) keeps sending
messages _synchronously_ (which is OK for now).
4. Logs consumer is gone (Ctrl-C is pressed on a terminal running
"docker logs --follow"). Method (*LogWatcher).Close() is properly
called (see daemon/logs.go:114). Since it was already called before,
due to once.Do() nothing happens (which is kinda good, as otherwise
it would panic on closing a closed channel).
5. A goroutine (see item 3 above) keeps sending log messages
synchronously to the logWatcher.Msg channel. Since the
channel reader is gone, the channel send operation blocks forever,
and resource cleanup set up in defer statements at the beginning
of followLogs() never happens.
Alas, the fix is somewhat complicated:
1. Distinguish between close from logs producer and logs consumer.
To that effect,
- yet another channel is added to LogWatcher();
- {Watch,}Close() are renamed to {Watch,}ProducerGone();
- {Watch,}ConsumerGone() are added;
*NOTE* that the ProducerGone()/WatchProducerGone() pair is ONLY needed
in order to stop ContainerLogs(follow=true) when a container is stopped;
otherwise we're not interested in it. In other words, we're only
using it in followLogs().
2. Code that was doing (logWatcher*).Close() is modified to either call
ProducerGone() or ConsumerGone(), depending on the context.
3. Code that was waiting for WatchClose() is modified to wait for
either ConsumerGone() or ProducerGone(), or both, depending on the
context.
4. followLogs() is modified accordingly (see the sketch after this list):
- context cancellation is happening on WatchProducerGone(),
and once it's received the FileWatcher is closed and waitRead()
returns errDone on EOF (i.e. log rotation handling logic is disabled);
- due to this, code that was writing synchronously to logWatcher.Msg
can be and is removed as the code above it handles this case;
- function returns once ConsumerGone is received, freeing all the
resources -- this is the bugfix itself.
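A sketch of the resulting message-delivery shape in followLogs()
(illustrative, not the exact logfile.go code):
```go
select {
case logWatcher.Msg <- msg: // deliver the next decoded message
case <-logWatcher.WatchConsumerGone():
	return // runs the deferred cleanups, freeing all resources (the bugfix)
case <-logWatcher.WatchProducerGone():
	fileWatcher.Close() // waitRead() will now return errDone on EOF
}
```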
While at it,
1. Let's also remove the ctx usage to simplify the code a bit.
It was introduced by commit a69a59ffc7 ("Decouple removing the
fileWatcher from reading") in order to fix a bug. The bug was actually
a deadlock in fsnotify, and the fix was just a workaround. Since then
the fsnotify bug has been fixed, and a new fsnotify was vendored in.
For more details, please see
https://github.com/moby/moby/pull/27782#issuecomment-416794490
2. Since `(*filePoller).Close()` is fixed to remove all the files
being watched, there is no need to explicitly call
fileWatcher.Remove(name) anymore, so get rid of the extra code.
Should fix https://github.com/moby/moby/issues/37391
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This test case checks that followLogs() exits once the reader is gone.
Currently it does not (i.e. this test is supposed to fail) due to #37391.
[kolyshkin@: test case by Brian Goff; changelog and all bugs are by me]
Source: https://gist.github.com/cpuguy83/e538793de18c762608358ee0eaddc197
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This driver uses protobuf to store log messages and has better defaults
for log file handling (e.g. compression and file rotation enabled by
default).
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This makes it so consumers of `LogFile` must pass in a function that
returns an io.Reader for the requested number of lines to tail (see
the sketch below).
This is also much more efficient when tailing a large number of lines.
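The callback shape approximates the following (a sketch of the
loggerutils API; exact names may differ):
```go
// SizeReaderAt is an io.ReaderAt that also knows its size.
type SizeReaderAt interface {
	io.ReaderAt
	Size() int64
}

// GetTailReaderFunc returns an io.Reader positioned at the last
// nLines lines of f, plus how many lines it was actually able to read.
type GetTailReaderFunc func(ctx context.Context, f SizeReaderAt, nLines int) (rdr io.Reader, nRead int, err error)
```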
Signed-off-by: Brian Goff <cpuguy83@gmail.com>