Commit graph

66 commits

Author SHA1 Message Date
Sebastiaan van Stijn
a9081299dd
logger/journald: fix SA4011: ineffective break statement
This was introduced in 906b979b88, which changed
a `goto` to a `break`, but afaics, the intent was still to break out of the loop.
(linter didn't catch this before because it didn't have the right build-tag set)

    daemon/logger/journald/read.go:238:4: SA4011: ineffective break statement. Did you mean to break out of the outer loop? (staticcheck)
                break // won't be able to write anything anymore
                ^
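
To illustrate the lint, here is a minimal, self-contained Go sketch (hypothetical channels, not the moby code): inside a `select`, a bare `break` only leaves the `select`, so breaking out of the surrounding loop needs a label.

```go
package main

func main() {
	done := make(chan struct{})
	work := make(chan int)
	close(done)

drain:
	for {
		select {
		case v := <-work:
			_ = v // keep reading while writes are still possible
		case <-done:
			// A bare "break" here would only exit the select statement,
			// not the for loop -- which is what SA4011 warns about.
			break drain
		}
	}
}
```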

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 75577fe7a8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2022-07-20 16:57:22 +02:00
Cory Snider
906b979b88 daemon/logger: remove ProducerGone from LogWatcher
Whether or not the logger has been closed is a property of the logger,
and only of concern to its log reading implementation, not log watchers.
The loggers and their reader implementations can communicate as they see
fit. A single channel per logger, closed when the logger itself is
closed, is sufficient to broadcast the state to log readers, with
no extra bookkeeping or synchronization required.
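
As a rough illustration of the single-channel pattern described above (type and field names are illustrative, not the actual moby types):

```go
package main

import (
	"fmt"
	"sync"
)

// logger owns a single channel that is closed exactly once when the
// logger itself is closed; every reader can wait on it to learn the state.
type logger struct {
	closed    chan struct{}
	closeOnce sync.Once
}

func newLogger() *logger { return &logger{closed: make(chan struct{})} }

func (l *logger) Close() { l.closeOnce.Do(func() { close(l.closed) }) }

func main() {
	l := newLogger()
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			<-l.closed // all readers unblock on the same close
			fmt.Println("reader", id, "saw logger close")
		}(i)
	}
	l.Close()
	wg.Wait()
}
```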

Signed-off-by: Cory Snider <csnider@mirantis.com>
2022-05-19 15:22:22 -04:00
Sebastiaan van Stijn
df650a1aeb
panic() instead of logrus.Fatal() in init funcs
Some packages were using `logrus.Fatal()` in init functions (which logs the error,
and (by default) calls `os.Exit(1)` after logging).

Given that logrus formatting and outputs have not yet been configured during the
initialization stage, it does not provide much benefit over a plain `panic()`.

This patch replaces some instances of `logrus.Fatal()` with `panic()`, which has
the added benefit of not introducing logrus as a dependency in some of these
packages, and also produces a stack trace, which could help locate the problem
in the unlikely event that an `init()` fails.

Before this change, an error would look like:

    $ dockerd
    FATA[0000] something bad happened

After this change, the same error looks like:

    $ dockerd
    panic: something bad happened

    goroutine 1 [running]:
      github.com/docker/docker/daemon/logger/awslogs.init.0()
        /go/src/github.com/docker/docker/daemon/logger/awslogs/cloudwatchlogs.go:128 +0x89
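
A minimal sketch of the change (`validateConfig` is a hypothetical stand-in, not a real awslogs function):

```go
package main

// Hypothetical stand-in for whatever an init function needs to verify.
func validateConfig() error { return nil }

func init() {
	if err := validateConfig(); err != nil {
		// Before: logrus.Fatal(err) -- logs, then calls os.Exit(1), even
		// though logrus is not configured yet during initialization.
		// After: panic(err) -- no logrus dependency, and the stack trace
		// points straight at the failing init().
		panic(err)
	}
}

func main() {}
```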

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2022-04-21 12:15:20 +02:00
Sebastiaan van Stijn
686be57d0a
Update to Go 1.17.0, and gofmt with Go 1.17
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-08-24 23:33:27 +02:00
Sebastiaan van Stijn
d43bcc8974
daemon/logger/journald: fix linting errors
    daemon/logger/journald/read.go:128:3: comment on exported function `CErr` should be of the form `CErr ...`

    daemon/logger/journald/read.go:131:36: unnecessary conversion (unconvert)
            return C.GoString(C.strerror(C.int(-ret)))
                                          ^
    daemon/logger/journald/read.go:380:2: S1023: redundant `return` statement (gosimple)
        return
        ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-06-10 13:03:21 +02:00
Christian Becker
322c9e6866 add partial metadata to journald logs
Add all partial metadata available to journald logs, to allow easier reassembly of partial messages in downstream logging systems.

fixes #41403

Signed-off-by: Christian Becker <christian.becker@sixt.com>
2020-09-01 12:54:05 +02:00
Akihiro Suda
9a82a9a8ea vendor containerd, BuildKit, protobuf, grpc, and golang.org/x
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-03-03 10:25:20 +09:00
Sebastiaan van Stijn
9c701fdb6b
logger/journald: U1000: field mu is unused (unused)
This looks to be a false positive: the field is not used
when journald is not supported, which may be the cause of the warning.

```
daemon/logger/journald/journald.go:21:2: U1000: field `mu` is unused (unused)
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-18 12:57:42 +02:00
Kir Kolyshkin
20a0e58a79 journald/read: fix/unify errors
1. Use "in-place" variables for if statements to limit their scope to
   the respectful `if` block.

2. Report the error returned from sd_journal_* by using CErr().

3. Use errors.New() instead of fmt.Errorf().
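
A small illustrative sketch of the three changes, with hypothetical stand-ins for the cgo calls:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins: the real code calls sd_journal_* through cgo
// and formats the error code with C.strerror via a CErr() helper.
func sdJournalOpen() int     { return 0 }
func journalReadable() bool  { return true }
func cerrStr(ret int) string { return fmt.Sprintf("errno %d", -ret) }

func openJournal() error {
	// 1. "in-place" variable: rc is scoped to this if statement only.
	if rc := sdJournalOpen(); rc < 0 {
		// 2. report the error returned by sd_journal_* via the CErr-style helper.
		return fmt.Errorf("error opening journal: %s", cerrStr(rc))
	}
	// 3. errors.New() for fixed strings; fmt.Errorf() only when formatting.
	if !journalReadable() {
		return errors.New("journal is not readable")
	}
	return nil
}

func main() { fmt.Println(openJournal()) }
```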

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-02 10:02:35 -07:00
Kir Kolyshkin
dd4bfe30a8 journald: fix for --tail 0
At first glance, `docker logs --tail 0` does not make sense,
as it is supposed to produce no output, but `tail -n 0` from GNU
coreutils works like that, and there is even a test case
(`TestLogsTail` in integration-cli/docker_cli_logs_test.go).

Now, something like `docker logs --follow --tail 0` makes total
sense, so let's make it work.

(NOTE if --tail is not used, config.Tail is set to -1)

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-02 10:02:35 -07:00
Kir Kolyshkin
b73fb8fd5d journald/read: avoid piling up open files
If we take a long time to process log messages, and during that time
journal file rotation occurs, the journald client library will keep
those rotated files open until sd_journal_process() is called.

By periodically calling sd_journal_process() during the processing
loop we shrink the window of time a client instance has open file
descriptors for rotated (deleted) journal files.

This code is modelled after that of journalctl [1]; the above explanation
as well as the value of 1024 is taken from there.

[v2: fix CErr() argument]

[1] https://github.com/systemd/systemd/blob/dc16327c48d/src/journal/journalctl.c#L2676
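
A hedged cgo sketch of the pattern (assuming libsystemd headers; names and error handling simplified):

```go
//go:build linux && cgo

package journald

// #cgo pkg-config: libsystemd
// #include <systemd/sd-journal.h>
import "C"

// drain reads entries and, every 1024 of them, calls sd_journal_process()
// so the client library can close FDs of rotated (deleted) journal files.
func drain(j *C.sd_journal) {
	const processEvery = 1024
	n := 0
	for C.sd_journal_next(j) > 0 {
		// ... convert and forward the current entry ...
		n++
		if n%processEvery == 0 {
			C.sd_journal_process(j)
		}
	}
}
```
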
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-02 10:02:35 -07:00
Kir Kolyshkin
f091febc94 journald/read: simplify/fix followJournal()
TL;DR: simplify the code, fix --follow hanging indefinitely

Do the following to simplify the followJournal() code:

1. Use Go-native select instead of C-native polling.

2. Use Watch{Producer,Consumer}Gone(), eliminating the need
to have journald.closed variable, and an extra goroutine.

3. Use sd_journal_wait(). In the words of its own man page:

> A synchronous alternative for using sd_journal_get_fd(),
> sd_journal_get_events(), sd_journal_get_timeout() and
> sd_journal_process() is sd_journal_wait().

Unfortunately, the logic is still not as simple as it
could be: once the container has exited, journald might
still be writing some logs from its internal buffers onto
journal file(s), and there is no way to know when it has
finished, so we are not guaranteed to read all of it back.
This bug can be reproduced with something like

> $ ID=$(docker run -d busybox seq 1 150000); docker logs --follow $ID
> ...
> 128123
> $

(The last expected output line should be `150000`).

To avoid exiting from followJournal() early, add the
following logic: once the container is gone, keep trying
to drain the journal until there's no new data for at
least `waitTimeout` time period.
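
A rough Go sketch of that drain logic; `drainJournal` and the concrete `waitTimeout` value are illustrative, not the actual moby symbols:

```go
package main

import (
	"fmt"
	"time"
)

// drainUntilQuiet keeps draining the journal after the container has
// exited, returning only once no new data has appeared for waitTimeout.
// drainJournal reports whether it read any new entries.
func drainUntilQuiet(drainJournal func() bool, waitTimeout time.Duration) {
	lastData := time.Now()
	for time.Since(lastData) < waitTimeout {
		if drainJournal() {
			lastData = time.Now() // new data seen; restart the quiet window
		} else {
			time.Sleep(waitTimeout / 10) // nothing new; re-check shortly
		}
	}
}

func main() {
	remaining := 5
	drainUntilQuiet(func() bool { remaining--; return remaining > 0 }, 100*time.Millisecond)
	fmt.Println("journal quiet; safe to stop following")
}
```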

Should fix https://github.com/docker/for-linux/issues/575

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-02 10:02:35 -07:00
Kir Kolyshkin
981c01665b Call sd_journal_get_fd() earlier, only if needed
1. The journald client library initializes inotify watch(es)
during the first call to sd_journal_get_fd(), so it makes sense
to call it earlier in order not to lose any journal file rotation
events.

2. It only makes sense to call this if we're going to use it
later on -- so add a check for config.Follow.

3. Remove the redundant call to sd_journal_get_fd().

NOTE that any subsequent calls to sd_journal_get_fd() return
the same file descriptor, so there's no real need to save it
for later use in wait_for_data_cancelable().

Based on earlier patch by Nalin Dahyabhai <nalin@redhat.com>.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-02 10:02:34 -07:00
Kir Kolyshkin
79039720c8 journald/read: avoid being blocked on send
If the LogConsumer is gone, the code that sends the message can get
stuck forever. Wrap the send in a select statement, as all other loggers do.
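
A condensed, illustrative version of the guarded send (type and channel names are approximations):

```go
package main

import "fmt"

type message struct{ line string }

type logWatcher struct {
	msg          chan *message
	consumerGone chan struct{}
}

// send delivers a message unless the consumer is gone, so the producer
// can never block forever on a channel nobody reads anymore.
func (w *logWatcher) send(m *message) bool {
	select {
	case w.msg <- m:
		return true
	case <-w.consumerGone:
		return false // consumer stopped reading; drop instead of blocking
	}
}

func main() {
	w := &logWatcher{msg: make(chan *message), consumerGone: make(chan struct{})}
	close(w.consumerGone) // simulate the consumer going away
	fmt.Println("delivered:", w.send(&message{line: "hello"}))
}
```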

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-02 10:02:34 -07:00
Kir Kolyshkin
ff3cd167ea journald/read: simplify walking backwards
When the Tail=N parameter is requested, we need to show N lines.
It does not make sense to walk backwards one entry at a time if
we can jump back all at once. Now, if Since=T is also provided,
make sure we haven't jumped too far back (before T), and if we
did, move forward.

The primary motivation for this was to make the code simpler.

This also fixes a tiny bug in the "since" implementation.

Before this commit:
> $ docker logs -t --tail=6000 --since="2019-03-10T03:54:25.00" $ID | head
> 2019-03-10T03:54:24.999821000Z 95981

After:
> $ docker logs -t --tail=6000 --since="2019-03-10T03:54:25.00" $ID | head
> 2019-03-10T03:54:25.000013000Z 95982
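
A hedged cgo sketch of the idea (assuming libsystemd headers; error handling elided):

```go
//go:build linux && cgo

package journald

// #cgo pkg-config: libsystemd
// #include <systemd/sd-journal.h>
import "C"

// seekTail jumps back n entries in a single call instead of stepping one
// by one, then walks forward if the jump overshot the --since timestamp.
func seekTail(j *C.sd_journal, n uint64, sinceUsec uint64) {
	C.sd_journal_seek_tail(j)
	C.sd_journal_previous_skip(j, C.uint64_t(n)) // one call instead of n steps
	if sinceUsec == 0 {
		return // no --since given
	}
	var ts C.uint64_t
	for {
		if C.sd_journal_get_realtime_usec(j, &ts) < 0 || uint64(ts) >= sinceUsec {
			return // at (or past) the --since boundary
		}
		if C.sd_journal_next(j) <= 0 {
			return // reached the end of the journal
		}
	}
}
```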

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-02 10:02:34 -07:00
Kir Kolyshkin
e8f6166791 journald/read: simplify code
Minor code simplification.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-02 10:02:34 -07:00
Nalin Dahyabhai
1ada3e85bf Small journal cleanup
Clean up a deferred function call in the journal reading logic.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2019-08-02 10:02:34 -07:00
Robin THONI
bc709991b3 Use a regex to match labels
Signed-off-by: Robin THONI <robin@rthoni.com>
2019-01-16 22:17:16 +00:00
Yong Tang
533e07afbe
Merge pull request #38032 from RohitK89/21497-log-image-name
Add IMAGE_NAME attribute to journald log events
2018-10-17 12:18:05 -07:00
Colin Panisset
5cd2bb315a Only add CONTAINER_PARTIAL_MESSAGE if not the last partial
Addresses #38045

Signed-off-by: Colin Panisset <colin.panisset@cevo.com.au>
2018-10-17 07:51:59 +11:00
Rohit Kapur
5f7e102df7 Add IMAGE_NAME as a key to journald log messages
Signed-off-by: Rohit Kapur <rkapur@flatiron.com>
2018-10-12 16:16:31 -04:00
Kir Kolyshkin
b2b169f13f daemon/logger/journald: simplify readers field
As in other similar drivers (jsonlog, local), use a set
(i.e. `map[whatever]struct{}`), making the code simpler.

While at it, make sure we remove the reader from the set
after calling `ProducerGone()` on it.
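
For illustration, the set pattern in question (types are hypothetical):

```go
package main

import "fmt"

type logWatcher struct{ /* ... */ }

func main() {
	// A set of readers: map keys with zero-size struct{} values.
	readers := make(map[*logWatcher]struct{})

	w := &logWatcher{}
	readers[w] = struct{}{} // add

	if _, ok := readers[w]; ok { // membership test
		fmt.Println("watcher registered")
	}

	delete(readers, w) // remove once the reader is done
	fmt.Println("remaining:", len(readers))
}
```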

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2018-09-10 16:17:05 -07:00
Kir Kolyshkin
916eabd459 daemon.ContainerLogs(): fix resource leak on follow
When daemon.ContainerLogs() is called with options.follow=true
(as in "docker logs --follow"), the "loggerutils.followLogs()"
function never returns (even when the logs consumer is gone).
As a result, all the resources associated with it (including
an opened file descriptor for the log file being read, two FDs
for a pipe, and two FDs for inotify watch) are never released.

If this is repeated (such as by running "docker logs --follow"
and pressing Ctrl-C a few times), this results in DoS caused by
either hitting the limit of inotify watches, or the limit of
opened files. The only cure is daemon restart.

Apparently, what happens is:

1. logs producer (a container) is gone, calling (*LogWatcher).Close()
for all its readers (daemon/logger/jsonfilelog/jsonfilelog.go:175).

2. WatchClose() is properly handled by a dedicated goroutine in
followLogs(), cancelling the context.

3. Upon receiving the ctx.Done(), the code in followLogs()
(daemon/logger/loggerutils/logfile.go#L626-L638) keeps
sending messages _synchronously_ (which is OK for now).

4. Logs consumer is gone (Ctrl-C is pressed on a terminal running
"docker logs --follow"). Method (*LogWatcher).Close() is properly
called (see daemon/logs.go:114). Since it was already called before,
due to once.Do(), nothing happens (which is kinda good, as
otherwise it would panic on closing a closed channel).

5. A goroutine (see item 3 above) keeps sending log messages
synchronously to the logWatcher.Msg channel. Since the
channel reader is gone, the channel send operation blocks forever,
and resource cleanup set up in defer statements at the beginning
of followLogs() never happens.

Alas, the fix is somewhat complicated:

1. Distinguish between close from logs producer and logs consumer.
To that effect,
 - yet another channel is added to LogWatcher();
 - {Watch,}Close() are renamed to {Watch,}ProducerGone();
 - {Watch,}ConsumerGone() are added;

*NOTE* that the ProducerGone()/WatchProducerGone() pair is ONLY needed
in order to stop ContainerLogs(follow=true) when a container is stopped;
otherwise we're not interested in it. In other words, we're only
using it in followLogs().

2. Code that was doing (logWatcher*).Close() is modified to either call
ProducerGone() or ConsumerGone(), depending on the context.

3. Code that was waiting for WatchClose() is modified to wait for
either ConsumerGone() or ProducerGone(), or both, depending on the
context.

4. followLogs() are modified accordingly:
 - context cancellation is happening on WatchProducerGone(),
and once it's received the FileWatcher is closed and waitRead()
returns errDone on EOF (i.e. log rotation handling logic is disabled);
 - due to this, code that was writing synchronously to logWatcher.Msg
can be and is removed as the code above it handles this case;
 - function returns once ConsumerGone is received, freeing all the
resources -- this is the bugfix itself.

While at it,

1. Let's also remove the ctx usage to simplify the code a bit.
It was introduced by commit a69a59ffc7 ("Decouple removing the
fileWatcher from reading") in order to fix a bug. The bug was actually
a deadlock in fsnotify, and the fix was just a workaround. Since then
the fsnotify bug has been fixed, and a new fsnotify was vendored in.
For more details, please see
https://github.com/moby/moby/pull/27782#issuecomment-416794490

2. Since `(*filePoller).Close()` is fixed to remove all the files
being watched, there is no need to explicitly call
fileWatcher.Remove(name) anymore, so get rid of the extra code.

Should fix https://github.com/moby/moby/issues/37391
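
A condensed sketch of the resulting shape of followLogs() (channel names approximate; the real function also handles rotation and EOF):

```go
package main

import "fmt"

// followLogs must return when the consumer is gone -- that is the leak
// fix -- and merely stop handling rotation when the producer is gone.
func followLogs(producerGone, consumerGone <-chan struct{}, readAndSend func() bool) {
	defer fmt.Println("file/inotify/pipe FDs released") // now actually reached

	for readAndSend() {
		select {
		case <-producerGone:
			// Container stopped: rotation handling is disabled; drain
			// what is left and let readAndSend return false on EOF.
		case <-consumerGone:
			// The "docker logs --follow" client is gone: return at once,
			// running the deferred cleanup above.
			return
		default:
		}
	}
}

func main() {
	consumerGone := make(chan struct{})
	close(consumerGone)
	followLogs(make(chan struct{}), consumerGone, func() bool { return true })
}
```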

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2018-09-06 11:47:42 -07:00
Anusha Ragunathan
0b4b0a7b5d Improve partial message support in logger
Docker daemon has a 16K buffer for log messages. If a message length
exceeds 16K, it should be split by the logger and merged at the
endpoint.

This change adds `PartialLogMetaData` struct for enhanced partial support
- LastPartial (bool) : indicates if this is the last of all partials.
- ID (string)        : unique 32 bit ID. ID is same across all partials.
- Ordinal (int starts at 1) : indicates the position of msg in the series of partials.
Also, the timestamps across partials are the same.
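
A sketch of the struct as described; field names follow the commit message, so the real definition may differ slightly:

```go
package logger

// PartialLogMetaData describes one chunk of an over-long log message.
type PartialLogMetaData struct {
	LastPartial bool   // whether this is the last of all partials
	ID          string // unique ID, identical across all partials of one message
	Ordinal     int    // position of this chunk in the series, starting at 1
}
```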

Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>
2018-04-11 13:26:28 -07:00
Daniel Nephin
4f0d95fa6e Add canonical import comment
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2018-02-05 16:51:57 -05:00
Vlastimil Zeman
ae557f9d85
Add journald tag as SYSLOG_IDENTIFIER
Signed-off-by: Vlastimil Zeman <vlastimil.zeman@diffblue.com>
2017-11-22 08:51:23 +00:00
Jamie Hannaford
e8d9a61f4c Add --until flag for docker logs; closes #32807
Signed-off-by: Jamie Hannaford <jamie.hannaford@rackspace.com>
2017-11-01 10:08:49 +01:00
Derek McGowan
1009e6a40b
Update logrus to v1.0.1
Fixes case sensitivity issue

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2017-07-31 13:16:46 -07:00
Aaron Lehmann
b642b3f21f Avoid using a map for log attributes
Having a map per log entry seemed heavier than necessary. These
attributes end up being sorted and serialized, so storing them in a map
doesn't add anything (there's no random access element). In SwarmKit,
they originate as a slice, so there's an unnecessary conversion to a map
and back.

This also fixes the sort comparator, which used to inefficiently split
the string on each comparison.
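
An illustrative sketch of the slice-based approach (names hypothetical):

```go
package main

import (
	"fmt"
	"sort"
)

// Key/value pairs live in a slice instead of a map: they only ever get
// sorted and serialized, so no random access is needed.
type keyValue struct{ Key, Value string }

func main() {
	attrs := []keyValue{{"env", "prod"}, {"app", "web"}}
	// Compare the Key fields directly rather than splitting a joined
	// "key=value" string on every comparison.
	sort.Slice(attrs, func(i, j int) bool { return attrs[i].Key < attrs[j].Key })
	fmt.Println(attrs)
}
```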

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
2017-07-18 19:01:20 -07:00
David Glasser
917050c572 Fix stderr logging for journald and syslog
logger.PutMessage, added in #28762 (v17.04.0-ce), clears msg.Source, so journald
and syslog were treating stderr messages as if they were stdout.

Signed-off-by: David Glasser <glasser@davidglasser.net>
2017-06-26 14:37:43 -07:00
Jim Minter
4fdb17c777 Prevent ContainerLogs from hanging if container doesn't run for long
Signed-off-by: Jim Minter <jminter@redhat.com>
2017-04-20 12:27:51 +01:00
Brian Goff
1325f667ee Merge pull request #27565 from rothrock/env-by-regex
[splunk] Use a regex to match environment variables
2017-03-31 10:46:48 -04:00
Alexander Morozov
b3e7f70137 Merge pull request #31263 from nalind/journal-doublefree
Synchronize the cursor returned by followJournal
2017-02-22 17:45:49 -08:00
Nalin Dahyabhai
d57c330617 Synchronize the cursor returned by followJournal
Make sure that the cursor value returned by followJournal() is the last
of the values returned by its goroutine's calls to drainJournal() by
waiting for it, rather than returning a value that may be superseded by
another if we're signalling the goroutine that it should exit by closing
a pipe.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-02-22 16:22:00 -05:00
Kenfe-Mickael Laventure
81630df854 Prevent freeing a possibly invalid pointer from journald
Signed-off-by: Kenfe-Mickael Laventure <mickael.laventure@gmail.com>
2017-02-22 07:54:10 -08:00
Brian Goff
3f4fccb65f Use sync.Pool for logger Messages
This reduces allocs and bytes used per log entry significantly, and
brings some improvement to the time per log operation.

Each log driver, however, must put messages back in the pool once they
are finished with the message.
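
A simplified sketch of the pooling scheme (not the actual logger package code):

```go
package main

import (
	"fmt"
	"sync"
)

type Message struct {
	Line []byte
}

// Allocations are reused across log calls; each driver must return
// messages to the pool once it is finished with them.
var messagePool = sync.Pool{
	New: func() interface{} { return &Message{Line: make([]byte, 0, 256)} },
}

func NewMessage() *Message { return messagePool.Get().(*Message) }

// PutMessage resets the message and returns it to the pool.
func PutMessage(msg *Message) {
	msg.Line = msg.Line[:0]
	messagePool.Put(msg)
}

func main() {
	m := NewMessage()
	m.Line = append(m.Line, "hello"...)
	fmt.Println(string(m.Line))
	PutMessage(m) // every log driver has to do this when done
}
```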

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2017-02-01 13:52:37 -05:00
Andy Goldstein
76f58d7294 journald logs: drain 1 more time at container exit
In the journald log driver, attempt to drain the journal 1 more time
after being told to stop following the log. Due to a possible race
condition, sometimes data is written to the journal at almost the same
time the log watch is closed, and depending on the order of operations,
sometimes you miss the last journal entry.

Signed-off-by: Andy Goldstein <agoldste@redhat.com>
2017-01-25 09:29:21 -05:00
Joseph Rothrock
9758a2a724 Use a regex to match environment variables #27565
Signed-off-by: Joseph Rothrock <rothrock@rothrock.org>
2017-01-19 15:30:11 -08:00
Yanqiang Miao
17ec911da7 Rename 'context' to 'loginfo' in the logger module
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

gofmt

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

update

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

remove 'api/types/container/config.go' from this PR

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

change 'LogInfo' to 'Info'

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

update

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
2016-12-29 19:13:44 +08:00
Yanqiang Miao
52fd6e4664 Optimize the function 'Context.Name()', and replace uses of 'Context.ContainerName' that need the slash removed with 'Context.Name()'.
Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

update

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>

update

Signed-off-by: Yanqiang Miao <miao.yanqiang@zte.com.cn>
2016-11-24 09:24:29 +08:00
Silvan Jegen
d359daaa48 Clean up journald logger
We clean up the journald logger with these four changes.

1. Make field array static
2. Make function name more appropriate
3. Initialize the file descriptors only once
4. Avoid copying the journald cursor

Point 4 is the most significant change: instead of treating the journald
cursor like a Go string, we use it as a raw C.char pointer. That way we
avoid the copying done by the C.CString and C.GoString functions.

Signed-off-by: Silvan Jegen <s.jegen@gmail.com>
2016-11-08 19:09:59 +01:00
Yong Tang
9528ea930c Sanitize docker labels when used as journald field names
This fix tries to address the issue raised in #23528, where
docker labels caused journald log errors because journald
has special requirements on field names.

This fix addresses the issue by sanitizing the labels per
the requirements of journald.

Additional unit tests have been added to cover the changes.

This fix fixes #23528.
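
A hedged sketch of the sanitization idea; journald field names may contain only uppercase letters, digits, and underscores. This is an illustration, not the exact moby function (leading-character handling is elided):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeJournalField maps an arbitrary label name onto the character
// set journald accepts for field names: A-Z, 0-9 and '_'.
func sanitizeJournalField(name string) string {
	var b strings.Builder
	for _, r := range strings.ToUpper(name) {
		if (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '_' {
			b.WriteRune(r)
		} else {
			b.WriteRune('_') // replace anything journald would reject
		}
	}
	return b.String()
}

func main() {
	fmt.Println(sanitizeJournalField("com.example/label-1")) // COM_EXAMPLE_LABEL_1
}
```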

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
2016-08-05 15:20:47 -07:00
Sebastiaan van Stijn
48569e0a35 Merge pull request #22911 from vdemeester/20033-default-logging-tag-value
Standardize default logging tag value
2016-07-29 02:18:57 +02:00
Nalin Dahyabhai
513ec73831 Improve logging of long log lines
This change updates how we handle long lines of output from the
container.  The previous logic used a bufio reader to read entire lines
of output from the container through an intermediate BytesPipe, and that
allowed the container to cause dockerd to consume an unconstrained
amount of memory as it attempted to collect a whole line of output, by
outputting data without newlines.

To avoid that, we replace the bufio reader with our own buffering scheme
that handles log lines up to 16k in length, breaking up anything longer
than that into multiple chunks.  If we can dispense with noting this
detail properly at the end of output, we can switch from using
ReadBytes() to using ReadLine() instead.  We add a field ("Partial") to
the log message structure to flag when we pass data to the log driver
that did not end with a newline.

The Line member of Message structures that we pass to log drivers is now
a slice into data which can be overwritten between calls to the log
driver's Log() method, so drivers which batch up Messages before
processing them need to take additional care: we add a function
(logger.CopyMessage()) that can be used to create a deep copy of a
Message structure, and modify the awslogs driver to use it.

We update the jsonfile log driver to append a "\n" to the data that it
logs to disk only when the Partial flag is false (it previously did so
unconditionally), to make its "logs" output correctly reproduce the data
as we received it.

Likewise, we modify the journald log driver to add a data field with
value CONTAINER_PARTIAL_MESSAGE=true to entries when the Partial flag is
true, and update its "logs" reader to refrain from appending a "\n" to
the data that it retrieves if it does not see this field/value pair (it
also previously did this unconditionally).
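
A condensed, runnable sketch of the buffering idea using bufio.ReadLine(), which caps reads at the buffer size and reports overlong lines via isPrefix (simplified from the real scheme):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxLineSize = 16 * 1024

type Message struct {
	Line    []byte
	Partial bool
}

func main() {
	r := bufio.NewReaderSize(strings.NewReader(strings.Repeat("x", 40000)+"\n"), maxLineSize)
	for {
		line, isPrefix, err := r.ReadLine()
		if err != nil {
			break
		}
		// Any chunk that did not end at a newline is flagged Partial.
		msg := Message{Line: line, Partial: isPrefix}
		fmt.Printf("chunk of %d bytes, partial=%v\n", len(msg.Line), msg.Partial)
		// NOTE: line aliases the reader's internal buffer, so a driver
		// that batches messages must deep-copy it (cf. logger.CopyMessage).
	}
}
```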

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
2016-06-14 14:11:47 -04:00
Vincent Demeester
f900e1cf47
Standardize default logging tag value
Use the same default tag value for all loggers that support tags.

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2016-06-04 12:38:12 +02:00
Nalin Dahyabhai
0da0a8f9da Add support for reading journal extras and in UTC
When told to read additional attributes from logs that we've sent to the
journal, pull out all of the non-trusted, non-user fields that we didn't
hard-code ourselves.  More of PR#20726 and PR#21889.

When reading entries in the journald log reader, set the time zone on
timestamps that we read to UTC, so that we send UTC values to the client
instead of values that are local to whatever timezone dockerd happens to
be running in.
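
For illustration, converting a journal timestamp (microseconds since the epoch, as returned by sd_journal_get_realtime_usec()) to a UTC time.Time:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Normalize to UTC before sending the value to the client, instead
	// of whatever timezone dockerd happens to run in.
	var usec uint64 = 1465000000000000 // sample microsecond timestamp
	t := time.Unix(int64(usec/1e6), int64(usec%1e6)*1000).UTC()
	fmt.Println(t.Format(time.RFC3339Nano))
}
```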

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
2016-06-02 10:17:07 -04:00
Nalin Dahyabhai
7772d270c0 Remove the logger.Message ContainerID field
Log drivers are instantiated on a per-container basis, and passed the
container ID (along with other information) when they're initialized.
Drivers that care about that value are caching the value that they're
passed when they're initialized and using it in favor of the value
contained in Message structures that are passed to them; the field in
Messages is therefore unused, and we remove it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-05-31 16:41:29 -04:00
Nalin Dahyabhai
ab62ecf393 Open the journald following descriptor earlier
Following a journal log almost always requires a descriptor to be
allocated.  In cases where we're running out of descriptors, this means
we might get stuck while attempting to start following the journal, at a
point where it's too late to report it to the client and clean up
easily.  The journal reading context will cache the value once it's
allocated, so here we move the check earlier, so that we can detect a
problem when we can still report it cleanly.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2016-03-24 10:12:51 -04:00
Nalin Dahyabhai
8d597d25a8 Improve error reporting when following journals
When we set up to start following a journal, if we get error results
from sd_journal_get_fd() or sd_journal_get_events() that prevent us from
following the journal, report the error instead of just mysteriously
failing.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
2016-03-24 10:12:15 -04:00
Nalin Dahyabhai
52c0f36f7b Fix a race in cleaning up after journald followers
When following a journal-based log, it was possible for the worker
goroutine, which reads the journal using the journal context and sends
entry data down the message channel, to be scheduled after the function
which started it had returned.  This could create problems, since the
invoking function was closing the journal context object and message
channel before it returned, which could trigger use-after-free segfaults
and write-to-closed-channel panics in the worker goroutine.

Make the cleanup in the invoking function conditional so that it's only
done when we're not following the logs, and if we are, that it's left to
the worker goroutine to close them.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
2016-03-17 18:36:21 -04:00