concat loses last log line when it is a single line #109

Open
mierzej91 opened this issue Oct 8, 2021 · 8 comments
@mierzej91

Problem

I have a problem with the concat plugin. Imagine that I have two applications logging in a Kubernetes cluster: one sometimes sends logs with a Java stack trace, the second works fine without errors. I configured the plugin to join the Java stack trace lines, and it seems to work well, but I noticed that I lose the last log line of the second application.

Let's say the last part of the logs from the second application looks like this:

2021-10-08 12:55:00.000 DEBUG 1 --- [Task-4] c._.cl1.perm.exc  : GET: http://localhost/api/accounts/create
2021-10-08 12:55:00.000 DEBUG 1 --- [Task-4] c._.cl1.perm.exc : RequestBody:
2021-10-08 12:55:00.008 DEBUG 1 --- [Task-4] c._.cl1.perm.exc : 200 OK, Time: 7ms, GET : http://localhost/api/accounts/create, RequestBody: , ResponseBody:

In the end, in Elasticsearch I can see only the first two lines; the last line is missing.

...

Steps to replicate

My configuration

<source>
  @type tail
  @id in_tail_container_logs
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag "#{ENV['FLUENT_CONTAINER_TAIL_TAG'] || 'kubernetes.*'}"
  read_from_head true
  
  <parse>
    @type json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
    keep_time_key true
  </parse>
</source>
# enrich with kubernetes metadata

 <filter kubernetes.**>
  @type concat
  key log
  stream_identity_key container_id
  multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}/
  flush_interval 5
  separator ""
</filter>

<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

# Throw the healthcheck to the standard output instead of forwarding it
<match fluentd.healthcheck>
  @type stdout
</match>
# Forward all logs to the aggregators

<match **>
  @type elasticsearch_dynamic
  include_tag_key true
  host "elastichost"
  port "9200"
  user elastic_user
  password elastic_password
  scheme https
  ssl true
  ssl_verify false
  logstash_format true
  logstash_prefix logs

  reconnect_on_error true
  reload_on_failure true
  time_key time
  remove_keys time

  <buffer>
    @type file
    path /opt/bitnami/fluentd/logs/buffers/logs.buffer
    flush_thread_count 2
    flush_interval 5s
  </buffer>

</match>

<match **>
  @type stdout
</match>

Expected Behavior

The last log line should be sent to Elasticsearch.

...

Your environment

  • OS version

  • paste result of fluentd --version or td-agent --version
    fluentd 1.12.0

  • plugin version

    • paste boot log of fluentd or td-agent
    • paste result of fluent-gem list, td-agent-gem list or your Gemfile.lock
*** LOCAL GEMS ***

activesupport (6.1.0)
addressable (2.7.0)
aws-eventstream (1.1.0)
aws-partitions (1.414.0)
aws-sdk-core (3.110.0)
aws-sdk-kms (1.40.0)
aws-sdk-s3 (1.87.0)
aws-sdk-sqs (1.35.0)
aws-sigv4 (1.2.2)
bigdecimal (default: 1.4.1)
bundler (2.2.4, 2.1.4)
cmath (default: 1.0.0)
concurrent-ruby (1.1.7)
cool.io (1.7.0)
csv (default: 3.0.9)
date (default: 2.0.0)
did_you_mean (1.3.0)
digest-crc (0.6.3)
domain_name (0.5.20190701)
e2mmap (default: 0.1.0)
elasticsearch (7.10.0)
elasticsearch-api (7.10.0)
elasticsearch-transport (7.10.0)
elasticsearch-xpack (7.10.0)
etc (default: 1.0.1)
excon (0.78.1)
faraday (1.3.0)
faraday-net_http (1.0.0)
fcntl (default: 1.0.0)
ffi (1.14.2)
ffi-compiler (1.0.1)
fiddle (default: 1.0.0)
fileutils (default: 1.1.0)
fluent-config-regexp-type (1.0.0)
fluent-plugin-concat (2.4.0)
fluent-plugin-detect-exceptions (0.0.13)
fluent-plugin-elasticsearch (4.3.3)
fluent-plugin-grafana-loki (1.2.16)
fluent-plugin-kafka (0.15.3)
fluent-plugin-kubernetes_metadata_filter (2.5.2)
fluent-plugin-multi-format-parser (1.0.0)
fluent-plugin-prometheus (1.8.5)
fluent-plugin-record-reformer (0.9.1)
fluent-plugin-rewrite-tag-filter (2.4.0)
fluent-plugin-s3 (1.5.0)
fluent-plugin-systemd (1.0.2)
fluentd (1.12.0, 1.11.5)
forwardable (default: 1.2.0)
http (4.4.1)
http-accept (1.7.0)
http-cookie (1.0.3)
http-form_data (2.3.0)
http-parser (1.2.2)
http_parser.rb (0.6.0)
i18n (1.8.7)
io-console (default: 0.4.7)
ipaddr (default: 1.2.2)
irb (default: 1.0.0)
jmespath (1.4.0)
json (2.1.0)
jsonpath (1.1.0)
kubeclient (4.9.1)
logger (default: 1.3.0)
lru_redux (1.1.0)
ltsv (0.1.2)
matrix (default: 0.1.0)
mime-types (3.3.1)
mime-types-data (3.2020.1104)
minitest (5.14.2, 5.11.3)
msgpack (1.3.3)
multi_json (1.15.0)
multipart-post (2.1.1)
mutex_m (default: 0.1.0)
net-telnet (0.2.0)
netrc (0.11.0)
oj (3.3.10)
openssl (default: 2.1.2)
ostruct (default: 0.1.0)
power_assert (1.1.3)
prime (default: 0.1.0)
prometheus-client (0.9.0)
psych (default: 3.1.0)
public_suffix (4.0.6)
quantile (0.2.1)
rake (13.0.3, 12.3.3)
rdoc (default: 6.1.2)
recursive-open-struct (1.1.3)
rest-client (2.1.0)
rexml (default: 3.1.9)
rss (default: 0.2.7)
ruby-kafka (1.3.0)
ruby2_keywords (0.0.2)
rubygems-update (3.1.4)
scanf (default: 1.0.0)
sdbm (default: 1.0.0)
serverengine (2.2.2)
shell (default: 0.7)
sigdump (0.2.4)
stringio (default: 0.0.2)
strptime (0.2.5)
strscan (default: 1.0.0)
sync (default: 0.5.0)
systemd-journal (1.3.3)
test-unit (3.2.9)
thwait (default: 0.1.0)
tracer (default: 0.1.0)
tzinfo (2.0.4)
tzinfo-data (1.2020.6)
unf (0.1.4)
unf_ext (0.0.7.7)
webrick (default: 1.4.2)
xmlrpc (0.3.0)
yajl-ruby (1.4.1)
zeitwerk (2.4.2)
zlib (default: 1.0.0)

@kenhys
Contributor

kenhys commented Oct 11, 2021

How about trying timeout_label? I mean:

<filter kubernetes.**>
  @type concat
  ...
  timeout_label xxxx
</filter>

<label xxxx>
 <match ...>
   @type ...
 </match>
</label>
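
Adapted to the configuration above, a sketch might look like the following (untested; the @NORMAL label name and the repeated kubernetes_metadata filter and Elasticsearch output inside the label are assumptions). Records that concat flushes on timeout are then re-emitted to that label instead of being dropped as error events; since events sent to a label skip the rest of the top-level pipeline, the downstream filter and match have to be repeated there:

<filter kubernetes.**>
  @type concat
  key log
  stream_identity_key container_id
  multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}/
  flush_interval 5
  separator ""
  # route records flushed by the timeout to the label below
  timeout_label @NORMAL
</filter>

<label @NORMAL>
  <filter kubernetes.**>
    @type kubernetes_metadata
  </filter>

  <match **>
    @type elasticsearch_dynamic
    # ... same output settings as in the main <match **> block ...
  </match>
</label>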

@mierzej91
Author

It's the same; with the timeout option it also doesn't work.

@k-ayache

k-ayache commented May 24, 2022

I have the same issue. I'm losing logs because of this plugin, and it's critical for us because we rely heavily on these logs for debugging and Loki alerts.
In my case I also see a lot of warnings, so I'm wondering if they are related to the issue:

2022-05-24 10:58:38 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: spring-boot.kubernetes.var.log.containers.spring-cart-5fc9cfc985-xc6cq_services-recette_istio-proxy-f29f38a5d2f2112c99e53d24b079013ce73cb0549c581bb53e463d1864d5e992.log:default" location=nil tag="spring-xxxxxx
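
If I read that warning correctly, the record concat was holding is flushed by the timeout and emitted as an error event; without a timeout_label (and without an @ERROR label) Fluentd only dumps it to its own log, so the record never reaches the output. A minimal sketch for catching such events with Fluentd's built-in @ERROR label (the stdout output is just a placeholder; route it to your real backend instead):

<label @ERROR>
  <match **>
    # error events, including concat's timeout flushes, land here
    @type stdout
  </match>
</label>

The timeout_label option on the concat filter, as suggested above, is the more targeted fix since it only captures the timed-out records.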

@lixiang71

I have the same issue

@RavenKyu

The record needs to be passed to the next filter or step after the timeout.

@jungrae-prestolabs

Same issue.

@liquidMiller

Also having this same issue. Did anyone find a solution or a workaround?

@amine250

I'm having the same bug.

Is this plugin still maintained?
