I’m having a problem with Tempo and Grafana. It looks like search sometimes works and sometimes it doesn’t. For example, when I search for traces in a specific time range - from 13:00 to 13:05 - Grafana finds nothing. But when I searched a wider period - for example from 12:00 to 14:00 - Grafana showed me traces from exactly that 13:00 to 13:05 window.
I looked at the Tempo logs and found a few errors that appear intermittently. I’m not sure whether these errors are related to the search issue, and I haven’t found any explanation for why they occur.
First error:
level=error ts=2024-03-22T06:36:01.37975115Z caller=poller.go:156 msg="failed to poll or create index for tenant" tenant=single-tenant err="open /tmp/tempo/blocks/single-tenant/5c1805c7-1fa1-462a-b5fb-b21c896f206b: no such file or directory"
I’m running Tempo and Grafana in Docker, and this path is on a volume. But I’m not sure anything other than Tempo itself touches it.
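For context, the Docker wiring looks roughly like this (a minimal sketch - the service and volume names are illustrative, not my exact compose file), with /tmp/tempo inside the container backed by a volume:

services:
  tempo:
    image: grafana/tempo:latest
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml   # the config shown below
      - tempo-data:/tmp/tempo          # WAL and blocks live here
    ports:
      - "3200:3200"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"

volumes:
  tempo-data: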
Second error:
level=error ts=2024-03-22T12:42:48.49124478Z caller=frontend_processor.go:71 msg="error processing requests" address=127.0.0.1:9095 err="rpc error: code = Canceled desc = context canceled"
This error appears every time I search for something via Grafana Explore. What is Tempo trying to do here?
And the last error:
level=error ts=2024-03-22T13:56:10.910465446Z caller=rate_limited_logger.go:27 msg="pusher failed to consume trace data" err="DoBatch: InstancesCount <= 0"
Here is my Tempo config:
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        http:
        grpc:

ingester:
  max_block_duration: 5m

compactor:
  compaction:
    block_retention: 168h

metrics_generator:
  registry:
    external_labels:
      source: tempo
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

storage:
  trace:
    backend: local
    wal:
      path: /tmp/tempo/wal
    local:
      path: /tmp/tempo/blocks

overrides:
  metrics_generator_processors: [service-graphs, span-metrics]
Can someone help me with these errors?
Update: on another environment I see the same problem without these errors. But it looks like Tempo doesn’t even try to search the data - total_requests=0 started_requests=0:
level=info ts=2024-03-27T11:55:51.692768615Z caller=handler.go:134 tenant=single-tenant method=GET traceID=45edd2e653a8fd1d url="/api/search?q=%7Bresource.service.name%3D%22app%22%7D&limit=20&spss=3&start=1711533240&end=1711533539" duration=203.805µs response_size=26 status=200
level=info ts=2024-03-27T11:55:52.613606631Z caller=searchsharding.go:253 msg="sharded search query request stats and SearchMetrics" tenant=single-tenant query="q={resource.service.name=\"app\"}&limit=20&spss=3&start=1711533240&end=1711533539" duration_seconds=49.501µs request_throughput=0 total_requests=0 started_requests=0 cancelled_requests=0 finished_requests=0 totalBlocks=0 inspectedBytes=0 inspectedTraces=0 totalBlockBytes=0
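For reference, this is how the same search from the log line above can be reproduced against the Tempo API directly, bypassing Grafana (a sketch assuming Tempo is reachable on localhost via the http_listen_port 3200 from my config):

curl -G "http://localhost:3200/api/search" \
  --data-urlencode 'q={resource.service.name="app"}' \
  --data-urlencode 'limit=20' \
  --data-urlencode 'spss=3' \
  --data-urlencode 'start=1711533240' \
  --data-urlencode 'end=1711533539'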