Replies: 10 comments 9 replies
-
I don't think anyone can say what happened from 4 lines of some properties file. But one thing to keep in mind is that these are just the defaults -> they do not say anything about the actual configuration of the individual topics.
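For anyone following along: one way to see the actual, effective configuration of a single topic (rather than the broker defaults) is kafka-configs.sh. A minimal sketch, assuming a reachable bootstrap address and a client properties file that matches your listener security (both are placeholders here):

# Describe every effective config of one topic, including where each value comes
# from (DEFAULT_CONFIG, STATIC_BROKER_CONFIG, DYNAMIC_TOPIC_CONFIG, ...)
$ ./bin/kafka-configs.sh \
    --command-config ./client.properties \
    --bootstrap-server <broker>:9093 \
    --entity-type topics --entity-name <topic> \
    --describe --all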
-
Those are only the configurations we have set on top of the defaults that come with a new Kafka cluster. What I am looking for is: in which scenarios, or with which configurations, can a Kafka topic's oldest offset move from 0 to a higher number? log.retention.hours is one such configuration; what other configurations should I look at, and where? Some guidance on debugging this would help. Thanks for replying.
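As a general checklist (not from this thread): the settings that let Kafka delete old segments, and therefore advance the earliest offset, are the topic-level retention.ms, retention.bytes and cleanup.policy, the broker-level log.retention.ms/minutes/hours, log.retention.bytes and log.cleanup.policy, and explicit calls to kafka-delete-records.sh, which also advance the log start offset. A sketch for dumping what one broker actually runs with (broker id 0 and the client properties file are assumptions):

# Show every effective config of broker 0 and filter for the deletion-related ones
$ ./bin/kafka-configs.sh \
    --command-config ./client.properties \
    --bootstrap-server <broker>:9093 \
    --entity-type brokers --entity-name 0 \
    --describe --all | grep -E 'retention|cleanup|segment'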
-
Added Kafka configuration as well:

spec:
  entityOperator:
    template:
      pod:
        tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: secondary_ondemand
    topicOperator: {}
    userOperator: {}
  kafka:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                    - kafka
            topologyKey: kubernetes.io/hostname
    config:
      log.message.format.version: "2.6"
      log.retention.hours: -1
      message.max.bytes: 1572864
      offsets.retention.minutes: 525600
      offsets.topic.replication.factor: 3
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    listeners:
      - authentication:
          type: tls
        name: external
        port: 9094
        tls: true
        type: loadbalancer
      - authentication:
          type: tls
        name: internal
        port: 9093
        tls: true
        type: internal
    logging:
      loggers:
        kafka.root.logger.level: DEBUG
      type: inline
    replicas: 3
    storage:
      class: gp2-expandable
      deleteClaim: false
      size: 3049Gi
      type: persistent-claim
    template:
      pod:
        tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: secondary_ondemand
  zookeeper:
    replicas: 3
    storage:
      class: gp2-expandable
      deleteClaim: false
      size: 50Gi
      type: persistent-claim
    template:
      pod:
        tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: secondary_ondemand
status:
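A note on the retention keys in this config: at the broker level Kafka resolves time-based retention in the order log.retention.ms, then log.retention.minutes, then log.retention.hours, and a topic-level retention.ms overrides all of them. The documented way to disable time-based retention is retention.ms=-1 (or log.retention.ms=-1 at the broker level); I am not sure how a -1 in log.retention.hours is interpreted on every broker version, so it is worth checking what the broker actually resolved. A sketch for looking inside a Strimzi broker pod; the pod name, namespace, and the /tmp/strimzi.properties path are assumptions that can differ between Strimzi versions:

# Inspect the broker configuration Strimzi generated inside the broker pod
$ kubectl exec -n kafka my-cluster-kafka-0 -- \
    grep -E 'log\.retention|log\.cleanup' /tmp/strimzi.properties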
-
I also queried the topic in Kafka to see the actual configuration applied to it:

$ ./bin/kafka-topics.sh --command-config ./bin/client-ssl-auth.properties --describe --bootstrap-server XXXX:9094 --topic ts.titan.users
Topic: ts.titan.users   PartitionCount: 1   ReplicationFactor: 1   Configs: message.format.version=2.6-IV0,max.message.bytes=1572864
    Topic: ts.titan.users   Partition: 0   Leader: 0   Replicas: 0   Isr: 0
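For completeness, the earliest and latest offsets themselves can be queried directly with the GetOffsetShell tool; this is only a sketch, since the flags vary between Kafka versions and older versions of this tool do not take a client properties file, so it may only work against a listener it can reach with its security settings (broker address and port are placeholders):

# Earliest available offset per partition (--time -2); --time -1 gives the latest
$ ./bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
    --broker-list <broker>:<port> \
    --topic ts.titan.users \
    --time -2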
-
Listing all Kafka configurations for the topic:

ubuntu@kuber-mumbai:~/www/kafka (master)*$ ./bin/kafka-configs.sh --command-config ./bin/client-ssl-auth.properties --bootstrap-server internal-a1f0a5d4e70834f0f89f39ff4b9ae403-982227952.ap-south-1.elb.amazonaws.com:9094 --entity-type topics --entity-name ts.titan.users --describe --all
All configs for topic ts.titan.users are:
compression.type=producer sensitive=false synonyms={DEFAULT_CONFIG:compression.type=producer}
leader.replication.throttled.replicas= sensitive=false synonyms={}
min.insync.replicas=1 sensitive=false synonyms={DEFAULT_CONFIG:min.insync.replicas=1}
message.downconversion.enable=true sensitive=false synonyms={DEFAULT_CONFIG:log.message.downconversion.enable=true}
segment.jitter.ms=0 sensitive=false synonyms={}
cleanup.policy=delete sensitive=false synonyms={DEFAULT_CONFIG:log.cleanup.policy=delete}
flush.ms=9223372036854775807 sensitive=false synonyms={}
follower.replication.throttled.replicas= sensitive=false synonyms={}
segment.bytes=1073741824 sensitive=false synonyms={DEFAULT_CONFIG:log.segment.bytes=1073741824}
retention.ms=-1 sensitive=false synonyms={}
flush.messages=9223372036854775807 sensitive=false synonyms={DEFAULT_CONFIG:log.flush.interval.messages=9223372036854775807}
message.format.version=2.6-IV0 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.message.format.version=2.6, DEFAULT_CONFIG:log.message.format.version=2.6-IV0}
max.compaction.lag.ms=9223372036854775807 sensitive=false synonyms={DEFAULT_CONFIG:log.cleaner.max.compaction.lag.ms=9223372036854775807}
file.delete.delay.ms=60000 sensitive=false synonyms={DEFAULT_CONFIG:log.segment.delete.delay.ms=60000}
max.message.bytes=1572864 sensitive=false synonyms={DYNAMIC_BROKER_CONFIG:message.max.bytes=1572864, DEFAULT_CONFIG:message.max.bytes=1048588}
min.compaction.lag.ms=0 sensitive=false synonyms={DEFAULT_CONFIG:log.cleaner.min.compaction.lag.ms=0}
message.timestamp.type=CreateTime sensitive=false synonyms={DEFAULT_CONFIG:log.message.timestamp.type=CreateTime}
preallocate=false sensitive=false synonyms={DEFAULT_CONFIG:log.preallocate=false}
index.interval.bytes=4096 sensitive=false synonyms={DEFAULT_CONFIG:log.index.interval.bytes=4096}
min.cleanable.dirty.ratio=0.5 sensitive=false synonyms={DEFAULT_CONFIG:log.cleaner.min.cleanable.ratio=0.5}
unclean.leader.election.enable=false sensitive=false synonyms={DEFAULT_CONFIG:unclean.leader.election.enable=false}
retention.bytes=-1 sensitive=false synonyms={DEFAULT_CONFIG:log.retention.bytes=-1}
delete.retention.ms=86400000 sensitive=false synonyms={DEFAULT_CONFIG:log.cleaner.delete.retention.ms=86400000}
segment.ms=604800000 sensitive=false synonyms={}
message.timestamp.difference.max.ms=9223372036854775807 sensitive=false synonyms={DEFAULT_CONFIG:log.message.timestamp.difference.max.ms=9223372036854775807}
segment.index.bytes=10485760 sensitive=false synonyms={DEFAULT_CONFIG:log.index.size.max.bytes=10485760}
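Two things worth noting in this output (my reading, not a confirmed diagnosis): retention.ms is reported as -1, i.e. no time-based deletion for this topic, while segment.ms=604800000 means segments are rolled roughly weekly, so there are always old segments available for deletion if time-based retention does kick in. If the intent is infinite retention, it can also be pinned explicitly at the topic level rather than relying on what the broker defaults resolve to; a sketch, with a placeholder bootstrap address and client properties file (with the Strimzi Topic Operator in place it may be cleaner to do the same through a KafkaTopic resource, see the sketch at the end of the thread):

# Pin infinite time-based retention explicitly on the topic
$ ./bin/kafka-configs.sh \
    --command-config ./client.properties \
    --bootstrap-server <broker>:9093 \
    --entity-type topics --entity-name ts.titan.users \
    --alter --add-config retention.ms=-1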
-
@alok87 it looks like someone updated it; the status shows:

status:
  conditions:
    - lastTransitionTime: "2021-12-21T15:25:33.507828Z"
      status: "True"
      type: Ready
  observedGeneration: 2
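One way to cross-check this from the Kubernetes side is to compare the resource generation with what the operator last observed; cluster name and namespace below are placeholders:

# metadata.generation increments on every spec change; status.observedGeneration
# is the last generation the Cluster Operator reconciled
$ kubectl get kafka my-cluster -n kafka \
    -o jsonpath='{.metadata.generation}{" "}{.status.observedGeneration}{"\n"}'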
-
@scholzj Confirmed: Kafka advanced our oldest offset due to segment deletion.
Will Kafka delete my data if the storage gets full (with cleanup.policy=delete and infinite retention)?
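Not an authoritative answer, but to my knowledge Kafka has no "delete the oldest data when the disk is full" behaviour: with cleanup.policy=delete, size-based deletion only runs when retention.bytes is set to a positive limit, and a genuinely full volume tends to take the log directory (or the broker) offline rather than trigger cleanup. Disk usage per partition can be watched with kafka-log-dirs.sh; a sketch, with placeholder bootstrap address and client properties:

# Report the size on disk of each partition of the topic
$ ./bin/kafka-log-dirs.sh \
    --command-config ./client.properties \
    --bootstrap-server <broker>:9093 \
    --describe --topic-list ts.titan.users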
-
Found this in the broker logs:

Found deletable segments with base offsets due to retention time 604800000ms breach (kafka.log.Log)
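For context, 604800000 ms is exactly 7 days (168 hours), which matches Kafka's default log.retention.hours=168, so the broker appears to have applied the default 7-day retention rather than the intended infinite one; why it resolved to that is presumably what the JIRA ticket below is about. The arithmetic:

# 604800000 ms expressed in days
$ echo $((604800000 / 1000 / 60 / 60 / 24))
7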
-
Created a bug in Kafka: https://issues.apache.org/jira/browse/KAFKA-13575
-
We are using Strimzi to manage our Kafka cluster in Kubernetes.
Below is our Strimzi configuration for the Kafka cluster (with the SSL and authorization configs removed).
We are using infinite log retention so that data is never deleted from our Kafka topics.
We also use the default cleanup policy (which is delete, not compact).
Today, the oldest offsets of our topics moved from 0 to high numbers, and we lost a lot of earlier events because of this.
We are trying to figure out why this happened when we are using infinite retention.
Any help is appreciated.
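Not part of the original post, but for anyone reproducing this setup: one way to make the "never delete" intent explicit per topic, independent of how the broker-level defaults resolve, is to declare it on the Strimzi-managed topic itself. A sketch only; the namespace, the strimzi.io/cluster label value, and the v1beta2 API version are assumptions that depend on your Strimzi installation, and partitions/replicas here simply mirror the topic described earlier in the thread:

# Declare infinite time- and size-based retention on the topic through the
# Strimzi Topic Operator
$ kubectl apply -n kafka -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: ts.titan.users
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: -1
    retention.bytes: -1
    cleanup.policy: delete
EOF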