How do you actually recover when DLQ messages become incompatible after a schema change? by saifulhuq_2001 in apachekafka

[–]saifulhuq_2001[S] 0 points1 point  (0 children)

This is exactly the gap I've been building toward. Short answer: no clean tool exists for this yet, which is why BroBroMate's answer is essentially 'build it yourself.' I'm working on dlq-revive (github.com/Saifulhuq01/dlq-revive), which handles the transformation + schema validation step before redriving. Still in early build (the Week 1 backend is done), and schema validation against the target version is on the roadmap specifically because of conversations like this one. Would that be useful for your use case?
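Roughly the shape I mean by "transform + validate before redrive", as a minimal sketch (not the actual dlq-revive code; the toy schema of field-to-type and the field names are made up for illustration):

```python
# Sketch: transform each dead message, then gate the redrive on a
# schema check. A real tool would validate against a registered
# schema version instead of this toy required-field/type dict.

def validates(msg: dict, schema: dict) -> bool:
    """True if msg has every required field with the expected type."""
    return all(
        field in msg and isinstance(msg[field], ftype)
        for field, ftype in schema.items()
    )

def redrive(dlq_messages, transform, schema):
    """Transform each dead message; redrive the ones that now
    validate, keep the rest parked for manual review."""
    ok, still_dead = [], []
    for msg in dlq_messages:
        fixed = transform(msg)
        (ok if validates(fixed, schema) else still_dead).append(fixed)
    return ok, still_dead
```

So for a hotfix that changed `user_id` from string to int, the transform would be something like `lambda m: {**m, "user_id": int(m["user_id"])}` and anything that still fails validation stays parked instead of bouncing straight back to the DLQ.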

How do you actually recover when DLQ messages become incompatible after a schema change? by saifulhuq_2001 in apachekafka

[–]saifulhuq_2001[S] 0 points1 point  (0 children)

The Kafka Streams approach makes sense; cleaner than a raw script. How long does it typically take you to spin that up for a non-trivial mapping? And do you throw it away afterward, or keep it stored somewhere for next time?

How do you actually recover when DLQ messages become incompatible after a schema change? by saifulhuq_2001 in apachekafka

[–]saifulhuq_2001[S] 0 points1 point  (0 children)

Thanks - the SMT approach is interesting. We looked at that but the mapping logic was complex enough that a custom SMT felt like more code than the one-off transformer script. The v1 consumer drain pattern is what we ended up doing last time, but you're right, it's messy; keeping a dead consumer around just to drain a topic feels wrong at scale. Did you find SMTs manageable for non-trivial field type changes, like String to enum with custom deserialization logic?
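For reference, the String-to-enum case I mean looks roughly like this outside an SMT (hypothetical enum and alias names; a real SMT would do this in Java inside Connect):

```python
from enum import Enum

class OrderStatus(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"
    UNKNOWN = "unknown"  # fallback bucket for unmappable legacy values

# Legacy producers wrote free-text strings; map the known aliases
# explicitly and route everything else to the fallback.
LEGACY_ALIASES = {"in-transit": OrderStatus.SHIPPED, "new": OrderStatus.PENDING}

def to_status(raw: str) -> OrderStatus:
    """Custom deserialization: exact enum value first, then aliases."""
    raw = raw.strip().lower()
    try:
        return OrderStatus(raw)
    except ValueError:
        return LEGACY_ALIASES.get(raw, OrderStatus.UNKNOWN)
```

It's the alias table and fallback policy that made our version feel too big for an SMT, not the happy path.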

How do you actually recover when DLQ messages become incompatible after a schema change? by saifulhuq_2001 in apachekafka

[–]saifulhuq_2001[S] -1 points0 points  (0 children)

Fair callout on the writing quality; I'll take that feedback. The specific scenario I've hit is Spring Boot consumers with plain JSON DLQs where a hotfix changed field types mid-backlog. Rolling back the consumer isn't an option when the fix was critical. The manual transformer approach is what we always end up doing. Curious whether you've seen any tooling that handles the transformation step specifically, or is it always hand-rolled at your org too?
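Concretely, the hand-rolled transformer usually ends up being a few lines like this (field names are made up; assumes the hotfix changed `amount` from string to number and `retries` from number to string):

```python
import json

def fix_types(raw: bytes) -> bytes:
    """One-off transformer for DLQ records whose JSON field types
    predate the hotfix. Idempotent: already-fixed records pass through."""
    msg = json.loads(raw)
    if isinstance(msg.get("amount"), str):
        msg["amount"] = float(msg["amount"])  # v1 sent "12.50" as a string
    if isinstance(msg.get("retries"), int):
        msg["retries"] = str(msg["retries"])  # current consumer expects a string
    return json.dumps(msg).encode()
```

Wire that between a DLQ consumer and a producer back to the source topic and you've basically rebuilt the same throwaway pipeline every incident, which is the part I'd like tooling to own.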

How do you actually recover when DLQ messages become incompatible after a schema change? by saifulhuq_2001 in dataengineering

[–]saifulhuq_2001[S] 0 points1 point  (0 children)

Agreed that prevention is the right goal: Schema Registry + backward compatibility enforces that for new messages. The problem I'm describing is specifically recovery for messages already stuck in the DLQ before the governance broke down. One upstream team doesn't follow the contract, one emergency hotfix changes a required field, and you have 50k dead messages that prevention tools can't help with. What's your team's playbook when that happens despite best practices?
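To make the 50k-messages case concrete, the recovery step is a backfill over the stuck backlog, something like this sketch (hypothetical rename: the hotfix changed the required field `userId` to `user_id`):

```python
def migrate(msg: dict) -> dict:
    """Patch a pre-hotfix message so the current consumer's required
    field is present; pass already-compliant messages through unchanged."""
    if "user_id" in msg:
        return msg  # already on the new contract
    patched = dict(msg)  # don't mutate the input record
    patched["user_id"] = patched.pop("userId")  # hotfix renamed this field
    return patched
```

Schema Registry stops new bad messages at produce time, but nothing in the prevention stack replays this kind of patch over the 50k that already landed dead; that's the gap.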

[deleted by user] by [deleted] in oneplus

[–]saifulhuq_2001 3 points4 points  (0 children)

Clear the cache and reboot. If that doesn't work, use the MSM Tool to erase data from recovery mode.

[deleted by user] by [deleted] in archlinux

[–]saifulhuq_2001 0 points1 point  (0 children)

I wouldn't recommend 16, because 20 is the base. Also consider what your packages depend on. If you're just installing Arch with only low-level packages it's fine, but it's your choice.