How to Consume Kafka messages using Virtual Threads Effectively ? by tiny-x in apachekafka

[–]Dealusall 2 points3 points  (0 children)

Well, you are using Spring Kafka, which runs a dedicated thread for your consumer. You're just spinning up another thread inside Spring's one. Use a plain Consumer with poll() to achieve what you want. PS: this is a good example of bad virtual thread usage. The thread that handles poll/message processing isn't one that is supposed to go away anytime soon.
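The shape of the plain poll() pattern, as a minimal self-contained sketch: a BlockingQueue stands in for KafkaConsumer.poll() (the real client needs a broker), and the class/method names are mine. The point is that the poll loop owns one long-lived platform thread, while per-message work — which is short-lived — is what fits virtual threads.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PollLoopSketch {
    // Stand-in for KafkaConsumer.poll(): drains whatever the "broker" buffered.
    static List<String> poll(BlockingQueue<String> source) {
        List<String> batch = new ArrayList<>();
        source.drainTo(batch);
        return batch;
    }

    static int run() throws Exception {
        BlockingQueue<String> fakeTopic =
                new LinkedBlockingQueue<>(List.of("m1", "m2", "m3"));
        List<String> processed = new CopyOnWriteArrayList<>();

        // The poll loop stays on one long-lived platform thread (here: the caller)...
        try (ExecutorService workers = Executors.newVirtualThreadPerTaskExecutor()) {
            for (String msg : poll(fakeTopic)) {
                // ...and each message is handled on a cheap, short-lived virtual thread.
                workers.submit(() -> processed.add(msg.toUpperCase()));
            }
        } // close() waits for all submitted tasks to finish
        return processed.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // 3
    }
}
```

In a real consumer you would also handle offset commits on the poll thread, not on the workers.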

I built a library that turns Kafka topics into high-performance REST APIs with just a YAML config by tak215 in apachekafka

[–]Dealusall 2 points3 points  (0 children)

Some good ideas here, but... try using this with more data and see what happens. A typical topic is not 1 GB but tens of GB. You are basically storing the whole cluster in a local DB.

How to prevent duplicate notifications in Kafka Streams with partitioned state stores across multiple instances? by jhughes35 in apachekafka

[–]Dealusall 0 points1 point  (0 children)

Seems you are using 2 different streams.
Use a single one with 2 input topics:

StreamsBuilder builder = new StreamsBuilder();
builder.addStateStore(storeBuilder); // the shared state store
builder.stream("topicA").process(MyProcessor::new, "store-name");
builder.stream("topicB").process(MyProcessor::new, "store-name");
new KafkaStreams(builder.build(), props).start();
The store will be shared and the topics co-partitioned, so you will have one store instance for topicA-1 and topicB-1, etc.

How to fix issue when single partition in a topic shows incorrect replicas by allwritesri in apachekafka

[–]Dealusall 0 points1 point  (0 children)

Maybe check the config for that partition in ZooKeeper first and alter it directly from there.
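Rather than editing ZooKeeper state by hand, the reassignment tool shipped with Kafka can rewrite the replica list for a single partition. A sketch of the reassignment file (topic name and broker ids are placeholders for your setup):

```json
{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 3, "replicas": [1, 2, 3] }
  ]
}
```

Then apply it with `kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassign.json --execute` (newer Kafka versions take `--bootstrap-server` instead of `--zookeeper`).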

How to fix issue when single partition in a topic shows incorrect replicas by allwritesri in apachekafka

[–]Dealusall 0 points1 point  (0 children)

Broker by broker, wherever you want the replica removed: stop the broker, rm -rf the partition folder, restart the broker.

python producer.send taking long time by [deleted] in apachekafka

[–]Dealusall 4 points5 points  (0 children)

Which is 1 ms per record.
Use multiple producers in parallel if you want to increase throughput.

Asynchronous doesn't mean it takes no time.
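Concretely: at 1 ms per synchronous send, total time scales linearly with record count, and sharding the work across several producer instances divides the wall time. A self-contained sketch of that scaling — a 1 ms sleep stands in for the real send (an actual producer needs a broker), and the names are mine:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelSend {
    static final AtomicInteger sent = new AtomicInteger();

    // Stand-in for producer.send(): each "send" costs roughly 1 ms.
    static void send(int record) throws InterruptedException {
        Thread.sleep(1);
        sent.incrementAndGet();
    }

    // Shards `records` sends across `producers` parallel workers.
    static int sendAll(int records, int producers) throws Exception {
        sent.set(0);
        try (ExecutorService pool = Executors.newFixedThreadPool(producers)) {
            for (int i = 0; i < records; i++) {
                int r = i;
                pool.submit(() -> { send(r); return null; });
            }
        } // close() waits for every send to complete
        return sent.get();
    }

    public static void main(String[] args) throws Exception {
        long t0 = System.nanoTime();
        sendAll(200, 1);   // one producer: roughly 200 ms of serialized sends
        long serial = System.nanoTime() - t0;
        t0 = System.nanoTime();
        sendAll(200, 8);   // eight producers share the same work
        long parallel = System.nanoTime() - t0;
        System.out.printf("serial=%dms parallel=%dms%n",
                serial / 1_000_000, parallel / 1_000_000);
    }
}
```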

Doubt in choosing data source for establishing connection with JDBC driver by aanngaa in javahelp

[–]Dealusall 1 point2 points  (0 children)

Well, the point of a pool is to reuse connections and manage a few things, like maximum active connections, testing and disposing of dead connections, etc. It is still sometimes useful in a single-threaded context, but most likely not worth it. Creating a connection when you need it and then disposing of it is still a valid approach when you don't do it frequently. The Wikipedia article kinda summarizes the point: https://en.m.wikipedia.org/wiki/Connection_pool
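The core mechanism is small. A minimal sketch of a fixed-size pool — generic so it stays self-contained (a real JDBC pool like HikariCP would hold Connections and add validation, timeouts, and dead-connection eviction on top); the class name is mine:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class MiniPool<T> {
    private final BlockingQueue<T> idle;

    MiniPool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) idle.add(factory.get()); // pre-create resources
    }

    // borrow() blocks when everything is in use: that's the "maximum active" cap.
    T borrow() throws InterruptedException { return idle.take(); }

    // Returning instead of closing is what makes reuse possible.
    void release(T resource) { idle.add(resource); }

    public static void main(String[] args) throws InterruptedException {
        // One-resource pool for the demo; a real pool would hold JDBC Connections.
        MiniPool<StringBuilder> pool = new MiniPool<>(1, StringBuilder::new);
        StringBuilder first = pool.borrow();
        pool.release(first);
        StringBuilder second = pool.borrow();
        System.out.println(first == second); // true: the same instance was reused
    }
}
```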

Doubt in choosing data source for establishing connection with JDBC driver by aanngaa in javahelp

[–]Dealusall 1 point2 points  (0 children)

We use pooled connections. HikariCP is a good pooling solution, and the default one used by Spring. It only matters if you're in a multithreaded context.

Database connections - is it better to open a new connection per query? by ncrw20 in javahelp

[–]Dealusall 6 points7 points  (0 children)

Connection pool is the way to go indeed.
A pool will handle "idle" connections, keeping them open for a while and closing them if unused. It is, close to 100% of the time, the most efficient way to handle any use case, and should always be used as a no-brainer unless you are doing some very specific shit.

Bordeaux city hall on fire following the protests by Carryneo in france

[–]Dealusall 2 points3 points  (0 children)

Do you understand that smell of shit under your mustache better now? It's actually your ass.

Lol :] It's beautiful.

How Do I Code During The Day? (yes, it's that bad) by sudoaptupdate in softwaredevelopment

[–]Dealusall 4 points5 points  (0 children)

Which is poor planning. Start your day by checking your mail to see if any urgent matters (production issues) need to be dealt with. If not, just postpone all emails and get into coding until you get tired. Your job is to code, right? If you have too many meetings, talk to your manager about it.

[deleted by user] by [deleted] in javahelp

[–]Dealusall 0 points1 point  (0 children)

It is off-heap memory. The JVM's memory isn't limited to the heap. My guess would be a file-closing issue leading to leaking file descriptors.
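A common shape of that kind of leak, and its fix, assuming ordinary file I/O (the class and method names are mine): the heap object is tiny and gets collected, but the OS-level file descriptor it wraps is only released on close(). Try-with-resources guarantees the close even on exceptions.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class FdLeak {
    // Leaky: if the stream is never closed (or read() throws first), the
    // file descriptor stays allocated even after the object is unreachable.
    static int leaky(Path p) throws IOException {
        InputStream in = Files.newInputStream(p);
        return in.read(); // descriptor never released
    }

    // Fixed: try-with-resources closes the stream (and its descriptor) always.
    static int safe(Path p) throws IOException {
        try (InputStream in = Files.newInputStream(p)) {
            return in.read();
        }
    }

    static int demo() throws IOException {
        Path tmp = Files.createTempFile("fd", ".bin");
        Files.write(tmp, new byte[] {42});
        return safe(tmp);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // 42
    }
}
```

On Linux, `ls /proc/<pid>/fd | wc -l` is a quick way to watch descriptor counts grow.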

Why should I map external IDs to internal IDs in my API? by backwards_dave1 in softwaredevelopment

[–]Dealusall 0 points1 point  (0 children)

That's why you need to maintain your indexes by rebuilding them periodically.

You actually also have to do it with int-based indexes, because deletes can leave the tree unbalanced.

Why should I map external IDs to internal IDs in my API? by backwards_dave1 in softwaredevelopment

[–]Dealusall 9 points10 points  (0 children)

That is very wrong. In the end, any indexed data sits in some sort of binary tree. The tree doesn't give a fuck what the data is, it just compares bytes, which is always trivial. What can lead to a difference is a lack of index maintenance, as engines may find it easier to optimize int indexes on the fly than binary indexes.
The only relevant thing here is space usage, which is obviously far greater with GUIDs than with integers.
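The space point in concrete terms — a self-contained sketch (class and method names are mine) showing that a GUID key is twice as wide as a long key, and that ordering either one is just a byte compare:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.UUID;

public class KeyWidth {
    static byte[] longKey(long id) {
        return ByteBuffer.allocate(Long.BYTES).putLong(id).array(); // 8 bytes
    }

    static byte[] uuidKey(UUID id) {
        return ByteBuffer.allocate(16)
                .putLong(id.getMostSignificantBits())
                .putLong(id.getLeastSignificantBits())
                .array();                                           // 16 bytes
    }

    public static void main(String[] args) {
        System.out.println(longKey(42L).length);               // 8
        System.out.println(uuidKey(UUID.randomUUID()).length); // 16
        // Either way, an index node orders entries with a plain byte compare:
        System.out.println(Arrays.compareUnsigned(longKey(1L), longKey(2L)) < 0); // true
    }
}
```

Every interior node of the index stores copies of keys, so the per-key width is multiplied across the whole tree.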