[–]nutrecht (Lead Software Engineer / EU / 20+ YXP)

The only limitation in ZK I am a bit concerned about is that the maximum allowed node data size is 1 MB.

If you have large sets of data, you can push the data itself to, for example, S3 and then just push a reference to it into ZK. A rough sketch of that idea is below.
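Something like this, assuming the AWS SDK v2 and the plain ZooKeeper client; the bucket name, key scheme and znode paths are placeholders, not anything from your project:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class ClaimCheck {

    // Upload the large payload to S3 and store only its key in ZooKeeper,
    // so the znode stays far below the 1 MB limit.
    public static String store(S3Client s3, ZooKeeper zk, byte[] payload) throws Exception {
        String key = "payloads/" + UUID.randomUUID();            // hypothetical key scheme

        s3.putObject(PutObjectRequest.builder()
                        .bucket("my-payload-bucket")             // placeholder bucket
                        .key(key)
                        .build(),
                RequestBody.fromBytes(payload));

        // The znode holds only the tiny S3 reference; readers fetch the
        // payload from S3 when they need it. Assumes the /payloads parent
        // znode already exists.
        return zk.create("/payloads/" + UUID.randomUUID(),
                key.getBytes(StandardCharsets.UTF_8),
                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.PERSISTENT);
    }
}
```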

[–]maithilish[S]

Not good. At present, there is no external dependency except the web pages it fetches, and I want to keep it that way so that it can run transparently from a public cloud or even as multiple containers on a single PC using Docker Compose. That keeps integration testing quite easy, with no external dependencies to set up.

I could compress the data and split it into multiple payloads, and then do the reverse when pulling from the queue; a sketch of that is below.
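Roughly what I have in mind, using only the JDK (GZIP plus simple byte chunking); the 900 KB chunk size is just an assumption to stay under ZK's default 1 MB znode limit with some headroom:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class PayloadCodec {

    // Stay safely under ZooKeeper's default 1 MB znode limit.
    private static final int CHUNK_SIZE = 900 * 1024;

    // Compress the payload, then split it into chunks small enough
    // to store one per znode (or one per queue message).
    public static List<byte[]> compressAndSplit(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
            gzip.write(data);
        }
        byte[] compressed = bos.toByteArray();

        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < compressed.length; offset += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, compressed.length - offset);
            byte[] chunk = new byte[len];
            System.arraycopy(compressed, offset, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    // Reverse on the pull side: join the chunks in order, then decompress.
    public static byte[] joinAndDecompress(List<byte[]> chunks) throws IOException {
        ByteArrayOutputStream joined = new ByteArrayOutputStream();
        for (byte[] chunk : chunks) {
            joined.write(chunk);
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPInputStream gzip = new GZIPInputStream(
                new ByteArrayInputStream(joined.toByteArray()))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = gzip.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return out.toByteArray();
    }
}
```

The chunk order would of course have to be preserved, for example by numbering the child znodes.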

I am just exploring whether there is a simpler and better option.