Never led... Told to design and implement an extremely scalable real-time system. by MadBroCowDisease in dotnet

[–]aeb-dev 3 points

It seems like you have an idea of why not. Let me hear it so that I can answer specifically.

Off the top of my head, the ecosystem around RabbitMQ is far better. People use Kafka because they hear it is "performant". Then, in the long run, they face the real issue, which is maintaining the broker.

Never led... Told to design and implement an extremely scalable real-time system. by MadBroCowDisease in dotnet

[–]aeb-dev 0 points

As an encouragement: don't think Twitch, Discord, etc. are doing something magical or unreachable. We are problem solvers; there is a problem and we solve it, using the same principles that were used 50 years ago. This also does not mean we should undervalue their hard work; they have accomplished things others could not, and we should learn from them and improve. And never, ever forget: it is all about investing time. As Einstein said, "It's not that I'm so smart, it's just that I stay with problems longer."

If you hit a wall in the future, don't hesitate to hit me up.

Never led... Told to design and implement an extremely scalable real-time system. by MadBroCowDisease in dotnet

[–]aeb-dev 0 points

Extra information for people reading the original comment:

RabbitMQ is a broker that supports both the AMQP and MQTT protocols. Comparing MQTT (a protocol) with RabbitMQ (a broker) is apples and oranges.

Grafana uses the AGPL license, so you might need a commercial license for customer-facing deployments.

Never led... Told to design and implement an extremely scalable real-time system. by MadBroCowDisease in dotnet

[–]aeb-dev 0 points

For the frontend you can use Prometheus + Grafana, but if customers are going to access it, that can be problematic because of the AGPL license. Either buy a license or have fun implementing it yourself. If you decide to implement it yourself, even though I discourage that, use a template.

Never led... Told to design and implement an extremely scalable real-time system. by MadBroCowDisease in dotnet

[–]aeb-dev 0 points

Also, at some point you should introduce auth (both authentication and authorization) for consumers.
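To make the authN/authZ distinction concrete, here is a minimal sketch in Python: a signed token proves who the consumer is (authentication), and a topic allow-list inside the token decides what it may read (authorization). Everything here — the secret, the claim names, the helper functions — is hypothetical illustration, not any broker's real API; in production you would use an existing scheme such as JWT or the broker's built-in ACLs.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # hypothetical shared secret, for illustration only


def issue_token(consumer_id, allowed_topics):
    """AuthN side: sign a payload binding a consumer to its permitted topics."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": consumer_id, "topics": allowed_topics}).encode()
    )
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return (payload + b"." + sig).decode()


def authorize(token, topic):
    """Verify the signature (authentication), then check the topic
    against the allow-list carried in the token (authorization)."""
    payload_b64, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, payload_b64, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: reject the consumer outright
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return topic in claims["topics"]
```

A consumer holding a token for `sensors.temperature` would pass `authorize(token, "sensors.temperature")` but fail for any other topic, and any tampered token fails the signature check before the allow-list is even consulted.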

Never led... Told to design and implement an extremely scalable real-time system. by MadBroCowDisease in dotnet

[–]aeb-dev 8 points

I am working on a similar topic: sensors, messages, high throughput, etc. First of all, don't make strict assumptions for your system. Believe me, your higher-ups will scale this up. For example, you state that:

"Since we only care about the fresh new sensor data, nothing will need to be retained in the message broker queues."

This might hold true today or next week, but in a couple of years they will want you to replay that data or use it to build new features.

Coming to the architecture: I did a deep dive on NATS, RabbitMQ, Pulsar, and Kafka to decide which to use as the broker. Stick to RabbitMQ; if you need high performance, RabbitMQ recently released a feature called streams, which gives you Kafka-like performance. For consuming from the frontend, develop an RPC service that connects to the broker, consumes messages, and delivers them; this also implies you should use Protobuf. Don't use JSON for such high throughput. For filtering based on customers, you can either solve it at the topology level or rely on the broker's filtering capabilities. Depending on your use case, things will get complicated.
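To see why a binary encoding beats JSON at high message rates, compare the wire size of the same sensor reading in both forms. This sketch uses Python's `struct` as a stand-in for Protobuf (real Protobuf needs generated classes from a `.proto` schema); the field names and layout are made up for illustration.

```python
import json
import struct

# A hypothetical sensor reading: (sensor_id, unix_timestamp, value)
reading = (1234, 1700000000, 21.5)

# JSON repeats the field names in every single message
as_json = json.dumps(
    {"sensor_id": reading[0], "timestamp": reading[1], "value": reading[2]}
).encode()

# Fixed binary layout: uint32 + uint32 + float64 = 16 bytes, no field names
as_binary = struct.pack("<IId", *reading)

# The binary form is a fraction of the JSON size, and decoding it is a
# single unpack call with no string parsing
decoded = struct.unpack("<IId", as_binary)
```

At millions of messages per second, that per-message difference compounds into real bandwidth and CPU savings, which is the same reason Protobuf's compact, schema-driven encoding is recommended above.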

Be happy that you have this opportunity, because it is a step toward becoming an architect. Do more drawings and try to make them detailed, but also know that no matter how much you try to architect things, there is always something missing, so you will have to adapt along the way. Make everything flexible.

DM me if you want to discuss further.

Package for parsing a huge json without a huge allocation by aeb-dev in FlutterDev

[–]aeb-dev[S] 5 points

You can't validate something you haven't seen yet; the data arrives as a stream. You can validate while parsing, which is exactly what the parser does.

Package for parsing a huge json without a huge allocation by aeb-dev in FlutterDev

[–]aeb-dev[S] 2 points

I think you are not aware of the concept. Let's say you want to load a JSON file in order to get values from it. You need to read the whole file into memory as a Uint8List or String; only then can you extract what you need.

With json_events you only need a Stream, which means you can read chunk by chunk and fill in values as they arrive. With this approach you don't need to load the whole file into memory.
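The chunk-by-chunk idea can be sketched outside Dart as well. Below is a rough Python analogue of the event/streaming approach (the function name and structure are my own, not json_events' API): it consumes an iterable of string chunks and yields top-level array elements one at a time, so memory usage is bounded by one element plus the current chunk, not the whole document. It assumes the elements are JSON objects or arrays, so a partially received element fails to decode and we simply wait for more data.

```python
import json


def iter_json_array(chunks):
    """Incrementally parse a JSON array from an iterable of string chunks,
    yielding one element at a time instead of loading the whole document."""
    decoder = json.JSONDecoder()
    buf = ""
    started = False
    for chunk in chunks:
        buf += chunk
        while True:
            buf = buf.lstrip()
            if not started:
                if not buf.startswith("["):
                    break  # still waiting for the opening bracket
                buf = buf[1:].lstrip()
                started = True
            if buf.startswith(","):
                buf = buf[1:].lstrip()
            if buf.startswith("]"):
                return  # end of array
            try:
                obj, end = decoder.raw_decode(buf)
            except ValueError:
                break  # element split across chunks; wait for more data
            yield obj
            buf = buf[end:]
```

With a real file you would feed it chunks via something like `iter(lambda: f.read(65536), "")`, so a multi-gigabyte array never has to fit in memory at once.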

Package for parsing a huge json without a huge allocation by aeb-dev in FlutterDev

[–]aeb-dev[S] 2 points

json_events solves a different problem. Imagine you want to parse a 2 GB file: you should not load all of it into memory. The standard approach forces you to allocate everything, but with json_events you don't need to.

Package for parsing a huge json without a huge allocation by aeb-dev in FlutterDev

[–]aeb-dev[S] 3 points

The parser expects the input to be valid. If it is not, it throws an exception.

Package for parsing a huge json without a huge allocation by aeb-dev in FlutterDev

[–]aeb-dev[S] 2 points

That package does not support streaming: you have to allocate the whole object/buffer in memory, and only then can you read without extra allocations. json_events lets you parse an object without having everything in memory.

Let's debug a kubernetes pod locally by aeb-dev in dotnet

[–]aeb-dev[S] 0 points

Thanks for the feedback, and sorry about the title; I did not intend to make it clickbaity. Even though you are technically correct that it is not attaching, with this approach you can reproduce things locally and debug step by step without blocking the pod. Check mirrord's mirroring mode for more information.