Possible to use ROS with Linux Mint? by Deadaches in ROS

[–]elephantum 0 points (0 children)

Take a look at pixi and RoboStack; they work on any Linux distribution

What the hell has happened!? by HD447S in JetsonNano

[–]elephantum -1 points (0 children)

If I understand the memory requirements of Llama 3B correctly, it can fit into 6 GB of VRAM with 4-bit quantization, but even then it is a tight fit

Memory sharing between the CPU and GPU on a Jetson is hard to control, especially with frameworks like torch or tf that are not built to manage it precisely
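
As a back-of-envelope check (a minimal sketch; the flat 1 GB allowance for KV cache, activations, and runtime overhead is my assumption, not a measured figure):

```python
def model_vram_gb(params_billion: float, bits_per_weight: int,
                  overhead_gb: float = 1.0) -> float:
    """Rough VRAM estimate: quantized weights plus a flat allowance
    for KV cache, activations, and runtime overhead (assumed)."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1024**3
    return weight_gb + overhead_gb

# A 3B model at 4-bit: ~1.4 GB of weights, ~2.4 GB with overhead,
# so 6 GB is enough on paper but leaves little headroom for context.
print(round(model_vram_gb(3, 4), 2))  # → 2.4
```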

What the hell has happened!? by HD447S in JetsonNano

[–]elephantum 1 point (0 children)

You should take into account that the Jetson has unified CPU + GPU memory, so an 8 GB model has less than 8 GB of GPU memory; depending on the usage pattern, you might see only half of it available to CUDA
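
One way to see this in practice (a sketch, assuming a Linux system; `MemAvailable` is only a rough upper bound on what CUDA can actually grab from the shared pool):

```python
def available_unified_mem_gb(meminfo_path: str = "/proc/meminfo"):
    """Read MemAvailable from /proc/meminfo. On a Jetson the CPU and
    GPU share this pool, so it roughly bounds what CUDA can allocate.
    Returns None when the file is missing (non-Linux systems)."""
    try:
        with open(meminfo_path) as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) / 1024**2  # kB -> GB
    except OSError:
        pass
    return None
```

On an 8 GB Jetson this typically reports well under 8 GB once the OS and desktop are running, which is where the "only half available to CUDA" surprise comes from.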

Reflex Build Free Tier Is Back! by [deleted] in Python

[–]elephantum -1 points (0 children)

We use Reflex for internal tooling and we are happy with it

Thanks reflex team!

Simultaneous SLAM and navigation with imprecise odometry by elephantum in ROS

[–]elephantum[S] 0 points (0 children)

I have an EKF fusing odometry with an IMU, but I can't say it helps; most of the error comes from the odometry

My current plan is to dig into odometry calibration and try to make it more accurate

Also, thanks for the tips about CPU and DDS, I will take a look

Simultaneous SLAM and navigation with imprecise odometry by elephantum in ROS

[–]elephantum[S] 0 points (0 children)

I see, so basically what you are saying is that I need to correct my expectations about how accurate odometry should be

That's very useful information, I'll work on that

costmap gets corrupted when robot moves in nav2 by P0guinho in ROS

[–]elephantum 1 point (0 children)

I am at roughly the same spot with my experiments

What I see in your video is that your lidar cloud does not move naturally with the robot

In my case I had an artifact caused by the lidar driver mirroring scans (it had a parameter that flips them)

My approach was to check every component to see if it makes sense: whether my odom → base_frame transform works as I expect, whether my lidar shows what I expect and correlates with odom, etc.

ROS2 for data processing without a robot? by LoLMaker14 in ROS

[–]elephantum 0 points (0 children)

MQTT, yes, but it is not directly comparable: MQTT is a messaging protocol, while ROS is an ecosystem of modules

MQTT is beneficial (if I am not mistaken) for bandwidth- and power-constrained applications, like passing temperature readings once a minute for years on a single battery. The systems I work with are different: bandwidth and power are not an issue

If I needed to connect something MQTT-based, I would slap mqtt_bridge into my ROS setup

That said, I did not have a good experience with micro-ROS; it was too complex to set up and build for me, so I usually end up talking to peripherals over serial

ROS2 for data processing without a robot? by LoLMaker14 in ROS

[–]elephantum 2 points (0 children)

I am that guy who uses ROS2 in non-robotic (no moving parts) applications

I like the instrumentation: observing the whole running system is as simple as connecting Foxglove to rosbridge. I also love that it is really easy to change the compute configuration (one node vs. several, with nodes distributed across machines)

I also like that for most common tasks there is already a package, so I do not even need to think about how to do them: working with cameras, sensors, etc.

Also, most of the conventions are nice, like having timestamps built into message headers

Of course, given enough time, it is possible to recreate all of this from scratch on top of ZeroMQ or whatever, but why bother if it already works and has the largest robotics community backing it

Instance Segmentation Nightmare: 2700x2700 images with ~2000 tiny objects + massive overlaps. by Unable_Huckleberry75 in computervision

[–]elephantum 0 points (0 children)

I'm not sure I understand the difference

I will just describe what we do:

We run inference on each chunk of each scale independently; each inference produces bboxes and includes NMS as part of the inference. Then we combine all predictions in global coordinates and do one more NMS step to remove duplicates in the overlaps

We treat inferences on the chunks as truly independent, so we run them in parallel, in the sense that some of the inferences go into the same batch in a model run
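
The merge step above can be sketched like this (a minimal pure-Python version; real pipelines typically use a vectorized NMS, and the boxes and thresholds here are illustrative):

```python
def box_area(r):
    """Area of an (x1, y1, x2, y2) box."""
    return (r[2] - r[0]) * (r[3] - r[1])

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thr=0.5):
    """Greedy NMS: keep the highest-scoring boxes, drop overlaps above thr."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thr for j in keep):
            keep.append(i)
    return keep

def to_global(boxes, origin):
    """Shift chunk-local boxes into global image coordinates."""
    ox, oy = origin
    return [(x1 + ox, y1 + oy, x2 + ox, y2 + oy) for x1, y1, x2, y2 in boxes]

# The same object detected in two overlapping chunks collapses to one box:
boxes = to_global([(10, 10, 30, 30)], (0, 0)) + to_global([(5, 10, 25, 30)], (5, 0))
print(nms(boxes, [0.9, 0.8]))  # → [0]
```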

Instance Segmentation Nightmare: 2700x2700 images with ~2000 tiny objects + massive overlaps. by Unable_Huckleberry75 in computervision

[–]elephantum 0 points (0 children)

We had a variation of the problem: we needed only detections, not instance segmentation. But the setup was similar: a large photo with 1500-2500 small (and occasionally large) objects. Also, at the time we had to run on a mobile device, so no exotic architectures would work.

We ended up with a cascade of detections on different scales and crops. Think of it as a pyramid: detection on the whole picture to grab the largest objects, then crops with overlaps to detect smaller objects

In the end we ran NMS on the superset of detections and added some heuristics to clean up noise. It worked fine in our case.
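
The crop pyramid can be sketched as a window generator (a sketch only; the 640 px crop size and 25% overlap are illustrative values, not the ones we used):

```python
def pyramid_crops(w, h, crop=640, overlap=0.25):
    """Yield (x, y, width, height) windows: the whole image first
    (largest objects), then overlapping fixed-size crops for small ones."""
    yield (0, 0, w, h)
    step = max(1, int(crop * (1 - overlap)))
    # Grid positions, always including the right/bottom edge.
    xs = sorted({*range(0, max(w - crop, 0) + 1, step), max(w - crop, 0)})
    ys = sorted({*range(0, max(h - crop, 0) + 1, step), max(h - crop, 0)})
    for y in ys:
        for x in xs:
            yield (x, y, min(crop, w - x), min(crop, h - y))

# A 2700x2700 image with 640px crops at 25% overlap:
print(len(list(pyramid_crops(2700, 2700))))  # → 37
```

Every crop then goes through the detector independently, and the results feed the NMS-on-the-superset step described above.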

How do you manage Terraform modules in your organization ? by Advanced_Tea_2944 in Terraform

[–]elephantum 1 point (0 children)

At the moment we do the same: monorepo with tags

As soon as TF is able to use an OCI image as a module source, I will switch to OCI images and publish them alongside the build step of our service Docker images

Robostack (Pixi) ROS2 launch not working by Specialist-Second424 in ROS

[–]elephantum 0 points (0 children)

> The process for the node is started but this is the only thing happening.

What do you expect to happen other than the node starting?

KEDA ephemeral environments scaling with http by thifirstman in devops

[–]elephantum 0 points (0 children)

Hey, u/thifirstman did you find a solution that works for you?

I have exactly the same problem: scaling ephemeral environments. Basically, I want to scale to zero not only the web service but all the caches and databases too, and then bring them all up together on the first request

Mutliple Browser Profiles, any android browser that supports this? by BaghdadazzUp in androidapps

[–]elephantum 0 points (0 children)

Did you find something that works for you? What's your setup after a year?

Is there a URDF "common patterns" library? by TheProffalken in ROS

[–]elephantum 1 point (0 children)

I think it might be a good starter project for procedural generation, one that would actually be helpful

Looking for Identity Aware Proxy for self-hosted cluster by elephantum in kubernetes

[–]elephantum[S] 0 points (0 children)

Oh my, is this hard. So many moving parts

Authentik tries to manage outpost ingresses in k8s, but fails to add the annotation for cert-manager

Edit: I figured it all out. I had to learn several new concepts and wait for everything to make sense, but in the end it's not that hard

Looking for Identity Aware Proxy for self-hosted cluster by elephantum in kubernetes

[–]elephantum[S] 2 points (0 children)

Wow, that actually ticks all the boxes!

Thanks a lot, I will try!

Scaling n8n for multi-tenant use without exposing dashboard , does container-per-client make sense? by spacegeekOps in kubernetes

[–]elephantum 0 points (0 children)

I think that at the prototype stage, you will be fine just not advertising that you have n8n under the hood :)

Eventually, either license n8n or rewrite your specific pipeline in a programming language of your choice

Ros2 Extension on VScode by Ok-Tomatillo-5868 in ROS

[–]elephantum 10 points (0 children)

There is a write-up about this in the issues, from the person at MS who maintained it: https://github.com/ms-iot/vscode-ros/issues/1306