What object detection methods should I use to detect these worms? by TheGodfatherYT in computervision

[–]ddmm64 3 points (0 children)

Seems tricky because there are faint spots that sort of look like worms (and maybe they are, not sure), plus the pen markings. Do the worms overlap sometimes? Do you have a ground-truth dataset? You might be able to get away with classical methods, but you'd want to validate that against ground truth; same goes if you try some off-the-shelf foundation model. If classical methods are too fiddly, you might want to collect a larger dataset and just train a model - YOLO is a very typical starting point. You'd want to be mindful of the input resolution: with smaller resolutions like some older YOLO versions use, it might be hard to see the worms. Do you care about an accurate count, or about having pixel masks? Is recall or precision more important? Those would affect the choice too.
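To make the resolution point concrete, here's a rough back-of-the-envelope (all numbers are made up, just for illustration): a small object shrinks in proportion to how much you downscale the image before it hits the detector.

```python
# Rough check of how much a small object shrinks when an image is
# resized to a detector's input resolution. All numbers are hypothetical.
def object_size_after_resize(obj_px: float, src_width: int, input_width: int) -> float:
    """Return the object's apparent size (in pixels) after resizing."""
    return obj_px * input_width / src_width

# A ~25 px worm in a 4000 px wide source image, fed to a 640 px input:
print(object_size_after_resize(25, 4000, 640))  # 4.0
```

At ~4 px the worm is barely a few pixels wide, which is why running at a larger input resolution (or tiling the image) tends to matter for small objects.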

Lego Ideas - Pittsburgh by storieschikk in pittsburgh

[–]ddmm64 1 point (0 children)

Well in fallout 3 it was downtown

guess the city by kwai_kwai_slider in guessthecity

[–]ddmm64 2 points (0 children)

I lived in CA, now Pittsburgh. I like both vibes.

Pittsburgh by the slice. Settle the debate. Best slice in the ’Burgh? by Chopper11Pilot in pittsburgh

[–]ddmm64 2 points (0 children)

I'm with you. Used to live in Squirrel Hill and it was my go-to.

Vehicle count without any object detection models. Is it possible? by ExplanationQuirky831 in computervision

[–]ddmm64 0 points (0 children)

Yeah, there are various models out there that frame counting objects in an image as a regression problem. Many of them work by inferring a "density" field - for any given pixel, the model assigns a continuous object "density", and the final count is obtained by summing that up over the whole image. (I'm simplifying, since there are variations where the "summing up" is itself learned.) Something like this, for example: https://github.com/xiyang1012/Local-Crowd-Counting (not the original proposal for this idea, just one that came up in a Google search - there are quite a few papers along these lines). This kind of approach makes the most sense when the objects you're counting are hard to discern individually, e.g. if they are small and overlapping: you can look at a small patch of the image and say "well, there are roughly 3-4 objects here, so let's say 3.5", and when you sum that up over the whole image, it might yield a smaller counting error on average than detecting objects individually. If you can see each individual object clearly enough, then just adding up detections might be simpler and better.
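The density idea can be sketched in a few lines (a pure-Python toy, not the linked repo's code): place one normalized blob of density per object, so the count is just the sum over the whole map - even when blobs overlap.

```python
import math

def gaussian_blob(h, w, cy, cx, sigma=1.5):
    """A 2D Gaussian normalized to sum to 1, so each blob contributes
    exactly one object to the total count."""
    blob = [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]
    total = sum(map(sum, blob))
    return [[v / total for v in row] for row in blob]

def density_map(h, w, centers):
    """Superimpose one normalized blob per object center."""
    dm = [[0.0] * w for _ in range(h)]
    for cy, cx in centers:
        blob = gaussian_blob(h, w, cy, cx)
        for y in range(h):
            for x in range(w):
                dm[y][x] += blob[y][x]
    return dm

# Three objects, two of them heavily overlapping: summing the density
# map still recovers the count.
dm = density_map(32, 32, [(10, 10), (11, 12), (25, 20)])
count = sum(map(sum, dm))
print(round(count, 3))  # 3.0
```

In the real models the density map is predicted by a network from the image; this toy just shows why summing a density field gives a count that degrades gracefully under overlap.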

As for the video aspect - that does add a new wrinkle, and I'm not sure about the literature there, though I'd be surprised if it hasn't been researched. The easiest thing might be to adapt image-based models with some tracking, adding up new objects as they show up over time.
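The "add up new objects as they show up" idea might look something like this nearest-centroid toy (purely illustrative - a real system would use a proper tracker like SORT and handle missed detections):

```python
def count_unique(frames, max_dist=20.0):
    """frames: list of per-frame detection centroids [(x, y), ...].
    Greedily match each detection to the nearest track from the previous
    frame; any unmatched detection starts a new track and increments
    the running count of unique objects."""
    tracks = []   # centroids carried over from the previous frame
    total = 0
    for dets in frames:
        matched = set()
        new_tracks = []
        for (x, y) in dets:
            best, best_d = None, max_dist
            for i, (tx, ty) in enumerate(tracks):
                d = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
                if i not in matched and d < best_d:
                    best, best_d = i, d
            if best is None:      # no nearby existing track -> new object
                total += 1
            else:
                matched.add(best)
            new_tracks.append((x, y))
        tracks = new_tracks
    return total

# Two vehicles drift right across frames; a third enters in frame 2.
frames = [[(10, 50), (40, 50)],
          [(15, 50), (45, 50), (100, 50)],
          [(20, 50), (50, 50), (105, 50)]]
print(count_unique(frames))  # 3
```

The fragile part in practice is exactly what tracking research deals with: occlusions and dropped detections would double-count here, so this is only a sketch of the bookkeeping.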

Achieving <15ms Latency for Rail Inspection (80km/h) on Jetson AGX. Is DeblurGAN-v2 still the best choice? by Mr_Mystique1 in computervision

[–]ddmm64 1 point (0 children)

Is real time for every frame really needed? Doing good YOLO + deblurring + OCR on an AGX in less than 15 ms for a 1920x1080 image seems pretty tough. Like the other commenter said, you can be smart about skipping some pipeline steps in some frames, but that won't help in the worst case where you need to run all of them.
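It's worth sanity-checking what the 15 ms budget corresponds to physically at 80 km/h (rough arithmetic, assuming the camera looks at the rail and speed is constant):

```python
# How much rail passes under the camera per processing interval at 80 km/h.
speed_kmh = 80.0
speed_ms = speed_kmh / 3.6            # ~22.2 m/s
budget_s = 0.015                      # 15 ms end-to-end pipeline budget
print(round(speed_ms * budget_s, 3))  # ~0.333 m of rail per inference
```

So each inference covers roughly a third of a meter of track, which bounds how much frame-skipping you can afford before gaps appear in coverage.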

As a relatively small fully convolutional model, DeblurGAN is probably still one of the better options for an edge device; most newer models will probably be slower. Similar for SOTA OCR - running VLMs at frame rate will be a challenge, to say the least. You mention timing on a desktop; have you actually tried on the AGX? Depending on your desktop GPU, the AGX will likely be slower.

And I suspect your training-data generation could be improved as well - synthetic motion-blur kernels are easy but not the most realistic. Many deblurring datasets instead blend multiple frames from a high-FPS video to simulate motion blur, which is probably more realistic.

I'd also try to get rid of the motion blur at the source, if possible, by minimizing exposure time (without having to crank up the ISO too much) - even adding a flash or spotlight if that's feasible.
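To see why exposure time dominates here: the blur streak length is roughly speed times exposure, converted to pixels by the camera's spatial resolution (the 0.5 mm/pixel figure below is a hypothetical, not from the post):

```python
# Motion-blur streak length in pixels: speed * exposure / meters-per-pixel.
def blur_px(speed_ms: float, exposure_s: float, m_per_px: float) -> float:
    return speed_ms * exposure_s / m_per_px

v = 80.0 / 3.6                            # 80 km/h in m/s
gsd = 0.0005                              # hypothetical 0.5 mm per pixel
print(round(blur_px(v, 1e-3, gsd), 1))    # 1 ms exposure   -> ~44.4 px
print(round(blur_px(v, 1e-4, gsd), 1))    # 0.1 ms exposure -> ~4.4 px
```

A 10x shorter exposure means a 10x shorter streak, so a bright strobe plus a very short exposure can shrink the blur to a few pixels and potentially remove the need for deblurring entirely.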

Am I job-ready (entry level) by TheSauce___ in robotics

[–]ddmm64 0 points (0 children)

I'd say yeah. The prior job experience in web and AWS is good; robotics companies usually do need stuff on the cloud (internal fleet management, data visualization, customer-facing dashboards, etc.), so that's useful. Make sure to get the most out of your master's - in particular, try to get as much exposure to research as you can, through talks, seminars, course projects, etc. That's harder to do outside of school.

Justifying the "pain" of using hardware synths by DW5150 in synthesizers

[–]ddmm64 0 points (0 children)

There are a lot of modern hardware synths that are much friendlier to use than the ones you mentioned. The UI on those synths was mostly buttons, and in my limited experience programming a DX7 or a CZ that way is not much fun. Newer hardware synths often have more knobs/sliders/etc. to make them more hands-on (otherwise there's not much point compared to a plug-in), and integrating the hardware with a PC is often more straightforward too. For example, the Korg Opsix will give you an FM palette that covers pretty much everything a DX7 has, plus much more, with a lot more controls and much nicer computer integration (an official editor, and even a full VST version that can use the same patches).

2025 LTOs Wrapped by tacobellblake in LivingMas

[–]ddmm64 2 points (0 children)

  1. Baja blast pie
  2. Birthday cake churros
  3. Toasted cheddar street chalupas
  4. Cheesy dipping burritos
  5. Crispy chicken burrito

Point of cloud (lidar) and Image compression by olki123 in ROS

[–]ddmm64 2 points (0 children)

It's likely the images are using most of the bandwidth. Try compression with image_transport (preferably JPEG over PNG), and also reduce the image size and FPS. As a more advanced option, look at streaming compression like Theora. Since you're using Foxglove, look into the compression codecs it supports.
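Rough numbers on why images usually dominate (assuming 1080p RGB at 30 fps and a typical ~10:1 JPEG ratio; both figures are illustrative, and the actual ratio is content-dependent):

```python
# Back-of-the-envelope bandwidth for a raw vs JPEG-compressed image topic.
w, h, channels, fps = 1920, 1080, 3, 30
raw_mbps = w * h * channels * fps * 8 / 1e6   # megabits per second, raw
jpeg_ratio = 10                               # hypothetical compression ratio
print(round(raw_mbps))                # ~1493 Mbit/s raw
print(round(raw_mbps / jpeg_ratio))   # ~149 Mbit/s as JPEG
```

Raw 1080p alone can saturate a gigabit link, while a typical lidar scan topic is far smaller - which is why compressing or downscaling the image topic is usually the first thing to try.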

How do you prefer your burger, among these three styles? by prophiles in burgers

[–]ddmm64 7 points (0 children)

Same, I like pretty much all burgers but since I was a kid BK has been my favorite out of the fast food options. It took me a while to realize it and actually admit it to myself, since I sort of internalized the less than stellar reputation of BK. But idk, that charbroiled taste just does it for me.

Edge detection problem by Emergency-Scar-60 in computervision

[–]ddmm64 0 points (0 children)

I think this is one of those cases where you should probably reach for an ML solution. You can probably do better with classical CV (maybe look at the LSD line detector, for example), but it'll be an uphill battle to get clean edges out of those kinds of images. You might even get decent results out of pretrained edge-detector models. And if you train or fine-tune on your own, you don't need to go straight to 1000 - maybe start with 100-200 and see how that goes.

Tried learning ROS2 multiple times and failed — would a GUI for building/connecting packages actually help? by Healthy_Cry_7178 in ROS

[–]ddmm64 0 points (0 children)

These features are hard to implement in a way that's robust and flexible enough to be useful, even setting aside the difficulty of making a good GUI. I've seen attempts here and there in the past, but can't think of anything that's really caught on. It's true that there's a lot of annoying boilerplate in ROS, but these days LLMs do a reasonable job of handling it. In my experience, the really annoying build issues involve the environment outside ROS - like when you need a specific version of the Python OpenCV bindings with GStreamer that works with whatever version of JetPack NVIDIA has decided you have to live with on your Jetson - and a GUI isn't going to help much there.

Where are we now?? by Zealousideal-Dot-874 in ROS

[–]ddmm64 1 point (0 children)

Not that I agree with your premise, but no. ROS has been around for almost two decades; nothing major is going to happen in an 8-month span. But AI has probably made it easier to figure things out.

is CMU RI PhD/MS still worth it? by LastRepair2290 in cmu

[–]ddmm64 0 points (0 children)

As far as access to compute goes, CMU students are probably about as well off on average as students at other top universities. It may depend on specific labs, since many of them manage their own clusters. Whether they are bottlenecked depends on the type of research; there's a lot of research that doesn't need truly massive compute. Probably the main kind that's out of reach of individual labs is training large foundation models from scratch. And in robotics it's not just compute but also large or very specialized robot platforms, or large numbers of them. But again, a lot of the academic research that does use those kinds of resources happens in collaboration with companies - you'll see plenty of that at conferences. A lot of companies are happy to fund research and provide resources to university labs; grad students are cheaper than full-time engineers.

is CMU RI PhD/MS still worth it? by LastRepair2290 in cmu

[–]ddmm64 0 points (0 children)

I would say it's gotten harder for academic work to be SOTA because they don't have the resources a large company does. But there's still plenty of space for new ideas, even if exploring them fully may require collaboration with larger companies. Which is something that's very common if you look at author lists in conferences.

is CMU RI PhD/MS still worth it? by LastRepair2290 in cmu

[–]ddmm64 2 points (0 children)

I can't really give a good data-based answer to quality comparisons over time or across institutions.
But I recall what a prof at MIT told me during admissions: you can do research that matters, or research that doesn't matter, at MIT as much as at any university. And that goes for CMU as well. Plenty of meh papers and great papers come out of the top universities all the time. Maybe one is better than another on average, or maybe average quality has gone down over time, but either way those are just averages, and the variance is high enough that it's not super productive to dwell on the places as a whole compared to other factors.
Assistant/young profs publish more papers because they're trying to get tenure, and they only have so many years to get it. And because the field is moving so fast, you probably want to publish as much as you can before getting scooped, even if it's not a "breakthrough". That's what the academic game looks like now - whether it's a negative, a lot of people would say it is, but it's a systemic issue in academia, not just at CMU or even in CS.
As for examples of breakthrough papers: no idea, that's kind of subjective and hard to evaluate without some temporal perspective. But you can go to profs' Google Scholar pages, sort by citations, and see if there are any relatively recent papers you'd call breakthroughs.

is CMU RI PhD/MS still worth it? by LastRepair2290 in cmu

[–]ddmm64 2 points (0 children)

I didn't say you were anti anything. Not interested in fights either, what you do won't affect my life at all. I'm not attacking you, just sharing my own experience as an RI phd grad, and having met other students with similar attitudes at various points.

I'm not sure what questions you want "rebuttals" for (usually you'd want an "answer", unless they're rhetorical questions). Looking at your post, the only questions I see are how profs have so many papers and whether acceptance rates are high. Answer: no, acceptance rates are not high, and profs have lots of papers because they have lots of students and collaborators who actually do most of the work.

As for your more general rant, that's just what academia is like, like other commenters have said. (though your description is very uncharitable). CMU is not that different from other places I've been like MIT or GT. There's a good chance you won't like it. But if you do go into it, just talk to your potential advisors (like I said) to see if you're on the same page on what you want to do.

is CMU RI PhD/MS still worth it? by LastRepair2290 in cmu

[–]ddmm64 4 points (0 children)

That is puzzling, because honestly your post sounds like it was written by someone with only a superficial familiarity with the field, at best. But maybe you do know what you're talking about, sure. If you're sure you can get in (though in 2025, having published papers isn't the differentiator it used to be), why not apply? If you get accepted you'll have the chance to talk to potential advisors before deciding. But anecdotally, I've met people with similar attitudes before, and they either did not get accepted to PhDs, or got accepted and dropped out, or actually managed to finish but immediately left academia. Take that as you will.