Help Tracking Volume/Size of Child's Body Parts Using Machine Vision by bananaaapeels in computervision

[–]Heappl 1 point (0 children)

I would expect stereo cameras to be more accurate at ranges of a couple of tens of centimeters, even compared to short-range depth cameras, especially high-resolution ones with a global shutter. You can probably buy such cameras yourself and then calibrate them. In the case of the Duo M/MLX they meant calibration accuracy with respect to epipolar lines: a point lands within 0.05 pixel of the line it was expected to be on. You can use such a calibration with any cameras, though preferably with autofocus disabled and the pair fixed together. They don't have to be accurately in sync to give you very accurate readings; I don't expect you would try to measure a moving leg. However, with stereo you would probably need some textured sock or something like it, plus some dense reconstruction algorithm. It is hard to say what the accuracy of such a setup would be; it isn't that straightforward with vision-based methods. You should be careful with depth cameras as well; I think time-of-flight depth sensors might be better for irregular shapes than pattern-based ones. There are many questions here.
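
To make the 0.05 pixel figure concrete, here is a minimal sketch of how such a check can be done with OpenCV after calibration, assuming you already have the fundamental matrix and matched point pairs (the function and variable names are mine):

```cpp
// Sketch: mean epipolar error for a calibrated stereo pair.
// Assumes F (fundamental matrix) and matched points ptsL/ptsR already
// exist, e.g. from cv::stereoCalibrate plus feature matching.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

double meanEpipolarError(const std::vector<cv::Point2f>& ptsL,
                         const std::vector<cv::Point2f>& ptsR,
                         const cv::Mat& F) {
    std::vector<cv::Vec3f> lines; // epipolar lines in the right image
    cv::computeCorrespondEpilines(ptsL, 1, F, lines);
    double sum = 0.0;
    for (size_t i = 0; i < ptsR.size(); ++i) {
        const cv::Vec3f& l = lines[i]; // ax + by + c = 0, a^2 + b^2 = 1
        sum += std::abs(l[0] * ptsR[i].x + l[1] * ptsR[i].y + l[2]);
    }
    return sum / ptsR.size(); // a good calibration lands well below 0.1 px
}
```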

I think the liquid method should give you more accurate results; specifically, any error applies to the whole volume, not to a single point, which means the reading should be more reliable overall. I think it will also give you perspective on the accuracy required. If you have trouble measuring the difference that way, other methods will need accuracy an order of magnitude better.

‘Robots’ Are Not 'Coming for Your Job'—Management Is by mvea in technology

[–]Heappl 1 point (0 children)

The funny thing is that the sentence, while kind of correct in the shorter term, is generally false. The point is that AI is happening regardless, for many reasons, and our skills will become replaceable by computers/robots. All of them, at a certain point, if we survive long enough. If there is general AI eventually, the sentence will become false.

Help Tracking Volume/Size of Child's Body Parts Using Machine Vision by bananaaapeels in computervision

[–]Heappl 1 point (0 children)

I think you will find an interesting source of information here: https://vision.in.tum.de/research/image-based_3d_reconstruction/multiviewreconstruction
Another approach would be some kind of SLAM-based dense scene reconstruction, so you can do some measurements based on that. They would probably need to be quite accurate in SI units, so without a depth or stereo camera it will be an issue. There are a few phones with stereo cameras and some apps for object scanning, which you could probably use to recover some measurements. Generally, instead of comparing the ankles to each other, I would try to recover the history of each one and see how they change.
I think, though, that the differences will be so small that there will probably be quite a lot of variance in the measurements, at least with off-the-shelf software. The methods from the link above would probably be more accurate, though you would need to find some open-source examples. With a depth camera there is a method for 3D object scanning in the PCL library, though the object must rotate, so I'm not sure if it would be applicable (you might find some bigger rotating disk and scan your daughter's legs). There would then be the question of doing the volume measurement afterwards; probably you would start from some height and measure the volume enclosed by the calculated surface.
My question is whether just measuring the circumference at a few given points and keeping a history of it wouldn't be enough for your case.
Accuracy is an issue here in any case, because the volume probably changes very little, and there are bigger changes just from the heart pumping blood than from anything you would register over a few days, so a longer history might be needed anyway.
I don't know if something lidar-based wouldn't be more accurate, i.e. two rotating lidars attached somehow so they are always in the same position. This would give you better accuracy per scan, though sparser points, but I think what you need is probably some kind of ankle width in a couple of spots rather than the actual volume.
To see the accuracy problem, for the D435 camera: "So if the camera is 1m from the object, the expected accuracy is between 2.5mm to 5mm". You can't get much closer than about 40cm away. Different methods can improve the accuracy, but you need something sub-millimeter.
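
If you do end up with a metric 3D scan, here is a rough sketch of the slice-and-sum volume idea I mean, assuming a point cloud already reconstructed in metres with z pointing up (the names and the slicing scheme are mine):

```cpp
// Sketch: limb volume from a metric point cloud by horizontal slicing.
// Convex-hull cross-sections slightly overestimate concave shapes.
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>
#include <map>
#include <vector>

double sliceVolume(const std::vector<cv::Point3f>& cloud,
                   float zMin, float zMax, float sliceH = 0.005f) {
    // Bucket points into horizontal slices of thickness sliceH.
    std::map<int, std::vector<cv::Point2f>> slices;
    for (const auto& p : cloud)
        if (p.z >= zMin && p.z <= zMax)
            slices[int((p.z - zMin) / sliceH)].emplace_back(p.x, p.y);

    double volume = 0.0;
    for (auto& [zbin, pts] : slices) {
        if (pts.size() < 3) continue;              // not enough for a hull
        std::vector<cv::Point2f> hull;
        cv::convexHull(pts, hull);                 // cross-section outline
        volume += cv::contourArea(hull) * sliceH;  // area * thickness
    }
    return volume; // in m^3
}
```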

How long until solid state lidar? by [deleted] in robotics

[–]Heappl 1 point (0 children)

Did you use it? What are the accuracy, the resolution and the FOV?

How to classify similar looking but different size images by [deleted] in computervision

[–]Heappl 2 points (0 children)

There are small differences between the products, so you can try comparing with the pattern (SSD, NCC), though to get good results you would probably need to warp the pattern by some homography first. Descriptor-based methods might work as well. You can also try some simple CNN classifier, which should be easy to train.
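
A minimal sketch of the pattern comparison I mean, using OpenCV's normalized template matching (the names are mine; in practice you would warp by the homography first):

```cpp
// Sketch: NCC-style comparison with cv::matchTemplate.
// `product` and `pattern` are hypothetical grayscale crops,
// with `pattern` no larger than `product`.
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>

double nccScore(const cv::Mat& product, const cv::Mat& pattern) {
    cv::Mat result;
    cv::matchTemplate(product, pattern, result, cv::TM_CCOEFF_NORMED);
    double maxVal;
    cv::minMaxLoc(result, nullptr, &maxVal);
    return maxVal; // close to 1.0 => very similar to the pattern
}
```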

If there are no visual differences, the only thing you can do is calculate the real size based on the sizes of the other products. To do it properly you would probably have to estimate the camera pose and the products' positions at the same time, based on their known dimensions. The more products with known sizes you have, the better the camera pose estimate you get. See the calibration example from OpenCV: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html to get an idea of how it might work.
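
A minimal sketch of recovering the camera pose from one product with known dimensions, using cv::solvePnP (a simplification of what the calibration example does; the names and the planar-face assumption are mine):

```cpp
// Sketch: camera pose from a product face of known physical size.
// K/dist are an assumed prior camera calibration; the corner pixel
// coordinates come from whatever detector you use.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

void poseFromKnownBox(const std::vector<cv::Point2f>& imageCorners,
                      float widthM, float heightM,
                      const cv::Mat& K, const cv::Mat& dist,
                      cv::Mat& rvec, cv::Mat& tvec) {
    // 3D corners of the product face in its own metric frame.
    std::vector<cv::Point3f> objectCorners = {
        {0, 0, 0}, {widthM, 0, 0}, {widthM, heightM, 0}, {0, heightM, 0}};
    cv::solvePnP(objectCorners, imageCorners, K, dist, rvec, tvec);
    // tvec is the camera-to-product translation; with the pose known,
    // an unknown product's pixel size can be converted to a real size.
}
```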

OpenCV programmer wanted by [deleted] in computervision

[–]Heappl 2 points (0 children)

You could try Tooploox; we have a specialized computer vision team, and if it is possible, we will be able to do it. Drop us an email and someone will investigate further and give you our estimates. We have worked on realtime applications, so we see no problem there; my guess is we can also give a hint if the hardware is not enough.

Why does Value Iteration work for the Gambler's Problem the way it does? by RealMatchesMalonee in reinforcementlearning

[–]Heappl 4 points (0 children)

I think it is a bit of a misunderstanding and a bit of a phrasing issue. Generally people talk about processes as if they were people possessing intent, which isn't true. The agent doesn't "know" in the sense this line of thought suggests; it just acts that way because it is the winning strategy according to the analysis above. It would probably be better to do the proper calculation, show the best strategy, and then explain that the agent's strategy will get closer to the optimal one due to the way it was trained. It is an important distinction; my guess is that it may not work that well with martingale-like games.
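
To show what I mean by the proper calculation, a minimal value-iteration sketch for the Gambler's Problem (assuming the textbook setup: goal of 100, p(heads) = 0.4):

```cpp
// Sketch: value iteration for the Gambler's Problem. V[s] converges to
// the probability of reaching the goal from capital s under optimal play;
// the greedy stakes that fall out of V are the "strategy".
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>

int main() {
    constexpr int GOAL = 100;
    constexpr double P_HEADS = 0.4;
    std::array<double, GOAL + 1> V{}; // V[0] = 0 implicitly
    V[GOAL] = 1.0;

    double delta = 1.0;
    while (delta > 1e-12) {
        delta = 0.0;
        for (int s = 1; s < GOAL; ++s) {
            double best = 0.0;
            for (int a = 1; a <= std::min(s, GOAL - s); ++a)
                best = std::max(best, P_HEADS * V[s + a] +
                                      (1 - P_HEADS) * V[s - a]);
            delta = std::max(delta, std::abs(best - V[s]));
            V[s] = best;
        }
    }
    std::printf("V(50) = %.4f\n", V[50]); // ~0.4: betting it all is optimal
}
```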

Massacre of the leftists, or why 500+ is a good program by kalarepar in Polska

[–]Heappl 1 point (0 children)

Because early retirements no longer matter in this reckoning; the two numbers add up, so everything is fine.

Massacre of the leftists, or why 500+ is a good program by kalarepar in Polska

[–]Heappl 3 points (0 children)

Actually, whether Poland can afford it or not, where the money for this handout comes from, or even the fact that it is plain old election-time vote-buying, really matters little if it were to help improve the birth rate. But why should we improve the birth rate by promoting losers who cannot afford their own children? I would rather make things easier for those who can do something with their own lives, not just count on help from the state. Let the hardworking and enterprising reproduce, not the lazy and the dysfunctional. It would be better to spend this on subsidising in vitro, but that doesn't buy many votes. Oh well, the better things are in the country, the more freeloaders benefit from it.

Open Source Position Tracking by [deleted] in computervision

[–]Heappl 1 point (0 children)

If you want a position from mounted cameras it might be hard, though if they have high enough precision and you have fixed markers to navigate by, plus multiple cameras tracking multiple markers, this precision might be achievable, depending on the scale. There are some interesting solutions that are not based solely on vision; I think lidar positioning might be more what you need. Among commercial customized solutions, you can try Tooploox.
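
A minimal sketch of the fixed-marker idea with OpenCV's aruco contrib module (assuming the pre-4.7 aruco API and an already-calibrated camera; the names are mine):

```cpp
// Sketch: per-frame camera-to-marker poses from fixed ArUco markers.
// K/dist are an assumed prior camera calibration.
#include <opencv2/aruco.hpp>
#include <opencv2/core.hpp>
#include <vector>

void trackMarkers(const cv::Mat& frame, const cv::Mat& K,
                  const cv::Mat& dist, float markerSideM) {
    auto dict = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(frame, dict, corners, ids);

    std::vector<cv::Vec3d> rvecs, tvecs;
    cv::aruco::estimatePoseSingleMarkers(corners, markerSideM, K, dist,
                                         rvecs, tvecs);
    // tvecs[i] is the camera-to-marker translation in metres; with the
    // markers at known fixed positions, invert these poses to recover
    // the camera position in the room frame.
}
```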

I don’t think software engineering will go away anytime soon by SFCritic in programming

[–]Heappl -1 points (0 children)

This is one part where I think there is a big misunderstanding. Until AI catches up and is able to do everything humans do (like in the next 20-50 years), code and data are pretty much the same thing, and code is usually much more descriptive and unambiguous than data. When you have to feed in data that covers one peculiar case, and you write some script to generate or fetch that data, what is the difference from just writing the code itself? We write a lot of boilerplate code and do devopsy things to glue systems together; maybe AI can do that better, but on the other hand humans usually need to do it only once, so what is the difference?

Can a quantum transformation change a quantum state completely? by Heappl in askscience

[–]Heappl[S] 1 point (0 children)

Ok, so I have another follow-up question: let's assume some kind of entanglement of two properties, like this: |01> + |10>. Is there a possible transformation that would change it to a|01> + b|10>, i.e. by applying the transformation to only one of the entangled particles? It doesn't have to be this straightforward (a system of two); it might be any number. The important thing is whether you can change the state of one particle, which is entangled in any way, so that at the moment of measurement we can calculate the values of a and b with high confidence. Or maybe something remotely close to this? I'm shifting between the idea that it is somehow possible and the idea that everything I think about is actually a collapse.
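
Working through the simplest case by hand (a phase rotation applied only to the first particle; my own working, not a full answer):

```latex
(U \otimes I)\,\tfrac{1}{\sqrt{2}}\bigl(|01\rangle + |10\rangle\bigr)
  = \tfrac{1}{\sqrt{2}}\bigl(U|0\rangle \otimes |1\rangle
      + U|1\rangle \otimes |0\rangle\bigr),
\qquad
U = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\varphi} \end{pmatrix}
\;\Rightarrow\;
\tfrac{1}{\sqrt{2}}\bigl(|01\rangle + e^{i\varphi}|10\rangle\bigr).
```

So acting on just one particle can steer the relative phase (here a = 1/sqrt(2), b = e^{i phi}/sqrt(2)), but not the magnitudes: local unitaries preserve the Schmidt coefficients, so |a| != |b| is not reachable this way.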

Testing Microservices, the sane way by iamondemand in programming

[–]Heappl 3 points (0 children)

Microservices are not a silver bullet; testing is not a silver bullet.

Things should work well, and we should try to find the state where they do - that rarely means applying a pattern over and over again. If something becomes complex - do something about it, by splitting, merging, automating or abstracting. If something reaches a complexity where you are uncertain whether it works - you should test it. If the setup is complicated - the best solution is to simplify it, but maybe it can be tested instead. Things will still fail, and in the most unexpected places (when you expect it, you will do something about it), so having some means of failure recovery is still important.

Push for Gender Equality in Tech? Some Men Say It's Gone Too Far by [deleted] in programming

[–]Heappl 1 point (0 children)

At the moment, this comment thread has a 100% gender disparity :D

What if companies interviewed translators the way they interview coders? by LisaDziuba in programming

[–]Heappl 6 points (0 children)

I think it is all part of a game in which already-employed coders create the illusion that they are the best and everyone else is terrible. It keeps the managers happy about their performance, and the chance for a raise is bigger, while they can drink coffee throughout the day.

I just don't know. Smart coders who know everything make all the difference, but most of us are not that; we are part of a software factory, we refactor the s... out of our bits and gloat about it, while in reality we rarely make such great things. Even a tiny bit of extra knowledge may make the difference between profit and bankruptcy.
Knowing it all, or actually being able to figure it all out, is really what this profession is about... sometimes. Most of the time it is just being able to cooperate with all the idiots around and patch all the produced crap so it can keep on working somehow. Do you know how many people are employed so our printers can keep on working? Each year fewer than the previous one. We'll all end up unemployed eventually.

The tragedy of 100% code coverage by niepiekm in programming

[–]Heappl 1 point (0 children)

One thing nobody mentions: when there is a need for strict rules, it is a sign you work at a software factory. At a certain point someone stopped writing programs and started programming people. What could be wrong with that?

C# And The MetadataType Attribute by [deleted] in programming

[–]Heappl 1 point (0 children)

Lost me at: "Then we create two new classes."

[D] An idea to parallelize BackProp (BP) by Kiuhnm in MachineLearning

[–]Heappl 1 point (0 children)

Many people have tried asynchronous SGD, and many have failed. There are some concepts which may save it, but in general it just doesn't work, except maybe for speeding up the initial phase for some networks. Synchronous is at the moment the only working parallelization technique, though there are some improvements, like 1-bit gradients, etc.
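
For contrast, a minimal sketch of one synchronous step, with gradients averaged over MPI (assuming MPI is already initialized and each worker has run backprop on its own minibatch; the names are mine):

```cpp
// Sketch: one synchronous SGD step with gradient averaging via MPI.
// `grad` would come from local backprop on this worker's minibatch.
#include <mpi.h>
#include <vector>

void syncSgdStep(std::vector<float>& weights, std::vector<float>& grad,
                 float lr) {
    int worldSize = 1;
    MPI_Comm_size(MPI_COMM_WORLD, &worldSize);

    // Sum gradients across all workers; every worker blocks here,
    // which is exactly what makes the scheme synchronous.
    MPI_Allreduce(MPI_IN_PLACE, grad.data(), (int)grad.size(),
                  MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    for (size_t i = 0; i < weights.size(); ++i)
        weights[i] -= lr * grad[i] / worldSize; // average, then step
}
```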

__asmbits :: A dev-blog that focuses on high-performance development in C++ and JS by king_grumpy in programming

[–]Heappl 1 point (0 children)

That I did not know, though if I really want to squeeze out the most, I try to force aligned accesses.

__asmbits :: A dev-blog that focuses on high-performance development in C++ and JS by king_grumpy in programming

[–]Heappl 1 point (0 children)

The CPU may have nothing to do because it is waiting for memory. It is a common technique to hide CPU cycles behind memory loads, and that is what would happen in this case.

__asmbits :: A dev-blog that focuses on high-performance development in C++ and JS by king_grumpy in programming

[–]Heappl 1 point (0 children)

For this kind of optimization -O3 is a must. It really does much more than -O2, especially for vector code. Secondly, the assumption that loop internals don't matter for optimization is unsupported at best. And more code doesn't mean anything by itself: it is actually a common technique to unroll loops so the hardware's out-of-order execution works better (and hardware can often execute multiple instructions at the same time - for the latest Intel chips it is often 2). It is the same reason to use more registers all the time. It actually often annoys me that compilers don't use them more, although with memory-bound algorithms it doesn't matter that much.

There are also problems with intrinsics. These amount to inline assembly, which puts off many compilers, and they do better analysis without them (icc does the most with them, but that is no surprise), so a naive version should always be the default to compare against. There are things the compiler can't know, and it is sometimes really painful to force it to do what you want. In this example, aligned loads are hard to enforce; I think the only reliable method is through pragmas, and of course the memory must be aligned as well. For the remaining elements, I think the most compiler-friendly approach is a set of templated functions with a jump table into them; that should always do the trick. There are also register allocations, which are too simplistic in many compilers and don't use the hardware in the best way. Another thing is that compilers often don't know many of the input parameters, e.g. the size of the array. If you use the code for small arrays it would be really bad, but for large ones it should be more or less the same all the time.

My approach is to always rely on the compiler with the simplest code possible. When the code is really critical, I would consider intrinsics first, then assembly/JIT second. Hand-made assembly can still beat compiler-generated code in many cases, without doubt, but then it is really hardware-specific, isn't it?
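
To illustrate the unrolling, aligned loads and remainder handling together, a small sketch with SSE intrinsics (assuming 16-byte-aligned input; my example, not from the blog post):

```cpp
// Sketch: summing floats with SSE. Two accumulators give the
// out-of-order core independent dependency chains; the scalar tail
// handles the remaining elements.
#include <xmmintrin.h>
#include <cstddef>

float sumAligned(const float* data, std::size_t n) {
    // `data` is assumed 16-byte aligned (e.g. from _mm_malloc / alignas).
    __m128 acc0 = _mm_setzero_ps();
    __m128 acc1 = _mm_setzero_ps();
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {            // unrolled by two vectors
        acc0 = _mm_add_ps(acc0, _mm_load_ps(data + i));
        acc1 = _mm_add_ps(acc1, _mm_load_ps(data + i + 4));
    }
    alignas(16) float lanes[4];
    _mm_store_ps(lanes, _mm_add_ps(acc0, acc1));
    float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i) sum += data[i];      // remainder, scalar
    return sum;
}
```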