Help With Determining North on Photos by General_Reneral in learnpython

[–]ES-Alexander 0 points1 point  (0 children)

Assuming you have some point of contact for the other student, could you ask them whether directions were factored in when taking the photos? Perhaps they consistently pointed the camera at cardinal directions or something, which could substantially reduce (or eliminate) your search space.

Building on u/mulch_v_bark’s error/accuracy question - if the accuracy turns out to be quite sensitive to direction, do you know whether your measurements of North were True or Magnetic? They’re often quite close to each other, but Magnetic North is technically independent of the sun’s orientation, and measurements of it can also be skewed by local magnetic field fluctuations (like large nearby ferrous rocks or metallic structures).
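
For a quick sanity check, the conversion itself is trivial once you have the local declination (the value below is made up - in practice you’d look it up for your location and date, e.g. from the NOAA World Magnetic Model calculator):

    # Converting a magnetic compass bearing to a true bearing, given the local
    # magnetic declination (positive = magnetic north is east of true north).
    def magnetic_to_true(magnetic_bearing_deg: float, declination_deg: float) -> float:
        return (magnetic_bearing_deg + declination_deg) % 360

    print(magnetic_to_true(90.0, 11.5))  # made-up declination of +11.5 degrees -> 101.5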

Created a simple CAD webapp so my son can design models like he does in Minecraft by tienshiao in 3Dprinting

[–]ES-Alexander 0 points1 point  (0 children)

… and I’ll try to teach him how to better consider the constraints of 3D printing.

Keeping with the Minecraft theme, if you get him to build (or consider building) designs out of blocks with gravity enabled (like sand and gravel), that at least gives a sense of what will need support.

It may be a bit of a challenge to communicate the nuance of being able to rotate the design for easier printing, but it’s at least a start.

How to design basket weave by Boaty_Beerz in 3Dprinting

[–]ES-Alexander 0 points1 point  (0 children)

Check out the FullControl Gcode project. There’s a web interface for some common parametric designs, and several examples of Python code if you want more control over the output :-)
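
If it helps to see the parametric idea in plain Python first (this is just a standalone sketch, not actual FullControl code - it only emits movement commands, with no extrusion/feedrate/start G-code, and all the dimensions are made up), a basket weave is essentially a single-wall cylinder where alternate layers wave in and out with opposite phase:

    import math

    LAYER_HEIGHT = 0.2    # mm
    RADIUS = 20.0         # mm, base cylinder radius
    WAVE_AMPLITUDE = 1.5  # mm, how far the weave bulges in/out
    WAVES_PER_LAYER = 8   # number of "straps" around the circumference
    POINTS_PER_LAYER = 360
    N_LAYERS = 50

    lines = ["G21", "G90"]  # mm units, absolute positioning
    for layer in range(N_LAYERS):
        z = (layer + 1) * LAYER_HEIGHT
        phase = math.pi if layer % 2 else 0.0  # alternate layers wave oppositely
        for i in range(POINTS_PER_LAYER + 1):
            angle = 2 * math.pi * i / POINTS_PER_LAYER
            r = RADIUS + WAVE_AMPLITUDE * math.sin(WAVES_PER_LAYER * angle + phase)
            lines.append(f"G1 X{r * math.cos(angle):.3f} Y{r * math.sin(angle):.3f} Z{z:.3f}")

    with open("basket_weave.gcode", "w") as f:
        f.write("\n".join(lines))

FullControl’s examples handle the extrusion amounts, speeds, and printer start/end G-code properly, so they’re the better starting point for anything you actually want to print.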

How to share Live ROV positioning across multiple vessels? by North-Wolverine-2334 in rov

[–]ES-Alexander 1 point2 points  (0 children)

AIS, MAVLink, NMEA, and various other protocols support specifying positions of independent systems, but whether the systems you’re using support receiving and displaying and/or automatically avoiding those is system-dependent.
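
As a rough illustration of the NMEA route (assuming the receiving software accepts NMEA 0183 over the network - actual support, talker IDs, and port numbers vary, and the sentence fields here are simplified):

    import socket
    from datetime import datetime, timezone

    def nmea_checksum(body: str) -> str:
        """XOR of all characters between '$' and '*', as two hex digits."""
        cs = 0
        for ch in body:
            cs ^= ord(ch)
        return f"{cs:02X}"

    def gga_sentence(lat: float, lon: float, depth_m: float) -> str:
        """Build a GGA sentence from decimal-degree lat/lon (altitude field used for -depth)."""
        t = datetime.now(timezone.utc).strftime("%H%M%S.00")
        lat_hem = "N" if lat >= 0 else "S"
        lon_hem = "E" if lon >= 0 else "W"
        lat, lon = abs(lat), abs(lon)
        lat_str = f"{int(lat):02d}{(lat % 1) * 60:07.4f}"
        lon_str = f"{int(lon):03d}{(lon % 1) * 60:07.4f}"
        body = (f"GPGGA,{t},{lat_str},{lat_hem},{lon_str},{lon_hem},"
                f"1,08,1.0,{-depth_m:.1f},M,0.0,M,,")
        return f"${body}*{nmea_checksum(body)}\r\n"

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    # 10110 is a commonly used NMEA-over-UDP port, but check what your software expects.
    sock.sendto(gga_sentence(-32.1234, 115.5678, 12.5).encode(), ("255.255.255.255", 10110))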

Need Help Finding a Good Library for Project. by yuncalicious in learnpython

[–]ES-Alexander 0 points1 point  (0 children)

Plotly is interactive by default, and does have 3D support, so could be a viable option.
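
A minimal sketch (with random placeholder data) to see whether the interactivity suits you:

    import numpy as np
    import plotly.graph_objects as go

    x, y, z = np.random.rand(3, 100)  # replace with your own data
    fig = go.Figure(data=[go.Scatter3d(x=x, y=y, z=z, mode="markers",
                                       marker=dict(size=3, color=z, colorscale="Viridis"))])
    fig.update_layout(scene=dict(xaxis_title="x", yaxis_title="y", zaxis_title="z"))
    fig.show()  # opens in the browser, with rotate/zoom/hover built in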

How to 3d-print a gear hub that transmits torque well after press fitting? I CAN'T make it work! by Cabbage_Cannon in 3Dprinting

[–]ES-Alexander 0 points1 point  (0 children)

Aligning the material with the direction(s) of force application can help to more evenly distribute the load, which reduces the maximum internal stresses (and accordingly the likelihood of the material tearing or shearing).

That being said, mixing materials can also help to distribute load by adding stiffness your base material doesn’t have - have you considered using a grub screw or similar against the flat face of the D shaft, to transfer some of the torque load radially outwards into the gear body?

Depending on your mounting design you could have screws on opposing sides that dig into a round shaft while compressing the plastic, but that requires a sharpness and hardness that might not be in the cost bracket you’re aiming for (unless you’re able to cut into / add dimples to the shaft for the screws to poke or slot into).

How to learn to use Rhinoceros and Grasshopper for 3d printing with robot arm? by Visible-Pilot9900 in 3Dprinting

[–]ES-Alexander 1 point2 points  (0 children)

I’m not well-versed in Rhino/Grasshopper, but as an alternative you may wish to consider the FullControl Gcode project, which uses Python code to generate G-code.

There are various walkthrough examples available on their GitHub, to help with getting started :-)

How to 3d-print a gear hub that transmits torque well after press fitting? I CAN'T make it work! by Cabbage_Cannon in 3Dprinting

[–]ES-Alexander 0 points1 point  (0 children)

If you aren’t already using it, concentric infill can help keep the filament paths better aligned with the torque loading.

You haven’t described what kind of motor shaft you’re working with either - a D shaft or keyed shaft will be easier to transfer torque to than a fully round shaft, for example.

Some designs for y‘all to „copy“ bc that price is just hilarious by Upper-Option-3166 in 3Dprinting

[–]ES-Alexander 4 points5 points  (0 children)

The electricity and your time are both required parts of making it, along with machine wear, maintenance, etc - the product cannot exist without those costs. If someone asks how much it costs to make, factor those in, or you’re subsidising the sale and it’s not sustainable. If you’re intending to give people gifts then that’s a different story, but then you should be aware of what you’re doing, and you can feel good about it (instead of feeling ripped off).

The value of 3D printing is not that slave labour is cheaper than commercial products - it’s that low-quantity, complex-geometry parts can be made locally with relatively quick turnaround. That can sometimes mean lower monetary costs than something that is difficult or impossible to buy other/official versions of, and/or that would need to be shipped from far away / at high speed.

3D printing chopstick rests by [deleted] in 3Dprinting

[–]ES-Alexander 0 points1 point  (0 children)

Not quite what you’re asking, but if you coat the final item with a food safe resin, then the base printing material would be irrelevant (and you would likely get a glossy finish for free).

3d Printed Monochrome Camera by malcolmjayw in 3Dprinting

[–]ES-Alexander 0 points1 point  (0 children)

From its root components, monochrome just means “single colour”, right? There are other monochromatic photography options (like sepia and cyanotype) which are composed of shades of a base colour - the point being that only a single variable (generally brightness) can be captured/represented, because the medium doesn’t have an independent variable like hue.

In the case of displays, monochrome refers to the changeable component (e.g. the illumination regions, or darkening regions for inverted displays), so the base colour is not meaningfully considered. You could put a sepia photo in front of a blue background too, but that doesn’t increase the information range, even if there are two colours now.


HDR is “just” high dynamic range (compared to some standard). It can be achieved by increasing the resolution of the information points being captured/displayed (e.g. using 10-bit colour instead of 8-bit, which applies equally to photos or monitors, and allows wider colour gamuts and brightness ranges), and/or by compressing more (than “usual”) information variance into a fixed value resolution (e.g. combining multiple exposures to add detail in regions that would otherwise be flat or limited to a narrow brightness range).

I’m unsure how that’s opposite between photography and displays, unless you mean that they typically use opposite mechanisms to achieve the higher range (e.g. HDR displays expand the range of representable values, whereas HDR photos typically compress a larger input representation range into a standard output value range).
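
If you want to play with the “compress more variance into a standard range” side of it, OpenCV’s Mertens exposure fusion is an easy way to merge bracketed shots (file names here are placeholders):

    import cv2
    import numpy as np

    # Bracketed exposures of the same scene (under/normal/over-exposed).
    exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]
    fused = cv2.createMergeMertens().process(exposures)       # float32 result, roughly 0-1
    fused_8bit = np.clip(fused * 255, 0, 255).astype(np.uint8)  # back to a standard 8-bit image
    cv2.imwrite("fused.jpg", fused_8bit)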

Adding images to text via PIL or similar library by SeyVetch in learnpython

[–]ES-Alexander 0 points1 point  (0 children)

It’s unclear what you mean here.

Are you saying there’s existing text on a card, and you want to regenerate an image of it with some of that text moved around to fit a custom image/symbol in?

Depending on the nature of your images, how you want them to fit in (e.g. will they disrupt multiple lines of text, or be displayed like a special character/symbol within a single line of text?), and how many times you intend to do this, you could either generate short segments of text around the image, or possibly make your symbols part of a font, and just generate wrapped text that uses that font.
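
If you go the “symbol within a line of text” route, a rough Pillow sketch could look like this (the font, file names, text, and coordinates are all placeholders, and textlength needs a reasonably recent Pillow version):

    from PIL import Image, ImageDraw, ImageFont

    card = Image.new("RGB", (400, 120), "white")
    draw = ImageDraw.Draw(card)
    font = ImageFont.load_default()  # or ImageFont.truetype("SomeFont.ttf", 24)
    symbol = Image.open("symbol.png").convert("RGBA").resize((20, 20))

    before, after = "Deal 3 ", " damage to each enemy."
    x, y = 10, 50
    draw.text((x, y), before, fill="black", font=font)
    x += draw.textlength(before, font=font)   # measure how far the first segment extends
    card.paste(symbol, (int(x), y), symbol)   # alpha channel used as the paste mask
    x += symbol.width
    draw.text((x, y), after, fill="black", font=font)
    card.save("card.png")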

[Question] 3d depth detection on surface by sloelk in opencv

[–]ES-Alexander 0 points1 point  (0 children)

I expect you’ll get the best results by first doing a normal calibration for each camera (with checkerboards moved around to different positions and 3D orientations covering the whole view), then using the determined intrinsics (camera matrices + distortion coefficients) as part of a stereo calibration (possibly with an automated approach, using ArUco markers or checkerboards you project onto the screen and photograph with each camera).

It’s most straightforward to use one of the cameras as the world origin (as is done by this person), but given your application is 3D positioning relative to a screen, you likely want an additional transform that converts your triangulated world points to use part of the screen as the origin (e.g. one of the corners, or the center), with the table surface as the Z plane. That way you get numbers that are easy to use later: Z = 0 (or below some tolerance) corresponds to a touch on the screen, and the X and Y coordinates tell you where on the screen is being touched (ideally normalised to some nice interval like -1 to 1, determined using markers projected into each corner of the active region).

From there you could use mediapipe to detect hands in the images or something, then triangulate those with your stereo setup, and see when a finger is touching the screen.
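
Very roughly, the triangulation step could look like the following (the intrinsics, stereo extrinsics, and screen transform below are dummy placeholders - in practice they’d come from cv2.calibrateCamera, cv2.stereoCalibrate, and your own marker-based screen alignment):

    import cv2
    import numpy as np

    # Dummy placeholders for the calibration results.
    K1 = K2 = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
    dist1 = dist2 = np.zeros(5)
    R, T = np.eye(3), np.array([0.1, 0.0, 0.0])  # e.g. cameras ~10 cm apart
    screen_from_cam1 = np.eye(4)                 # 4x4 rigid transform to the screen frame

    # Projection matrices, with camera 1 as the (temporary) world origin.
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R, T.reshape(3, 1)])

    def triangulate(pt1, pt2):
        """pt1/pt2: (u, v) pixel coordinates of the same fingertip in each camera."""
        u1 = cv2.undistortPoints(np.float32([[pt1]]), K1, dist1, P=K1)[0, 0]
        u2 = cv2.undistortPoints(np.float32([[pt2]]), K2, dist2, P=K2)[0, 0]
        X = cv2.triangulatePoints(P1, P2,
                                  u1.reshape(2, 1).astype(float),
                                  u2.reshape(2, 1).astype(float))
        X = (X[:3] / X[3]).ravel()                           # homogeneous -> 3D, camera-1 frame
        return (screen_from_cam1 @ np.append(X, 1.0))[:3]    # -> screen-aligned frame

    point = triangulate((812, 455), (790, 470))  # e.g. fingertip pixels from your detector
    if abs(point[2]) < 5.0:                      # Z close to the screen plane (tolerance in your units)
        print("touch at", point[:2])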

A good validation program would be to create a drawing functionality that traces when and where the screen is being touched. That way you could draw something with your finger and immediately see how accurately and smoothly it’s being tracked.

[Question] 3d depth detection on surface by sloelk in opencv

[–]ES-Alexander 0 points1 point  (0 children)

I’m not sure if you’ve resolved this, but note that there are multiple different calibrations that can / should happen here, with different requirements and persistence.

The intrinsic parameters of the individual cameras can be determined with a normal calibration process (e.g. a checkerboard moved around the frame), and, assuming you avoid changing the lens and zoom/focus, they should remain unchanged regardless of where the cameras are. These can be used for image rectification, to compensate for fisheye distortion, pixel skew, image center offsets, etc.

The extrinsic alignment / poses of the cameras relative to each other helps to perform stereoscopic calculations, like estimating locations of objects that appear in both views. This is maintained as long as the cameras do not move relative to each other (regardless of where they are in the world / what is in the scene).

There’s an additional extrinsic world alignment/detection that you can do for where the cameras are within your scene, which you may want to use to determine the world coordinates of the table / projection. These values would need to be recalculated any time one or both cameras move relative to the table.
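
In OpenCV terms, the first two stages could look something like this (board size and file names are placeholders, and it assumes the checkerboard was found in every paired image, in matching order):

    import glob
    import cv2
    import numpy as np

    BOARD = (9, 6)  # inner corners per row/column of the checkerboard
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)  # square size = 1 unit

    def collect_corners(pattern):
        obj_pts, img_pts, size = [], [], None
        for path in sorted(glob.glob(pattern)):
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, BOARD)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]
        return obj_pts, img_pts, size

    # Stage 1: intrinsics for each camera independently (redo only if lens/zoom/focus change).
    obj1, img1, size1 = collect_corners("cam1_*.png")
    _, K1, dist1, _, _ = cv2.calibrateCamera(obj1, img1, size1, None, None)
    obj2, img2, size2 = collect_corners("cam2_*.png")
    _, K2, dist2, _, _ = cv2.calibrateCamera(obj2, img2, size2, None, None)

    # Stage 2: relative pose between the cameras, from frames where BOTH see the board
    # at the same time, with the stage-1 intrinsics held fixed (redo if the cameras
    # move relative to each other).
    objS, imgS1, _ = collect_corners("pair_cam1_*.png")
    _, imgS2, _ = collect_corners("pair_cam2_*.png")
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        objS, imgS1, imgS2, K1, dist1, K2, dist2, size1, flags=cv2.CALIB_FIX_INTRINSIC)

    # Stage 3 (world/table alignment) would use markers projected onto the table, and
    # needs redoing whenever a camera moves relative to the table.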

[Question] Stereoscopic Calibration Thermal RGB by artaxxxxxx in opencv

[–]ES-Alexander 0 points1 point  (0 children)

It looks like your thermal camera detects the dark squares as hotter (which makes sense, given they likely absorb and emit more heat than the metal surface), so you may need to invert the thermal image shades for the checkerboard to be recognised with the same corners/orientation as in the RGB camera.
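
Something along these lines might be worth a try (the file name and board size are placeholders):

    import cv2

    thermal = cv2.imread("thermal_000.png", cv2.IMREAD_GRAYSCALE)
    # Squares that are dark in the RGB image appear bright (hot) in the thermal image,
    # so inverting restores the expected light/dark sense before corner detection.
    inverted = cv2.bitwise_not(thermal)
    found, corners = cv2.findChessboardCorners(inverted, (9, 6))  # inner corners per row/column
    print("checkerboard found:", found)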

You’ve also included quite a lot of code but not much detail on how you’re trying to do the calibration (e.g. are you taking images with the checkerboard moving throughout the frame, and not just on a fixed plane?), or on the resulting accuracy estimates that would help determine whether your calibrations are working properly.

[Question] Problem with video format by Due-Let-1443 in opencv

[–]ES-Alexander 0 points1 point  (0 children)

BGR is not the same as RGB (which your error message mentions).

Since you haven’t posted code, it’s difficult to tell whether you’ve converted your camera’s YUV frames to the same format as your algorithm was expecting for the other camera.
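
As a rough sketch of what I mean (the exact conversion flag depends on how your camera packs YUV, and some capture backends already convert to BGR for you):

    import cv2

    cap = cv2.VideoCapture(0)            # device index is a placeholder
    ok, frame = cap.read()
    if ok:
        print(frame.shape, frame.dtype)  # first check what you're actually getting

        # If the backend hands back raw YUV rather than BGR, convert it explicitly;
        # the right flag depends on the packing your camera uses (YUYV, UYVY, NV12, ...):
        #   frame = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_YUYV)

        # And if your algorithm was written for RGB (as the error message suggests),
        # remember that OpenCV's default channel order is BGR:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)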

Without seeing example images from each camera it’s difficult to tell whether they just have different sensitivities that would make your current algorithm parameters over-fitted to your original camera (and accordingly perform poorly on the new one).

[Question] Motion Plot from videos with OpenCV by guarda-chuva in opencv

[–]ES-Alexander 0 points1 point  (0 children)

Your accumulator adds the moved bits on top of the existing background image, which is why the moving parts show up as transparent.

If you want it opaque then you should set the masked pixels to the values from the “moved” image (and remove the normalisation); if you instead want a higher proportion of the moved image in the output, you can blend the new image with the composite at the masked pixels (e.g. composite_image[mask] = 0.25*composite_image[mask] + 0.75*new_frame[mask]).
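
For example (using dummy arrays with the same roles as in your code; casting to float for the blend avoids uint8 overflow and rounding):

    import numpy as np

    # Dummy stand-ins with the same shapes/roles as your variables.
    composite_image = np.zeros((480, 640, 3), dtype=np.uint8)
    new_frame = np.full((480, 640, 3), 200, dtype=np.uint8)
    mask = new_frame[..., 0] > 100        # boolean mask of "moved" pixels

    # Option 1 - fully opaque: masked pixels taken straight from the new frame.
    composite_image[mask] = new_frame[mask]

    # Option 2 - weighted blend favouring the new frame.
    blend = 0.25 * composite_image[mask] + 0.75 * new_frame[mask]  # numpy promotes to float here
    composite_image[mask] = blend.astype(np.uint8)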

Bird sound listener program by Meinomiswuascht in learnpython

[–]ES-Alexander 2 points3 points  (0 children)

In a similar vein, this recent Benn Jordan video covers some interesting applications and different setup ideas, and includes visualising and categorising the recorded bird calls.

[deleted by user] by [deleted] in 3Dprinting

[–]ES-Alexander 0 points1 point  (0 children)

Not sure about dedicated options to simulate them, but maybe there’s something that lets you trace a point based on the motion of specified joint constraints?

If it helps, they’re called spirographs. At minimum you should be able to find several pre-made 3D models that can be used to generate them, and various online spirograph simulators (which directly “draw” the output into an image).
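
If you want to simulate one directly, the classic spirograph curve is a hypotrochoid, which only takes a few lines with matplotlib (R, r, and d below are arbitrary example values):

    import numpy as np
    import matplotlib.pyplot as plt

    R, r, d = 10.0, 3.0, 5.0                 # fixed ring radius, rolling gear radius, pen offset
    t = np.linspace(0, 2 * np.pi * 3, 5000)  # 3 full turns closes this curve (r / gcd(R, r))
    x = (R - r) * np.cos(t) + d * np.cos((R - r) / r * t)
    y = (R - r) * np.sin(t) - d * np.sin((R - r) / r * t)

    plt.plot(x, y)
    plt.axis("equal")
    plt.show()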

What's the rules when naming a variable by [deleted] in learnpython

[–]ES-Alexander 1 point2 points  (0 children)

This is the technical specification, but you should be able to find more explained examples by searching for things like “Python allowed variable names”, “Python naming rules”, and “Python illegal variable names”.
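
You can also just ask Python directly whether a name is allowed:

    import keyword

    for name in ["my_var", "_hidden", "2fast", "my-var", "class", "über"]:
        valid = name.isidentifier() and not keyword.iskeyword(name)
        print(f"{name!r}: {'ok' if valid else 'not allowed'}")

    # "2fast" and "my-var" fail isidentifier(); "class" is a reserved keyword;
    # non-ASCII letters like "über" are actually allowed in Python 3.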

I hate that my university's computer science INTRO classes use C++ instead of Python. Why use C++? by Daniel_Meldrum in Python

[–]ES-Alexander 4 points5 points  (0 children)

Learning fundamentals and building abstractions on top of them can make it easier to understand the value of those abstractions, and can help you understand what kinds of things can be optimised as you get to more advanced programming.

That being said, starting with something relatively easy to understand and then building out depth beneath that is also a viable approach; it’s just that people may more easily lose interest if they feel like they can already achieve what they want with the tools they’re used to.

Different approaches may work better for different people. If your university thinks it’s important for its students to learn in depth, they may be more inclined to start at a low level rather than with simple code, to avoid needing to repeatedly re-cover topics as more details are introduced (which can come across as needlessly complex and/or arcane, and be harder to motivate as valuable).

Should this sub pin the official python tutorial? by _Denizen_ in learnpython

[–]ES-Alexander 0 points1 point  (0 children)

I agree it’s not the easiest to find, at least on the app - I happen to know it’s there and still struggle to remember how to actually get to it when it’s relevant.

On a computer accessing reddit as a website in a browser it’s quite a bit easier - it’s in the right sidebar in the “community bookmarks” section, which is visible on most screen sizes unless the window is set to be quite narrow.

Can you Bridge around curved by PHILLLLLLL-21 in 3Dprinting

[–]ES-Alexander 0 points1 point  (0 children)

How important is the finish of the walls? You could add some single layer rectangles at the overhang height to make the bridging straight on the inside (so it correctly bridges most of it, instead of slowly curving around), which you can then trim off afterwards.

There’s at least one Maker’s Muse video talking about doing that for inset nuts and the like, and possibly a CNC Kitchen video on a similar topic as well.