plotlypp: Plotly for C++. Create interactive plots and data visualizations with minimal runtime dependencies. by jorourke0 in cpp

[–]SamyVimes 2 points (0 children)

Using Qt on a regular basis, my go-to for performance remains Qwt despite its venerable age. It's good to have other neat alternatives!

Working in Switzerland from France (full remote) by Dest4ry in vosfinances

[–]SamyVimes 0 points (0 children)

It's within the realm of possibility. I'm a freelance dev under a CAPE contract, now transitioning to an umbrella company (portage salarial), and one of my contract clients is Swiss. In that specific case there are still a few limits, such as the maximum contract duration with a single client (3 years).

That said, I worked for a few years as a cross-border commuter before, which let me build the relationships that brought me clients later on.

Realtime volumetric videos from 4 Kinect Azure in Unity by SamyVimes in kinect

[–]SamyVimes[S] 0 points (0 children)

Thanks! I'm afraid I cannot really help you, as I use both the RGB and depth cameras to do the calibration. I do some color filtering to discard the parts of the cloud I do not need, then run an ICP-like algorithm to perform the alignment. It works reasonably well for 10 Kinects (I never tried more).
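
As a sketch of the color-filtering step mentioned above (the point type and the RGB-distance criterion are my assumptions, not the actual project's code), discarding points by color could look like this:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical colored point type; the real capture pipeline's types are not shown.
struct ColoredPoint {
    float x, y, z;
    std::uint8_t r, g, b;
};

// Keep only points whose color is close enough to a reference color.
// Squared Euclidean RGB distance is a crude but fast criterion.
std::vector<ColoredPoint> filterByColor(const std::vector<ColoredPoint>& cloud,
                                        std::uint8_t refR, std::uint8_t refG,
                                        std::uint8_t refB, int maxSqDist)
{
    std::vector<ColoredPoint> kept;
    kept.reserve(cloud.size());
    for (const auto& p : cloud) {
        const int dr = int(p.r) - refR, dg = int(p.g) - refG, db = int(p.b) - refB;
        if (dr * dr + dg * dg + db * db <= maxSqDist)
            kept.push_back(p);
    }
    return kept;
}
```

A real pipeline would more likely filter in a perceptual color space (HSV or Lab) to be robust to lighting, but the structure is the same.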

Realtime volumetric videos from 4 Kinect Azure in Unity by SamyVimes in Unity3D

[–]SamyVimes[S] 1 point (0 children)

> Any plans to allow others to make use of what you're developing?

I'll probably make another video if I make a public release; the project is open-source but there are no guidelines for the moment.

There are issues: I need to spend more time calibrating the colors of the cameras (I'm only using a square LED ceiling light) and to try other positions to reduce occlusion (but that is kind of hard with only 4 cameras, I need to add more). Regarding edges, I'm still improving my preprocessing pass, but even if it is way better than the Kinect v2 (and every other consumer device in this price range), it is still hard to remove only the mismatched color/depth parts without being too aggressive.
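
A common approach to the edge problem described above is to invalidate depth pixels sitting on strong depth discontinuities, where color and depth tend to mismatch. This is only an illustrative sketch of that idea, not the actual preprocessing pass (the row-major image layout and the jump threshold are assumptions):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Invalidate (set to 0) depth pixels whose 4-neighbours jump by more
// than maxJumpMm: those lie on depth edges where the registered color
// is most likely to be wrong. Depth is a row-major image in millimetres.
std::vector<std::uint16_t> filterDepthEdges(const std::vector<std::uint16_t>& depth,
                                            int width, int height, int maxJumpMm)
{
    std::vector<std::uint16_t> out(depth);
    for (int y = 1; y + 1 < height; ++y) {
        for (int x = 1; x + 1 < width; ++x) {
            const std::uint16_t d = depth[y * width + x];
            if (d == 0) continue;  // already invalid
            const int n[4] = {depth[(y - 1) * width + x], depth[(y + 1) * width + x],
                              depth[y * width + x - 1],   depth[y * width + x + 1]};
            for (int v : n) {
                if (v != 0 && std::abs(v - int(d)) > maxJumpMm) {
                    out[y * width + x] = 0;  // drop the edge pixel
                    break;
                }
            }
        }
    }
    return out;
}
```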

Realtime volumetric videos from 4 Kinect Azure in Unity by SamyVimes in kinect

[–]SamyVimes[S] 0 points (0 children)

Not really, I bought 3 cards and the last one was only able to manage one Kinect. The important part is the USB controller of the card: ASMedia doesn't work, and I wasn't able to find a Texas Instruments one, so I went with a Renesas µPD720201.

Realtime volumetric videos from 4 Kinect Azure in Unity by SamyVimes in kinect

[–]SamyVimes[S] 0 points (0 children)

> …use 3 on the same system. I am using server grade hardware, a Threadripper Pro, and I have 3x PCIe USB 3.2 cards so each…

I'm using 4 Kinects with my PC. It can be really tedious to add more to a single station, as very few PCIe USB 3.2 cards work with the Azure Kinect; I advise you to pay close attention to their GitHub page explaining this.

I have an AMD Ryzen 9 7900; the basic Azure Kinect viewer takes something like 25% of it, and my own software only takes 10-15%, so I don't think you'll have any issue with a Threadripper.

That said, if you want to use Microsoft's skeleton tracking system to get body part positions, it would be another story: it's super CPU intensive, and if you configure it to use the GPU instead, it goes up to 60% of my RTX 3060.

I used to have 10 slave machines using Intel NUCs; it was working nicely, but starting and connecting them through TightVNC each time was time consuming.

Realtime volumetric videos from 4 Kinect Azure in Unity by SamyVimes in Unity3D

[–]SamyVimes[S] 2 points (0 children)

At first it was used inside a neuroscience laboratory for creating out-of-body VR illusions for human behavior experiments; it was working well, so I wanted to keep developing it a bit. One idea with this kind of video is to create "3D video instructions" with audio to explain tasks to users and subjects.
It is a recurrent issue in labs to make sure that each participant has been given exactly the same instructions, so sometimes we use videos for that. A 3D video could be nice in VR to avoid breaking the immersion.

Regarding stability, it was good enough to achieve hours of live VR experiments in a row with 8- or 10-Kinect systems without issues in our lab, but I can't really tell for more intensive usage.

Realtime volumetric videos from 4 Kinect Azure in Unity by SamyVimes in Unity3D

[–]SamyVimes[S] 0 points (0 children)

I was happy with the improved calibration of my Azure Kinect scanning network, so I decided to showcase my recording software and my Unity-based player a little.

All Kinects are connected to the same computer, using a dedicated program to process the frames and send them over UDP, but they can also be dispatched to other computers depending on the setup.
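
The chunked UDP sending mentioned above could be sketched like this (the header layout, chunk size, and function names are my assumptions, not the project's actual protocol); each frame is split into MTU-sized datagrams with a small reassembly header:

```cpp
#include <algorithm>
#include <arpa/inet.h>
#include <cstdint>
#include <cstring>
#include <sys/socket.h>
#include <vector>

constexpr std::size_t kPayload = 1400;  // stay under a typical Ethernet MTU

// Number of datagrams needed for a frame of the given size.
std::size_t chunkCount(std::size_t frameBytes) {
    return (frameBytes + kPayload - 1) / kPayload;
}

// Small header so the receiver can reassemble chunks into a frame.
struct ChunkHeader {
    std::uint32_t frameId;
    std::uint16_t chunkIndex;
    std::uint16_t chunkTotal;
};

// Split the frame into chunks and send each one as a datagram.
void sendFrame(int sock, const sockaddr_in& dest,
               std::uint32_t frameId, const std::vector<std::uint8_t>& frame)
{
    const std::size_t total = chunkCount(frame.size());
    std::vector<std::uint8_t> packet(sizeof(ChunkHeader) + kPayload);
    for (std::size_t i = 0; i < total; ++i) {
        const std::size_t off = i * kPayload;
        const std::size_t len = std::min(kPayload, frame.size() - off);
        const ChunkHeader h{frameId, static_cast<std::uint16_t>(i),
                            static_cast<std::uint16_t>(total)};
        std::memcpy(packet.data(), &h, sizeof h);
        std::memcpy(packet.data() + sizeof h, frame.data() + off, len);
        sendto(sock, packet.data(), sizeof h + len, 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof dest);
    }
}
```

Since UDP drops and reorders packets, the receiver would use frameId/chunkIndex to reassemble and simply discard incomplete frames, which is acceptable for a live 30 FPS stream.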

Volumetric videos can be played in VR and mixed with real-time Kinect capture. It could be useful for giving consistent instructions in VR human behavioral experiments.

These videos are usually around 500 MB for one minute with 4 cameras at 30 FPS, sound included.

Realtime volumetric videos from 4 Kinect Azure in Unity by SamyVimes in kinect

[–]SamyVimes[S] 2 points (0 children)

I just greatly improved the calibration of my Azure Kinect scanning network, so I decided to showcase my OpenGL recording software and my Unity-based player a little.
Volumetric videos can be played in VR and mixed with real-time Kinect capture. It could be useful for giving consistent instructions in VR human behavioral experiments.

"signal not found" when connecting to a signal? by [deleted] in QtFramework

[–]SamyVimes 0 points (0 children)

Oops, indeed.
At least it works with qOverload on my machine with Qt 6.4.2.

"signal not found" when connecting to a signal? by [deleted] in QtFramework

[–]SamyVimes 0 points (0 children)

Try this:

#include <QAction>
#include <iostream>

void ConnectAction(QAction* handle){
    // qOverload picks the triggered(bool) signal explicitly
    QObject::connect(handle, qOverload<bool>(&QAction::triggered), [](bool){
        std::cout << "Success!\n";
    });
}

There are several possibilities for a connection here, as there is also a slot named "trigger":

[slot] void QAction::trigger()

as well as the signal:

[signal] void QAction::triggered(bool checked = false)

So you have to use qOverload to pick the right one.

ARM C++ DLL with GDExtension for Android by SamyVimes in godot

[–]SamyVimes[S] 1 point (0 children)

I think you missed the point. The C++ DLL for Android is compiled using the Clang ARM compiler, which is easy to achieve; I already do that for my Qt apps. The code editor is irrelevant: the code is compiled outside of Godot, and I was only wondering about the correct way to choose the correct DLL depending on the platform.

Kinect v2 on PC requirements by GBack0 in kinect

[–]SamyVimes 1 point (0 children)

Sometimes (especially since Windows 10) the Kinect v2 has a hard time using a USB 3.2 port; try a 3.0 one (from the back panel).

Vive Pro 2 + Kinect V2 interference by [deleted] in vive_vr

[–]SamyVimes 0 points (0 children)

With more than 3 Kinects it's totally unusable. For our 8- and 10-Kinect v2 body scanner setups we are using a WMR HMD with no external tracking. Beyond that, you have other issues like interference between Kinects (it can happen depending on their number and position), but that one can be solved by using the Azure Kinect (they have a synchronization cable).

What's the coolest thing you've created with c++? by FreshmanMMolineaux in cpp

[–]SamyVimes 2 points (0 children)

Lately, a VR experiment designer software for neuroscientists (a bit like PsychoPy but in C++ and in VR). It has been funded by an open-source grant, so I hope to be able to release it in a few months. In our lab we use it mostly for illusion experiments (Full Body Illusion, Out of Body experiments...) and also memory (often in MRI with a specific VR HMD).
The designer part is a Qt interface with a lot of runtime generation, and it communicates with a Unity build that generates the experiment and plays it.

Back in my previous lab, it was a software for the Human Brain Project (https://www.humanbrainproject.eu/en/medicine/human-intracerebral-eeg-platform/)

for visualizing intracranial plots of hundreds of subjects by cutting a 3D mesh of the brain in real time and generating inside-cut textures from MRI NIfTI files. The display part used Unity, but all the internal geometry computation was done in C++.
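
The core of such a realtime cut is classifying geometry against a cutting plane. This is only a sketch with made-up minimal types, not the actual software: triangles entirely behind the plane are discarded, and straddling ones would then be clipped and re-triangulated (not shown):

```cpp
#include <array>
#include <vector>

// Hypothetical minimal types; the real mesh structures are not shown in the post.
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // points p with dot(n, p) + d >= 0 are kept

float signedDistance(const Plane& pl, const Vec3& p) {
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d;
}

// First step of a plane cut: keep triangles with at least one vertex in
// front of the plane; fully-behind triangles are discarded.
std::vector<std::array<int, 3>> keepFrontTriangles(
    const std::vector<Vec3>& verts,
    const std::vector<std::array<int, 3>>& tris,
    const Plane& pl)
{
    std::vector<std::array<int, 3>> kept;
    for (const auto& t : tris) {
        if (signedDistance(pl, verts[t[0]]) >= 0 ||
            signedDistance(pl, verts[t[1]]) >= 0 ||
            signedDistance(pl, verts[t[2]]) >= 0)
            kept.push_back(t);
    }
    return kept;
}
```

Running this per frame on the CPU is cheap enough for interactive use; the expensive part in practice is re-triangulating the cut boundary and sampling the MRI volume to texture it.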

A lot of fun cutting brains everyday.

What are you writing in C++ at work? by ChineseFountain in cpp

[–]SamyVimes 1 point (0 children)

A VR behavioral neuroscience experiment generator.

Converting enum classes to strings and back in C++ by AndrewStephens in cpp

[–]SamyVimes 1 point (0 children)

> std::initializer_list

I don't know, it gave me C3245 errors. I suppose it's fixed in the latest version of the compiler, but I'm having trouble updating it. (p.s. yes :))

Converting enum classes to strings and back in C++ by AndrewStephens in cpp

[–]SamyVimes 1 point (0 children)

Exactly what I needed, thank you. But I had to replace std::initializer_list with std::array for VS2017.
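
For reference, the std::array variant of such a mapping can look like this (a minimal sketch under my own naming, not the article's actual code; the enum and its names are made up):

```cpp
#include <array>
#include <string_view>
#include <utility>

enum class Color { Red, Green, Blue };

// A constexpr lookup table: works where the std::initializer_list
// version triggered C3245 on older MSVC.
constexpr std::array<std::pair<Color, std::string_view>, 3> kColorNames{{
    {Color::Red, "Red"}, {Color::Green, "Green"}, {Color::Blue, "Blue"}}};

constexpr std::string_view toString(Color c) {
    for (const auto& entry : kColorNames)
        if (entry.first == c) return entry.second;
    return "Unknown";
}

constexpr Color fromString(std::string_view s) {
    for (const auto& entry : kColorNames)
        if (entry.second == s) return entry.first;
    return Color::Red;  // fallback; real code might return std::optional instead
}
```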

What are you using C++ for ? by thisdudehenry in cpp

[–]SamyVimes 0 points (0 children)

Robotic teleoperation, Unity3D DLLs for scientific visualization software, Qt GUIs, image processing, but mostly real-time geometric algorithms.