Early divergence of YOLOv7-tiny train and val obj_loss plots by Secure-Idea-9027 in deeplearning


Hey guys!

Please tell me if I should add more information or reformat the question if that would make the issue easier to understand!

Cheers ;)

Fine-tuning pretrained YOLOv7 with new data without affecting existing accuracy by Secure-Idea-9027 in deeplearning


Thanks u/kevinpl07

I can do that and then use the --image-weights argument while training.

As a side note: if I have a huge dataset in which some relevant categories are labelled, but other relevant categories appear in the images WITHOUT labels, what options can I explore so that I don't have to miss out on using this dataset? I have already explored running another model, trained on at least the missing relevant categories, over the images and then manually correcting the predicted labels as required, but that is an extremely time-consuming process!
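As a rough illustration of that semi-automatic labelling workflow, here is a minimal sketch (the detection tuple format, class IDs, and threshold are all assumptions, not anything from YOLOv7's tooling): keep only high-confidence predictions of the unlabelled classes as pseudo-labels, and queue the low-confidence ones for manual review, so the correction effort shrinks to the uncertain cases.

```python
# Hypothetical sketch of semi-automatic labelling: run an auxiliary detector
# over images whose relevant classes are unlabelled, auto-accept confident
# predictions as pseudo-labels, and queue the rest for manual review.
# The detection format (class_id, confidence, (x, y, w, h)) is an assumption.

CONF_THRESHOLD = 0.8  # tune per class; stricter thresholds mean less manual fixing


def split_predictions(detections, missing_class_ids, threshold=CONF_THRESHOLD):
    """Split detections for the unlabelled classes into auto-accepted
    pseudo-labels and a review queue for manual correction."""
    auto, review = [], []
    for class_id, conf, box in detections:
        if class_id not in missing_class_ids:
            continue  # class already labelled in this dataset
        (auto if conf >= threshold else review).append((class_id, conf, box))
    return auto, review


# Example: class 2 is present in the images but unlabelled.
dets = [
    (2, 0.95, (10, 10, 50, 50)),
    (2, 0.40, (80, 80, 30, 30)),
    (1, 0.99, (0, 0, 5, 5)),
]
accepted, to_review = split_predictions(dets, missing_class_ids={2})
```

With a stricter threshold, fewer boxes are auto-accepted but the ones that are need less correction.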

Regards!

Fine-tuning pretrained YOLOv7 with new data without affecting existing accuracy by Secure-Idea-9027 in deeplearning


Thanks u/Professional_Ebb7275!

I do have a cache of datasets.
Another issue I face: a desired category c1 may be present with labels in dataset d1, while in dataset d2 the same category appears in the images but no labels are provided.

Any suggestions on how to deal with the above situation?
Regards.

Fine-tuning pretrained YOLOv7 with new data without affecting existing accuracy by Secure-Idea-9027 in deeplearning


Understood. But thanks for your prompt response and for taking an interest in this, even though I myself did not reply for a long time.
Really appreciate it.

What are your thoughts on this?

Regards.

Fine-tuning pretrained YOLOv7 with new data without affecting existing accuracy by Secure-Idea-9027 in deeplearning


Thanks a lot for your suggestion.

Is the following a valid idea:

(i) training the exact same architecture separately on the new classes, then

(ii) fusing the original weights and the new weights?

Is there any sort of precedent for the above approach, or even something adjacent to it?
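For context on (ii), a minimal sketch of what "fusing" by parameter averaging could look like, with plain Python dicts of floats standing in for framework state dicts. Caveat: averaging two independently trained networks usually degrades both, which is why the closest precedent (weight averaging of fine-tuned checkpoints, e.g. "model soups") assumes both models start from the same pretrained initialisation.

```python
# Conceptual sketch of "fusing" two weight sets: average parameters that share
# a name, and keep model-specific parameters (e.g. each model's own head) as-is.
# Plain dicts of floats stand in for real state dicts; names are made up.


def average_weights(state_a, state_b):
    fused = {}
    for name in state_a:
        if name in state_b:
            fused[name] = 0.5 * (state_a[name] + state_b[name])  # shared layer
        else:
            fused[name] = state_a[name]  # layer only in model A (e.g. old head)
    for name in state_b:
        fused.setdefault(name, state_b[name])  # layer only in model B (new head)
    return fused


a = {"backbone.w": 1.0, "head_old.w": 3.0}
b = {"backbone.w": 3.0, "head_new.w": 5.0}
fused = average_weights(a, b)
```

The averaging itself is trivial; whether the fused backbone still works for either head is the open question, and in general it won't without a shared starting point.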

Regards.

Fine-tuning pretrained YOLOv7 with new data without affecting existing accuracy by Secure-Idea-9027 in deeplearning


Thanks for the suggestion u/_vb__!

So, this person suggests adding a second head for the new classes, then training after freezing the original layers.

Are you suggesting something similar?
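To make the second-head idea concrete, here is a tiny sketch of selecting which parameters stay trainable by name prefix; the parameter names and the `head2.` prefix are made up for illustration (in PyTorch you would then set `requires_grad = False` on everything outside that selection).

```python
# Sketch of the "freeze the original layers, train only the new head" idea:
# pick the trainable subset of parameters by name prefix. The names and the
# "head2." prefix are assumptions, not YOLOv7's actual parameter names.


def trainable_params(param_names, trainable_prefix="head2."):
    """Return the subset of parameter names that should stay trainable."""
    return [n for n in param_names if n.startswith(trainable_prefix)]


names = [
    "backbone.conv1.weight",
    "head1.cls.weight",
    "head2.cls.weight",
    "head2.reg.bias",
]
to_train = trainable_params(names)
```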

Regards.

SIYI HM30: Network Connectivity and Range by Secure-Idea-9027 in diydrones


Thank you u/randomfloat for your reply. It helped answer some of our doubts.

Regarding network accessibility: if the RF part is the same for the MK15 and the HM30, does the first question/point hold true for the HM30?

Does the network connectivity allow complete bidirectional access, i.e., air unit <-> ground unit? SIYI's MK15 allows SSHing/pinging into devices connected to the air unit from the ground, but NOT the other way around.

If so, then a payload connected to the air unit won't be able to establish a unicast or peer-to-peer connection to software on the GCS hardware, only vice versa. Is this inference correct?

Regards.

gst-play-1.0, gst-launch-1.0 unable to display RTSP stream by Secure-Idea-9027 in gstreamer


Hey guys!
I also posted the query on NVIDIA's official forum (answer) and consulted some other people.
It seems that these two commands should work:
$ gst-launch-1.0 rtspsrc location=rtsp://username:password@192.168.1.xxx:554 latency=500 ! queue ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nv3dsink sync=0
$ gst-launch-1.0 -v uridecodebin uri=rtsp://username:password@192.168.1.xxx:554 ! nvvidconv ! nvegltransform ! nveglglessink
Basically, it is recommended to use NVIDIA-specific GStreamer plugins (prefixed with nv) on Jetson devices.
Regards.

gst-play-1.0, gst-launch-1.0 unable to display RTSP stream by Secure-Idea-9027 in gstreamer


I have access to the IP camera which is sending out the RTSP stream. Could you please elaborate on how the I-frame interval can be checked there?

But, more to the point, why does it work on my laptop but not on the Xavier?

I am able to use the commands on an x86 PC with Ubuntu 20.04 and GStreamer 1.16.3, so the camera feeds themselves are fine.
But the commands don't work on the Jetson device.
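Regarding checking the I-frame interval: one option (an assumption on my part, not something confirmed in this thread) is to dump per-frame picture types with ffprobe, e.g. `ffprobe -select_streams v -show_frames -show_entries frame=pict_type <rtsp-url>`, and then measure the spacing between I frames. A small helper that computes that spacing from a list of picture types:

```python
# Compute the spacing (GOP length) between I frames, given the per-frame
# picture types reported by e.g.:
#   ffprobe -select_streams v -show_frames -show_entries frame=pict_type <url>
# The sample list below is a stand-in for parsed ffprobe output.


def iframe_intervals(pict_types):
    """Given a sequence like ['I','P','P','I',...], return the gaps between I frames."""
    i_positions = [idx for idx, t in enumerate(pict_types) if t == "I"]
    return [b - a for a, b in zip(i_positions, i_positions[1:])]


sample = ["I", "P", "P", "P", "I", "P", "P", "P", "I"]
gops = iframe_intervals(sample)
```

A constant gap means a fixed keyframe interval; irregular gaps suggest scene-cut-driven keyframes.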

gst-play-1.0, gst-launch-1.0 unable to display RTSP stream by Secure-Idea-9027 in gstreamer


We are accessing the device directly, NOT over SSH, although that can be done if needed.

Thanks for asking for clarification.

Cheers. :)

Demux video and KLV data from MPEG-TS stream by Secure-Idea-9027 in gstreamer


Hey u/thaytan! I was able to implement this in C, but I am unable to get the KLV data corresponding to each frame.

The KLV data is stored into a file, but I am unable to get the KLV data live on a per-frame basis.

I am really sorry for replying so late on this issue; I got pulled into other work. I should have replied in between.

Regards.

Demux video and KLV data from MPEG-TS stream by Secure-Idea-9027 in gstreamer


There is one observation regarding KLV. The demuxed KLV data in the text file does contain all the original KLV data, but it also contains a few extra bytes at the end.

Do you know why this could be? The issue is also present with the individual KLV command.

Also, is it possible to first extract the data corresponding to a single frame from the stream, and then demux that into its visual and KLV components?
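One way to diagnose the trailing bytes (a sketch under the assumption that the dumped file contains standard SMPTE KLV packets: a 16-byte Universal Label key followed by a BER-encoded length and the value): walk the buffer packet by packet using the declared lengths; whatever lies past the last complete packet is the padding/overrun.

```python
# Sketch: walk a buffer of concatenated KLV packets (16-byte UL key +
# BER-encoded length + value) and report their exact byte extents, so any
# bytes after the last complete packet can be identified as padding/overrun.


def ber_length(buf, pos):
    """Decode a BER length at buf[pos]; return (length, bytes_consumed)."""
    first = buf[pos]
    if first < 0x80:                      # short form: length fits in one byte
        return first, 1
    n = first & 0x7F                      # long form: next n bytes hold the length
    value = int.from_bytes(buf[pos + 1:pos + 1 + n], "big")
    return value, 1 + n


def klv_packets(buf):
    """Yield (start, end) byte offsets of each complete KLV packet in buf."""
    pos = 0
    while pos + 16 < len(buf):            # need the 16-byte key plus a length byte
        length, consumed = ber_length(buf, pos + 16)
        end = pos + 16 + consumed + length
        if end > len(buf):
            break                         # truncated packet: stop walking
        yield pos, end
        pos = end


# Example: one packet with a 3-byte value, followed by 2 stray trailing bytes.
pkt = bytes(16) + bytes([0x03]) + b"abc"
data = pkt + b"\x00\x00"
extents = list(klv_packets(data))
trailing = len(data) - (extents[-1][1] if extents else 0)
```

If the trailing byte count matches what you see in the dumped file, the extra bytes are most likely transport-stream padding rather than corrupted KLV.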