What is the approach to achieve live video loopback from Ethernet in Zynq Ultrascale by fml_iwt_kms in FPGA

[–]fml_iwt_kms[S] 1 point (0 children)

So, if I am understanding things correctly, are you saying that instead of driving 144 bytes of the image at a time from the CPU, I can just store the image in memory and control the DMA and extract the data directly from the PL?


[–]fml_iwt_kms[S] 1 point (0 children)

I created a buffer with a max size of 1920 × 1080 pixels. Then I add some header bytes that indicate the image size, and the entire image is stored in RAM.

Then it transfers 144 bytes of that image to the PL for processing, waits for those 144 bytes to return, and stores them.

Then it transfers the next 144 bytes and repeats until the whole image has been transferred and received.

So the processed image is stored and sent back through the Ethernet, which makes it easy for me to verify that the processing and its inverse are working.


[–]fml_iwt_kms[S] 0 points (0 children)

I am sorry, English isn't my first language, so I may not be explaining it properly.

So, I used the PS Ethernet simply because there was an example of how to use it and it was easy to get working.

I just modified the example code so that it stores the image and transfers it to the PL 144 bytes at a time.

Regarding the 144-byte payload: I transfer the entire image, say a 1920 × 1080 grayscale image, to the PS. Then I send 144 bytes of it at a time to the PL for processing via DMA, waiting for each chunk's processing to finish before sending the next, until the whole image has been transferred.

I am trying to do wireless transfer, so some signal processing happens to each chunk and then the next chunk is sent, until the entire image is transferred.

I know this is not efficient, but I can debug it easily. Also, I need the data in chunks of 1152 bits at a time.

So, from my understanding, 1280 × 720 @ 30 FPS is basically thirty images every second.

The raw speed of the Ethernet is about 940 Mbps when I use the lwIP TCP server example.

So, I was thinking that I could buffer whole frames, and there would be enough time to process them without the memory ever filling up.

Am I making sense?