In this player, video is decoded on the GPU, so the CPU stays mostly idle, and WPF's vexing airspace problem is avoided.
This lets the graphics card reach its full performance when playing multi-channel camera streams or high-resolution, high-frame-rate video (the focus of this project).
It supports various network protocols such as RTSP, RTMP and FLV.
It also renders 4K and 8K video efficiently.
This project draws on Lei Xiaohua's blog; we greatly appreciate his contributions to audio and video technology.
For HDR video, gamma and contrast are exposed as shader parameters for color correction.
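The kind of correction this implies can be sketched as follows. This is an illustrative formula only, written in Python for clarity; the project's actual shader and its uniform names are not shown in this document, so `gamma` and `contrast` here are assumptions:

```python
def correct(x: float, gamma: float = 2.2, contrast: float = 1.0) -> float:
    """Apply a contrast adjustment pivoted around mid-gray, then gamma
    correction, to one normalized [0, 1] color component.

    `gamma` and `contrast` stand in for the parameters the shader
    exposes; the exact formula used by the project is an assumption.
    """
    x = (x - 0.5) * contrast + 0.5   # contrast pivots around mid-gray
    x = min(max(x, 0.0), 1.0)        # clamp to the displayable range
    return x ** (1.0 / gamma)        # gamma-encode for display
```

With neutral settings (`gamma=1.0`, `contrast=1.0`) the value passes through unchanged, which makes the defaults easy to sanity-check.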
Implementation principle:
Hardware decoding is done through the LibVLCSharp library, which calls back video frame data in YUV format (8-bit and 10-bit); the frames are then rendered through GLWpfControl (this control is based on D3DImage, so there is no airspace problem).
Video YUV data -> OpenGL -> Shader(YUV to RGB) -> picture rendering
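The shader's YUV-to-RGB step amounts to a fixed matrix multiply per pixel. A sketch of the standard BT.709 limited-range conversion, written in Python for clarity (the project's shader may use different coefficients, e.g. BT.601 for SD content or BT.2020 scaling for 10-bit HDR):

```python
def yuv709_to_rgb(y: float, u: float, v: float) -> tuple:
    """Convert one limited-range (luma 16-235, chroma 16-240) BT.709
    YUV sample, normalized to [0, 1], into full-range RGB.

    These are the textbook BT.709 coefficients; the shader in this
    project is not shown here and may differ slightly.
    """
    y = 1.1644 * (y - 16.0 / 255.0)   # expand luma from limited range
    u = u - 0.5                        # center chroma around zero
    v = v - 0.5
    r = y + 1.7927 * v
    g = y - 0.2132 * u - 0.5329 * v
    b = y + 2.1124 * u
    clamp = lambda c: min(max(c, 0.0), 1.0)
    return clamp(r), clamp(g), clamp(b)
```

Reference white (Y=235, U=V=128) maps to roughly (1, 1, 1) and reference black (Y=16, U=V=128) to roughly (0, 0, 0), which is a quick way to verify the matrix.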
Testing equipment
CPU: AMD Ryzen 7 5800H
GPU: Nvidia GeForce RTX 3050 Laptop GPU 4G
Note that on a laptop the final image is composited through the integrated GPU and power limits apply, so the measured efficiency is reduced somewhat.
4K 60fps SDR video
CPU utilization 5 ~ 10%
GPU utilization 40 ~ 50%
4K 60fps HDR video (HDR shown on an SDR screen is post-processed with tone mapping, because the conversion matrices commonly found online lose too much brightness)
CPU utilization 10 ~ 20%
GPU utilization 50 ~ 60%
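The tone-mapping operator used for HDR-to-SDR above is not specified in this document. One common choice that preserves highlights better than a plain matrix conversion is an extended Reinhard curve on linear luminance; the sketch below is an assumption for illustration, not the project's actual operator, and the `white` parameter (peak HDR level mapped to 1.0) is invented here:

```python
def reinhard_extended(x: float, white: float = 4.0) -> float:
    """Extended Reinhard tone mapping: compresses linear HDR luminance
    into [0, 1], mapping `white` exactly to 1.0 so that bright detail
    is not uniformly darkened.

    `white` is an illustrative parameter, not taken from the project.
    """
    return x * (1.0 + x / (white * white)) / (1.0 + x)
```

By construction the curve passes through 0 at black and through 1.0 at `x = white`, while mid-tones stay close to their SDR values.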
4K 144fps SDR video (the high frame rate was produced by frame interpolation, so the frame interval is unstable)
CPU utilization 10 ~ 20%
GPU utilization 60 ~ 75%
8K 60fps SDR video (playback actually stabilizes at around 40-45fps)
CPU utilization 10 ~ 20%
GPU utilization 70 ~ 80%
4-channel 1080p SDR video (30fps for the first two videos, 25fps for the last two)