Hardware video acceleration on Linux

With modern graphics cards, it’s often possible to offload video encoding and decoding from the CPU to the GPU, reducing power usage and making more resources available to the rest of the system. Compared to CPUs, GPUs are much more efficient at this job. However, both hardware and software support are required for this offloading, and the latter in particular (at least in the Linux world) has undergone much evolution in recent years. Online documentation is therefore often sparse, incomplete, inconsistent, or outdated. Furthermore, support for hardware video acceleration under Linux is unfortunately split across different APIs with different levels of support.

Benefits

Historically, the benefits of hardware acceleration under Linux have been uncertain, but it seems likely that support today has improved drastically. In at least some relatively typical scenarios, the performance gains of using hardware decoding can be huge, with reductions in CPU usage of around 90%.

APIs and Hardware / Software Support

  • VA-API — Supported on Intel, AMD, and NVIDIA (the latter only via the open-source Nouveau drivers). Widely supported by software, including Kodi, VLC, MPV, Chromium, and Firefox. Its main limitation is the lack of support in the proprietary NVIDIA drivers.
  • VDPAU — Fully supported on AMD and NVIDIA (both proprietary and Nouveau). Supported by most desktop applications such as Kodi, VLC, and MPV, but has no support at all in Chromium or Firefox. Its main limitations are poor, incomplete Intel support and the lack of browser support for web video acceleration.
  • NVENC/NVDEC — A proprietary API supported exclusively by NVIDIA. Only supported in a few major applications (FFmpeg and OBS Studio for encoding, FFmpeg and MPV for decoding). Its main limitation is limited software and hardware support across the board, owing to its proprietary nature.

Installation

VA-API

VA-API sees broad software support and is even the default choice in applications like MPV when it’s available and hardware decoding is enabled.

For Nouveau and the various AMD drivers, support can be added simply by installing the mesa-va-drivers package.
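On Debian and derivatives, that amounts to a single apt command (a sketch, assuming sudo is available):

# VA-API drivers for the AMD drivers and Nouveau
sudo apt install mesa-va-drivers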

For Intel, support is split by hardware generation, and into free and non-free drivers. The non-free drivers are necessary to encode media, while the free drivers can only decode.

For Gen 8+ Intel hardware, the free driver can be installed with the intel-media-va-driver package. You can find the non-free driver in the intel-media-va-driver-non-free package after adding a non-free component to your apt sources.

For older Intel hardware, the free driver can be installed with the i965-va-driver package. The non-free driver can be installed with the i965-va-driver-shaders package after adding a non-free component to your apt sources. This driver supports up to Gen 9 GPUs. If both drivers are installed, the newer driver from intel-media-va-driver is preferred over i965-va-driver (since Debian 11/Bullseye).
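A sketch of the corresponding apt commands (install only the variant that matches your hardware generation; the non-free packages require a non-free component in your apt sources):

# Gen 8+ Intel GPUs
sudo apt install intel-media-va-driver           # free, decoding only
sudo apt install intel-media-va-driver-non-free  # non-free, adds encoding

# Older Intel GPUs (up to Gen 9)
sudo apt install i965-va-driver          # free, decoding only
sudo apt install i965-va-driver-shaders  # non-free, adds encoding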

Driver selection can be overridden by setting the environment variable LIBVA_DRIVER_NAME to a specific driver, e.g., LIBVA_DRIVER_NAME=i965 (to use the driver from i965-va-driver on Bullseye) or LIBVA_DRIVER_NAME=iHD (to use the driver from intel-media-va-driver on Debian 10/Buster). See EnvironmentVariables for more details on how to set this environment variable system-wide or per user.
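For example, to try a specific driver for a single command, or to set it for the current shell session (a sketch; vainfo is described below):

# Force the legacy i965 driver for one invocation only
LIBVA_DRIVER_NAME=i965 vainfo

# Or export it for the rest of the shell session
export LIBVA_DRIVER_NAME=iHD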

VDPAU

VDPAU faces considerably more limitations than VA-API, but it is nonetheless the only option for some users, particularly anyone using the proprietary NVIDIA drivers. It is not supported in any major browser except GNOME Web, but it is useful for local playback; MPV is recommended for this.

To enable VDPAU support for the AMD drivers (radeon and amdgpu), along with the open-source Nouveau driver for NVIDIA cards, install the vdpau-driver-all package.

This will also enable VDPAU support over the OpenGL/VA-API backend for Intel GPUs. However, this has severe stability issues and may not work at all on some Intel devices. If possible, it is strongly recommended to use VA-API instead on Intel.

To enable VDPAU support for the proprietary NVIDIA drivers, you must choose the relevant package for your driver version. If you installed the latest drivers via the nvidia-driver package, then you can simply install the nvidia-vdpau-driver package.
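A sketch of the corresponding apt commands:

# AMD (radeon/amdgpu) and Nouveau
sudo apt install vdpau-driver-all

# Proprietary NVIDIA driver installed via the nvidia-driver package
sudo apt install nvidia-vdpau-driver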

NVENC/NVDEC

NVDEC is supported by the libnvcuvid1 package.

NVENC is supported by the libnvidia-encode1 package.

For the legacy drivers, instead choose either the libnvidia-legacy-340xx-encode1 or libnvidia-legacy-390xx-encode1 package.

These are only the non-free runtime libraries, however. Applications in Debian’s main archive, including FFmpeg (starting with libavcodec58 7:4.4.1-2) and users of FFmpeg’s libraries, will load the libraries from the non-free driver if they are available.
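A sketch of installing the runtime libraries for the current driver series (substitute the legacy package names above for older cards):

# NVDEC (decoding) and NVENC (encoding) runtime libraries
sudo apt install libnvcuvid1 libnvidia-encode1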

Checking hardware support

You can get a full report of whether VA-API or VDPAU is functional, and which codecs each supports, by installing the vainfo and vdpauinfo packages and running the commands of the same names.
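For example (a sketch; run whichever tool matches the API you set up):

sudo apt install vainfo vdpauinfo
vainfo      # lists the VA-API driver in use plus supported profiles/entrypoints
vdpauinfo   # lists VDPAU capabilities per codec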

Intel

vainfo will display codecs supported for decoding and encoding.

VAEntrypointVLD  = decode
VAEntrypointEnc* = encode
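A quick way to filter the report down to just these entrypoints (a sketch):

vainfo 2>/dev/null | grep -E 'VAEntrypoint(VLD|Enc)'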

Intel GPU hardware acceleration can be confirmed via intel_gpu_top, found in the intel-gpu-tools package. Anything greater than 0% in the "Video" row of the ENGINE section confirms that it is working for whatever application and codec you are testing, e.g.:

# intel_gpu_top
intel-gpu-top -  173/ 174 MHz;  71% RC6;  0.16 Watts;  403 irqs/s

      IMC reads:     1020 MiB/s
      IMC writes:     226 MiB/s

            ENGINE     BUSY                            MI_SEMA MI_WAIT
       Render/3D/0    5.12% |█▎                     |      0%      0%
         Blitter/0    0.00% |                       |      0%      0%
           Video/0    6.47% |█▋                     |      0%      0%
    VideoEnhance/0    0.00% |                       |      0%      0%

Note: If using intel_gpu_top, test each application individually.

Application Support

Application support for hardware acceleration varies, and each application requires individual configuration. Following are details for various applications.

mpv

mpv has good hardware acceleration support, although it is not enabled by default. To enable it, use the --hwdec command-line switch. It can also be made the default by adding a hwdec line to the mpv configuration file (e.g., $HOME/.config/mpv/mpv.conf). hwdec can take various values, although supplying the switch with no value should normally be sufficient; see the mpv manpage for more details (which recommends against setting the switch).
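A minimal sketch, assuming a VA-API-capable setup (video.mkv is a placeholder file name, and auto-safe/vaapi are examples of values hwdec accepts; check your mpv manpage for the full list):

# One-off, from the command line
mpv --hwdec=auto-safe video.mkv

# Or make it the default via ~/.config/mpv/mpv.conf:
#   hwdec=auto-safe     (or e.g. hwdec=vaapi to force a specific API)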

If hardware acceleration is being used, mpv’s output will contain lines like the following:

libva info: VA-API version 0.39.4
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
AO: [alsa] 48000Hz stereo 2ch float
Using hardware decoding (vaapi).
VO: [opengl] 1920x816 vaapi

VLC

Hardware acceleration in VLC is controlled in the GUI via “Tools → Preferences → Input / Codecs → Hardware-accelerated decoding”, or via the CLI option --avcodec-hw=<value> (a value is mandatory).
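For example (a sketch; video.mkv is a placeholder file name, and vaapi/any are examples of accepted values):

# Request VA-API explicitly
vlc --avcodec-hw=vaapi video.mkv

# Or let VLC pick whichever backend is available
vlc --avcodec-hw=any video.mkv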

If hardware acceleration is being used, VLC’s output will contain lines like the following:

libva info: VA-API version 0.39.4
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
[00007f082ce97280] avcodec decoder: Using Intel i965 driver for Intel(R) Broadwell - 1.7.3 for hardware decoding

Please be aware that VLC 3.0.x in bookworm has limited VA-API support. Hardware acceleration using VDPAU is expected to work.

Browser support

Web video is one of the most important use-cases for hardware video acceleration, as without it, sites like YouTube will cause heavy CPU usage (and therefore power consumption), a particular concern on mobile devices such as laptops and tablets.

  • Chromium or Chrome — VA-API support is enabled by default in Chromium.
  • Firefox — starting with version 95, VA-API is supported but must be manually enabled; see Firefox for how to enable it. Firefox-ESR is projected to be updated to version 102 sometime in Q2/Q3 2022.
  • GNOME Web has VA-API support through the gstreamer1.0-vaapi package, if it is installed.

VDPAU isn’t supported at all in Chromium or Firefox. The only browser that supports it is GNOME Web, available in the epiphany-browser package. GNOME Web leverages GStreamer for this support, also requiring the gstreamer1.0-plugins-bad package for VDPAU support to work properly.
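A sketch of installing GNOME Web together with the GStreamer packages mentioned above:

sudo apt install epiphany-browser gstreamer1.0-vaapi gstreamer1.0-plugins-bad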

NVIDIA Hardware Acceleration on Linux

A number of Unmanic plugins support hardware acceleration using an NVIDIA GPU. This includes hardware acceleration (HWA) of video decoding/encoding in tools like FFmpeg or HandBrake CLI using the NVIDIA NVDEC/NVENC decoder/encoder.

Follow these instructions to configure your system and the Unmanic Docker container for hardware-accelerated video encoding/decoding with an NVIDIA GPU.

Instructions:

1) Check GPU Support

You can find an official list of NVIDIA graphics cards and their supported codecs here.

Check that your GPU is listed and is capable of doing what you want it to.

2) Install GPU Driver

Ensure you have installed the NVIDIA drivers.

This is required even if you intend to run Unmanic within a Docker container.

You can download the latest NVIDIA GPU driver from here.

The minimum required NVIDIA driver version is 418.30 for this to work in Linux.
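Once the driver is installed, a quick sketch to confirm that it meets this minimum:

# Prints only the installed driver version; it should be 418.30 or newer
nvidia-smi --query-gpu=driver_version --format=csv,noheader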

It is also recommended to patch the drivers by following the instructions here. This project removes the restriction on the maximum number of simultaneous NVENC video encoding sessions that NVIDIA imposes on consumer-grade GPUs.

3) FFmpeg installation with NVENC support

Only worry about this if you are running Unmanic natively on Linux. The Docker image has FFmpeg pre-installed with support for NVENC/NVDEC.

Install FFmpeg for your operating system.

It is recommended to use the Jellyfin FFmpeg builds; however, any recent release of FFmpeg will work fine.

To ensure your FFmpeg installation is capable of running the NVENC encoders, run this command:

for i in encoders decoders filters; do echo $i:; ffmpeg -hide_banner -${i} | egrep -i "npp|cuvid|nvenc|cuda|nvdec"; done

You should see a list of available encoders and decoders.
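As a further sanity check, a sketch of a short test encode with the h264_nvenc encoder (input.mkv and output.mkv are placeholder file names):

# Encode 10 seconds of the input using NVENC; an error here usually points
# to a driver problem or an FFmpeg build without NVENC support
ffmpeg -hide_banner -i input.mkv -t 10 -c:v h264_nvenc -c:a copy output.mkv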

4) Running in Docker with NVENC support

Installing the NVIDIA Container Toolkit

If you intend to use Unmanic inside a Docker container, you will also need to pass through the required devices to the container. With NVIDIA this is done by installing the nvidia-docker2 package on your host.
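A sketch of the host-side installation, assuming NVIDIA's container toolkit apt repository has already been configured (see NVIDIA's documentation for the repository setup on your distribution):

sudo apt update
sudo apt install -y nvidia-docker2
sudo systemctl restart docker   # the daemon must be restarted to register the new runtime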

Once you have followed these steps, you can test that the Unmanic Docker container can use the NVENC hardware by running:

docker run --rm --gpus all --entrypoint="" josh5/unmanic nvidia-smi 

You should see output similar to the following:

Sun Apr 17 05:31:44 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.54       Driver Version: 510.54       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
|  0%   34C    P8    N/A / 120W |    185MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Creating the Docker container

# PUID/PGID - User and group id for folder/file permissions
PUID=$(id -u)
PGID=$(id -g)

# CONFIG_DIR - Where your settings are saved
CONFIG_DIR=/config

# LIBRARY_DIR - The location/locations of your library
LIBRARY_DIR=/library

# CACHE_DIR - A tmpfs or a folder for temporary conversion files
CACHE_DIR=/tmp/unmanic

# NVIDIA_VISIBLE_DEVICES - The GPUs that will be accessible to the container
NVIDIA_VISIBLE_DEVICES=all

docker run -ti --rm \
    -e PUID=${PUID} \
    -e PGID=${PGID} \
    -e NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES} \
    --runtime=nvidia \
    -p 8888:8888 \
    -v ${CONFIG_DIR}:/config \
    -v ${LIBRARY_DIR}:/library \
    -v ${CACHE_DIR}:/tmp/unmanic \
    josh5/unmanic:latest
Alternatively, the same container can be created with docker-compose:

# Variables that will need to be changed:
# <PUID>                    - User id for folder/file permissions
# <PGID>                    - Group id for folder/file permissions
# <NVIDIA_VISIBLE_DEVICES>  - The GPUs that will be accessible to the container
#    Options: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html#gpu-enumeration
# <CONFIG_DIR>              - Path where Unmanic will store config files
# <LIBRARY_DIR>             - Path where you store the files that Unmanic will scan
# <CACHE_DIR>               - Cache path for in-progress encoding tasks
#
---
version: '2.4'
services:
  unmanic:
    container_name: unmanic
    image: josh5/unmanic:latest
    ports:
      - 8888:8888
    environment:
      - PUID=<PUID>
      - PGID=<PGID>
      - NVIDIA_VISIBLE_DEVICES=<NVIDIA_VISIBLE_DEVICES>
    volumes:
      - <CONFIG_DIR>:/config
      - <LIBRARY_DIR>:/library
      - <CACHE_DIR>:/tmp/unmanic
    runtime: nvidia  # For H/W transcoding using the NVENC encoder
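After filling in the placeholders, bring the stack up as usual (a sketch; use the standalone docker-compose binary instead if that is what your installation ships):

docker compose up -d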
