What is DRM in Linux?

Direct Rendering Manager (DRM)

The DRM is a kernel module that gives direct hardware access to DRI clients.

This module deals with DMA, AGP memory management, resource locking, and secure hardware access. In order to support multiple simultaneous 3D applications, the 3D graphics hardware must be treated as a shared resource. Locking is required to provide mutual exclusion. DMA transfers and the AGP interface are used to send buffers of graphics commands to the hardware. Finally, there must be security to prevent clients from escalating privileges using the graphics hardware.

Where does the DRM reside?

Since internal Linux kernel interfaces and data structures may be changed at any time, DRI kernel modules must be specially compiled for a particular kernel version. The DRI kernel modules reside in the /lib/modules/<kernel-version>/kernel/drivers/gpu/drm directory. (The kernel modules were in the /lib/modules/<kernel-version>/kernel/drivers/char/drm directory before kernel version 2.6.26.) Normally, the X server automatically loads whatever DRI kernel modules are needed.

For each 3D hardware driver there is a kernel module, each of which requires the generic DRM support code.

The source code is at git://anongit.freedesktop.org/git/mesa/drm

In what way does the DRM support the DRI?

The DRM supports the DRI in three major ways:

  1. The DRM provides synchronized access to the graphics hardware. The direct rendering system has multiple entities (i.e., the X server, multiple direct-rendering clients, and the kernel) competing for direct access to the graphics hardware. Hardware that is currently available for PC-class machines will lock up if more than one entity is accessing the hardware (e.g., if two clients intermingle requests in the command FIFO or (on some hardware) if one client reads the framebuffer while another writes the command FIFO). The DRM provides a single per-device hardware lock to synchronize access to the hardware. The hardware lock may be required when the X server performs 2D rendering, when a direct-rendering client is performing a software fallback that must read or write the frame buffer, or when the kernel is dispatching DMA buffers. This hardware lock may not be required for all hardware (e.g., high-end hardware may be able to intermingle command requests from multiple clients) or for all implementations (e.g., one that uses a page fault mechanism instead of an explicit lock). In the latter case, the DRM would be extended to provide support for this mechanism. For more details on the hardware lock requirements and a discussion of the performance implications and implementation details, please see [FOM99]. (A small client-side sketch of the lock and authentication flow follows this list.)
  2. The DRM enforces the DRI security policy for access to the graphics hardware. The X server, running as root, usually obtains access to the frame buffer and MMIO regions on the graphics hardware by mapping these regions using /dev/mem. The direct-rendering clients, however, do not run as root, but still require similar mappings. Like /dev/mem, the DRM device interface allows clients to create these mappings, but with the following restrictions:
     • The client may only map regions if it has a current connection to the X server. This forces direct-rendering clients to obey the normal X server security policy (e.g., using xauth).
     • The client may only map regions if it can open /dev/drm?, which is only accessible by root and by a group specified in the XF86Config file (a file that only root can edit). This allows the system administrator to restrict direct rendering access to a group of trusted users.
     • The client may only map regions that the X server allows to be mapped. The X server may also restrict those mappings to be read-only. This allows regions with security implications (e.g., those containing registers that can start DMA) to be restricted.
  3. The DRM provides a generic DMA engine. Most modern PC-class graphics hardware provides for DMA access to the command FIFO. Often, DMA access has been optimized so that it provides significantly better throughput than does MMIO access. For these cards, the DRM provides a DMA engine with the following features:
     • The X server can specify multiple pools of different-sized buffers which are allocated and locked down.
     • The direct-rendering client maps these buffers into its virtual address space, using the DRM API.
     • The direct-rendering client reserves some of these buffers from the DRM, fills the buffers with commands, and requests that the DRM send the buffers to the graphics hardware. Small buffers are used to ensure that the X server can get the lock between buffer dispatches, thereby providing X server interactivity. Typical 40 MB/s PCI transfer rates may require 10000 4 kB buffer dispatches per second.
     • The DRM manages a queue of DMA buffers for each OpenGL GLXContext, and detects when a GLXContext switch is necessary. Hooks are provided so that a device-specific driver can perform the GLXContext switch in kernel-space, and a callback to the X server is provided when a device-specific driver is not available (for the sample implementation, the callback mechanism is used because it provides an example of the most generic method for GLXContext switching). The DRM also performs simple scheduling of DMA buffer requests to prevent GLXContext thrashing. When a GLXContext is swapped, a significant amount of data must be read from and/or written to the graphics device (between 4 kB and 64 kB for typical hardware).
     • The DMA engine is generic in the sense that the X server provides information at run-time on how to perform DMA operations for the specific hardware installed on the machine. The X server does all of the hardware detection and setup. This allows easy bootstrapping for new graphics hardware under the DRI, while providing for later performance and capability enhancements through the use of a device-specific kernel driver.
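
As a rough, non-authoritative sketch of the client-side flow described above, the snippet below opens the DRM device node, requests the magic cookie that the X server would validate with drmAuthMagic, and brackets hardware access with the legacy per-device lock via libdrm's drmGetLock/drmUnlock wrappers. The device path and the context handle are placeholder assumptions, and on modern KMS drivers the legacy lock ioctls are not implemented (the calls simply return an error), so treat this purely as an illustration of the historical DRI interface.

    /*
     * Legacy DRI client-flow sketch.  Build with:
     *   cc lock-sketch.c $(pkg-config --cflags --libs libdrm)
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);    /* access restricted by owner/group */
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        drm_magic_t magic;
        if (drmGetMagic(fd, &magic) == 0)
            printf("magic cookie %u (the X server would validate it with drmAuthMagic)\n",
                   magic);

        drm_context_t ctx = 1;                      /* placeholder context handle */
        if (drmGetLock(fd, ctx, 0) == 0) {          /* blocks if the lock is contended */
            /* ... touch the hardware or dispatch DMA buffers here ... */
            drmUnlock(fd, ctx);
        } else {
            fprintf(stderr, "legacy hardware lock not available on this driver\n");
        }

        close(fd);
        return 0;
    }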

Is it possible to make a DRI driver without a DRM driver, for a piece of hardware where we do all acceleration in PIO mode?

The kernel provides three main things:

  1. the ability to wait on a contended lock (the waiting process is put to sleep), and to free the lock of a dead process;
  2. the ability to mmap areas of memory that non-root processes can’t usually map;
  3. the ability to handle hardware interrupts and a DMA queue.

All of these are hard to do outside the kernel, but they aren’t required components of a DRM driver. For example, the tdfx driver doesn’t use hardware interrupts at all — it is one of the simplest DRM drivers, and would be a good model for the hardware you are thinking about (in its current form, it is quite generic).

Note: DRI was designed with a very wide range of hardware in mind, ranging from very low-end PC graphics cards through very high-end SGI-like hardware (which may not even need the lock). The DRI is an infrastructure or framework that is very flexible — most of the example drivers we have use hardware interrupts, but that isn’t a requirement.

Does the DRM driver have support for loading sub-drivers?

Although [Faith99] states that the DRM driver has support for loading sub-drivers by calling drmCreateSub, Linus didn’t like that approach. He wanted all drivers to be independent, so the top-level "DRM" module no longer exists and each DRM module is independent.

Is it possible to use floating point in the kernel?

You can use FP, but you have to jump through hoops to do so, especially if you’re in an asynchronous context (i.e. interrupt or similar).

In process context (i.e., in ioctl code) you could just decide that part of the calling convention of the ioctl is that the FP register state is corrupted, and use FP fairly freely — but realize that FP usage is basically the same as "access to user mode" and can cause traps.
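
For context, on today's x86 kernels the "hoops" look roughly like this (a hedged sketch, not part of any DRM driver): process-context FP or SIMD use is bracketed with kernel_fpu_begin()/kernel_fpu_end(), which save and restore the user FP state and disable preemption, and the file containing the FP code generally also needs FP-enabled compiler flags in its Makefile, since the kernel is normally built with FP/SIMD code generation disabled. The hypothetical module below only illustrates the bracketing.

    /* Hypothetical illustration of kernel_fpu_begin()/kernel_fpu_end(). */
    #include <linux/module.h>
    #include <asm/fpu/api.h>

    static int numerator = 3;
    module_param(numerator, int, 0444);

    static int __init fpu_demo_init(void)
    {
        int milli;

        kernel_fpu_begin();                   /* save user FP state; no sleeping from here */
        milli = (int)((float)numerator * 0.5f * 1000.0f);
        kernel_fpu_end();                     /* restore user FP state */

        pr_info("fpu_demo: result = %d/1000\n", milli);
        return 0;
    }

    static void __exit fpu_demo_exit(void)
    {
    }

    module_init(fpu_demo_init);
    module_exit(fpu_demo_exit);
    MODULE_LICENSE("GPL");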

Oh, and getting an FP exception in the kernel is definitely illegal, and can (and does) cause a hung box. The FP exception handling depends on a signal handler cleaning the thing up.

In general, the rule would be: don’t do it. It’s possible, but there are a lot of cases you have to worry about, and it would be a lot better to do the FP (including any coordinate snapping) in mesa in user mode, and maybe just verify the values in the kernel (which can be done with fairly simple integer arithmetic).
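
To make the recommended split concrete, here is a hypothetical sketch (the struct, function names and the 16.16 fixed-point convention are assumptions, not actual DRI code): the client does all floating-point work in user mode and submits coordinates as fixed-point integers, and the kernel only range-checks them with simple integer comparisons before letting the buffer reach the hardware.

    #include <linux/types.h>
    #include <linux/errno.h>

    struct demo_vertex {            /* hypothetical command-buffer entry */
        s32 x;                      /* 16.16 fixed point */
        s32 y;                      /* 16.16 fixed point */
    };

    static int demo_check_vertices(const struct demo_vertex *v, unsigned int count,
                                   u32 fb_width, u32 fb_height)
    {
        unsigned int i;
        s32 max_x = (s32)(fb_width  << 16);   /* framebuffer bounds in 16.16 */
        s32 max_y = (s32)(fb_height << 16);

        for (i = 0; i < count; i++) {
            if (v[i].x < 0 || v[i].x >= max_x ||
                v[i].y < 0 || v[i].y >= max_y)
                return -EINVAL;     /* reject the whole buffer */
        }
        return 0;                   /* safe to hand to the hardware */
    }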

When to use semaphores?

  • The problem appears to be that the DRM people are used to using semaphores to protect kernel data structures. That is wrong. A follow-up, just in case somebody asks "what are semaphores there for, then?":

There are reasons to use semaphores, but they are not about protecting data structures. They are mainly useful for protecting whole subsystems of code, notably in filesystems where we use semaphores extensively to protect things like concurrent access to a directory while doing lookups on it.

  • directory cache lookups (the kernel "dcache" data structure): protected by the "dcache_lock" spinlock
  • VFS callback into the filesystem to do a lookup that wasn’t cached: protected by the per-directory inode semaphore

Basically, spinlocks protect data, while semaphores are more of a high-level "protect the concept" thing.

I suspect that there is very little in the DRI code that would ever have a good reason to use a semaphore; they just shouldn’t be used at that kind of low level. (They might be useful for things like serializing device opening, etc., so I’m not saying that DRI should never ever use one, but you get the idea.)
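
A minimal sketch of that division of labour, with hypothetical names (modern kernels would use a struct mutex where this discussion says "semaphore", i.e. a sleeping lock, which is what the sketch uses): a spinlock guards a shared data structure, while a sleeping lock serializes a whole high-level operation such as device open.

    #include <linux/spinlock.h>
    #include <linux/mutex.h>
    #include <linux/list.h>
    #include <linux/errno.h>

    static LIST_HEAD(demo_ctx_list);            /* shared data ... */
    static DEFINE_SPINLOCK(demo_ctx_lock);      /* ... protected by a spinlock */

    static DEFINE_MUTEX(demo_open_lock);        /* sleeping lock serializing open */

    struct demo_ctx {
        struct list_head head;
        int id;
    };

    static void demo_add_ctx(struct demo_ctx *ctx)
    {
        spin_lock(&demo_ctx_lock);              /* short, non-sleeping critical section */
        list_add(&ctx->head, &demo_ctx_list);
        spin_unlock(&demo_ctx_lock);
    }

    static int demo_open(void)
    {
        if (mutex_lock_interruptible(&demo_open_lock))
            return -ERESTARTSYS;                /* waiting here may sleep */

        /* ... lengthy, possibly sleeping, one-at-a-time setup ... */

        mutex_unlock(&demo_open_lock);
        return 0;
    }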

What future changes are going to be made to the DRM?

There is a NextDRMVersion page for collecting ideas for changes to be made to the DRM if we should ever decide to break compatibility with previous versions.

NAME

drm - Direct Rendering Manager

SYNOPSIS

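    #include <xf86drm.h>
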
DESCRIPTION

The Direct Rendering Manager (DRM) is a framework to manage Graphics Processing Units (GPUs). It is designed to support the needs of complex graphics devices, usually containing programmable pipelines well suited to 3D graphics acceleration. Furthermore, it is responsible for memory management, interrupt handling and DMA to provide a uniform interface to applications.

In earlier days, the kernel framework was solely used to provide raw hardware access to privileged user-space processes which implement all the hardware abstraction layers. But more and more tasks were moved into the kernel. All these interfaces are based on ioctl(2) commands on the DRM character device. The libdrm library provides wrappers for these system calls and many helpers to simplify the API.

When a GPU is detected, the DRM system loads a driver for the detected hardware type. Each connected GPU is then presented to user-space via a character device that is usually available as /dev/dri/card0 and can be accessed with open(2) and close(2). However, it still depends on the graphics driver which interfaces are available on these devices. If an interface is not available, the syscalls will fail with EINVAL.

Authentication

All DRM devices provide authentication mechanisms. Only a DRM-Master is allowed to perform mode-setting or modify core state, and only one user can be DRM-Master at a time. See drmSetMaster(3) for information on how to become DRM-Master and what the limitations are. Other DRM users can be authenticated to the DRM-Master via drmAuthMagic(3) so they can perform buffer allocations and rendering.

Mode-Setting

Managing connected monitors and displays and changing the current modes is called Mode-Setting. This is restricted to the current DRM-Master. Historically, this was implemented in user-space, but new DRM drivers implement a kernel interface to perform mode-setting called Kernel Mode Setting (KMS). If your hardware driver supports it, you can use the KMS API provided by DRM. This includes allocating framebuffers, selecting modes and managing CRTCs and encoders. See drm-kms(7) for more.

Memory Management

The most sophisticated task for GPUs today is managing memory objects. Textures, framebuffers, command-buffers and all other kinds of commands for the GPU have to be stored in memory. The DRM driver takes care of managing all memory objects, flushing caches, synchronizing access and providing CPU access to GPU memory. All memory management is hardware-driver dependent. However, two generic frameworks are available that are used by most DRM drivers. These are the Translation Table Manager (TTM) and the Graphics Execution Manager (GEM). They provide generic APIs to create, destroy and access buffers from user-space. However, there are still many differences between the drivers, so driver-dependent code is still needed. Many helpers are provided in libgbm (Graphics Buffer Manager) from the Mesa project. For more information on DRM memory management, see drm-memory(7).
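
As a small, hedged illustration of these interfaces (the device path is an assumption and error handling is minimal), the following program opens a DRM device node, prints the driver name via drmGetVersion(3), and, if the driver exposes KMS, counts connectors and CRTCs with drmModeGetResources. It can be built against libdrm with, for example, cc drm-info.c $(pkg-config --cflags --libs libdrm).

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        drmVersionPtr ver = drmGetVersion(fd);     /* driver name and version */
        if (ver) {
            printf("driver: %s (%d.%d.%d)\n", ver->name,
                   ver->version_major, ver->version_minor, ver->version_patchlevel);
            drmFreeVersion(ver);
        }

        drmModeResPtr res = drmModeGetResources(fd);   /* NULL if the driver has no KMS */
        if (res) {
            printf("KMS: %d connector(s), %d CRTC(s)\n",
                   res->count_connectors, res->count_crtcs);
            drmModeFreeResources(res);
        } else {
            printf("driver exposes no KMS interface on this node\n");
        }

        close(fd);
        return 0;
    }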

REPORTING BUGS

Bugs in this manual should be reported to http://bugs.freedesktop.org under the "Mesa" product, with "Other" or "libdrm" as the component.

SEE ALSO

drm-kms(7), drm-memory(7), drmSetMaster(3), drmAuthMagic(3), drmAvailable(3), drmOpen(3)
