Chapter 21. Configuring Power Management Support

Table of Contents

systemd Configuration
Exercising power management with systemd
Known Issues and Workarounds


The NVIDIA Linux driver includes support for the suspend (suspend-to-RAM) and hibernate (suspend-to-disk) system power management operations, such as ACPI S3 and S4 on the x86/x86_64 platforms. When the system suspends or hibernates, the NVIDIA kernel drivers prepare in-use GPUs for the sleep cycle, saving state required to return these GPUs to normal operation when the system is later resumed.

The GPU state saved by the NVIDIA kernel drivers includes allocations made in video memory. However, these allocations are collectively large, and typically cannot be evicted. Since the amount of system memory available to drivers at suspend time is often insufficient to accommodate large copies of video memory, the NVIDIA kernel drivers are designed to act conservatively, and normally only save essential video memory allocations.

The resulting loss of video memory contents is partially compensated for by the user-space NVIDIA drivers, and by some applications, but can lead to failures such as rendering corruption and application crashes upon exit from power management cycles.

To better support power management with these types of applications, the NVIDIA Linux driver provides a custom power management interface intended for integration with system management tools like systemd. This interface is still considered experimental. It is not used by default, but can be taken advantage of by configuring the system as described in this chapter.


The NVIDIA Linux driver supports the suspend and hibernate power management operations via two different mechanisms. In this section, each is summarized briefly with its capabilities and requirements:

Kernel driver callback

When this mechanism is used, the NVIDIA kernel driver receives callbacks from the Linux kernel to suspend, hibernate, and resume each GPU for which a Linux PCI driver was registered. This is the default mechanism: it is enabled and used without explicit configuration.

While this mechanism has no special requirements, yields good results with many workloads, and has been supported by the NVIDIA kernel driver in similar form for years, it suffers from a few limitations. Notably, it can only preserve a relatively small amount of video memory reliably, and it cannot support power management when advanced CUDA features are being used.


/proc/driver/nvidia/suspend interface

When this mechanism is used, a system management tool, such as systemd, issues suspend, hibernate, and resume commands to the NVIDIA kernel driver via the /proc/driver/nvidia/suspend interface, instead of the driver receiving callbacks from the Linux kernel. This mechanism is still considered experimental, and requires explicit configuration to use.

If configured correctly, this mechanism is designed to remove the limitations of the kernel driver callback mechanism. It supports power management with advanced CUDA features (such as UVM), and it is capable of saving and restoring all video memory allocations.

systemd Configuration

This section is specific to the /proc/driver/nvidia/suspend interface. The NVIDIA Linux kernel driver requires no configuration if the default power management mechanism is used.

In order to take advantage of the /proc interface, a system management tool like systemd needs to be configured to access it at appropriate times in the power management sequence. Specifically, the interface needs to be used to suspend or hibernate the NVIDIA kernel drivers just before writing to the Linux kernel's /sys/power/state interface to request entry into the desired sleep state. The interface also needs to be used to resume the NVIDIA kernel drivers immediately after the return from a sleep state, as well as immediately after any unsuccessful attempts to suspend or hibernate.
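The ordering described above can be sketched as a small shell fragment. This is a simplified illustration only, with a hypothetical helper name (nv_sleep); the actual sample script shipped with the driver additionally handles VT switching and other details:

```shell
#!/bin/sh
# Sketch of the suspend/resume ordering (illustrative; not the shipped script).
# NV_SUSPEND can be overridden for testing; it defaults to the real interface.
NV_SUSPEND=${NV_SUSPEND:-/proc/driver/nvidia/suspend}

nv_sleep() {  # $1 = "suspend" or "hibernate"
    # 1. Ask the NVIDIA kernel driver to save state for the sleep cycle.
    if ! echo "$1" > "$NV_SUSPEND"; then
        # Unsuccessful attempt: the driver must be resumed immediately.
        echo resume > "$NV_SUSPEND"
        return 1
    fi
    # 2. Enter the sleep state via the kernel, e.g.:
    #    echo mem  > /sys/power/state    (suspend)
    #    echo disk > /sys/power/state    (hibernate)
    # 3. Resume the driver immediately after returning from the sleep state.
    echo resume > "$NV_SUSPEND"
}
```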

To save potentially large copies of video memory, the NVIDIA driver uses unnamed temporary files. By default, these files are created in /tmp, but this location can be changed with the NVreg_TemporaryFilePath kernel module parameter, e.g. NVreg_TemporaryFilePath=/run. The destination file system needs to support unnamed temporary files, and it needs to be large enough to accommodate all video memory copies for the duration of power management cycles.

When determining a suitable size for the video memory backing store, it is recommended to start with the overall amount of video memory supported by the GPUs installed in the system, for example: nvidia-smi -q -d MEMORY | grep -A1 'FB Memory Usage'. Each Total line returned by this command reflects one GPU's video memory capacity, in MiB. The sum of these numbers, plus a 5% margin, is a conservative starting point for the size of the video memory save area.
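The sizing arithmetic above can be automated with a small shell helper. The function name below is hypothetical; it sums the Total lines from the nvidia-smi output and adds the 5% margin:

```shell
#!/bin/sh
# Hypothetical helper: read nvidia-smi memory output on stdin, print a
# suggested backing-store size in MiB (sum of all "Total : N MiB" lines
# plus a 5% margin, rounded to the nearest MiB).
vram_backing_size_mib() {
    awk '/Total/ { sum += $(NF-1) }
         END     { printf "%d\n", sum * 1.05 + 0.5 }'
}
```

Usage: nvidia-smi -q -d MEMORY | grep -A1 'FB Memory Usage' | vram_backing_size_mib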

Please note that /tmp and /run are often backed by file systems of type tmpfs, which are potentially relatively small. Most commonly, the size of this type of file system is controlled by systemd. To achieve the best performance, file system types other than tmpfs are recommended at this time.

Additionally, to unlock the full functionality of the interface, the NVIDIA Linux kernel module needs to be loaded with the NVreg_PreserveVideoMemoryAllocations=1 module parameter. This changes the default video memory save/restore strategy to save and restore all video memory allocations.

Both parameters can be set on the command line when loading the NVIDIA Linux kernel module, or more appropriately via the distribution's kernel module configuration files (such as those under /etc/modprobe.d).
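As an illustration, a modprobe configuration fragment setting both parameters might look like the following (the file name and the /var/tmp location are examples only; the temporary file path is shown with the module's NVreg_ parameter prefix):

```
# /etc/modprobe.d/nvidia-power-management.conf (illustrative file name)
# Save and restore all video memory allocations, using /var/tmp for the
# video memory backing store.
options nvidia NVreg_PreserveVideoMemoryAllocations=1 NVreg_TemporaryFilePath=/var/tmp
```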

The following example configuration documents integration with the systemd system and service manager, which is commonly used in modern GNU/Linux distributions to manage system start-up and various aspects of its operation. For systems not using systemd, the sample configuration files provided serve as a reference.

The systemd configuration uses the following files, all of which are provided in /usr/share/doc/NVIDIA_GLX-1.0/samples:


nvidia-suspend.service, a systemd service description file used to instruct the system manager to write suspend to the /proc/driver/nvidia/suspend interface immediately before accessing /sys/power/state to suspend the system.


nvidia-hibernate.service, a systemd service description file used to instruct the system manager to write hibernate to the /proc/driver/nvidia/suspend interface immediately before accessing /sys/power/state to hibernate the system.


nvidia-resume.service, a systemd service description file used to instruct the system manager to write resume to the /proc/driver/nvidia/suspend interface immediately after returning from a system sleep state.


nvidia, a systemd-sleep script file used to instruct the system manager to write resume to the /proc/driver/nvidia/suspend interface immediately after an unsuccessful attempt to suspend or hibernate the system.


A shell script used by the systemd service description files and the systemd-sleep file to interact with the /proc/driver/nvidia/suspend interface. The script also manages VT switching for the X server, which is currently needed by the NVIDIA X driver to support power management operations.

Each of these files needs to be installed to its intended target location as root, e.g.:

  • sudo install /usr/share/doc/NVIDIA_GLX-1.0/samples/systemd/nvidia-suspend.service /etc/systemd/system

  • sudo install /usr/share/doc/NVIDIA_GLX-1.0/samples/systemd/nvidia-hibernate.service /etc/systemd/system

  • sudo install /usr/share/doc/NVIDIA_GLX-1.0/samples/systemd/nvidia-resume.service /etc/systemd/system

  • sudo install /usr/share/doc/NVIDIA_GLX-1.0/samples/systemd/nvidia /lib/systemd/system-sleep

  • sudo install /usr/share/doc/NVIDIA_GLX-1.0/samples/systemd/ /usr/bin

The NVIDIA systemd services then need to be enabled:

  • sudo systemctl enable nvidia-suspend.service

  • sudo systemctl enable nvidia-hibernate.service

  • sudo systemctl enable nvidia-resume.service

Exercising power management with systemd

This section is specific to the /proc/driver/nvidia/suspend interface, when configured as described above. When the default power management mechanism is used instead, or when the /proc interface is used without systemd, then the use of systemctl is not required.

To suspend (suspend-to-RAM) or to hibernate (suspend-to-disk), respectively, use the following commands:

  • sudo systemctl suspend

  • sudo systemctl hibernate

For the full list of sleep operations supported by systemd, please see the systemd-suspend.service(8) man page.

Known Issues and Workarounds