Appendix D. X Config Options

The following driver options are supported by the NVIDIA X driver. They may be specified either in the Screen or Device sections of the X config file.

X Config Options

Option "NvAGP" "integer"

Configure AGP support. Integer argument can be one of:

Value   Behavior
0       disable AGP
1       use NVIDIA's internal AGP support, if possible
2       use AGPGART, if possible
3       use any AGP support (try AGPGART, then NVIDIA's AGP)

Please note that NVIDIA's internal AGP support cannot work if AGPGART is either statically compiled into your kernel or is built as a module and loaded into your kernel. Please see Appendix F, Configuring AGP for details. Default: 3.
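
For example, to request the AGPGART kernel module explicitly rather than letting the driver try both implementations:

    Option "NvAGP" "2"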

Option "NoLogo" "boolean"

Disable drawing of the NVIDIA logo splash screen at X startup. Default: the logo is drawn.

Option "RenderAccel" "boolean"

Enable or disable hardware acceleration of the RENDER extension. Default: hardware acceleration of the RENDER extension is enabled.

Option "NoRenderExtension" "boolean"

Disable the RENDER extension. The X server itself offers no way to disable RENDER short of recompiling it, so the driver exports this option to provide that control. This is useful at depth 8, where RENDER would otherwise steal most of the default colormap. Default: RENDER is offered when possible.

Option "UBB" "boolean"

Enable or disable the Unified Back Buffer on Quadro-based GPUs (Quadro4 NVS excluded); please see Appendix K, Flipping and UBB for a description of UBB. This option has no effect on non-Quadro chipsets. Default: UBB is on for Quadro chipsets.

Option "NoFlip" "boolean"

Disable OpenGL flipping; please see Appendix K, Flipping and UBB for a description. Default: OpenGL will swap by flipping when possible.

Option "Dac8Bit" "boolean"

Most Quadro products use a 10-bit color look-up table (LUT) by default; setting this option to TRUE forces these graphics chips to use an 8-bit LUT. Default: a 10-bit LUT is used, when available.

Option "Overlay" "boolean"

Enables RGB workstation overlay visuals. This is only supported on Quadro4 and Quadro FX chips (Quadro4 NVS excluded) in depth 24. This option causes the server to advertise the SERVER_OVERLAY_VISUALS root window property, and GLX will report single- and double-buffered, Z-buffered 16-bit overlay visuals. The transparency key is pixel 0x0000 (hex). There is no gamma correction support in the overlay plane. This feature requires XFree86 version 4.1.0 or newer, or the X.Org X server.

The Quadro4 500 and 550 XGL have additional restrictions: overlays are not supported in TwinView mode, or with virtual desktops wider than 2046 pixels or taller than 2047 pixels. Quadro 7xx/9xx and Quadro FX chips will offer overlay visuals in these modes (TwinView, or virtual desktops larger than 2046x2047), but the overlay will be emulated with a substantial performance penalty. RGB workstation overlays are not supported when the Composite extension is enabled. Default: off.

UBB must be enabled when overlays are enabled (this is the default behavior).

Option "CIOverlay" "boolean"

Enables Color Index workstation overlay visuals with identical restrictions to Option "Overlay" above. The server will offer visuals both with and without a transparency key. These are depth 8 PseudoColor visuals. Enabling Color Index overlays on X servers older than XFree86 4.3 will force the RENDER extension to be disabled due to bugs in the RENDER extension in older X servers. Color Index workstation overlays are not supported when the Composite extension is enabled. Default: off.

UBB must be enabled when overlays are enabled (this is the default behavior).

Option "TransparentIndex" "integer"

When color index overlays are enabled, use this option to choose which pixel is used for the transparent pixel in visuals featuring transparent pixels. This value is clamped between 0 and 255 (Note: some applications such as Alias's Maya require this to be zero in order to work correctly). Default: 0.
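
For example, to reserve pixel 255 as the transparent pixel:

    Option "TransparentIndex" "255"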

Option "OverlayDefaultVisual" "boolean"

When overlays are used, this option sets the default visual to an overlay visual thereby putting the root window in the overlay. This option is not recommended for RGB overlays. Default: off.

Option "RandRRotation" "boolean"

Enable rotation support for the XRandR extension. This allows use of the XRandR X server extension for configuring the screen orientation through rotation. This feature is supported on GeForce2 or better hardware using depth 24. This requires an X.Org X 6.8.1 or newer X server. This feature does not work with hardware overlays, and emulated overlays will be used instead at a substantial performance penalty. See Appendix U, The XRandR Extension for details. Default: off.

Option "Rotate" "string"

Enable static rotation support. Unlike the RandRRotation option above, this option takes effect as soon as the X server is started and will work with older versions of X. This feature is supported on GeForce2 or better hardware using depth 24. This feature does not work with hardware overlays, and emulated overlays will be used instead at a substantial performance penalty. This option is not compatible with the RandR extension. Valid rotations are "normal", "left", "inverted", and "right". Default: off.
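
For example, to start the X server with the screen in the "left" orientation:

    Option "Rotate" "left"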

Option "AllowDDCCI" "boolean"

Enables DDC/CI support in the NV-CONTROL X extension. DDC/CI is a mechanism for communication between your computer and your display device. This can be used to set the values normally controlled through your display device's On Screen Display. Please see the DDC/CI NV-CONTROL attributes in NVCtrl.h and functions in NVCtrlLib.h in the nvidia-settings source code. Default: off (DDC/CI is disabled).

Option "SWCursor" "boolean"

Enable or disable software rendering of the X cursor. Default: off.

Option "HWCursor" "boolean"

Enable or disable hardware rendering of the X cursor. Default: on.

Option "CursorShadow" "boolean"

Enable or disable use of a shadow with the hardware accelerated cursor; this is a black translucent replica of your cursor shape at a given offset from the real cursor. Default: off (no cursor shadow).

Option "CursorShadowAlpha" "integer"

The alpha value to use for the cursor shadow; only applicable if CursorShadow is enabled. This value must be in the range [0, 255] -- 0 is completely transparent; 255 is completely opaque. Default: 64.

Option "CursorShadowXOffset" "integer"

The offset, in pixels, that the shadow image will be shifted to the right from the real cursor image; only applicable if CursorShadow is enabled. This value must be in the range [0, 32]. Default: 4.

Option "CursorShadowYOffset" "integer"

The offset, in pixels, that the shadow image will be shifted down from the real cursor image; only applicable if CursorShadow is enabled. This value must be in the range [0, 32]. Default: 2.
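
For example, to enable a fairly dark cursor shadow offset 8 pixels right and 4 pixels down (the alpha and offset values here are arbitrary illustrations):

    Option "CursorShadow" "on"
    Option "CursorShadowAlpha" "128"
    Option "CursorShadowXOffset" "8"
    Option "CursorShadowYOffset" "4"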

Option "ConnectedMonitor" "string"

Allows you to override what the NVIDIA kernel module detects is connected to your video card. This may be useful, for example, if you use a KVM (keyboard, video, mouse) switch and you are switched away when X is started. In such a situation, the NVIDIA kernel module cannot detect what display devices are connected, and the NVIDIA X driver assumes you have a single CRT.

Valid values for this option are "CRT" (cathode ray tube), "DFP" (digital flat panel), or "TV" (television); if using TwinView, this option may be a comma-separated list of display devices; e.g.: "CRT, CRT" or "CRT, DFP".

It is generally recommended to not use this option, but instead use the "UseDisplayDevice" option.

NOTE: anything attached to a 15-pin VGA connector is regarded by the driver as a CRT. "DFP" should only be used to refer to digital flat panels connected via a DVI port.

Default: string is NULL (the NVIDIA driver will detect the connected display devices).
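
For example, to tell the driver a digital flat panel is connected even when it cannot be detected (such as behind a KVM switch):

    Option "ConnectedMonitor" "DFP"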

Option "UseDisplayDevice" "string"

When assigning display devices to X screens, the NVIDIA X driver by default assigns display devices in the order they are found (looking first at CRTs, then at DFPs, and finally at TVs). This option can be used to override this assignment. For example, if both a CRT and a DFP are connected, you could specify:

    Option "UseDisplayDevice" "DFP"

to make the X screen use the DFP, even though it would have used a CRT by default.

Note the subtle difference between this option and the "ConnectedMonitor" option: the "ConnectedMonitor" option overrides what display devices are actually detected, while the "UseDisplayDevice" option controls which of the detected display devices will be used on this X screen.

Option "UseEdidFreqs" "boolean"

This option controls whether the NVIDIA X driver will use the HorizSync and VertRefresh ranges given in a display device's EDID, if any. When UseEdidFreqs is set to True, EDID-provided range information will override the HorizSync and VertRefresh ranges specified in the Monitor section. If a display device does not provide an EDID, or the EDID does not specify an hsync or vrefresh range, then the X server will default to the HorizSync and VertRefresh ranges specified in the Monitor section of your X config file. These frequency ranges are used when validating modes for your display device.

Default: True (EDID frequencies will be used)

Option "UseEDID" "boolean"

By default, the NVIDIA X driver makes use of a display device's EDID, when available, during construction of its mode pool. The EDID is used as a source for possible modes, for valid frequency ranges, and for collecting data on the physical dimensions of the display device for computing the DPI (see Appendix Y, Dots Per Inch). However, if you wish to disable the driver's use of the EDID, you can set this option to False:

    Option "UseEDID" "FALSE"

Note that, rather than globally disable all uses of the EDID, you can individually disable each particular use of the EDID; e.g.,

    Option "UseEDIDFreqs" "FALSE"
    Option "UseEDIDDpi" "FALSE"
    Option "ModeValidation" "NoEdidModes"

Default: True (use EDID).

Option "IgnoreEDID" "boolean"

This option is deprecated, and no longer affects behavior of the X driver. See the "UseEDID" option for details.

Option "NoDDC" "boolean"

Synonym for "IgnoreEDID". This option is deprecated, and no longer affects behavior of the X driver. See the "UseEDID" option for details.

Option "UseInt10Module" "boolean"

Enable use of the X Int10 module to soft-boot all secondary cards, rather than POSTing the cards through the NVIDIA kernel module. Default: off (POSTing is done through the NVIDIA kernel module).

Option "TwinView" "boolean"

Enable or disable TwinView. Please see Appendix G, Configuring TwinView for details. Default: off (TwinView is disabled).

Option "TwinViewOrientation" "string"

Controls the relationship between the two display devices when using TwinView. Takes one of the following values: "RightOf", "LeftOf", "Above", "Below", or "Clone". Please see Appendix G, Configuring TwinView for details. Default: string is NULL.
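
For example, to place the second display device to the right of the first:

    Option "TwinViewOrientation" "RightOf"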

Option "SecondMonitorHorizSync" "range(s)"

This option is like the HorizSync entry in the Monitor section, but is for the second monitor when using TwinView. Please see Appendix G, Configuring TwinView for details. Default: none.

Option "SecondMonitorVertRefresh" "range(s)"

This option is like the VertRefresh entry in the Monitor section, but is for the second monitor when using TwinView. Please see Appendix G, Configuring TwinView for details. Default: none.
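
For example (the ranges below are arbitrary illustrations; substitute the values from your second monitor's specifications):

    Option "SecondMonitorHorizSync" "30-50"
    Option "SecondMonitorVertRefresh" "60"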

Option "MetaModes" "string"

This option describes the combination of modes to use on each monitor when using TwinView. Please see Appendix G, Configuring TwinView for details. Default: string is NULL.
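
For example, a MetaModes line pairing one mode on each display device per MetaMode (the modes listed are illustrative):

    Option "MetaModes" "1280x1024,1280x1024; 1024x768,1024x768"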

Option "NoTwinViewXineramaInfo" "boolean"

When in TwinView, the NVIDIA X driver normally provides a Xinerama extension that X clients (such as window managers) can use to discover the current TwinView configuration, such as where each display device is positioned within the X screen. Some window managers get confused by this information, so this option is provided to disable this behavior. Default: false (TwinView Xinerama information is provided).

Option "TwinViewXineramaInfoOrder" "string"

When the NVIDIA X driver provides TwinViewXineramaInfo (see the NoTwinViewXineramaInfo X config option), it by default reports the currently enabled display devices in the order "CRT, DFP, TV". The TwinViewXineramaInfoOrder X config option can be used to override this order.

The option string is a comma-separated list of display device names. The display device names can either be general (e.g., "CRT", which identifies all CRTs), or specific (e.g., "CRT-1", which identifies a particular CRT). Not all display devices need to be identified in the option string; display devices that are not listed will be implicitly appended to the end of the list, in their default order.

Note that TwinViewXineramaInfoOrder tracks all display devices that could possibly be connected to the GPU, not just the ones that are currently enabled. When reporting the Xinerama information, the NVIDIA X driver walks through the display devices in the order specified, only reporting enabled display devices.

Examples:

        "DFP"
        "TV, DFP"
        "DFP-1, DFP-0, TV, CRT"

In the first example, any enabled DFPs would be reported first (any enabled CRTs or TVs would be reported afterwards). In the second example, any enabled TVs would be reported first, then any enabled DFPs (any enabled CRTs would be reported last). In the last example, if DFP-1 were enabled, it would be reported first, then DFP-0, then any enabled TVs, and then any enabled CRTs; finally, any other enabled DFPs would be reported.

Default: "CRT, DFP, TV"
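
As the last example above would appear in the X config file:

    Option "TwinViewXineramaInfoOrder" "DFP-1, DFP-0, TV, CRT"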

Option "TVStandard" "string"

Please see Appendix H, Configuring TV-Out for details on configuring TV-out.

Option "TVOutFormat" "string"

Please see Appendix H, Configuring TV-Out for details on configuring TV-out.

Option "TVOverScan" "Decimal value in the range 0.0 to 1.0"

Valid values are in the range 0.0 through 1.0. Please see Appendix H, Configuring TV-Out for details on configuring TV-out.
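
For example (the value 0.7 is an arbitrary illustration):

    Option "TVOverScan" "0.7"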

Option "Stereo" "integer"

Enable offering of quad-buffered stereo visuals on Quadro. Integer indicates the type of stereo equipment being used:

Value   Equipment
1       DDC glasses. The sync signal is sent to the glasses via the DDC signal to the monitor. These usually involve a passthrough cable between the monitor and video card.
2       "Blueline" glasses. These usually involve a passthrough cable between the monitor and video card. The glasses know which eye to display based on the length of a blue line visible at the bottom of the screen. When in this mode, the root window dimensions are one pixel shorter in the Y dimension than requested. This mode does not work with virtual root window sizes larger than the visible root window size (desktop panning).
3       Onboard stereo support. This is usually only found on professional cards. The glasses connect via a DIN connector on the back of the video card.
4       TwinView clone mode stereo (aka "passive" stereo). On video cards that support TwinView, the left eye is displayed on the first display, and the right eye is displayed on the second display. This is normally used in conjunction with special projectors to produce 2 polarized images which are then viewed with polarized glasses. To use this stereo mode, you must also configure TwinView in clone mode with the same resolution, panning offset, and panning domains on each display.
5       Vertical interlaced stereo mode, for use with SeeReal Stereo Digital Flat Panels.
6       Color interleaved stereo mode, for use with Sharp3D Stereo Digital Flat Panels.

Stereo is only available on Quadro cards. Stereo options 1, 2, and 3 (aka "active" stereo) may be used with TwinView if all modes within each MetaMode have identical timing values. Please see Appendix J, Programming Modes for suggestions on making sure the modes within your MetaModes are identical. The identical ModeLine requirement is not necessary for Stereo option 4 ("passive" stereo). Currently, stereo operation may be "quirky" on the original Quadro (NV10) chip and left-right flipping may be erratic. We are trying to resolve this issue for a future release. Default: 0 (Stereo is not enabled).

UBB must be enabled when stereo is enabled (this is the default behavior).

Stereo options 1, 2, and 3 (aka "active" stereo) are not supported on digital flat panels.

Multi-GPU cards (such as the Quadro FX 4500 X2) provide a single connector for onboard stereo support (option 3), which is tied to the bottommost GPU. In order to synchronize onboard stereo with the other GPU, you must use a G-Sync device (see Appendix X, Frame Lock and Genlock for details).
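
For example, to enable onboard (DIN connector) stereo:

    Option "Stereo" "3"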

Option "AllowDFPStereo" "boolean"

By default, the NVIDIA X driver performs a check which disables active stereo (stereo options 1, 2, and 3) if the X screen is driving a DFP. The "AllowDFPStereo" option bypasses this check.

Option "ForceStereoFlipping" "boolean"

Stereo flipping is the process by which left and right eyes are displayed on alternating vertical refreshes. Normally, stereo flipping is only performed when a stereo drawable is visible. This option forces stereo flipping even when no stereo drawables are visible.

This is to be used in conjunction with the "Stereo" option. If "Stereo" is 0, the "ForceStereoFlipping" option has no effect. If otherwise, the "ForceStereoFlipping" option will force the behavior indicated by the "Stereo" option, even if no stereo drawables are visible. This option is useful in a multiple-screen environment in which a stereo application is run on a different screen than the stereo master.

Possible values:

Value   Behavior
0       Stereo flipping is not forced. The default behavior as indicated by the "Stereo" option is used.
1       Stereo flipping is forced. Stereo is running even if no stereo drawables are visible. The stereo mode depends on the value of the "Stereo" option.

Default: 0 (Stereo flipping is not forced). Note that active stereo is not supported on digital flat panels.

Option "XineramaStereoFlipping" "boolean"

By default, when using Stereo with Xinerama, all physical X screens having a visible stereo drawable will stereo flip. Use this option to allow only one physical X screen to stereo flip at a time.

This is to be used in conjunction with the "Stereo" and "Xinerama" options. If "Stereo" is 0 or "Xinerama" is 0, the "XineramaStereoFlipping" option has no effect.

If you wish to have all X screens stereo flip all the time, please see the "ForceStereoFlipping" option.

Possible values:

Value   Behavior
0       Stereo flipping is enabled on one X screen at a time. Stereo is enabled on the first X screen having the stereo drawable.
1       Stereo flipping is enabled on all X screens.

Default: 1 (Stereo flipping is enabled on all X screens).

Option "NoBandWidthTest" "boolean"

As part of mode validation, the X driver tests if a given mode fits within the hardware's memory bandwidth constraints. This option disables this test. Default: false (the memory bandwidth test is performed).

Option "IgnoreDisplayDevices" "string"

This option tells the NVIDIA kernel module to completely ignore the indicated classes of display devices when checking what display devices are connected. You may specify a comma-separated list containing any of "CRT", "DFP", and "TV". For example:

Option "IgnoreDisplayDevices" "DFP, TV"

will cause the NVIDIA driver to not attempt to detect if any digital flat panels or TVs are connected. This option is not normally necessary; however, some video BIOSes contain incorrect information about what display devices may be connected, or what i2c port should be used for detection. These errors can cause long delays in starting X. If you are experiencing such delays, you may be able to avoid this by telling the NVIDIA driver to ignore display devices which you know are not connected. NOTE: anything attached to a 15-pin VGA connector is regarded by the driver as a CRT. "DFP" should only be used to refer to digital flat panels connected via a DVI port.

Option "MultisampleCompatibility" "boolean"

Enable or disable the use of separate front and back multisample buffers. Enabling this will consume more memory but is necessary for correct output when rendering to both the front and back buffers of a multisample or FSAA drawable. This option is necessary for correct operation of SoftImage XSI. Default: false (a single multisample buffer is shared between the front and back buffers).

Option "NoPowerConnectorCheck" "boolean"

The NVIDIA X driver will abort X server initialization if it detects that a GPU that requires an external power connector does not have an external power connector plugged in. This option can be used to bypass this test. Default: false (the power connector test is performed).

Option "XvmcUsesTextures" "boolean"

Forces XvMC to use the 3D engine for XvMCPutSurface requests rather than the video overlay. Default: false (video overlay is used when available).

Option "AllowGLXWithComposite" "boolean"

Enables GLX even when the Composite X extension is loaded. ENABLE AT YOUR OWN RISK. OpenGL applications will not display correctly in many circumstances with this setting enabled.

This option is intended for use on X.Org X servers older than X11R6.9.0. On X11R6.9.0 or newer X servers, NVIDIA's OpenGL implementation interacts properly by default with the Composite X extension and this option should not be needed. However, on X11R6.9.0 or newer X servers, support for GLX with Composite can be disabled by setting this option to False.

Default: false (GLX is disabled when Composite is enabled on X servers older than X11R6.9.0).

Option "AddARGBGLXVisuals" "boolean"

Adds a 32-bit ARGB visual for each supported OpenGL configuration. This allows applications to use OpenGL to render with alpha transparency into 32-bit windows and pixmaps. This option requires the Composite extension. ENABLE AT YOUR OWN RISK. Some OpenGL applications may display incorrectly when this setting is enabled. Default: No visuals are added.

Option "DisableGLXRootClipping" "boolean"

If enabled, no clipping will be performed on rendering done by OpenGL in the root window. This is needed by some OpenGL-based composite managers to function correctly, as they draw the contents of redirected windows directly into the root window using OpenGL.

Option "DamageEvents" "boolean"

Use OS-level events to efficiently notify X when a client has performed direct rendering to a window that needs to be composited. This will significantly improve performance and interactivity when using GLX applications with a composite manager running. It will also affect applications using GLX when rotation is enabled. This option is currently incompatible with SLI and MultiGPU modes and will be disabled if either are used. Enabled by default.

Option "ExactModeTimingsDVI" "boolean"

Forces the initialization of the X server with the exact timings specified in the ModeLine. Default: false (for DVI devices, the X server initializes with the closest mode in the EDID list).

Option "Coolbits" "integer"

Enables support in the NV-CONTROL X extension for manipulating GPU clock settings. When this option is set to "1" the nvidia-settings utility will contain a page labeled "Clock Frequencies" through which clock settings can be manipulated. Coolbits is only available on GeForce FX, Quadro FX, and newer GPUs. Default: 0 (support is disabled).

WARNING: this may cause system damage and void warranties. This utility can run your computer system out of the manufacturer's design specifications, including, but not limited to: higher system voltages, above normal temperatures, excessive frequencies, and changes to BIOS that may corrupt the BIOS. Your computer's operating system may hang and result in data loss or corrupted images. Depending on the manufacturer of your computer system, the computer system, hardware and software warranties may be voided, and you may not receive any further manufacturer support. NVIDIA does not provide customer service support for the Coolbits option. It is for these reasons that absolutely no warranty or guarantee is either express or implied. Before enabling and using, you should determine the suitability of the utility for your intended use, and you shall assume all responsibility in connection therewith.
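
If you accept these risks, the option is enabled with:

    Option "Coolbits" "1"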

Option "MultiGPU" "string"

This option controls the configuration of MultiGPU rendering in supported configurations.

Value                       Behavior
0, no, off, false, Single   Use only a single GPU when rendering.
1, yes, on, true, Auto      Enable MultiGPU and allow the driver to automatically select the appropriate rendering mode.
AFR                         Enable MultiGPU and use the Alternate Frame Rendering mode.
SFR                         Enable MultiGPU and use the Split Frame Rendering mode.
AA                          Enable MultiGPU and use antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.

Option "SLI" "string"

This option controls the configuration of SLI rendering in supported configurations.

Value                       Behavior
0, no, off, false, Single   Use only a single GPU when rendering.
1, yes, on, true, Auto      Enable SLI and allow the driver to automatically select the appropriate rendering mode.
AFR                         Enable SLI and use the Alternate Frame Rendering mode.
SFR                         Enable SLI and use the Split Frame Rendering mode.
AA                          Enable SLI and use SLI Antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.
AFRofAA                     Enable SLI and use SLI Alternate Frame Rendering of Antialiasing mode. Use this in conjunction with full scene antialiasing to improve visual quality. This option is only valid for SLI configurations with 4 GPUs.
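
For example, to enable SLI with the Alternate Frame Rendering mode:

    Option "SLI" "AFR"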

Option "TripleBuffer" "boolean"

Enable or disable the use of triple buffering. If this option is enabled, OpenGL windows that sync to vblank and are double-buffered will be given a third buffer. This decreases the time an application stalls while waiting for vblank events, but increases latency slightly (delay between user input and displayed result).

Option "DPI" "string"

This option specifies the Dots Per Inch for the X screen; for example:

    Option "DPI" "75 x 85"

will set the horizontal DPI to 75 and the vertical DPI to 85. By default, the X driver will compute the DPI of the X screen from the EDID of any connected display devices. See Appendix Y, Dots Per Inch for details. Default: string is NULL (disabled).

Option "UseEdidDpi" "string"

By default, the NVIDIA X driver computes the DPI of an X screen based on the physical size of the display device, as reported in the EDID, and the size in pixels of the first mode to be used on the display device. If multiple display devices are used by the X screen, then the NVIDIA X driver will choose which display device to use. This option can be used to specify which display device to use. The string argument can be a display device name, such as:

    Option "UseEdidDpi" "DFP-0"

or the argument can be "FALSE" to disable use of EDID-based DPI calculations:

    Option "UseEdidDpi" "FALSE"

See Appendix Y, Dots Per Inch for details. Default: string is NULL (the driver computes the DPI from the EDID of a display device and selects the display device).

Option "ConstantDPI" "boolean"

By default on X.Org 6.9 or newer X servers, the NVIDIA X driver recomputes the size in millimeters of the X screen whenever the size in pixels of the X screen is changed using XRandR, such that the DPI remains constant.

This behavior can be disabled (which means that the size in millimeters will not change when the size in pixels of the X screen changes) by setting the "ConstantDPI" option to "FALSE"; e.g.,

    Option "ConstantDPI" "FALSE"

ConstantDPI defaults to True.

On X servers older than X.Org 6.9, the NVIDIA X driver cannot change the size in millimeters of the X screen. Therefore the DPI of the X screen will change when XRandR changes the size in pixels of the X screen. The driver will behave as if ConstantDPI was forced to FALSE.

Option "CustomEDID" "string"

This option forces the X driver to use the EDID specified in a file rather than the display's EDID. You may specify a semicolon separated list of display names and filename pairs. The display name is any of "CRT-0", "CRT-1", "DFP-0", "DFP-1", "TV-0", "TV-1". The file contains a raw EDID (e.g., a file generated by nvidia-settings).

For example:

    Option "CustomEDID" "CRT-0:/tmp/edid1.bin; DFP-0:/tmp/edid2.bin"

will assign the EDID from the file /tmp/edid1.bin to the display device CRT-0, and the EDID from the file /tmp/edid2.bin to the display device DFP-0.

Option "ModeValidation" "string"

This option provides fine-grained control over each stage of the mode validation pipeline, disabling individual mode validation checks. This option should only very rarely be used.

The option string is a semicolon-separated list of comma-separated lists of mode validation arguments. Each list of mode validation arguments can optionally be prepended with a display device name.

    "<dpy-0>: <tok>, <tok>; <dpy-1>: <tok>, <tok>, <tok>; ..."

Possible arguments:

  • "AllowNon60HzDFPModes": some lower quality TMDS encoders are only rated to drive DFPs at 60Hz; the driver will determine when only 60Hz DFP modes are allowed. This argument disables this stage of the mode validation pipeline.

  • "NoMaxPClkCheck": each mode has a pixel clock; this pixel clock is validated against the maximum pixel clock of the hardware (for a DFP, this is the maximum pixel clock of the TMDS encoder, for a CRT, this is the maximum pixel clock of the DAC). This argument disables the maximum pixel clock checking stage of the mode validation pipeline.

  • "NoEdidMaxPClkCheck": a display device's EDID can specify the maximum pixel clock that the display device supports; a mode's pixel clock is validated against this pixel clock maximum. This argument disables this stage of the mode validation pipeline.

  • "AllowInterlacedModes": interlaced modes are not supported on all NVIDIA GPUs; the driver will discard interlaced modes on GPUs where interlaced modes are not supported; this argument disables this stage of the mode validation pipeline.

  • "NoMaxSizeCheck": each NVIDIA GPU has a maximum resolution that it can drive; this argument disables this stage of the mode validation pipeline.

  • "NoHorizSyncCheck": a mode's horizontal sync is validated against the range of valid horizontal sync values; this argument disables this stage of the mode validation pipeline.

  • "NoVertRefreshCheck": a mode's vertical refresh rate is validated against the range of valid vertical refresh rate values; this argument disables this stage of the mode validation pipeline.

  • "NoEdidDFPMaxSizeCheck": when validating for a DFP, a mode's size is validated against the largest resolution found in the EDID; this argument disables this stage of the mode validation pipeline.

  • "NoWidthAlignmentCheck": the alignment of a mode's visible width is validated against the capabilities of the GPU; normally, a mode's visible width must be a multiple of 8. This argument disables this stage of the mode validation pipeline.

  • "NoVesaModes": when constructing the mode pool for a display device, the X driver uses a built-in list of VESA modes as one of the mode sources; this argument disables use of these built-in VESA modes.

  • "NoEdidModes": when constructing the mode pool for a display device, the X driver uses any modes listed in the display device's EDID as one of the mode sources; this argument disables use of EDID-specified modes.

  • "NoXServerModes": when constructing the mode pool for a display device, the X driver uses the built-in modes provided by the core XFree86/Xorg X server as one of the mode sources; this argument disables use of these modes. Note that this argument does not disable custom ModeLines specified in the X config file; see the "NoCustomModes" argument for that.

  • "NoCustomModes": when constructing the mode pool for a display device, the X driver uses custom ModeLines specified in the X config file (through the "Mode" or "ModeLine" entries in the Monitor Section) as one of the mode sources; this argument disables use of these modes.

  • "NoPredefinedModes": when constructing the mode pool for a display device, the X driver uses additional modes predefined by the NVIDIA X driver; this argument disables use of these modes.

  • "NoUserModes": additional modes can be added to the mode pool dynamically, using the NV-CONTROL X extension; this argument prohibits the addition of user-specified modes via the NV-CONTROL X extension.

Examples:

    Option "ModeValidation" "NoMaxPClkCheck"

disable the maximum pixel clock check when validating modes on all display devices.

    Option "ModeValidation" "CRT-0: NoEdidModes, NoMaxPClkCheck; DFP-0: NoVesaModes"

do not use EDID modes and do not perform the maximum pixel clock check on CRT-0, and do not use VESA modes on DFP-0.

Option "UseEvents" "boolean"

Enables the use of system events in some cases when the X driver is waiting for the hardware. Without this option, the X driver may briefly spin in a tight loop while waiting for the hardware; with this option enabled, the X driver instead installs an event handler and waits for the hardware through the poll() system call. Default: the use of events is disabled.
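For example, to make the X driver wait for the hardware through poll() rather than spinning:

    Option "UseEvents" "True"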

Option "FlatPanelProperties" "string"

This option requests particular properties for all or a subset of the connected flat panels.

The option string is a semicolon-separated list of comma-separated property=value pairs. Each list of property=value pairs can optionally be prepended with a flat panel name.

    "<DFP-0>: <property=value>, <property=value>; <DFP-1>: <property=value>; ..."

Recognized properties:

  • "Scaling": controls the flat panel scaling mode; possible values are: 'Default' (the driver will use whichever scaling state is current), 'Native' (the driver will use the flat panel's scaler, if possible), 'Scaled' (the driver will use the NVIDIA GPU's scaler, if possible), 'Centered' (the driver will center the image, if possible), and 'aspect-scaled' (the driver will scale with the NVIDIA GPU's scaler, but preserve the correct aspect ratio).

  • "Dithering": controls the flat panel dithering mode; possible values are: 'Default' (the driver will decide when to dither), 'Enabled' (the driver will always dither, if possible), and 'Disabled' (the driver will never dither).

Examples:

    Option "FlatPanelProperties" "Scaling = Centered"

set the flat panel scaling mode to centered on all flat panels.

    Option "FlatPanelProperties" "DFP-0: Scaling = Centered; DFP-1: Scaling = Scaled, Dithering = Enabled"

set DFP-0's scaling mode to centered, set DFP-1's scaling mode to scaled and its dithering mode to enabled.

Option "ProbeAllGpus" "boolean"

When the NVIDIA X driver initializes, it probes all GPUs in the system, even if no X screens are configured on them. This is done so that the X driver can report information about all the system's GPUs through the NV-CONTROL X extension. This option can be set to FALSE to disable this behavior, such that only GPUs with X screens configured on them will be probed. Default: all GPUs in the system are probed.
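For example, to probe only the GPUs that have X screens configured on them:

    Option "ProbeAllGpus" "FALSE"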

Option "DynamicTwinView" "boolean"

Enable or disable support for dynamically configuring TwinView on this X screen. When DynamicTwinView is enabled (the default), the refresh rate reported for a mode through XF86VidMode or XRandR is not the actual refresh rate, but instead a unique number such that each MetaMode has a different value. This is to guarantee that MetaModes can be uniquely identified by XRandR.

When DynamicTwinView is disabled, the refresh rate reported through XRandR will be accurate, but NV-CONTROL clients such as nvidia-settings will not be able to dynamically manipulate the X screen's MetaModes. TwinView can still be configured from the X config file when DynamicTwinView is disabled.

Default: DynamicTwinView is enabled.
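For example, to disable dynamic TwinView configuration so that XRandR reports accurate refresh rates:

    Option "DynamicTwinView" "FALSE"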

Option "IncludeImplicitMetaModes" "boolean"

When the X server starts, a mode pool is created per display device, containing all the mode timings that the NVIDIA X driver determined to be valid for the display device. However, the only MetaModes that are made available to the X server are the ones explicitly requested in the X configuration file.

It is convenient for fullscreen applications to be able to change between the modes in the mode pool, even if a given target mode was not explicitly requested in the X configuration file.

To facilitate this, the NVIDIA X driver will, if only one display device is in use when the X server starts, implicitly add MetaModes for all modes in the display device's mode pool. This makes all the modes in the mode pool available to fullscreen applications that use the XF86VidMode or XRandR X extensions.

To prevent this behavior, and only add MetaModes that are explicitly requested in the X configuration file, set this option to FALSE.

Default: IncludeImplicitMetaModes is enabled.
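For example, to make only the MetaModes explicitly requested in the X configuration file available:

    Option "IncludeImplicitMetaModes" "FALSE"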

Option "LoadKernelModule" "boolean"

Normally, the NVIDIA Linux X driver module will attempt to load the NVIDIA Linux kernel module. Set this option to "off" to disable automatic loading of the NVIDIA kernel module by the NVIDIA X driver. Default: on (the driver loads the kernel module).
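For example, to prevent the X driver from loading the kernel module automatically:

    Option "LoadKernelModule" "off"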