README: New kernel module source layout for NVIDIA Linux kernel modules
=======================================================================

The NVIDIA GPU driver for Linux consists of multiple user space and
kernel space components, with the kernel space components implemented as
Linux kernel modules. The NVIDIA Linux driver package includes source
code for the portions of these kernel modules that interface directly
with the Linux kernel; the NVIDIA kernel modules must be built against
the target kernel from a combination of these source code files and
precompiled binary portions. Beginning with the 355.xx release of
the NVIDIA Linux driver, a new layout and kernel module build system
will be used for the kernel modules that are included in the installer
package. Some key differences between the existing build system and the
new one include:

  * Each kernel module will have its own subdirectory within the top
    level kernel module source directory.

  * Invoking the kernel module build from the top level directory will,
    by default, build all NVIDIA kernel modules at once. For example,
    instead of first building "nvidia.ko" as a prerequisite to building
    "nvidia-uvm.ko", under the new build system, both "nvidia.ko" and
    "nvidia-uvm.ko" are built within the same `make` invocation, with
    the Linux Kbuild system handling the inter-module dependencies.

  * All built kernel modules will be saved to the top level directory.
    For example, instead of "nvidia.ko" being saved to the top level
    directory, and "nvidia-uvm.ko" being saved to the "uvm/"
    subdirectory, both modules will be saved to the top level directory.

  * The new build system no longer supports building multiple instances
    of the NVIDIA kernel module (e.g. "nvidia0.ko", "nvidia1.ko", ...),
    which are managed by a shared "nvidia-frontend.ko" frontend module.
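
The single-invocation build described above can be sketched as follows.
This is a minimal sketch, not a definitive recipe: it assumes the kernel
module sources were extracted to a "kernel/" directory, that kernel
headers for the running kernel are installed, and that the package's
Makefile accepts a SYSSRC variable pointing at the kernel build tree
(consult the Makefile shipped in the package for its exact interface):

```shell
# Build all NVIDIA kernel modules with one make invocation. The
# "kernel/" directory name and the SYSSRC variable are assumptions;
# check the Makefile shipped in your driver package.
cd kernel
make -j"$(nproc)" SYSSRC="/lib/modules/$(uname -r)/build" modules
```

All of the resulting .ko files are then saved to this top level
directory, as described above.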

As this new build system may prove disruptive to existing tools which
repackage the NVIDIA driver, NVIDIA is providing example installer
packages which demonstrate the new layout. These packages include the
driver components from the already released 352.21 driver, with the
layout of the kernel module source files updated to reflect the layout
that will be used in the upcoming 355.xx and later driver releases.
These packages also include a version of `nvidia-installer` which has
been updated to handle the new kernel module source layout. An archive
which contains source code for the updated version of `nvidia-installer`
should be available from the same location as the example packages and
this README document.

If you are a repackager of the NVIDIA Linux driver, please use these
packages to test any changes to your packaging tools that you may need
to make in order to accommodate the new layout.

Special considerations for driver repackagers
---------------------------------------------

In addition to the differences between the old and new kernel module
build systems which have been highlighted above, there are a few special
considerations that repackagers of the NVIDIA Linux driver should keep
in mind:

### Dynamic Kernel Module Support (DKMS) ###

For packages that use DKMS to build the NVIDIA kernel modules, note that
the format of the included dkms.conf file has changed. Previous driver
versions included a dkms.conf for building "nvidia.ko", and a separate
dkms.conf.fragment to be optionally appended to the base dkms.conf file
to add support for building the "nvidia-uvm.ko" kernel module via DKMS.
The dkms.conf file in the new driver packages is no longer a valid
dkms.conf file in its own right, but is rather a template file, which
`nvidia-installer` modifies at installation time, as appropriate for the
target installation.

Repackagers are free to develop their own dkms.conf files, but for those
who wish to use the dkms.conf included in the driver package as a
starting point, please note the following tokens and substitutions:

  * __VERSION_STRING

    This should be replaced with the driver version, e.g. "352.21".

  * __JOBS

    `nvidia-installer` detects the number of available processors during
    installation and, by default, builds the kernel modules with a level
    of parallelism matching the CPU count. This token should be replaced
    with that level of parallelism, which `nvidia-installer` fills in as
    an argument to `make` within the dkms.conf file.

  * __EXCLUDE_MODULES

    Some kernel modules, such as "nvidia-uvm.ko", are optional parts of
    the driver installation and may be excluded. The __EXCLUDE_MODULES
    token in the dkms.conf template included with the driver package
    should be replaced with a space-separated list of kernel module
    names (excluding the ".ko" filename extension) which should be
    excluded from the build.

  * __DKMS_MODULES

    This token should be replaced with a list of kernel modules that
    DKMS should install. Each entry in the list consists of two lines:
    a `BUILT_MODULE_NAME[$index]` line specifying the name of the kernel
    module (excluding the ".ko" filename extension), and a
    `DEST_MODULE_LOCATION[$index]` line specifying the destination where
    the built module should be installed (so long as no local
    distribution-specific policy overrides this location; see dkms(8)).

    Each pair of entries should carry a unique, incrementing, 0-based
    array index to tie together its pair of `BUILT_MODULE_NAME[]` and
    `DEST_MODULE_LOCATION[]` entries. For example, a dkms.conf to
    install the "nvidia" and "nvidia-uvm" kernel modules should contain
    entries for the `BUILT_MODULE_NAME[]` and `DEST_MODULE_LOCATION[]`
    arrays such as the following:

        BUILT_MODULE_NAME[0]="nvidia"
        DEST_MODULE_LOCATION[0]="/kernel/drivers/video"
        BUILT_MODULE_NAME[1]="nvidia-uvm"
        DEST_MODULE_LOCATION[1]="/kernel/drivers/video"

    As noted above, the built modules are now all saved to the top level
    of the kernel module source directory. Under the previous build
    system, the "nvidia-uvm" kernel module was built in a subdirectory,
    so it was necessary to set its `BUILT_MODULE_LOCATION[]` entry to
    that subdirectory path; under the new build system, this is no
    longer necessary.

  * "generated by nvidia-installer" comment

    A comment above the list of auto-generated `BUILT_MODULE_NAME[]`
    and `DEST_MODULE_LOCATION[]` entries is edited by the installer from
    "The list of kernel modules will be generated by nvidia-installer at
    runtime." to read "The list of kernel modules was generated by
    nvidia-installer at runtime." This change has no effect on the
    functionality of the dkms.conf file; however, the comment should be
    updated or removed when producing dkms.conf files that are based on
    the template included in the driver package but were not processed
    by `nvidia-installer`.
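
Putting these substitutions together, a filled-in dkms.conf for a
hypothetical installation of the "nvidia" and "nvidia-uvm" modules might
look like the following sketch. The PACKAGE_NAME, AUTOINSTALL, MAKE,
and CLEAN lines are illustrative assumptions rather than the exact
contents of the shipped template; start from the template in your driver
package and substitute only the tokens described above:

```shell
PACKAGE_NAME="nvidia"        # illustrative
PACKAGE_VERSION="352.21"     # __VERSION_STRING -> driver version
AUTOINSTALL="yes"            # illustrative

# __JOBS -> detected CPU count; __EXCLUDE_MODULES -> empty here, since
# no modules are excluded. The MAKE/CLEAN lines, including the
# NV_EXCLUDE_BUILD_MODULES variable, are assumptions for this sketch.
MAKE[0]="make -j8 NV_EXCLUDE_BUILD_MODULES='' KERNEL_UNAME=${kernelver} modules"
CLEAN="make KERNEL_UNAME=${kernelver} clean"

# The list of kernel modules was generated by nvidia-installer at runtime.
# (__DKMS_MODULES -> BUILT_MODULE_NAME/DEST_MODULE_LOCATION pairs)
BUILT_MODULE_NAME[0]="nvidia"
DEST_MODULE_LOCATION[0]="/kernel/drivers/video"
BUILT_MODULE_NAME[1]="nvidia-uvm"
DEST_MODULE_LOCATION[1]="/kernel/drivers/video"
```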

### List of built kernel modules ###

The list of built kernel modules should not be hardcoded into driver
repackaging tools, as it may vary between target architectures or driver
versions. The 355.xx release series will not add any new kernel modules
as compared to 352.xx; however, future driver releases will add new
kernel modules, which will be built using the new kernel module build
system.

The fourth line of the .manifest file at the top level of the extracted
contents of a .run installer package (pass the --extract-only or -x
option on the .run file's command line to extract its contents) contains
a space-separated list of the kernel modules included within that
package. `nvidia-installer` iterates over this list for any operations
that involve kernel modules; it would be prudent for repackaging tools
to use this list as well.
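
For example, a repackaging script might read the module list as shown
below. The manifest header lines here are fabricated placeholders used
only to simulate an extracted package; the one detail taken from the
real format is that the kernel module list occupies the fourth line:

```shell
# Simulate the top level of an extracted package: the first three lines
# of this stand-in .manifest are placeholders, and line 4 carries the
# space-separated kernel module list.
printf '%s\n' 'placeholder 1' 'placeholder 2' 'placeholder 3' \
    'nvidia nvidia-uvm' > .manifest

# Read the kernel module list from the fourth line.
kernel_modules=$(sed -n '4p' .manifest)

for module in $kernel_modules; do
    echo "package includes: ${module}.ko"
done
```

For the simulated manifest above, this prints "package includes:
nvidia.ko" followed by "package includes: nvidia-uvm.ko".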

### Kernel module signing ###

Support for kernel module signing within the kernel module makefiles has
been removed. The "module_sign" target that existed in the old build
system was merely a thin wrapper around the kernel's own `sign-file`
utility script: it required several arguments, only to pass them
directly on to `sign-file`. `nvidia-installer` now invokes `sign-file`
directly when performing module signing.
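
The direct invocation can be sketched as follows; the command is printed
here rather than executed, and the hash algorithm and key/certificate
paths are assumptions for illustration. The argument order shown is the
common `sign-file <hash> <private key> <x509 cert> <module>` form, but
consult the scripts/sign-file shipped with your target kernel:

```shell
# Construct the sign-file command line that would sign nvidia.ko.
# The sha256 hash and the key/certificate paths are illustrative
# assumptions; substitute the values used on your system.
kernel_build="/lib/modules/$(uname -r)/build"
cmd="$kernel_build/scripts/sign-file sha256 /path/to/signing_key.priv /path/to/signing_key.x509 nvidia.ko"
echo "$cmd"
```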