Database Reference Stack

This guide describes the hardware and installation requirements for using the DBRS, along with getting started configuration examples, using Clear Linux* OS as the host system.


The Database Reference Stack is integrated, highly performant, open source, and optimized for 2nd generation Intel® Xeon® Scalable processors and Intel® Optane™ persistent memory. This open source community release is part of an effort to ensure developers have easy access to the features and functionality of Intel platforms.

Stack Features

Currently supported database applications are Apache Cassandra* and Redis*, both of which have been enabled for Intel Optane PMem.

DBRS with Apache Cassandra can be deployed as a standalone container or inside a Kubernetes* cluster.

The Redis stack application is enabled for a multinode Kubernetes environment, using Intel Optane PMem (AEP) DIMMs in fsdax mode for storage.


Refer to the Database Reference Stack website for information and download links for the different versions and offerings of the stack.

The release announcement for each release provides more detail about the stack features, as well as benchmark results.


The Database Reference Stack is a collective work, and each piece of software within the work has its own license. Please see the DBRS Terms of Use for more details about licensing and usage of the Database Reference Stack.

Hardware Requirements

  • Intel Xeon Scalable platform with Intel® C620 series chipset

  • 2nd Gen Intel Xeon Scalable processor (Intel® Optane™ PMem-enabled stepping), which provides cache and memory control. Intel Optane PMem works only on systems powered by 2nd Generation Intel® Xeon® Platinum or Intel® Xeon® Gold processors.

  • BIOS with Reference Code

  • Intel Optane PMem

Hardware configuration used in stacks development

  • Intel® Server System R2208WFTZSR

  • BIOS with Reference Code:
      - BIOS ID: SE5C620.86B.0D.01.0438.032620191658
      - BMC Firmware: 1.94.6b42b91d
      - Intel Optane PMem Firmware:

  • 2x Intel Xeon Platinum 8268 Processor

  • Intel® SSD Data Center Family S5600 Series 960GB 2.5in SATA Drive

  • 64 GB RAM - Distributed in 4x 16 GB DDR4 DIMM’s

  • 2x Intel Optane PMem 256GB Module

  • 1-1-1 layout, 8:1 Intel Optane PMem to RAM ratio

Table 1. IMC channel and slot layout

  Channel 2: Slot 1, Slot 0
  Channel 1: Slot 1, Slot 0
  Channel 0: Slot 1, Slot 0




Firmware configuration


When updating DCPMM Firmware, all DCPMM parts must be in the same mode (you cannot mix 1LM and 2LM parts).

The latest firmware download for the Intel® Server Board S2600WF Family is available at the Intel Download Center.

Firmware Update Steps

  1. Unzip the contents of the update package and copy all files to the root directory of a removable media (USB flash drive).

  2. Insert the USB flash drive to any available USB port on the system to be updated.

  3. Boot to EFI shell.

  4. Type "fsX:" (where X is the USB drive's file system number, for example fs0: or fs1:) to switch to the USB drive.

  5. Run “startup.nsh”

  6. After the BMC firmware, system BIOS, ME firmware, FD, and FRUSDR are updated, the system reboots automatically.

If Intel Optane PMem is installed, run startup.nsh a second time after the first reboot to upgrade Intel Optane PMem Firmware:

  • Boot to EFI shell.

  • Type "fsX:" (where X is the USB drive's file system number, for example fs0: or fs1:) to switch to the USB drive.

  • Run “startup.nsh” again to update the corresponding AEP FW.

Hardware Configuration

Online Resources

Before going through the configuration steps, we strongly recommend visiting the following resources and wikis for a broader understanding of what is being done.

Optane™ DIMM Configuration

The PMem DIMMs can be configured in devdax or fsdax mode. The use case of enabling the database stack in a Kubernetes environment currently supports only fsdax mode.

Configuration Steps


Run the following steps with root privileges (sudo), as shown in the examples.

  1. To configure Optane™ DIMMs for App Direct mode, run this command:

    sudo ipmctl create -goal PersistentMemoryType=AppDirect
  2. Verify the Optane™ configuration by showing the defined regions, then reboot the system for your changes to take effect:

    sudo ipmctl show -region
  3. Next, list the defined namespaces for the pmem devices in the system. If they are not defined, create them as shown in the following step.

    sudo ndctl list -N
  4. Create namespaces based on the regions and set the mode to fsdax. Use the names of the regions listed in the previous step as the --region parameter (the defaults are region0 and region1, one for each CPU socket).

    sudo ndctl create-namespace --region=region0 --mode=fsdax
    sudo ndctl create-namespace --region=region1 --mode=fsdax
  5. Create the filesystem and mount it. This guide uses /mnt/dax{#} as the convention for mount points, so create those directories first.

    sudo mkdir -p /mnt/dax0 /mnt/dax1
    sudo mkfs.ext4 /dev/pmem0
    sudo mount -o dax /dev/pmem0 /mnt/dax0
    sudo mkfs.ext4 /dev/pmem1
    sudo mount -o dax /dev/pmem1 /mnt/dax1

Running DBRS with Apache Cassandra*

DBRS with Apache Cassandra can be deployed as a standalone container or inside Kubernetes*. Instructions for both cases are included here. You can use the released Docker image with Apache Cassandra (see the Docker* examples below); the instructions in this section provide a baseline for creating your own container image. If you are using the released image, skip this section.


At the initial release of DBRS, Apache Cassandra is considered to be of Engineering Preview release quality and may not be suitable for production. Please take this into consideration when planning your project.

Build the DBRS with Apache Cassandra container

To build the container with Apache Cassandra, you must build cassandra-pmem, and then build the container using the docker build command. We are using Clear Linux OS as our container host as well as the OS in the container.

Build cassandra-pmem


At the initial release of DBRS, the pmem-csi driver is considered to be of Engineering Preview release quality and may not be suitable for production. Please take this into consideration when planning your project.

The DBRS GitHub repository includes a build script that handles all the requirements for compiling cassandra-pmem for Dockerfile usage. The dependencies for this build can be installed with swupd:

sudo swupd bundle-add c-basic java-basic devpkg-pmdk pmdk

Once the dependencies are installed, run the build script.


At the completion of the build you will have a file called cassandra-pmem-build.tar.gz. Place this file in the same directory with the Dockerfile to build the Docker image.

Build the Docker container

To build the Docker image, run the Dockerfile in the same directory as the cassandra-pmem-build.tar.gz file:

docker build --force-rm --no-cache -f Dockerfile -t $build_image_name .

Once it completes, the Docker image is ready to be used.

Deploy Apache Cassandra PMEM as a standalone container


To deploy Apache Cassandra PMEM, you must meet the following requirements:

  • PMEM must be configured in devdax or fsdax mode. The container image can handle both modes, but the mount points inside the container differ depending on the mode.

  • To make a devdax pmem device available inside the container, use the --device directive. Internally the container always uses /dev/dax0.0, so the mapping should be: --device=/dev/<host-device>:/dev/dax0.0

  • Similarly, for fsdax the device must be mapped to /mnt/pmem inside the container: --mount type=bind,source=<source-mount-point>,target=/mnt/pmem

Preparing PMEM for container use

The cassandra-pmem image can use both fsdax and devdax. The steps needed to configure the PMEM to work with Cassandra are documented here.

Verify that the device you want to use is in devdax mode (sudo ndctl list -N shows the mode of each namespace). If needed, reconfigure it:

sudo ndctl create-namespace -fe namespace0.0 --mode=devdax

The command prints the resulting namespace configuration as JSON, including its mode and size.

Before using a devdax device we need to clear the device:

sudo pmempool rm -vaf /dev/dax0.0

The jvm.options configuration for Apache Cassandra should look like the following:
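A minimal sketch of the relevant entries (the -D property form shown here is an assumption inferred from the parameter names described below, not taken verbatim from the release; /dev/dax0.0 is the device path the container uses internally):

```text
-Dpmem_path=/dev/dax0.0
-Dpool_size=0
```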


Where:

  • pmem_path is the devdax device.

  • pool_size=0 indicates that the entire devdax device should be used.

When using the Docker image with Apache Cassandra, the file jvm.options is automatically populated.

Run the DBRS Container

Replace <image-id> in the following commands with the name of the image you are using.

In devdax mode:

docker run --device=/<devdax-device>:/dev/dax0.0 --ulimit nofile=262144:262144 -p 9042:9042 -p 7000:7000 -it --name cassandra-test <image-id>
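In fsdax mode, the device is bind mounted instead, following the mapping described in the requirements above (a sketch; <fsdax-mountpoint> and <image-id> are placeholders for your mount point and image name):

```shell
docker run --mount type=bind,source=/<fsdax-mountpoint>,target=/mnt/pmem --ulimit nofile=262144:262144 -p 9042:9042 -p 7000:7000 -it --name cassandra-test-fsdax <image-id>
```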

Container Configuration

Using environment variables

The container listens on the primary container IP address, but if required, some parameters can be provided as environment variables using --env.

  • CASSANDRA_CLUSTER_NAME Cassandra cluster name, by default Cassandra Cluster

  • CASSANDRA_LISTEN_ADDRESS Cassandra listen address

  • CASSANDRA_RPC_ADDRESS Cassandra RPC address

  • CASSANDRA_SEED_ADDRESSES A comma-separated list of hosts in the cluster. If not provided, Cassandra runs as a single node.

  • CASSANDRA_SNITCH The snitch type for the cluster; the default is SimpleSnitch. For more complex snitches you can mount your own file.

  • LOCAL_JMX If set to no, the JMX service listens on all IP addresses; the default is yes, which listens only on localhost.

  • JVM_OPTS Passes additional arguments to the JVM for the Cassandra execution, for example to specify heap sizes: JVM_OPTS=-Xms16G -Xmx16G -Xmn12G

When using PMEM in fsdax mode, there are some parameters to control the allocation of memory:

  • CASSANDRA_FSDAX_POOL_SIZE_GB The size of the fsdax pool in GB; if not specified, the pool size defaults to 1.

  • CASSANDRA_PMEM_POOL_NAME The filename of the pool created in PMEM; the default is cassandra_pool.
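For example, several of these variables can be combined in a single invocation (a sketch; <image-id> is a placeholder for your image name and the heap sizes are illustrative):

```shell
docker run --ulimit nofile=262144:262144 -p 9042:9042 -p 7000:7000 -it \
  --env 'CASSANDRA_CLUSTER_NAME=Test Cluster' \
  --env 'JVM_OPTS=-Xms16G -Xmx16G -Xmn12G' \
  --name cassandra-env <image-id>
```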

Using custom files

For more complex deployments it is also possible to provide custom cassandra.yaml and jvm.options files as shown below:

docker run --mount type=bind,source=/<fsdax-mountpoint>,target=/mnt/pmem -it  --ulimit nofile=262144:262144 --mount type=bind,source=/<path-to-file>/cassandra.yaml,target=/workspace/cassandra/conf/cassandra.yaml --mount type=bind,source=/<path-to-file>/jvm.options,target=/workspace/cassandra/conf/jvm.options --name cassandra-custom-files <image-id>


For a simple two node cluster using PMEM in fsdax mode on both containers:

Node 1

  • IP:

  • PMEM mountpoint: /mnt/pmem1

docker run --mount type=bind,source=/mnt/pmem1,target=/mnt/pmem  --ulimit nofile=262144:262144 -it -e 'CASSANDRA_FSDAX_POOL_SIZE_GB=2' -e 'CASSANDRA_SEED_ADDRESSES=,'  --name cassandra-node1 <image-id>

Node 2

  • IP:

  • PMEM mountpoint: /mnt/pmem2

docker run --mount type=bind,source=/mnt/pmem2,target=/mnt/pmem  --ulimit nofile=262144:262144 -it -e 'CASSANDRA_FSDAX_POOL_SIZE_GB=2' -e 'CASSANDRA_SEED_ADDRESSES=,'  --name cassandra-node2 <image-id>

Once both nodes are running and gossip has settled, use nodetool on either container to check the cluster status.

docker exec -it <container-id> bash /workspace/cassandra/bin/nodetool status

The output should look similar to this:

Datacenter: datacenter1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens       Owns (effective)  Host ID                               Rack
UN  0 bytes    256          100.0%            22387159-8192-41cf-8b6c-8bf0e1049eb7  rack1
UN  0 bytes    256          100.0%            219b56ba-c07c-400b-a018-a5dc20edeb09  rack1


By default, the data written to Apache Cassandra is accessible only as long as the container exists. To persist data beyond that, mount volumes or bind mounts on /workspace/cassandra/data and /workspace/cassandra/logs; the data then remains accessible after the container is deleted.
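For instance, the data and logs directories can be bind mounted in the run command (a sketch; the host paths are placeholders you would create beforehand, and <image-id> is your image name):

```shell
docker run --mount type=bind,source=/<fsdax-mountpoint>,target=/mnt/pmem \
  --mount type=bind,source=/<host-data-dir>,target=/workspace/cassandra/data \
  --mount type=bind,source=/<host-logs-dir>,target=/workspace/cassandra/logs \
  --ulimit nofile=262144:262144 -it --name cassandra-persist <image-id>
```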

Deploy An Apache Cassandra-PMEM cluster on Kubernetes*

Many containerized workloads are deployed in clusters and orchestration software like Kubernetes can be useful. We will use the cassandra-pmem-helm Helm* chart in this example.


  • Kubectl* must be configured to access the Kubernetes Cluster

  • A Kubernetes cluster with pmem-csi enabled

  • The Kubernetes cluster must have helm and tiller installed

  • PMEM hardware


When selecting the fsdax pool file size, keep in mind that when a volume is requested, a certain amount of space on it is used by filesystem metadata, so the available space is less than the total amount requested. Taking this into account, the size of the fsdax pool file should be ~2G less than the total volume size requested.
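As a worked example of this sizing rule: requesting a 10 GiB volume and reserving ~2 GiB for filesystem metadata leaves about 8 GiB for the fsdax pool file.

```shell
# Sizing rule from above: pool size = requested volume size
# minus ~2 GiB reserved for filesystem metadata on that volume.
VOLUME_SIZE_GB=10
METADATA_RESERVE_GB=2
POOL_SIZE_GB=$((VOLUME_SIZE_GB - METADATA_RESERVE_GB))
echo "${POOL_SIZE_GB}"   # 8
```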


In order to configure the Apache Cassandra PMEM cluster some variables and values are provided. These values are set in test/cassandra-pmem-helm/values.yaml, and can be modified according to your specific needs. A summary of those parameters is shown below:

  • clusterName: The cluster Name set across all deployed nodes

  • replicaCount: The number of nodes in the cluster to be deployed

  • image.repository: The address of the container registry where the cassandra-pmem image should be pulled

  • image.tag: The tag of the image to be pulled during deployment


  • pmem.containerPmemAllocation: The size of the persistent volume claim to be used as heap; it uses the storage class pmem-csi-sc-ext4 from pmem-csi

  • pmem.fsdaxPoolSizeInGB: The size of the fsdax pool to be created inside the persistent volume claim, in practice it should be 1G less than pmem.containerPmemAllocation

  • enablePersistence: If set to true, K8s persistent volumes are deployed to store data and logs

  • persistentVolumes.logsVolumeSize: The size of the persistent volume used for storing logs on each node, the default is 4G

  • persistentVolumes.dataVolumeSize: The size of the persistent volume used for storing data on each node, the default is 4G

  • persistentVolumes.logsStorageClass: Storage class used by the logs pvc, by default it uses pmem-csi-sc-ext4

  • persistentVolumes.dataStorageClass: Storage class used by the data pvc, by default it uses pmem-csi-sc-ext4

  • provideCustomConfig: If set to true, it mounts all the files located in <helm-chart-dir>/files/conf at /workspace/cassandra/conf inside each container, providing a way to customize the deployment beyond the options listed here

  • exposeJmxPort: When set to true, it exposes the JMX port as part of the Kubernetes headless service. It should be used together with enableAdditionalFilesConfigMap in order to provide the authentication files needed for JMX when remote connections are allowed. When set to false, only local access through localhost is granted and no additional authentication is needed.

  • enableClientToolsPod: If set to true, an additional pod independent of the cluster is deployed. This pod contains various Cassandra client tools and mounts the test profiles located under <helm-chart-dir>/files/testProfiles to /testProfiles inside the pod; it is useful for testing and launching benchmarks

  • enableAdditionalFilesConfigMap: When set to true, it takes the files located in <helm-chart-dir>/files/additionalFiles and mounts them in /etc/cassandra inside the pods; additional files for Cassandra, such as JMX auth files, can be stored here

  • jvmOpts.enabled: If set to true the environment variable JVM_OPTS is overridden with the value provided on jvmOpts.value

  • jvmOpts.value: Sets the value of the environment variable JVM_OPTS, in this way some java runtime configurations can be provided such as RAM heap usage

  • resources.enabled: if set to true, the resource constraints are set on each pod using the values under resources.requests and resources.limits

  • resources.requests.memory: Initial resource allocation for each pod in the cluster

  • resources.requests.cpu: Initial resource allocation for each pod in the cluster

  • resources.limits.memory: Limits for memory allocation for each pod in the cluster

  • resources.limits.cpu: Limits for cpu allocation for each pod in the cluster
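Pulling the parameters above together, a values.yaml might look like the following (a sketch only; the registry address and tag are placeholders, and the values shown are illustrative, based on the defaults described above):

```yaml
clusterName: "Cassandra Cluster"
replicaCount: 3
image:
  repository: <your-registry>/cassandra-pmem   # placeholder registry
  tag: <tag>                                   # placeholder tag
pmem:
  containerPmemAllocation: 10G   # PVC size used as heap (pmem-csi-sc-ext4)
  fsdaxPoolSizeInGB: 9           # in practice, 1G less than the allocation
enablePersistence: true
persistentVolumes:
  logsVolumeSize: 4G
  dataVolumeSize: 4G
  logsStorageClass: pmem-csi-sc-ext4
  dataStorageClass: pmem-csi-sc-ext4
provideCustomConfig: false
exposeJmxPort: false
enableClientToolsPod: false
enableAdditionalFilesConfigMap: false
jvmOpts:
  enabled: false
  value: ""
resources:
  enabled: false
```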


Once all the configurations are set, to install the chart inside a given Kubernetes cluster you must run:

helm install ./cassandra-pmem-helm

Eventually all the given nodes will be shown as Running by kubectl get pods.

Running DBRS with Redis

The Redis stack application is enabled for a multinode Kubernetes environment, using Intel Optane PMem (DCPMM) DIMMs in fsdax mode for storage.

The source code used for this application can be found in the GitHub repository.

The following examples use the Docker image with Redis. You can also build your own image with Docker by using the Dockerfile and running this command:

docker build --force-rm --no-cache -f Dockerfile -t ${DOCKER_IMAGE} .

Single node

Prior to starting the container, you will need to have the Intel Optane DCPMM module configured in fsdax mode with a file system and mounted at /mnt/dax0, as shown above.

Use the following to start the container, replacing ${DOCKER_IMAGE} with the name of the image you are using.

docker run --mount type=bind,source=/mnt/dax0,target=/mnt/pmem0 -i -d --name pmem-redis ${DOCKER_IMAGE} --nvm-maxcapacity 200 --nvm-dir /mnt/pmem0 --nvm-threshold 64 --protected-mode no

Redis Operator in a Kubernetes cluster

After setting up Kubernetes* in Clear Linux OS, you will need to enable DCPMM support using the pmem-csi driver. To install the driver, follow the instructions in the pmem-csi repository.

We are using the source code from the Redis operator.


If you already have a redis-operator, you will need to delete it before installing a new one.

After installing the operator, you are ready to deploy redisfailover instances using a YAML file, like this example for persistent memory. Download it and change the image source to reflect your environment. We have named our YAML file redis-failover.yml.
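The overall shape of such a file is sketched below. The apiVersion, kind, and field names here are assumptions about the Redis operator's RedisFailover custom resource (spotahome redis-operator), not taken from this release; the image value is the placeholder you would change for your environment:

```yaml
apiVersion: databases.spotahome.com/v1
kind: RedisFailover
metadata:
  name: redisfailover
spec:
  sentinel:
    replicas: 3
  redis:
    replicas: 3
    image: <your-registry>/pmem-redis:<tag>   # point this at your environment
```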

To start a redisfailover instance in Kubernetes run the following

kubectl create -f redis-failover.yml


There is a known issue in which the sentinels do not have enough memory to create the InitContainer. The current workaround is to build the image with the InitContainer memory limit increased to 32 MB.

Running DBRS with Memcached

With DBRS V2.0 you can use the DBRS stack with Memcached, a free and open source, high-performance, distributed memory object caching system. This stack is ready to use DCPMM in fsdax mode for storage. The source for this application can be found in the Memcached repository.


The DBRS v2.0 release does not support Redis or Cassandra.

Build the DBRS Memcached image

To build the Memcached enabled image, use the Dockerfile with this command:

docker build --force-rm --no-cache -f Dockerfile -t ${DOCKER_IMAGE} .

Run DBRS with Memcached as a standalone container

Prior to launching the container, you will need to configure the DCPMM in fsdax mode with a file system, and have it mounted in /mnt/dax0. Instructions for configuration can be found in Hardware Configuration.

To launch the container run this command:

docker run --mount type=bind,source=/mnt/dax0,target=/mnt/pmem0 -i -d --name pmem-memcached ${DOCKER_IMAGE} -e /mnt/pmem0/memcached.file -m 64 -c 1024 -p 11211


Where:

  • -m is the maximum memory limit to use, in megabytes.

  • -e is the mmap path for external memory (DCPMM storage). For this container, the DCPMM should be mounted inside the container at /mnt/pmem0.

  • -c is the number of concurrent connections.

  • -p is the TCP connection port.
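Once the container is up, a quick connectivity check can be made against the TCP port (a sketch; <host> is a placeholder for the container's IP address, and nc (netcat) is assumed to be installed on the host):

```shell
# Ask the server for its version over the memcached text protocol,
# then close the connection.
printf 'version\r\nquit\r\n' | nc <host> 11211
```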

For more information, please refer to this blog post from Memcached.

Intel, Xeon, Intel Optane, and the Intel logo are trademarks of Intel Corporation or its subsidiaries.