
Deployment using Quadlets

This article is new for Certificate Manager 8.10.4.

This article describes how to install Smart ID Certificate Manager (CM) server components using quadlets.

Prerequisites

  • A supported database server must be installed/available

  • License file must be available

  • Podman version 4.9.4 or later

  • Red Hat Enterprise Linux 9.4 or Rocky Linux 9.4

  • Administrator's Workbench, Registration Authority, and Certificate Controller clients from the CM distributable package.

Step-by-step instructions

Pre-configuration

A few pre-configuration steps are required before CM can be deployed. To prepare the deployment with an initial configuration, follow the steps in the sections below.

CM image archive files

The Podman images of Certificate Manager are stored in the images directory under the distributable. These image files may be uploaded to a local private container registry with controlled and limited access, but shall not be distributed to any public container registry.

For local use, the images can be loaded with the commands below:

CODE
podman image load --input images/cf-server.tar
podman image load --input images/pgw.tar
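
If a local private registry is used instead, the loaded images can be tagged and pushed to it. A minimal sketch, assuming a hypothetical registry at registry.example.local; the cf-server image tag mirrors the pgw tag used in the Troubleshooting example and may differ in your distributable:

CODE
podman image tag smartid/certificatemanager/cf-server:8.10.3-1 registry.example.local/smartid/certificatemanager/cf-server:8.10.3-1
podman push registry.example.local/smartid/certificatemanager/cf-server:8.10.3-1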

CM license file

Create a license directory in the cm deployment directory and place the CM license files inside it. In this article, the license directory will be mounted as
a read-only bind file system volume for the cf-server container, which runs the certificate factory server.
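
For example, a minimal sketch (the file name cm.lic is a hypothetical placeholder; use the license file names from your delivery):

CODE
mkdir -p license
cp /path/to/cm.lic license/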

Deployment directory

When deploying using quadlets, the directory in which the distributable deployment files must be placed depends on the user running the containers. For a rootless deployment it maps to the following directory:

$HOME/.config/containers/systemd/

Initialize the CM deployment

Before continuing with the CM deployment on quadlets, follow the steps for the corresponding database on its setup page.

To handle CM on Podman in a production system, it is recommended to create quadlets for each container. Example quadlets can be found in the quadlets directory inside the distributable cm directory.

For rootless deployment, the .container, .volume and .network files from the quadlets directory need to be copied to the following location, assuming that the current user is the operator for the container deployment:

$HOME/.config/containers/systemd

If the directory does not exist it can be created.

The license directory containing the CM license files must also be copied to the above directory.
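
A minimal sketch of the copy steps above, assuming the commands are run from the distributable cm directory:

CODE
mkdir -p $HOME/.config/containers/systemd
cp quadlets/*.container quadlets/*.volume quadlets/*.network $HOME/.config/containers/systemd/
cp -r license $HOME/.config/containers/systemd/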

Once the .container, .volume, .network and license files are in place they can be loaded into systemd using:

systemctl --user daemon-reload

This will create a systemd service for each container and volume, which can then be started individually.
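
To verify that the units were generated, the user services can be listed; a sketch (the unit name patterns depend on the quadlet file names):

systemctl --user list-units --all 'cf-server*' 'pgw*'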

The deployment procedure for CM with quadlets is essentially the same as for podman-compose, with the exception that the containers are managed by using systemd. Volumes only need to be started once, but may be restarted if the volumes are removed for any reason.

Examples (for rootless deployment):

CODE
systemctl --user start cf-server-bin-volume
systemctl --user start cf-server-certs-volume
systemctl --user start cf-server-config-volume
systemctl --user start cf-server

The container images should be pulled or loaded manually before starting any of the containers using systemd. Because systemd units run without a TTY (a virtual text console), an image pull triggered at startup can cause problems: the systemctl command will appear to time out while halting and waiting for console user input. Pulling the images manually with "podman pull" is preferable, as it permits user input in case an option needs to be selected.
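
A sketch of a manual pull, assuming the images were pushed to the hypothetical private registry described under CM image archive files:

CODE
podman pull registry.example.local/smartid/certificatemanager/cf-server:8.10.3-1
podman pull registry.example.local/smartid/certificatemanager/pgw:8.10.3-1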

Any changes to the systemd .container, .volume, or .network files, or to local bind volumes in the systemd directory, require a daemon reload:
systemctl --user daemon-reload

Containers running from quadlets/systemd will be removed when the systemd service is stopped. Any data on the container not stored on volumes will be lost. 

Add the CM database connection

To add the CMDB connection to the CM configuration, a JDBC connection must be added.

Add the following --cm-param flags to the /quadlets/cf-server.container Exec property to make the CF container start with a correctly configured JDBC connection:

CODE
Exec=5009 combo --cm-param Database.name=<jdbc-connection-string>\
 --cm-param Database.user=<user> --cm-param Database.password=<password> --cm-param Database.connections=20
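
A filled-in sketch, assuming a hypothetical PostgreSQL database cmdb on host db.example.local; substitute the JDBC connection string, user, and password for your environment:

CODE
Exec=5009 combo --cm-param Database.name=jdbc:postgresql://db.example.local:5432/cmdb \
 --cm-param Database.user=cmuser --cm-param Database.password=secret --cm-param Database.connections=20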

Alternatively, add the JDBC connection manually to the cm.conf file in the following way.

The CF container needs to be started once to initialize the volumes with the configuration files; this first start of CF will fail because no JDBC connection is configured yet.

systemctl --user start cf-server

The cm.conf can then be configured with the JDBC connection at the following path:

$HOME/.local/share/containers/storage/volumes/systemd-cf-server-config/_data/
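
A sketch of the corresponding entries in cm.conf, assuming the parameter names match the --cm-param flags above; this is an assumption, so check the comments in the generated cm.conf for the exact syntax:

CODE
Database.name = jdbc:postgresql://db.example.local:5432/cmdb
Database.user = cmuser
Database.password = secret
Database.connections = 20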

Then restart the CF container:

systemctl --user restart cf-server

Post-configuration

Accessing the CM containers using the CM clients

At this point the CF is ready to accept connections on the exposed CF container port, so it is now possible to connect using the Administrator's Workbench and Registration Authority clients. These clients can be installed from the CM distributable zip package.

There might be firewall rules on the Podman host blocking the exposed container ports.
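
If the host uses firewalld, a sketch for opening the CF port (assuming 5009, the port used in the Exec example above):

CODE
sudo firewall-cmd --permanent --add-port=5009/tcp
sudo firewall-cmd --reload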

Accessing configuration on the container volumes

At this point the deployment will have configuration, certificates, and other persistent data on volumes mounted in the CF containers. To make changes in any of the configuration files, or just to copy files, the volumes need to be accessed either from inside the containers or by mounting them elsewhere.

For example, to edit configuration in CM from inside the container:

podman exec -ti cf-server bash

This will start a new shell inside the cf-server container, which allows editing of cm.conf and other files. The base directory where the tools and configuration can be found is /opt/cm/server/. It is also possible to mount a named volume on the host in case the container cannot start properly for some reason. For more details, consult the Podman documentation.

In most cases the volumes can also be reached from the local file system; the container's volume mounts can be listed with the command below:

podman inspect cf-server
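
To show only the volume names and their mount points, podman inspect accepts a Go template; a sketch (the field names follow the JSON output of the plain inspect command):

podman inspect --format '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{println}}{{end}}' cf-server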

Initializing the Protocol Gateway container

The Protocol Gateway container, here named pgw, is based on an Apache Tomcat version 10 image and contains a configuration for a minimal deployment. The Protocol Gateway servlets are deployed but none of them are started.

For HTTPS in Tomcat, server.xml contains a default PKCS#12 TLS server token file name, "protocol-gateway-tls.p12", and a default password, but the token file itself is not included. It needs to be issued, after which it can be uploaded to the Tomcat configuration directory (or to a different volume-backed path configured in server.xml).
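
Once a server certificate has been issued, a sketch for packaging it into the expected PKCS#12 file; tls.crt and tls.key are hypothetical file names, and the export password must match the one configured in server.xml:

CODE
openssl pkcs12 -export -in tls.crt -inkey tls.key -name tomcat -out protocol-gateway-tls.p12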

To initialize the volumes from the Protocol Gateway image data, the pgw container should be started up, and once Tomcat has started, the container should be stopped again:

CODE
systemctl --user start pgw
podman logs -f pgw
systemctl --user stop pgw

Configuring the Protocol Gateway container

The pgw container has two volumes by default:

CODE
systemd-pgw-config-gw
systemd-pgw-config-tomcat

The systemd-pgw-config-gw volume contains configuration related to PGW; this includes configuration of the different certificate issuance protocols that PGW supports.

The systemd-pgw-config-tomcat volume contains configuration related to Tomcat; this includes configuration of the different connectors that Tomcat should listen on.

The pgw container should be configured while it is stopped. One way to modify the configuration in these volumes is to access them from the Podman host's file system. The volumes can be found in the following directories:

CODE
$HOME/.local/share/containers/storage/volumes/systemd-pgw-config-gw/_data/
$HOME/.local/share/containers/storage/volumes/systemd-pgw-config-tomcat/_data/

It is possible to edit the files from within the pgw container. However, this is not recommended due to the limited set of utility tools available inside the container.

Enabling the pgw container health check

For the health check to pass, the Ping servlet requires the valid virtual registration officer token from the previous step, and a ping procedure must also be configured in CM.

Once the token is configured and the ping procedure is available, the Ping servlet must be started by setting the "start=true" parameter in the ping.properties file in the systemd-pgw-config-gw volume.
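
A sketch for enabling the servlet from the Podman host, assuming ping.properties carries the start parameter described above:

CODE
cd $HOME/.local/share/containers/storage/volumes/systemd-pgw-config-gw/_data/
sed -i 's/^start=.*/start=true/' ping.properties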

Warnings will be logged in the Protocol Gateway if the Ping servlet does not refer to a Ping procedure in CM. See the Technical Description document for configuring this.

Starting the pgw container

Once the configuration has been edited the pgw container can be started:

systemctl --user start pgw

This is the required minimum configuration for setting up the Protocol Gateway. Additional volumes for output directories, HSM or other libraries, additional configuration, or web applications may be added if required.

Connecting to services running on the Podman host

While it is not recommended, there might be situations where a connection from a container to the Podman host machine is needed. As Podman uses the slirp4netns network driver by default, no routing is configured to reach the host's localhost/127.0.0.1 address directly. Instead, the special IP address 10.0.2.2 can be used to reach the Podman host's localhost, after adding the following configuration depending on the deployment type:

Deployment type: Quadlets
Location: In the [Container] section
Configuration: Network=slirp4netns:allow_host_loopback=true
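
To verify the route from inside a running container, a sketch; it assumes a service listening on port 8080 on the host, and that curl is available in the container, which is not guaranteed given the limited tool set:

podman exec -ti cf-server bash -c 'curl -v http://10.0.2.2:8080/'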

Preventing shutdown of containers

Podman containers which belong to a user session will shut down after the user's session ends, i.e. when the user logs out of the Podman host machine.

An easy way to prevent the shutdown is by enabling lingering via loginctl:

loginctl enable-linger <user>
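
To confirm that lingering is enabled for the user, a sketch:

loginctl show-user <user> | grep Linger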

HSM configuration

HSM libraries are stored by default in the directory /opt/cm/server/bin, which is also backed by a volume by default. However, another location in the container can be used, for example one pointed to by the LD_LIBRARY_PATH environment variable inside the container. The configuration location for the HSM should be indicated in its provided documentation.

It is recommended to create additional volumes for both the library and its configuration, so that they are persistent and can be upgraded to newer versions.
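
A sketch of such additions to the cf-server.container file; the volume names and container paths below are hypothetical examples:

CODE
[Container]
Volume=cf-server-hsm-lib:/opt/hsm/lib:Z
Volume=cf-server-hsm-config:/opt/hsm/etc:Z
Environment=LD_LIBRARY_PATH=/opt/hsm/lib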

The CM configuration files contain documentation for the parameters where an HSM library should be configured. To test and configure an HSM for use with CM, the "hwsetup" tool can be used. See Initialize Hardware Security Module for use in Certificate Manager for more details.

Troubleshooting

The container logs can be monitored using the "podman logs" command in order to narrow down any issues that might occur. If any of the containers fail to start up it is commonly necessary to access the configuration inside the container.

A simple way to handle this is to start another container mounting the volumes and overriding the image's entry point, for example:

podman run --rm --network cm_cmnet --user 0 --entrypoint /bin/bash -v cm_pgw-config-tomcat:/tomcat-cfg:z -ti smartid/certificatemanager/pgw:8.10.3-1

Even if the faulty container is down or unsuccessfully trying to restart, this temporary container allows for editing the configuration on the mounted volumes, and files can be copied between them and the Podman host.
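
Files can also be copied between the host and a container (running or stopped) with podman cp; a sketch, assuming cm.conf resides in a config directory under the /opt/cm/server/ base path mentioned earlier (the exact subdirectory is an assumption; verify it inside the container):

podman cp cf-server:/opt/cm/server/config/cm.conf ./cm.conf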
