Nexus' software components have new names:

Nexus PRIME -> Smart ID Identity Manager
Nexus Certificate Manager -> Smart ID Certificate Manager
Nexus Hybrid Access Gateway -> Smart ID Digital Access component
Nexus Personal -> Smart ID clients




This article describes how to upgrade the Smart ID Digital Access component from version 5.13.x (5.13.1, 5.13.2, 5.13.3, 5.13.4 or 5.13.5) to 6.0.5 or above, for a single node appliance as well as for a high availability or distributed setup.

Upgrades from 5.13.1 to 6.0.5 or above should not be done through v-apps or the admin GUI. Follow the steps in this article for a successful upgrade.

Important

  • Version 5.13.0 cannot be upgraded directly to 6.0.5 or above. You must first upgrade to 5.13.1 or above, since 5.13.0 runs Ubuntu 14.04, which is not a supported version for Docker.

These are one-time steps to set up the system to use Docker and Docker Swarm. Once this is in place, later upgrades will be much easier.

There are two options, described below, for upgrading from version 5.13.1 and above to 6.0.5 and above:

  1. Migrate - This section describes how to upgrade, as well as migrate, the Digital Access instance from the appliance to a new VM by exporting all data and configuration files with the help of the provided script. (Recommended)
  2. Upgrade - This section describes how to upgrade Digital Access to the newer version in the existing appliance.

Download latest updated scripts

If you downloaded the upgradeFrom5.13.tgz file before 18 August 2021, download it again to get the latest updated scripts.


  1. Make sure that the required resources (memory, CPU and hard disk) are available on the new machines.
  2. Install docker and xmlstarlet. The upgrade/migrate script installs them if they are not already installed, but for an offline upgrade (no internet connection on the machine) you must install the latest versions of docker and xmlstarlet before running the migration steps.
  3. The following ports must be open to traffic to and from each Docker host participating on an overlay network (a firewall example follows this list):
    1. TCP port 2377 for cluster management communications.
    2. TCP and UDP port 7946 for communication among nodes.
    3. UDP port 4789 for overlay network traffic.
  4. For HA setup only:
    1. Make sure you have a similar set of machines, since the placement of services will be the same as on the existing setup. For example, if you have two appliances in an HA setup, you must have two new machines to migrate the setup to.
    2. Identify the nodes; the new setup should have an equal number of machines. Create a mapping of machines from the old setup to the new setup.
    3. Identify the manager node. In a Docker Swarm deployment, one machine is the manager node. Make sure that the machine replacing the administration-service node of the older setup becomes the manager node during the upgrade/migration.
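
  If a host firewall is enabled on the machines, open the ports listed above before the upgrade. The commands below are a minimal sketch assuming Ubuntu's ufw firewall; adapt them to the firewall you actually use.

    Open swarm ports (example, ufw assumed)
    sudo ufw allow 2377/tcp   # cluster management communications
    sudo ufw allow 7946/tcp   # communication among nodes
    sudo ufw allow 7946/udp   # communication among nodes
    sudo ufw allow 4789/udp   # overlay network traffic
    sudo ufw reload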

    1. Copy upgradeFrom5.13.tgz to all nodes, and extract the .tgz file.

      Extract
      tar -xzf upgradeFrom5.13.tgz
    2. Run upgrade.sh to migrate files/configuration from the existing setup. It creates a .tgz file at upgrade/scripts/da_migrate_5.13.tgz. Copy this file to the new machine (one way to copy it is sketched after these steps).

      Run the commands below with --manager on the manager node and --worker on the worker nodes:

      Run upgrade script
      sudo bash upgrade/scripts/upgrade.sh --manager --export_da (on node running administration-service)
      sudo bash upgrade/scripts/upgrade.sh --worker --export_da  (on all other worker nodes)
    3. After this step, the services on the old machines are stopped and a dump of the locally running PostgreSQL database is created. Note that the database dump is only created on the administration service node.
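
    One way to copy the exported archive to the corresponding new machine is scp over SSH. This is only a sketch; the user, host and destination path below are placeholders that depend on your environment.

      Copy migration archive (example)
      scp upgrade/scripts/da_migrate_5.13.tgz <user>@<new machine ip>:/tmp/
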
    1. Copy upgradeFrom5.13.tgz to all nodes, and extract the .tgz file.

      Extract
      tar -xzf upgradeFrom5.13.tgz
    2. Edit the configuration files on the manager node

      Navigate to the docker-compose folder (<path to upgrade folder>/docker-compose) and edit these files:

      • docker-compose.yml
      • network.yml
      • versiontag.yml

      docker-compose.yml

      For each service, add one section in the docker-compose.yml file.

      Change the values for the following keys:

      • Service name
      • Hostname
      • Constraints

      For example, if you want to deploy two policy services on two nodes you will have two configuration blocks as shown in the example below.  

        policy:
          # configure image tag from versiontag.yaml
          hostname: policy
          deploy:
            mode: replicated
            replicas: 1
            placement:
              constraints:
               #If you need to set constraints using node name
               #- node.hostname ==<node name> 
               # use node label
               [node.labels.da-policy-service == true ]
            resources:
              limits:
                cpus: "0.50"
                memory: 512M
              reservations:
                cpus: "0.10"
                memory: 128M
          volumes:
            - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
            - /etc/localtime:/etc/localtime
            - /etc/timezone:/etc/timezone
          logging:
            options:
             max-size: 10m
      
        policy1:
          # configure image tag from versiontag.yaml
          hostname: policy1
          deploy:
            mode: replicated
            replicas: 1
            placement:
              constraints:
               #If you need to set constraints using node name
               #- node.hostname ==<node name> 
               # use node label
               [node.labels.da-policy-service1 == true ]
            resources:
              limits:
                cpus: "0.50"
                memory: 512M
              reservations:
                cpus: "0.10"
                memory: 128M
          volumes:
            - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
            - /etc/localtime:/etc/localtime
            - /etc/timezone:/etc/timezone
          logging:
            options:
             max-size: 10m

      network.yml

      For each service, add a network configuration block in the network.yml file. For example, if you want to deploy two policy services on two nodes, you will have two configuration blocks as shown below.

      Example:

      Change the value of:

      • Service name: The service name must be identical to the name used in docker-compose.yml


       policy:
         ports:
           - target: 4443
             published: 4443
             mode: host
         networks:
           - da-overlay

       policy1:
         ports:
           - target: 4443
             published: 4443
             mode: host
         networks:
           - da-overlay

      Also make sure that all listeners used for access point load balancing are exposed in network.yml.

      versiontag.yml

      Also add one entry for each service in this file.

      For example, if you have two policy services named policy and policy1, you will have one entry for each of them.

      Example:

       policy:
         image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx

       policy1:
         image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
    3. Place the da_migrate_5.13.tgz file inside the scripts folder, upgrade/scripts/.
    4. Run the upgrade script to import files/configuration from the older setup and upgrade to the latest version.
      1. Although the upgrade script installs docker and pulls the images from the repository, it is recommended to install docker and pull the images before running the upgrade. This reduces the script run time as well as the downtime of the system.
      2. Verify the Digital Access tag in the versiontag.yml file (<path to upgrade folder>/docker-compose/versiontag.yml). The same tag will be installed as part of the upgrade. If the tag is not correct, update it manually.
      3. Run the script pull_image.sh to pull images on all nodes. 

        Note: In case of an offline upgrade, also load the Digital Access docker images onto the machine. If you are using the internal postgres, load the postgres:9.6-alpine image on the manager node. One way to load images offline is sketched after the pull command.

    Pull images
    sudo bash upgrade/scripts/pull_image.sh
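
      For an offline upgrade, a common approach is to save the required images to an archive on a machine that has internet access, copy the archive to the offline node, and load it there. This is only a sketch; the images listed below are examples, so save every image referenced in versiontag.yml.

        Load images offline (example)
        # on a machine with internet access
        sudo docker save -o da-images.tar nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx postgres:9.6-alpine
        # copy da-images.tar to the offline node, then load it there
        sudo docker load -i da-images.tar
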
      1. Run the import command:

        1. On manager node

          Run import command on manager node
          sudo bash upgrade/scripts/upgrade.sh --manager --import_da   
          (on node running administration-service)
          
          1. To set up Docker Swarm, provide your manager node's host IP address.
          2. In case you are using an external database, select No to skip postgres installation.

          (Only applicable for HA or distributed setup)

          The script prints a token in the output. This token will be used while setting up worker nodes.

          Example:
          docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

          Here the token is:
          SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq
          and the ip:port is 192.168.253.139:2377.

          If you can't find the token in the upgrade script output on the manager node, get the cluster join token by running this command:

          Get cluster join token
          sudo docker swarm join-token worker
        2. On worker nodes

          Run import command on worker nodes (Only applicable for HA or distributed setup)
          sudo bash upgrade/scripts/upgrade.sh --worker --import_da --token <token value> --ip-port <ip:port>  
          (on all other worker nodes)
          
      2. Follow the on-screen messages and complete the upgrade. Check the logs for any errors. During the upgrade, the script extracts the da_migrate_5.13.tgz file and creates the same directory structure as on the older setup.
      3. On the manager node, the script installs the PostgreSQL database as a docker container and imports the database dump from the older machine.
      4. After the scripts have been executed, the .tgz file is still there. Delete it once you have confirmed that the upgrade has completed successfully.
    1. Verify that all nodes are part of the cluster by running this command.

      Verify that all nodes are part of the cluster
      sudo docker node ls

      Example:
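
      The output below is illustrative only; the node IDs and hostnames will differ in your environment.

        ID                            HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
        h9u7iiifi6sr85zyszu8xo54l *   <manager>   Ready    Active         Leader
        <worker node ID>              <worker>    Ready    Active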

    2. Identify the node IDs of the manager and worker nodes across which the services will be distributed.

      Identify nodes
      sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
    3. Output from this command:

      {ready  192.168.86.129}

      The IP address helps to identify which Digital Access node this is.

    Add node labels for each service that you want to run. You can choose any label names, but make sure they match what you have defined in the constraints section of the docker-compose.yml file.

    1. Use these commands to add a label for each service:

      Commands to add labels
      sudo docker node update --label-add da-policy-service=true <manager node ID>
      sudo docker node update --label-add da-authentication-service=true <manager node ID>
      sudo docker node update --label-add da-administration-service=true <manager node ID>
      sudo docker node update --label-add da-access-point=true <manager node ID>
      sudo docker node update --label-add da-distribution-service=true <manager node ID>
      sudo docker node update --label-add da-policy-service1=true <worker node ID>
      sudo docker node update --label-add da-access-point1=true <worker node ID>
    2. Deploy your DA stack using this command.

      Verify that the required images are available on the nodes, then run the start-all.sh script on the manager node. A verification sketch follows these steps.

      Deploy DA stack
      sudo bash /opt/nexus/scripts/start-all.sh
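
    To confirm that the labels are in place and that the stack has been deployed, the following commands can be used. This is only a sketch; substitute the node IDs reported by docker node ls.

      Verify labels and deployment (example)
      sudo docker node inspect --format '{{ .Spec.Labels }}' <node ID>   # show the labels set on a node
      sudo docker stack ls                                               # list the deployed stacks
      sudo docker service ls                                             # check that each service reports the expected replicas
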
    1. Log in to Digital Access Admin. If the internal database is used in the configuration, provide the host machine IP address to connect to the databases (HAG, OATH, OAuth).

    2. Publish the configurations.
    3. Change the internal host and port for each added service according to the docker-compose.yml and network.yml files.
    4. Go to Manage System > Distribution Services and select the "Listen on All Interfaces" checkbox for the ports that are to be exposed.
    5. Go to Manage System > Access Points and provide the IP address instead of the service name. Also enable the "Listen on all Interfaces" option.
    6. If you want to enable the XPI and SOAP services, set the exposed port IP address to 0.0.0.0 in Digital Access Admin.

    7. Redeploy the services using this command on the manager node.

      Restart
      sudo bash /opt/nexus/scripts/start-all.sh

Prerequisites and Preparations

  1. Expand the root partition. On the 5.13.x appliance the disk size is only 4 GB; it must be expanded for the upgrade to work using Docker Swarm.
    1. Boot the virtual machine and login to it.
    2. To find out which hard disk to expand, run df -h and check which disk partition is mounted as the root file system, /dev/sdc1 or /dev/sdb1.
    3. Assuming that sdb1 is the primary hard disk, shut down the virtual machine and expand the disk identified in the previous step from 4 GB to at least 16 GB by editing the VM settings.
      1. If /dev/sdb1 is mounted, hard disk 2 is the one to expand.
      2. If /dev/sdc1 is mounted, hard disk 3 is the one to expand.
    4. Boot the virtual machine and login again.
    5. Verify the size of the mounted disk by running this command.
      Check disk size
      fdisk -l
    6. Once the disk is successfully expanded, resize the partition and expand the file system.
      1. Resize the partition using the command below.

        This should show the free space available on the disk.

        Resize
        sudo cfdisk /dev/sdb

      2. Select the Resize option at the bottom of the menu. After resizing, the disk size should be shown as 16 GB (or whatever size was set initially).
      3. Select Write to write the new partition table to disk.
      4. Select Quit to exit.
      5. Expand the file system on the root partition.
        resize
        sudo resize2fs /dev/sdb1
      6. Verify the size of the disk partition.
        check disk size
        df -h
  2. The following ports must be open to traffic to and from each Docker host participating on an overlay network:
    1. TCP port 2377 for cluster management communications.
    2. TCP and UDP port 7946 for communication among nodes.
    3. UDP port 4789 for overlay network traffic.
  3. Copy upgradeFrom5.13.tgz to the manager node (the node where the administration service is running) and to all worker (other) nodes. In this setup, the administration-service node is set up and used as the manager node.

    Extract the tar file.

    Extract
    tar -xzf upgradeFrom5.13.tgz


  4. Set up the docker daemon and pull the images.
    1. Verify the Digital Access tag in the versiontag.yml file (<path to upgrade folder>/docker-compose/versiontag.yml). The same tag will be installed as part of the upgrade. If it is not the correct tag, update it manually.
    2. Although the upgrade script installs docker and pulls the images from the repository, it is recommended to install docker and pull the images before running the upgrade. This reduces the script run time as well as the downtime of the system.
    3. Run the script pull_image.sh to pull images on all nodes.

      In case of an offline upgrade, load the Digital Access docker images onto the machine. Also, if you are using the internal postgres, load the postgres:9.6-alpine image on the manager node. You can confirm that docker and the images are in place as sketched after this list.

      Pull images
      sudo bash upgrade/scripts/pull_image.sh
    4. Make sure there is a backup/snapshot of the machine before starting upgrade.
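
    Before running the upgrade, you can confirm that docker is installed and that the images are already present on each node. This is a minimal sketch of one way to check.

      Check docker and images (example)
      sudo docker --version                            # confirm that docker is installed
      sudo docker images | grep smartid-digitalaccess  # confirm that the Digital Access images are present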

Step-by-step instruction

Upgrade manager node

To upgrade the manager node (node on which the administration service runs):

  1. Run the upgrade script with this command line option

    Run upgrade script
    sudo bash upgrade/scripts/upgrade.sh --manager
  2. To set up Docker Swarm, provide your manager node's host IP address.
  3. In case you are using an external database, select No to skip postgres installation.

    (Only applicable for HA or distributed setup)

    The script prints a token in the output. This token will be used while setting up the worker nodes.

 Example:

docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

Here the token is:

SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq

and the ip:port is 192.168.253.139:2377.


Optional: If you can't find the token in the upgrade script output on the manager node, get the cluster join token by running this command.

Get cluster join token

sudo docker swarm join-token worker

If you have an HA setup, that is, multiple instances of services running on different nodes, you must edit the yml configuration files.

Navigate to the docker-compose folder (/opt/nexus/docker-compose) and edit these files:

  • docker-compose.yml
  • network.yml
  • versiontag.yml

docker-compose.yml

For each service, add one section in the docker-compose.yml file.

Change the values for the following keys:

  • Service name
  • Hostname
  • Constraints

For example, if you want to deploy two policy services on two nodes you will have two configuration blocks as shown in the example below.  

  policy:
    # configure image tag from versiontag.yaml
    hostname: policy
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         #If you need to set constraints using node name
         #- node.hostname ==<node name> 
         # use node label
         [node.labels.da-policy-service == true ]
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
       max-size: 10m

  policy1:
    # configure image tag from versiontag.yaml
    hostname: policy1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         #If you need to set constraints using node name
         #- node.hostname ==<node name> 
         # use node label
         [node.labels.da-policy-service1 == true ]
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
       max-size: 10m

network.yml

For each service, add a network configuration block in the network.yml file. For example, if you want to deploy two policy services on two nodes, you will have two configuration blocks as shown below.

  • Service name: The service name must be identical to the name used in docker-compose.yml

Example:

policy:
  ports:
    - target: 4443
      published: 4443
      mode: host
  networks:
    - da-overlay

policy1:
  ports:
    - target: 4443
      published: 4443
      mode: host
  networks:
    - da-overlay

Also make sure that all listeners used for access point load balancing are exposed in network.yml.

versiontag.yml

Add one entry for each service. For example, if you have two policy services named policy and policy1, you will have one entry for each of them.

Example:

policy:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx

policy1:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx

Upgrade worker nodes (Only applicable for HA or distributed setup)

  1. Keep a note of the token, IP and port from the steps above.
  2. Run the upgrade script with the following command line options:

    Run upgrade script on worker node
    sudo bash upgrade/scripts/upgrade.sh --worker --token <token value> --ip-port <ip:port>

Do the final steps on the manager node

  1. Verify that all nodes are part of the cluster by running this command.

    Verify that all nodes are part of the cluster
    sudo docker node ls

    Example:
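
    The output below is illustrative only; as in the migration section, the node IDs and hostnames will differ in your environment.

      ID                            HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
      h9u7iiifi6sr85zyszu8xo54l *   <manager>   Ready    Active         Leader
      <worker node ID>              <worker>    Ready    Active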

  2. Identify the node IDs of the manager and worker nodes across which the services will be distributed.

    Identify nodes
    sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
  3. Output from this command:

    {ready  192.168.86.129}

    The IP address helps to identify which Digital Access node this is.

Add node labels for each service that you want to run. You can choose any label names, but make sure they match what you have defined in the constraints section of the docker-compose.yml file.

  1. Use these commands to add a label for each service:

    Commands to add labels
    sudo docker node update --label-add da-policy-service=true <manager node ID>
    sudo docker node update --label-add da-authentication-service=true <manager node ID>
    sudo docker node update --label-add da-administration-service=true <manager node ID>
    sudo docker node update --label-add da-access-point=true <manager node ID>
    sudo docker node update --label-add da-distribution-service=true <manager node ID>
    sudo docker node update --label-add da-policy-service1=true <worker node ID>
    sudo docker node update --label-add da-access-point1=true <worker node ID>
  2. Deploy your DA stack using this command.

    Verify that the required images are available on the nodes, then run the start-all.sh script on the manager node. A placement check is sketched after these steps.

    Deploy DA stack
    sudo bash /opt/nexus/scripts/start-all.sh
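
  To confirm that each service was scheduled on the intended node, you can list the services and inspect their placement. This is only a sketch; use the service names reported by docker service ls.

    Check service placement (example)
    sudo docker service ls                   # list all services and their replica counts
    sudo docker service ps <service name>    # show on which node a given service is running
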
  1. Log in to Digital Access Admin. If the internal database is used in the configuration, provide the host machine IP address to connect to the databases (HAG, OATH, OAuth).

  2. Publish the configurations.
  3. Change the internal host and port for each added service according to the docker-compose.yml and network.yml files.

  4. Go to Manage System > Distribution Services and select the "Listen on All Interfaces" checkbox for the ports that are to be exposed.
  5. Go to Manage System > Access Points and provide the IP address instead of the service name. Also enable the "Listen on all Interfaces" option.
  6. If you want to enable the XPI and SOAP services, set the exposed port IP address to 0.0.0.0 in Digital Access Admin.
  7. Redeploy the services using this command on the manager node.

    Restart
    sudo bash /opt/nexus/scripts/start-all.sh

This article is valid for upgrades from Digital Access 5.13.1 and above to 6.0.5 or above.