Nexus' software components have new names:

Nexus PRIME -> Smart ID Identity Manager
Nexus Certificate Manager -> Smart ID Certificate Manager
Nexus Hybrid Access Gateway -> Smart ID Digital Access component
Nexus Personal -> Smart ID clients

Go to Nexus homepage for overviews of Nexus' solutions, customer cases, news and more.



This article describes how to upgrade the Smart ID Digital Access component from version 6.0.0 and above to 6.0.5 and above, for a single node appliance as well as for a high availability or distributed setup.

These are one-time steps that set the system up to use Docker and Docker Swarm. Once this is in place, later upgrades become much easier.

There are two options, described below, for upgrading from 6.0.0 and above to 6.0.5 and above:

  1. Migrate - This option describes how to upgrade, and at the same time migrate, the Digital Access instance from the appliance to a new VM by exporting all data and configuration files with the provided script. (Recommended)
  2. Upgrade - This option describes how to upgrade Digital Access to the newer version in the existing appliance.

Download the latest updated scripts

Make sure you download the upgrade.tgz file again if you downloaded it before 18 August 2021, so that you get the latest updated scripts.

This article is valid for upgrades from Digital Access 6.0.0 and above to 6.0.6 or above.

For upgrades from 6.0.5 to 6.0.6 and above, follow:

Upgrade Digital Access component from 6.0.5 or above


  1. Make sure that the required resources (memory, CPU and hard disk) are available on the new machines.
  2. Install docker and xmlstarlet. The upgrade/migrate script installs them if they are not already installed, but for an offline upgrade (no internet connection on the machine) you must install the latest versions of docker and xmlstarlet before running the migration steps.
  3. The following ports must be open to traffic to and from each Docker host participating on an overlay network (see the firewall sketch after this list):
    1. TCP port 2377 for cluster management communications.
    2. TCP and UDP port 7946 for communication among nodes.
    3. UDP port 4789 for overlay network traffic.
  4. For HA setup only:
    1. Make sure you have a similar set of machines, since the placement of services will be the same as on the existing setup. For example, if you have two appliances in the HA setup, you must have two new machines to migrate the setup to.
    2. Identify the nodes; the new setup should have an equal number of machines. Create a mapping of machines from the old setup to the new setup.
    3. In a docker swarm deployment, one machine is the manager node (the node on which the administration service runs) and the other nodes are worker nodes.
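
Opening the ports listed above depends on the firewall used on your hosts. The commands below are a minimal sketch assuming firewalld; adapt them to ufw or iptables if that is what your machines run.

  Open overlay network ports (sketch, assuming firewalld)
  sudo firewall-cmd --permanent --add-port=2377/tcp   # cluster management
  sudo firewall-cmd --permanent --add-port=7946/tcp   # node-to-node communication
  sudo firewall-cmd --permanent --add-port=7946/udp   # node-to-node communication
  sudo firewall-cmd --permanent --add-port=4789/udp   # overlay network (VXLAN) traffic
  sudo firewall-cmd --reload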

    1. Copy upgrade.tgz to all nodes, and extract the .tgz file.

      Extract
      tar -xzf upgrade.tgz
    2. Run upgrade.sh to export files and configuration from the existing setup. It creates a .tgz file, upgrade/scripts/da_migrate_6.0.x.xxxxx.tgz. Copy this file to the new machine (see the copy sketch after this list).

      Run the below commands with --manager on the manager node and --worker on the worker nodes:

      Run upgrade script
      sudo bash upgrade/scripts/upgrade.sh --manager --export_da (on node running administration-service)
      sudo bash upgrade/scripts/upgrade.sh --worker --export_da  (on all other worker nodes)
    3. After this step, the services on the old machines are stopped and a dump of the locally running PostgreSQL database is created. Note that the database dump is only created on the administration service node.
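
    To copy the generated archive to the new machine you can, for example, use scp. This is a sketch; the user and host names are placeholders for your environment.

      Copy migration archive (example)
      ls upgrade/scripts/da_migrate_*.tgz
      scp upgrade/scripts/da_migrate_6.0.x.xxxxx.tgz <user>@<new machine>:/tmp/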
    1. Copy upgrade.tgz to all nodes, and extract the .tgz file.

      Extract
      tar -xzf upgrade.tgz
    2. Edit the configuration files. (Only applicable for HA or distributed setup.) A sketch for sanity-checking the edited files is given after this step list.

      Navigate to the docker-compose folder (<path to upgrade folder>/docker-compose) and edit these files:

      • docker-compose.yml
      • network.yml
      • versiontag.yml

      docker-compose.yml

      For each service, add one section in the docker-compose.yml file.

      Change the values for the following keys:

      • Service name
      • Hostname
      • Constraints

      For example, if you want to deploy two policy services on two nodes you will have two configuration blocks as shown in the example below.  

        policy:
          # configure image tag from versiontag.yaml
          hostname: policy
          deploy:
            mode: replicated
            replicas: 1
            placement:
              constraints:
               #If you need to set constraints using node name
               #- node.hostname ==<node name> 
               # use node label
               [node.labels.da-policy-service == true ]
            resources:
              limits:
                cpus: "0.50"
                memory: 512M
              reservations:
                cpus: "0.10"
                memory: 128M
          volumes:
            - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
            - /etc/localtime:/etc/localtime
            - /etc/timezone:/etc/timezone
          logging:
            options:
             max-size: 10m
      
        policy1:
          # configure image tag from versiontag.yaml
          hostname: policy1
          deploy:
            mode: replicated
            replicas: 1
            placement:
              constraints:
               #If you need to set constraints using node name
               #- node.hostname ==<node name> 
               # use node label
               [node.labels.da-policy-service1 == true ]
            resources:
              limits:
                cpus: "0.50"
                memory: 512M
              reservations:
                cpus: "0.10"
                memory: 128M
          volumes:
            - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
            - /etc/localtime:/etc/localtime
            - /etc/timezone:/etc/timezone
          logging:
            options:
             max-size: 10m

      network.yml

      For each service, add a network configuration in the network.yml file. If you want to deploy two policy services on two nodes, you will have two blocks of configuration as shown below.

      • Service name: The service name must be identical to the one used in docker-compose.yml.

      Example:

      policy:
        ports:
          - target: 4443
            published: 4443
            mode: host
        networks:
          - da-overlay
      
      policy1:
        ports:
          - target: 4443
            published: 4443
            mode: host
        networks:
          - da-overlay

      Also make sure that all listeners used for access point load balancing are exposed in network.yml.

      versiontag.yml

      Add an entry for each service in this file as well.

      For example, if you have two policy services named policy and policy1, you will have an entry for each of them.

      Example:

      policy:
        image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
      
      policy1:
        image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
    3. Place the da_migrate_6.0.x.xxxxx.tgz file inside the scripts folder, upgrade/scripts/.
    4. Run the upgrade script to import files/configuration from the older setup and upgrade to the latest version.
      1. Verify the Digital Access tag in the versiontag.yml file (<path to upgrade folder>/docker-compose/versiontag.yml). The same tag will be installed as part of the upgrade. If the tag is not correct, update it.
      2. Although the upgrade script installs docker and pulls the images from the repository, it is recommended to install docker and pull the images before running the upgrade. This reduces the script run time and the downtime of the system.

      3. Note: In case of an offline upgrade, load the Digital Access docker images onto the machine. Also, if you are using the internal postgres, load the postgres:9.6-alpine image on the manager node.
      4. Run the script pull_image.sh to pull the images.

        Pull images
        sudo bash upgrade/scripts/pull_image.sh
      5. Run the import command:

        1. On manager node

          Run import command on manager node
          sudo bash upgrade/scripts/upgrade.sh --manager --import_da   
          (on node running administration-service)
          
        2. To set up Docker Swarm, provide your manager node host IP address.
        3. In case you are using an external database, select No to skip the postgres installation.

          (Only applicable for HA or distributed setup)

          The script prints a token in the output. This token will be used while setting up worker nodes.

          Example:
          docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

          Here the token is:
          SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq
          and the <ip:port> is 192.168.253.139:2377.

          If you cannot find the token in the upgrade script output on the manager node, get the cluster join token by running this command:

          Get cluster join token (Only applicable for HA or distributed setup)
          sudo docker swarm join-token worker
        4. On worker nodes

          Run import command on worker nodes (Only applicable for HA or distributed setup)
          sudo bash upgrade/scripts/upgrade.sh --worker --import_da --token <token value> --ip-port <ip:port>  
          (on all other worker nodes)
          
      6. Follow the on-screen messages and complete the upgrade. Check the logs for errors. During the upgrade, the script extracts the da_migrate_6.0.x.xxxxx.tgz file and recreates the same directory structure as on the old setup.
      7. On the manager node, the script installs the PostgreSQL database as a docker container and imports the database dump from the old machine.
      8. After the scripts have run, the .tgz file is still there. Delete it once you have confirmed that the upgrade completed successfully.
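
      If you edited docker-compose.yml, network.yml and versiontag.yml for an HA or distributed setup, you can sanity-check that the merged files still parse before deploying. A minimal sketch, assuming the docker-compose command line tool is installed (use "docker compose" if you have the plugin instead):

        Sanity-check merged compose files (optional)
        cd <path to upgrade folder>/docker-compose
        sudo docker-compose -f docker-compose.yml -f network.yml -f versiontag.yml config > /dev/null && echo "compose files OK"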
    1. Verify that all nodes are part of the cluster by running this command.

      Verify if all nodes are part of cluster
      sudo docker node ls


    2. Identify the node IDs of the manager and worker nodes across which the services will be distributed.

      Identify nodes
      sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
    3. The output from this command looks like:

      {ready  192.168.86.129}

      The IP address helps you identify which Digital Access node it is (a loop that lists all nodes at once is sketched below).
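
    To list the hostname, IP address and role of every node in one go, you can loop over the node IDs. A minimal sketch:

      List all nodes with hostname, IP and role
      for id in $(sudo docker node ls -q); do
        sudo docker node inspect --format '{{ .ID }} {{ .Description.Hostname }} {{ .Status.Addr }} {{ .Spec.Role }}' "$id"
      done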

    Add new labels for each service that you want to run. You can choose any names, but make sure they match what you have defined in the constraints section of the docker-compose.yml file.

    1. Use these commands to add a label for each service:

      Commands to add labels
      sudo docker node update --label-add da-policy-service=true <manager node ID>
      sudo docker node update --label-add da-authentication-service=true <manager node ID>
      sudo docker node update --label-add da-administration-service=true <manager node ID>
      sudo docker node update --label-add da-access-point=true <manager node ID>
      sudo docker node update --label-add da-distribution-service=true <manager node ID>
      sudo docker node update --label-add da-policy-service1=true <worker node ID>
      sudo docker node update --label-add da-access-point1=true <worker node ID>
      
    2. Deploy your DA stack using this command. 

      Verify that the required images are available on the nodes, then run the start-all.sh script on the manager node. A check of the deployed stack is sketched after this list.

      Deploy DA stack
      sudo bash /opt/nexus/scripts/start-all.sh
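
    Once the stack is deployed, you can check that the services have started and are placed on the intended nodes. A minimal sketch; take the stack name from the output of "docker stack ls":

      Check deployed services (optional)
      sudo docker stack ls
      sudo docker service ls
      sudo docker stack ps <stack name>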
    1. Log in to Digital Access Admin. If an internal database is used in the configuration, provide the host machine IP address to connect to the databases (HAG, OATH, OAuth).

    2. Publish the configurations.
    3. Change the internal host and port for each added service according to the docker-compose.yml and network.yml files.
    4. Go to Manage System > Distribution Services and select the "Listen on All Interfaces" checkbox for the ports that are to be exposed.

    5. Go to Manage System > Access Points and provide the IP address instead of the service name. Also enable the "Listen on all Interfaces" option.

    6. If you want to enable the XPI and SOAP services, set the exposed port IP to 0.0.0.0.

    7. If there is any host entry for DNS on the appliance, add a corresponding host entry in the docker-compose file.
    8. Redeploy the services using this command on the manager node.

      Restart
      sudo bash /opt/nexus/scripts/start-all.sh
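
    After redeploying, a quick way to confirm that a service came up cleanly is to check its logs. This is a sketch; take the exact service name from the output of "docker service ls":

      Check service logs (optional)
      sudo docker service ls
      sudo docker service logs --tail 50 <service name>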

Prerequisites and Preparations

  1. The following ports shall be open to traffic to and from each Docker host participating on an overlay network:
    1. TCP port 2377 for cluster management communications.
    2. TCP and UDP port 7946 for communication among nodes.
    3. UDP port 4789 for overlay network traffic.
  2. Copy upgrade.tgz to the manager node (the node where the administration service is running) and to all worker (other) nodes.

    Extract the tar file.

    Extract
    tar -xzf upgrade.tgz


  3. Set up the docker daemon and pull the images.
    1. Verify the Digital Access tag in the versiontag.yml file (<path to upgrade folder>/docker-compose/versiontag.yml). The same tag will be installed as part of the upgrade.
    2. Although the upgrade script installs docker and pulls the images from the repository, it is recommended to install docker and pull the images before running the upgrade. This reduces the script run time and the downtime of the system.

    3. Run the script pull_image.sh to pull images. 

      In case of an offline upgrade, load the Digital Access docker images onto the machine. Also, if you are using the internal postgres, load the postgres:9.6-alpine image on the manager node (see the sketch after this list).

      Pull images
      sudo bash upgrade/scripts/pull_image.sh
    4. Make sure there is a backup/snapshot of the machine before starting the upgrade.
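
    For an offline upgrade, the images can be loaded from image archives instead of being pulled. A minimal sketch; the archive file names are placeholders for the archives you transferred to the machine:

      Load images offline (example)
      sudo docker load -i <digital-access-images>.tar
      sudo docker load -i <postgres-9.6-alpine>.tar   # only if the internal postgres is used (manager node)
      sudo docker images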

Step-by-step instruction

Upgrade manager node

To upgrade the manager node (node on which the administration service runs):

  1. Run the upgrade script with this command line option

    Run upgrade script
    sudo bash upgrade/scripts/upgrade.sh --manager
  2. To set up Docker Swarm, provide your manager node host IP address.
  3. In case you are using an external database, select No to skip the postgres installation.
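
After the upgrade script has finished, you can confirm that the node was initialized as a swarm manager. A quick check; the expected output is "active true":

  Check swarm state (optional)
  sudo docker info --format '{{ .Swarm.LocalNodeState }} {{ .Swarm.ControlAvailable }}'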

If you have an HA setup, that is, multiple instances of services running on different nodes, you must edit the yml configuration.

Navigate to the docker-compose folder (/opt/nexus/docker-compose) and edit these files:

  • docker-compose.yml
  • network.yml
  • versiontag.yml

docker-compose.yml

For each service, add one section in the docker-compose.yml file.

Change the values for the following keys:

  • Service name
  • Hostname
  • Constraints

For example, if you want to deploy two policy services on two nodes you will have two configuration blocks as shown in the example below.  

  policy:
    # configure image tag from versiontag.yaml
    hostname: policy
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         #If you need to set constraints using node name
         #- node.hostname ==<node name> 
         # use node label
         [node.labels.da-policy-service == true ]
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
       max-size: 10m

  policy1:
    # configure image tag from versiontag.yaml
    hostname: policy1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         #If you need to set constraints using node name
         #- node.hostname ==<node name> 
         # use node label
         [node.labels.da-policy-service1 == true ]
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
       max-size: 10m

network.yml

For each service, add a network configuration in the network.yml file. For example, if you want to deploy two policy services on two nodes, you will have two blocks of configuration as shown below.

  • Service name: The service name must be identical to the one used in docker-compose.yml.

Example:

policy:
  ports:
    - target: 4443
      published: 4443
      mode: host
  networks:
    - da-overlay

policy1:
  ports:
    - target: 4443
      published: 4443
      mode: host
  networks:
    - da-overlay

Also make sure that all listeners used for access point load balancing are exposed in network.yml.

versiontag.yml

Add an entry for each service. For example, if you have two policy services named policy and policy1, you will have an entry for each of them.

Example:

policy:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx

policy1:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
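
To double-check which image tags will be deployed after editing versiontag.yml, you can, for example, list its image lines:

  List image tags (optional)
  grep -n 'image:' /opt/nexus/docker-compose/versiontag.yml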
  1. Get the cluster join token by running this command on the manager node.

    Get cluster join token
    sudo docker swarm join-token worker


  2. This token will be used when setting up the other nodes. The output of the command looks like:

    Output

    docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

    Here the token part is:  SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 
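
    If you only need the token string itself, the -q option prints just the token. The <ip:port> used when joining the worker nodes is the manager address shown at the end of the full join command (192.168.253.139:2377 in the example above).

      Print only the join token
      sudo docker swarm join-token -q worker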

Upgrade worker nodes (Only applicable for HA or distributed setup)

  1. Keep a note of the token, IP and port from the steps above.
  2. Run the upgrade script with these command line options:

    Run upgrade script on worker node
    sudo bash upgrade/scripts/upgrade.sh --worker --token <token value> --ip-port <ip:port>

Do the final steps on the manager node

  1. Verify that all nodes are part of the cluster by running this command.

    Verify if all nodes are part of cluster
    sudo docker node ls


  2. Identify the node IDs of the manager and worker nodes across which the services will be distributed.

    Identify nodes
    sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
  3. The output from this command looks like:

    {ready  192.168.86.129}

    The IP address helps you identify which Digital Access node it is.

Add new labels for each service that you want to run. You can choose any names, but make sure they match what you have defined in the constraints section of the docker-compose.yml file. A verification sketch is given at the end of this section.

  1. Use these commands to add a label for each service:

    Commands to add labels
    sudo docker node update --label-add da-policy-service=true <manager node ID>
    sudo docker node update --label-add da-authentication-service=true <manager node ID>
    sudo docker node update --label-add da-administration-service=true <manager node ID>
    sudo docker node update --label-add da-access-point=true <manager node ID>
    sudo docker node update --label-add da-distribution-service=true <manager node ID>
    sudo docker node update --label-add da-policy-service1=true <worker node ID>
    sudo docker node update --label-add da-access-point1=true <worker node ID>
  2. Verify that the required images are available on the nodes.

  3. Deploy your DA stack by running the start-all.sh script on the manager node.

    Deploy DA stack
    sudo bash /opt/nexus/scripts/start-all.sh
  4. Log in to Digital Access Admin. If an internal database is used in the configuration, provide the host machine IP address to connect to the databases (HAG, OATH, OAuth).

  5. Publish the configurations.
  6. Change the internal host and port for each added service according to the docker-compose.yml and network.yml files.

  7. If there is any host entry for DNS on the appliance, add a corresponding host entry in the docker-compose file.
  8. If you want to enable the XPI and SOAP services, set the exposed port IP to 0.0.0.0 in admin.

  9. Redeploy the services using this command on the manager node.

    Deploy DA stack
    sudo bash /opt/nexus/scripts/start-all.sh
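
After redeploying, you can confirm that each node still carries the labels that the constraints in docker-compose.yml rely on. A minimal check; take the node IDs from the output of "sudo docker node ls":

  Verify node labels (optional)
  sudo docker node inspect --format '{{ json .Spec.Labels }}' <node ID>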