


Smart ID Digital Access component supports distributed mode to enable high availability and failover, which provides powerful flexibility and scalability. In this mode, Digital Access switches to a redundant service once the primary one stops working. Not only one, but several redundant services are supported. Using high availability enables systems to meet strict service-level agreement (SLA) requirements.

This article describes how to upgrade a system with an HA or distributed mode setup from Digital Access 6.0.2 or above to 6.0.5.


Prerequisites


The following prerequisites apply:

  • The following ports must be open to traffic to and from each Docker host participating in an overlay network (see the example after this list):
    • TCP port 2377 for cluster management communications
    • TCP and UDP port 7946 for communication among nodes
    • UDP port 4789 for overlay network traffic
  • For more details, refer to: https://docs.docker.com/network/overlay/
  • The upgrade tgz file is downloaded on the system. Download the file from the support portal.
  • Keep a note of the IP addresses of the nodes where the access point is running.
  • Make sure there is a backup/snapshot before starting the upgrade.
  • Digital Access version 6.0.2 or above is already installed with an HA setup.
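
As an example, on Docker hosts that use firewalld, the required swarm ports could be opened as sketched below. This is only an illustration, assuming firewalld is in use; adapt the commands to the firewall actually deployed in your environment.

    Open swarm ports (firewalld example, assumed setup)
    sudo firewall-cmd --permanent --add-port=2377/tcp   # cluster management
    sudo firewall-cmd --permanent --add-port=7946/tcp   # node communication
    sudo firewall-cmd --permanent --add-port=7946/udp   # node communication
    sudo firewall-cmd --permanent --add-port=4789/udp   # overlay network traffic
    sudo firewall-cmd --reload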

Step-by-step instruction

Upgrade manager node


To upgrade the manager node (the node on which the administration service will run):

  1. Copy upgrade.tgz to your working directory.
  2. Extract the file.

    Extract file
    tar -xf upgrade.tgz
  3. Navigate to the scripts folder inside the extracted upgrade folder.

    Navigate to scripts folder
    cd upgrade/scripts
  4. Run the script upgrade_ha.sh

    Run upgrade script
    sudo bash upgrade_ha.sh manager
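
When the script has finished, you can optionally confirm that the node is acting as a swarm manager before you continue. These are standard Docker commands and not part of the upgrade script.

    Optional: confirm swarm manager state
    sudo docker info --format '{{ .Swarm.LocalNodeState }} {{ .Swarm.ControlAvailable }}'
    # on a manager node this typically prints: active true
    sudo docker node ls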
 Edit configuration files

Navigate to the docker-compose folder and edit these files:

  • docker-compose.yml
  • network.yml
  • versiontag.yml

docker-compose.yml

For each service, add one section in the docker-compose.yml file.

For example, if you want to deploy two policy services on two nodes, you will have two configuration blocks, as shown in the example below.

Example:

Change the values for the following keys:

  • Service name
  • Hostname
  • Constraints


policy 1:

    # configure image tag from versiontag.yml
    image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259
    hostname: policy1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          # If you need to set constraints using the node name:
          # - node.hostname == <node name>
          # Use a node label:
          - node.labels.da-policy1 == true
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
        max-size: 10m

policy 2:

    # configure image tag from versiontag.yml
    image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259
    hostname: policy2
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          # If you need to set constraints using the node name:
          # - node.hostname == <node name>
          # Use a node label:
          - node.labels.da-policy2 == true
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
        max-size: 10m
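
Before deploying, you can optionally check that the edited files merge into a valid configuration. A minimal sketch, assuming docker-compose (or the docker compose plugin) is available on the manager node; it only validates the files edited above and does not deploy anything.

    Optional: validate the merged configuration
    sudo docker-compose -f docker-compose.yml -f network.yml -f versiontag.yml config > /dev/null && echo "configuration OK"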

network.yml

For each service, add a network configuration block in the network.yml file. For example, if you want to deploy two policy services on two nodes, you will have two configuration blocks, as shown below.

Example:

Change the value of:

  • Service name: The service name must be identical to the one used in docker-compose.yml


policy 1:

    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

policy 2:

    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

Also make sure that all listeners used for access point load balancing are exposed in network.yml.
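
For example, if the access point load balances an additional listener, that listener's port must also be published in the access point's block in network.yml. The block below is only a sketch: the service name accesspoint1 and the ports 443 and 8443 are assumptions for illustration; use your own access point service name and listener ports.

    Example: publish an additional listener port (sketch)
    accesspoint1:
      ports:
        - target: 443
          published: 443
          mode: host
        - target: 8443
          published: 8443
          mode: host
      networks:
        - da-overlay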

versiontag.yml

In this file too, add one line for each service.

For example, if you have two policy services named policy1 and policy2, you will have one line for each of them, as shown below.

Example:

policy 1:
image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259

policy 2:
image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259
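
Put together, a complete versiontag.yml for the two example policy services could look like the sketch below. The version key and the services: wrapper are assumptions based on standard compose file structure; the service names must match those in docker-compose.yml so that the image tag defined here is applied to the correct service when the files are merged at deploy time.

    Example versiontag.yml (sketch)
    version: "3.7"   # assumed; use the same version line as your docker-compose.yml
    services:
      policy1:
        image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259
      policy2:
        image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259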

Add manager to worker swarm

  1. To enable worker nodes to join the manager's swarm, get the cluster join token by running this command on the manager node.

    Get cluster join token
    sudo docker swarm join-token worker


  2. This token will be used when setting up the other nodes. The output of the command looks like this:

    Output

    docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

    Here the token is:

    SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq

    and the manager address (<ip:port>) is:

    192.168.253.139:2377
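
If you only need the token itself, for example to store it for later use, the -q option prints just the token without the full join command:

    Print only the worker join token
    sudo docker swarm join-token -q worker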

Upgrade other nodes/worker nodes (except the manager node)

  1. Copy upgrade.tgz to your working directory.
  2. Extract the file using the tar command:

    Extract file
    tar -xf upgrade.tgz
  3. Navigate to the scripts folder inside the extracted upgrade folder.

    Navigate to scripts folder
    cd upgrade/scripts
  4. Run the script upgrade_ha.sh. Use the <token> and <ip:port> from the "Add manager to worker swarm" section above.

    Run the script
    sudo bash upgrade_ha.sh worker <token> <ip:port>

    For example:

    Example
    sudo bash upgrade_ha.sh worker SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

Do final steps at manager node

 Verify and identify nodes
  1. Verify that all nodes are part of the cluster by running this command.

    Verify that all nodes are part of the cluster
    sudo docker node ls

    Example:

  2. Identify the node IDs of the manager and worker nodes where the services will be distributed.

    Identify nodes
    sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
  3. Output from this command:

    {ready  192.168.86.129}

    The IP address helps to identify which Digital Access node this is.
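
To list the ID, IP address and role of all nodes in one go, the two commands above can be combined; this is a convenience sketch using standard Docker CLI options:

    List all nodes with ID, IP address and role
    sudo docker node ls -q | xargs sudo docker node inspect --format '{{ .ID }} {{ .Status.Addr }} {{ .Spec.Role }}'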

 Add new labels for each service

Add new labels for each service that you want to run. In this example, "1" is used as a suffix for each service name. You can choose any name based on your requirements, but make sure the labels match the constraints defined in the docker-compose.yml file.

  1. Use these commands to add label for each service:

    Commands to add labels
    sudo docker node update --label-add da-policy1=true <node ID>
    sudo docker node update --label-add da-authentication1=true <node ID>
    sudo docker node update --label-add da-accesspoint1=true <node ID>
    sudo docker node update --label-add da-distribution1=true <node ID>
  2. Deploy your DA stack using this command.

    Deploy DA stack
    sudo docker stack deploy --compose-file docker-compose.yml -c network.yml -c versiontag.yml <your da stack name>
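
After deploying, you can optionally verify that each service was scheduled on the intended node. These are standard Docker swarm commands; <your da stack name> and <node ID> are placeholders, and policy1 follows the example service name used above.

    Optional: verify service placement
    sudo docker stack services <your da stack name>
    sudo docker service ps <your da stack name>_policy1
    # if a service stays pending, check the labels on the target node
    sudo docker node inspect --format '{{ .Spec.Labels }}' <node ID>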
 Do updates in Digital Access Admin
  1. Log in to Digital Access Admin and change the internal host and port for each added service according to the docker-compose.yml and network.yml files.
  2. Go to Manage System > Distribution Services and select the "Listen on All Interfaces" checkbox for the ports that are to be exposed.

  3. Go to Manage System > Access Points > Edit Access Points > Edit Registered Additional Listener and provide the IP address instead of the service name. Also enable the "Listen on all Interfaces" option.

  4. Restart services using these commands.

    Restart
    sudo docker stack rm <your da stack name>
    sudo docker stack deploy --compose-file docker-compose.yml -c network.yml -c versiontag.yml <your da stack name>
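
Once the stack has been redeployed, you can check that all services have started and inspect their logs if something does not come up. These are standard Docker commands, with the same placeholders as above.

    Optional: check the redeployed stack
    sudo docker stack ps <your da stack name>
    sudo docker service logs <your da stack name>_policy1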