


Smart ID Digital Access component supports a distributed mode that enables high availability and failover, providing flexibility and scalability. In this mode, the Digital Access component switches to a redundant service once the primary one stops working. More than one redundant service can be configured. High availability enables systems to meet demanding service-level agreement (SLA) requirements.

This article describes how to set up high availability for two Digital Access components with Docker swarm and how to run the services. See also High availability architecture for Digital Access component.

  • The manager node is the node that hosts the administration service.
  • A worker node is a node that hosts the other services and does not run the administration service.


Prerequisites


The following prerequisites apply:

  • The following ports must be open to traffic to and from each Docker host participating in the overlay network (see the firewall sketch after this list):
    • TCP port 2377 for cluster management communications
    • TCP and UDP port 7946 for communication among nodes
    • UDP port 4789 for overlay network traffic
  • For more details, refer to: https://docs.docker.com/network/overlay/
  • Make a note of the IP addresses of the nodes where the access point is running.
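
As a minimal sketch, assuming the hosts run firewalld (adapt the commands to whatever firewall you use), the ports can be opened like this:

    Open swarm ports (firewalld example)
    sudo firewall-cmd --permanent --add-port=2377/tcp
    sudo firewall-cmd --permanent --add-port=7946/tcp
    sudo firewall-cmd --permanent --add-port=7946/udp
    sudo firewall-cmd --permanent --add-port=4789/udp
    sudo firewall-cmd --reload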

Step-by-step instruction

Get token and stop services - manager node

 Get cluster join token
  1. SSH to the node running administration service, that is, the manager node.
  2. Get the cluster join token by running this command. This token will be used for joining worker nodes to the manager node.

    Get token
    sudo docker swarm join-token worker

    The output of the command will look like this:

    Output

    docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377


 Stop services
  1. Stop the running services.

    Stop services
    sudo docker stack rm <your da stack name>
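
    To confirm that the stack was removed, you can list the remaining stacks:

    Verify removal
    sudo docker stack ls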

Join as worker nodes

Do these steps on all worker nodes.

 Join the nodes as worker nodes
  1. SSH to the worker node(s).

  2. Stop the running services.

    Stop services
    sudo docker stack rm <your da stack name>
  3. Get the node ID.

    Get node ID
    sudo docker node ls
  4. Remove the labels.

    Remove labels
    sudo docker node update --label-rm  da-accesspoint <nodeid>
    sudo docker node update --label-rm  da-authentication <nodeid>
    sudo docker node update --label-rm  da-distribution <nodeid>
    sudo docker node update --label-rm  da-policy <nodeid>
    sudo docker node update --label-rm  da-admin <nodeid>
  5. If you are using PostgreSQL as the database, remove the label using this command (do not run this on the PostgreSQL node):

    If using PostgreSQL
    sudo docker node update --label-rm  postgres <nodeid>
  6. Remove the node from the current swarm.

    Remove node
    sudo docker swarm leave --force
  7. Join the manager's swarm using the command output from "Get cluster join token" above.

    Example of output of 'get token' command
    docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377
    
  8. On success, the output will be: This node joined a swarm as a worker.
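
    To double-check from the worker side, you can query the local swarm state; the expected output is active:

    Check swarm state
    sudo docker info --format '{{ .Swarm.LocalNodeState }}'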

Remove labels at manager node

  1. SSH to manager node.
  2. Remove the labels for all services that are not required on this node, for example:

    Remove label
    sudo docker node update --label-rm  da-accesspoint <nodeid>
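
    If you are unsure which labels are currently set on a node, you can first inspect them (a standard Docker command; <nodeid> comes from docker node ls):

    List node labels
    sudo docker node inspect --format '{{ .Spec.Labels }}' <nodeid>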

Edit configuration files


Navigate to the docker-compose folder and edit these files:

  • docker-compose.yml
  • network.yml
  • versiontag.yml

docker-compose.yml

For each service, add one section in the docker-compose.yml file.

For example, if you want to deploy two policy services on two nodes, you will have two configuration blocks, as shown in the example below.

Example:

Change the values for the following keys:

  • Service name
  • Hostname
  • Constraints
policy1:
    # configure image tag from versiontag.yml
    image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259
    hostname: policy1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          # If you need to set constraints using the node name:
          # - node.hostname == <node name>
          # Use the node label:
          - node.labels.da-policy1 == true
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
        max-size: 10m
 
policy2:
    # configure image tag from versiontag.yml
    image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259
    hostname: policy2
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          # If you need to set constraints using the node name:
          # - node.hostname == <node name>
          # Use the node label:
          - node.labels.da-policy2 == true
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
        max-size: 10m

network.yml

For each service, add a network configuration block in the network.yml file. For example, if you want to deploy two policy services on two nodes, you will have two blocks of configuration as shown below.

Example:

Change the value of:

  • Service name: the service name must be identical to the one used in docker-compose.yml
policy1:
    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

policy2:
    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

Also make sure that all listeners used for access point load balancing are exposed in network.yml, as in the sketch below.
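
As an illustration, assuming a hypothetical access point service named accesspoint1 with listeners on ports 443 and 8443 (both the service name and the ports are assumptions, not values from this guide), the corresponding block in network.yml could look like this:

accesspoint1:
    ports:
      - target: 443
        published: 443
        mode: host
      - target: 8443
        published: 8443
        mode: host
    networks:
      - da-overlay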

versiontag.yml

Add an entry for each service in this file as well.

For example, if you have two policy services named policy1 and policy2, you will have two entries, one per service.

Example:

policy1:
    image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259
policy2:
    image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.5.60259
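
As an optional sanity check, assuming docker-compose (or the docker compose plugin) is installed on the manager node, you can verify that the three files merge cleanly before deploying:

    Validate merged configuration
    sudo docker-compose -f docker-compose.yml -f network.yml -f versiontag.yml config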

At manager node

 Verify and identify nodes
  1. Verify that all nodes are part of the cluster by running this command.

    Verify that all nodes are part of the cluster
    sudo docker node ls
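
    The output will look similar to this (hostnames, the second node ID, and engine versions are illustrative):

    Example output
    ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
    h9u7iiifi6sr85zyszu8xo54l *   da-node1   Ready    Active         Leader           20.10.7
    m5k2jd8vqyydf2hs7qmx2gfhk     da-node2   Ready    Active                          20.10.7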

  2. Identify the node IDs of the manager and worker nodes across which the services will be distributed.

    Identify nodes
    sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
  3. Check the output from this command:

    {ready  192.168.86.129}

    The IP address helps you identify which Digital Access node this is.

 Add new labels for each service

Add new labels for each service that you want to run on the worker nodes. In this example, “2” is used as a suffix for each service name. You can choose any name, but make sure the labels match what is defined in the constraints sections of the docker-compose.yml file.

  1. Use these commands to add a label for each service:

    Commands to add labels
    sudo docker node update --label-add da-policy2=true <node ID>
    sudo docker node update --label-add da-authentication2=true <node ID>
    sudo docker node update --label-add da-accesspoint2=true <node ID>
    sudo docker node update --label-add da-distribution2=true <node ID>
  2. Deploy your stack using this command. To run the command, your working directory must be the docker-compose folder.

    Deploy DA stack
    sudo docker stack deploy --compose-file docker-compose.yml -c network.yml -c versiontag.yml <your da stack name>

    Here:

    • docker stack deploy is the command to deploy services as a stack.
    • The --compose-file flag provides the file name of the base docker-compose file.
    • -c is short for --compose-file. It is used to provide override files for docker-compose.
    • <your da stack name> is the name of the stack. You can change it based on your requirements.
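
    After deploying, you can verify that each replica was scheduled on the intended node (standard Docker commands; use the stack name you chose above):

    Verify service placement
    sudo docker stack ps <your da stack name>
    sudo docker service ls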

In Digital Access Admin

 Do updates in Digital Access Admin
  1. Log in to Digital Access Admin and change the internal host and port for each added service according to the docker-compose.yml and network.yml files.
  2. Go to Manage System > Distribution Services and:
    1. Select the checkbox “Listen on all Interfaces” for the ports that are to be exposed.
    2. Also select the checkbox “Distribute key files automatically”.

  3. Go to Manage System > Access Points and provide the IP address instead of the service name. Also enable the "Listen on all Interfaces" option.

Do final steps

  1. Make sure all services are stopped; if they are not, remove the stack using this command.

    Remove stack
    sudo docker stack rm <your da stack name>
  2. On the worker node, edit the service's local configuration file and provide values for:

    Local configuration
    <core>
      <id>6</id>
    </core>

    <attribute name="mHost" type="string" value="policy1"/>
    <attribute name="mId" type="integer" value="6"/>

  3. Copy the keys from the manager node to the worker node services.

    For the access point: copy only the shared key.

    For all other services enabled on the worker node: copy the internal and shared keys.

    Copy keys
    /opt/nexus/config/administration-service/keys# scp internal.key agadmin@<worker node IP>:/home/agadmin
    /opt/nexus/config/administration-service/keys# scp shared.key agadmin@<worker node IP>:/home/agadmin
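
    On the worker node, the copied keys then need to be placed in each service's configuration directory. A minimal sketch for the policy service, assuming the worker uses the same /opt/nexus/config/<service>/keys layout as the manager node (verify the actual paths on your installation):

    Move keys into place (policy service example)
    sudo mv /home/agadmin/internal.key /opt/nexus/config/policy-service/keys/
    sudo mv /home/agadmin/shared.key /opt/nexus/config/policy-service/keys/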
  4. Restart the services using this command.

    Restart services
    sudo docker stack deploy --compose-file docker-compose.yml -c network.yml -c versiontag.yml <your da stack name>
  5. If there is a database connection issue, restart the postgres container by stopping it with this command; the swarm scheduler starts a new one automatically.

    Restart postgres container
    sudo docker stop <postgres container ID>

    Then check the connection in Digital Access Admin with the IP address and password provided.
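
    To confirm that a new postgres container is running, you can filter the container list (this assumes the service name contains "postgres"):

    Verify postgres container
    sudo docker ps --filter name=postgres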