# UniConfig 5.0.1

# New Features

# Propagation of data-change-events from 'uniconfig' / 'unistore' configuration

Description

  • Implemented propagation of data-change-events into a distinct Kafka topic. Data-change-events are currently supported per-node in the 'uniconfig' and 'unistore' network-topologies.
  • Using subscriptions, the user specifies which subtrees to observe for data changes. Data-change-events are then generated by UniConfig instances after a transaction is committed, provided that the committed changes touch a subscribed subtree.

API

  • Created a new YANG module that defines the structure of data-change-events in the form of YANG notifications, together with RPCs for creating, deleting and reading subscriptions:
module data-change-events {
    yang-version 1.1;
    namespace "http://frinx.io/yang:data-change-events";
    prefix dce;
    organization "Frinx s.r.o.";
    contact "support@frinx.io";

    import frinx-types {
        prefix ft;
        revision-date "2021-04-09";
    }
    import network-topology {
        prefix nt;
        revision-date 2013-10-21;
    }
    import ietf-yang-types {
        revision-date 2013-07-15;
        prefix "yang";
    }

    revision 2022-01-25 {
        description "Initial revision";
    }

    typedef data-tree-scope {
        description "Represents scope of data change (addition, replacement, deletion).";
        type enumeration {
            enum BASE {
                description "Represents only a direct change of the node, such as replacement of a node,
                             addition or deletion.";
            }
            enum ONE {
                description "Represent a change (addition, replacement, or deletion) of the node or one
                             of its direct children.";
            }
            enum SUBTREE {
                description "Represents a change of the node or any of or any of its child nodes, direct and nested.
                             This scope is superset of ONE and BASE.";
            }
        }
    }

    grouping subtree-path {
        leaf subtree-path {
            type ft:rfc-8040-path;
            mandatory true;
            description "Identifier of the subtree in the data-tree expressed as relative path.";
        }
    }

    grouping subtree-identifier {
        uses subtree-path;
        leaf data-change-scope {
            type data-tree-scope;
            default "SUBTREE";
            description "Scope of detected data-tree change events under specified path.";
        }
    }

    grouping node-topology-identifier {
        leaf node-id {
            type nt:node-id;
            mandatory true;
            description "Identifier of 'network-topology/topology/node' list entry.";
        }
        leaf topology-id {
            type nt:topology-id;
            mandatory true;
            description "Identifier of 'network-topology/topology' list entry.";
        }
    }

    grouping node-subscriptions {
        list data-change-subscriptions {
            key "subscription-id";
            config false;
            description "List of active subscriptions to data-change events.";
            uses subscription-identifier;
            uses subtree-identifier;
        }
    }

    grouping subscription-identifier {
        leaf subscription-id {
            type yang:uuid;
            mandatory true;
            description "Unique identifier of the subscription.";
        }
    }

    rpc create-data-change-subscription {
        description "Registration to data-tree change events under specific node and path.";
        input {
            uses node-topology-identifier;
            uses subtree-identifier;
        }
        output {
            uses subscription-identifier;
        }
    }

    rpc delete-data-change-subscription {
        description "Unregistration from data-tree change events using provided subscription-id.";
        input {
            uses subscription-identifier;
        }
    }

    rpc show-subscription-data {
        input {
            uses subscription-identifier;
        }
        output {
            uses node-topology-identifier;
            uses subtree-identifier;
        }
    }

    augment "/nt:network-topology/nt:topology/nt:node" {
        uses node-subscriptions;
    }

    notification data-change-event {
        description "Notification about created/deleted/updated data entries.";
        uses subscription-identifier;
        leaf transaction-id {
            description "Transaction in which modifications were made.";
            type yang:uuid;
            mandatory true;
        }
        list edit {
            key "subtree-path";
            description "List of edits made in the transaction.";
            must "data-before or data-after";
            uses edit-content;
        }
    }

    grouping edit-content {
        uses subtree-path;
        leaf data-before {
            description "JSON-formatted representation of original data.";
            type string;
        }
        leaf data-after {
            description "JSON-formatted representation of updated data.";
            type string;
        }
    }
}
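
For illustration, a subscription can be created by invoking the create-data-change-subscription RPC through RESTCONF (typically a POST to /rests/operations/data-change-events:create-data-change-subscription). The request body below is only a sketch derived from the YANG module above (field names come from the node-topology-identifier and subtree-identifier groupings); the topology, node and subtree-path values mirror the Java client example later in this section:

{
    "input": {
        "topology-id": "unistore",
        "node-id": "test",
        "subtree-path": "/config:settings/process",
        "data-change-scope": "SUBTREE"
    }
}

The output returns the generated subscription-id (example UUID shown):

{
    "output": {
        "subscription-id": "f1a2b3c4-d5e6-47a8-9b0c-1d2e3f4a5b6c"
    }
}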

Documentation

Kafka Notifications | Frinx Docs
../../user-guide/uniconfig-operations/kafka-notifications/#data-change-events

Configuration

  • Added the following settings to the lighty-uniconfig-config.json file:
    • dataChangeEventsEnabled - enables/disables generation and distribution of data-change-events (enabled by default)
    • dataChangeEventsTopicName - name of the Kafka topic (the default identifier is 'data-change-events')
// Grouped settings that are related to notifications.
"notifications": {
    "kafka": {
        // Enabled collection and propagation of data-change-events into Kafka.
        "dataChangeEventsEnabled": true,
        // Unique identifier of the Kafka topic used for distribution of data-change-events.
        "dataChangeEventsTopicName": "data-change-events"
    }
}
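
Once enabled, every committed transaction that touches a subscribed subtree results in a message on this topic. The sketch below is based on the data-change-event notification defined above; the exact envelope and path formatting used on the topic are assumptions, and data-before/data-after are JSON-formatted strings:

{
    "subscription-id": "f1a2b3c4-d5e6-47a8-9b0c-1d2e3f4a5b6c",
    "transaction-id": "8f6d3a1e-2b4c-4d5e-9f0a-1b2c3d4e5f60",
    "edit": [
        {
            "subtree-path": "process=p1/address",
            "data-before": "{\"bus-id\":\"0x451FE\",\"bus-size\":1024}",
            "data-after": "{\"bus-id\":\"0xFFFF\",\"bus-size\":2048}"
        }
    ]
}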

Java client

Example of how to use data-change-events as triggers for a callback in the UniConfig Java client:

/*
 * Copyright © 2022 Frinx and others.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.frinx.samples;

import io.frinx.uniconfig.client.dce.api.DataChangeEventsService.DataChangeEventsListener;
import io.frinx.uniconfig.client.dce.api.DataChangeEventsService.DataChangeEventsSubscription;
import io.frinx.uniconfig.client.dce.impl.DataChange;
import io.frinx.uniconfig.client.dce.impl.DataChangeEventsSubscriptionData.DataChangeEventsSubscriptionDataBuilder;
import io.frinx.uniconfig.client.dce.impl.DataTreeScope;
import io.frinx.uniconfig.client.dom.node.LeafNode;
import io.frinx.uniconfig.client.services.Uniconfig;
import io.frinx.uniconfig.client.services.Uniconfig.UniconfigBuilder;
import io.frinx.uniconfig.client.services.config.KafkaSettings.KafkaBrokerLocation;
import io.frinx.uniconfig.client.services.config.KafkaSettings.KafkaSettingsBuilder;
import io.frinx.uniconfig.client.services.config.UniconfigServerSettings.UniconfigServerSettingsBuilder;
import java.time.Instant;
import java.util.Collections;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class DataChangeEventsUseCase {
    private static final Logger LOG = LoggerFactory.getLogger(DataChangeEventsUseCase.class);
    private static final String YANG_REPOSITORY_NAME = "system";
    private static final String UNISTORE_NODE_ID = "test";

    public static void main(String[] args) throws InterruptedException {
        final var uniconfig = new UniconfigBuilder()
                .setUniconfigServerSettings(new UniconfigServerSettingsBuilder()
                        .setUniconfigHostname("127.0.0.1")
                        .setRestServerPort(8181)
                        .setUsername("admin")
                        .setPassword("admin")
                        .build())
                // Kafka settings - topic name is important
                .setKafkaSettings(new KafkaSettingsBuilder()
                        .setDataChangeEventsTopic("data-change-events")
                        .setBrokers(Collections.singletonList(new KafkaBrokerLocation("127.0.0.1", 9092)))
                        .build())
                .build();

        LOG.info("Writing initial configuration for testing purposes");
        try (var rwTx = uniconfig.readWriteTransaction()) {
            final var unistore = rwTx.unistore(UNISTORE_NODE_ID, YANG_REPOSITORY_NAME);
            unistore.config()
                    .containerNode("config", "settings")
                    .with(settings -> {
                        settings.container("ospf")
                                .with(ospf -> {
                                    ospf.leaf("enabled").setBooleanValue(true);
                                    ospf.leaf("process-id").setUint8Value((short) 10);
                                    ospf.leaf("key-id").setUint8Value((short) 20);
                                    ospf.container("remote")
                                            .leaf("router-id").setStringValue("PCE-01234");
                                });
                        settings.list("process")
                                .with(processList -> {
                                    processList.listEntry(LeafNode.of("uid").setStringValue("p1"))
                                            .with(listEntry -> {
                                                listEntry.leaf("uid").setStringValue("p1");
                                                listEntry.container("address")
                                                        .with(address -> {
                                                            address.leaf("bus-id").setStringValue("0x451FE");
                                                            address.leaf("bus-size").setUint16Value(1024);
                                                        });
                                            });
                                    processList.listEntry(LeafNode.of("uid").setStringValue("p2"))
                                            .leaf("uid").setStringValue("p2");
                                });
                    })
                    .put();
            rwTx.commit();
        }

        LOG.info("Creation of new subscription");
        final var subscriptionInfo = new DataChangeEventsSubscriptionDataBuilder()
                .toUnistoreTopology()
                .setNodeId(UNISTORE_NODE_ID)
                .setSubtreePath("/config:settings/process")
                .setDataTreeScope(DataTreeScope.SUBTREE)
                .build();

        try (var processSubscription = uniconfig.dataChangeEventsService().subscribe(subscriptionInfo);
             var ignored = processSubscription.registerListener(new DataChangeListener())) {
            TimeUnit.SECONDS.sleep(10);
            LOG.info("Created new subscription: {}", processSubscription.subscriptionData().getSubscriptionId());

            updateProcessList(uniconfig);
            TimeUnit.HOURS.sleep(1);

            final var uuid = processSubscription.subscriptionData().getSubscriptionId();
            uniconfig.dataChangeEventsService().findSubscription(uuid).ifPresent(DataChangeEventsSubscription::close);
            
        }
    }

    private static void updateProcessList(final Uniconfig uniconfig) {
        LOG.info("Updating 'process' list");
        try (var rwTx = uniconfig.readWriteTransaction()) {
            // updating some leaves under 'p1'
            rwTx.unistore(UNISTORE_NODE_ID).config()
                    .containerPath("config", "settings")
                    .list("process")
                    .listEntryPath(LeafNode.of("uid").setStringValue("p1"))
                    .container("address")
                    .createDataNode()
                    .with(address -> {
                        address.leaf("bus-id").setStringValue("0xFFFF");
                        address.leaf("bus-size").setUint16Value(2048);
                    })
                    .merge();
            // removal of list entry 'p2'
            rwTx.unistore(UNISTORE_NODE_ID).config()
                    .containerPath("config", "settings")
                    .list("process")
                    .listEntryPath(LeafNode.of("uid").setStringValue("p2"))
                    .delete();
            // creation of new list entry 'p3'
            rwTx.unistore(UNISTORE_NODE_ID).config()
                    .containerPath("config", "settings")
                    .list("process")
                    .listEntryPath(LeafNode.of("uid").setStringValue("p3"))
                    .createDataNode()
                    .put();
            rwTx.commit();
        }
    }

    private static final class DataChangeListener implements DataChangeEventsListener {
        @Override
        public void onDataChanged(final UUID transactionId, final Instant timestamp,
                                  final List<DataChange> dataChangeList) {
            LOG.info("{}: Captured data-change-event, committed transaction: {}", timestamp, transactionId);
            dataChangeList.forEach(dataChange -> {
                LOG.info("Data before: {}", dataChange.getDataBefore());
                LOG.info("Data after: {}", dataChange.getDataAfter());
            });
        }
    }
}

# Added config option to disable immediate-commit model

Description

  • The immediate-commit model can be dangerous in some cases, because changes are automatically committed to managed network devices.
  • Added an option to disable the immediate-commit model globally.

Configuration

New setting 'isImmediateCommitEnabled' in the lighty-uniconfig-config.json:

// Grouped settings that are related to Uniconfig transactions.
"transactions": {
    /*
    Boolean value if the Immediate Commit Model is enabled or not. Default value is true.
    If disabled, only manually created transactions can exist.
    */
    "isImmediateCommitEnabled": true
}

Default value is 'true'.

# Calling replace-config-with-oper after sync-from-network in the immediate-commit-model

  • In the immediate-commit model, if the user called the sync-from-network operation, it behaved like the 'sync-to-network' operation:
    1. reading the configuration from the device
    2. resolving the diff between the actual state (device) and the intended state (last saved configuration in the database)
    3. sending the resolved diff to the device - reverting changes that had been made on the device side

  • This is dangerous if the network device is configured manually by a user or another tool.
  • Fixed by calling the replace-config-with-oper operation after sync-from-network and before committing the temporary transaction created in the immediate-commit model session. As a result, the loaded configuration is stored in the database without performing any action on managed devices.
  • This change affects only the immediate-commit model; the build-and-commit model stays unchanged.

# Making default CLI connection parameters configurable

Description

  • There are a couple of CLI connection parameters with default values, defined in the cli-topology YANG module, that can be specified when a device is installed.
  • This feature allows the user to adjust these default values without repeatedly specifying them in the install-node RPC request.
  • Priority of install parameters (highest to lowest):
    • Parameter set in the install RPC request
    • Default parameter set in the database
    • Default parameter from the YANG model

Documentation

Device installation | Frinx Docs
../../user-guide/network-management-protocols/uniconfig-installing/#default-parameters

API

Default CLI settings are exposed in a distinct container that is accessible through the RESTCONF API (module cli-topology):

container cli-default-parameters {
    uses cli-node-keepalive-parameters;
    uses cli-node-keepalive-strategy-parameters;
    uses cli-node-lazy-strategy-parameters;
    uses cli-node-tracing-parameters;
    uses cli-node-parsing-parameters;
}
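
For example, a default value can be overridden by writing into this container over RESTCONF. The payload below is only a sketch (the exact RESTCONF path under which cli-default-parameters is exposed may differ), using max-connection-attempts from the cli-node-keepalive-parameters grouping shown in the Bug Fixes section:

{
    "cli-topology:cli-default-parameters": {
        "max-connection-attempts": 1
    }
}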

The settings are also exposed in the UniConfig shell (configuration mode, 'settings' container).

# Making default NETCONF connection parameters configurable

Description

  • There are a couple of NETCONF connection parameters with default values, defined in the netconf-topology YANG module, that can be specified when a device is installed.
  • This feature allows the user to adjust these default values without repeatedly specifying them in the install-node RPC request.
  • Priority of install parameters (highest to lowest):
    • Parameter set in the install RPC request
    • Default parameter set in the database
    • Default parameter from the YANG model

Documentation

Device installation | Frinx Docs
../../user-guide/network-management-protocols/uniconfig-installing/#default-parameters

API

Default NETCONF settings are exposed in a distinct container that is accessible through the RESTCONF API (module netconf-node-topology):

container netconf-default-parameters {
    uses netconf-node-default-connection-parameters;
    uses netconf-strict-parsing;
    uses netconf-device-customization-factory;
    uses netconf-node-tracing-parameters;
}
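
Analogously to the CLI case, a sketch of a payload targeting this container (assuming it is addressed under the netconf-node-topology module name):

{
    "netconf-node-topology:netconf-default-parameters": {
        "max-connection-attempts": 1
    }
}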

The settings are also exposed in the UniConfig shell (configuration mode, 'settings' container).

# 💡 Improvements

# Improved displaying of children nodes of DataNode in Java client

  • Child nodes were organised under multiple levels of Map objects, which was hard to read, especially when debugging code.
  • Now, child nodes are displayed directly in a simple List collection.

# Added YANG-based documentation to Java client

Added JavaDoc descriptions to the DataNode and DataNodePath sub-classes explaining how they relate to the YANG schema tree.

Example:

/**
 * Container that has only a single instance and can have multiple children nodes.
 *
 * <pre>{@code
 * Example:
 *
 *  module mod1 {
 *      namespace "www.company.com/mod1";
 *      prefix mod1;
 *
 *      grouping g1 {
 *          container c1 {
 *              leaf l1 {
 *                  type string;
 *              }
 *          }
 *      }
 *  }
 *
 *  module mod2 {
 *      namespace "www.company.com/mod2";
 *      prefix mod2;
 *
 *      import mod1 {
 *         prefix mod1;
 *      }
 *
 *      container c2 {
 *          uses mod1:g1;
 *      }
 *  }
 * }</pre>
 *
 * <p>For provided yang {@link ContainerNode} can be created this way:</p>
 * <pre>{@code
 *   ContainerNode c2 = ContainerNode.of("mod2", "c2");
 *   ContainerNode c1 = c2.container("mod1", "c1");
 * }</pre>
 */
public final class ContainerNode extends CompositeDataNode<ContainerNodePath, ContainerId, ContainerNode>

# Removed redundant module-name prefixes from built paths in Java client

  • RFC-8040 specifies that the module-name prefix must be added only to elements whose namespace differs from that of their parent (for example, the first element of an augmentation or a top-level element).
  • Previously, the module name was redundantly added to all elements of the path:
PATH: /m1:element1/m1:element2/m2:element3/m2:element4

After improvement:

PATH: /m1:element1/element2/m2:element3/element4

# Added option to enable PostgreSQL driver logs in UniConfig

Description

  • Logging connections and communication between UniConfig and PostgreSQL can be handy when debugging errors.

Configuration

  • To log detailed information about executed queries and PostgreSQL connections, set the org.postgresql logger level to DEBUG or TRACE in logback.xml (the logger is set to INFO by default, as shown below).
<!-- used for automatic propagation of set logger level to JUL logging implementation -->
<contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
    <resetJUL>true</resetJUL>
</contextListener>

<!-- PostgreSQL driver -->
<logger name="org.postgresql" level="INFO"/>

# Added transaction-id to both RESTCONF requests and responses

Description

  • UniConfig transaction-id simplifies debugging of executed RESTCONF operations.
  • Example (added 'Uniconfig transaction' property):
08:51:21.508 TRACE org.opendaylight.restconf.nb.rfc8040.jersey.providers.logging.RestconfLoggingBroker - HTTP request:
Message ID: 3
Uniconfig transaction: b6639cb4-55f2-449e-a91e-d2ad490198d2
HTTP method: POST
URI: http://localhost:8181/rests/operations/logging:enable-device-logging
Source address: 0:0:0:0:0:0:0:1
Source port: 37472
User ID: admin@sdn
HTTP headers:
    User-Agent: [curl/7.69.1]
    Authorization: ***
    Host: [localhost:8181]
    Accept: [*/*]
    Content-Length: [116]
    Content-Type: [application/json]
Request body:
{
  "input": {
    "broker-identifier": "netconf_notifications",
    "device-list": [
      "xr6",
      "xr7"
    ]
  }
}

08:51:21.518 TRACE org.opendaylight.restconf.nb.rfc8040.jersey.providers.logging.RestconfLoggingBroker - HTTP response:
Request message ID: 3
Uniconfig transaction: b6639cb4-55f2-449e-a91e-d2ad490198d2
Status code: 200
HTTP headers:
    Content-Type: [application/yang-data+json]
Response body:
{
  "output": {
    "message": "Successfully updated logging broker [netconf_notifications]",
    "status": "complete"
  }
}

Documentation

Logging Framework | Frinx Docs
../../user-guide/operational-procedures/logging/#restconf

# Hiding sensitive data in logs

UniConfig logs contained sensitive data, such as PostgreSQL database credentials. This is a potential security hole.

Example:

10:33:30.365 INFO io.frinx.lighty_uniconfig.initializers.ConfigParsingUtils - dbPersistence configuration: {"embeddedDatabase":{"enabled":false,"dataDir":"./data/pg_dir","cleanDataDir":true},"connection":{"databaseLocations":[{"host":"uniconfig-postgres","port":5432}],"dbName":"uniconfig","username":"postgres","password":"postgres","initialDbPoolSize":5,"maxDbPoolSize":100,"maxIdleConnections":5,"maxWaitTime":30000,"enabledTls":false,"tlsClientCert":"/opt/uniconfig-frinx/./client.pks","tlsClientKey":"/opt/uniconfig-frinx/./client.key","tlsCaCert":"/opt/uniconfig-frinx/./ca.pks"},"uniconfigInstance":{"instanceName":null,"backupInstanceName":null},"monitoring":{"heartbeatInterval":1000,"initialMonitoringDelay":5000,"backupMonitoringInterval":10000,"maxInstanceAge":5000}}

Fixed by hiding JSON configuration parsing details from logs.

# Bug Fixes

# Fixed invocation of device-discovery RPC multiple times

  • The RPC response also contained results from previous RPC invocations.
  • Fixed by isolating the results of each RPC invocation.

# Fixed setting of max-connection attempts during device installation (CLI/NETCONF)

Description

  • Removed the max-connection-attempts parameter from the install-node RPC. It clashed with parameters from the southbound layers and caused confusion.
  • Fixed swapped loading of max-connection-attempts and max-reconnection-attempts on the NETCONF layer, which resulted in an infinite number of initial connection attempts (by default, there should be 1 attempt).
  • Set the default max-connection-attempts to 1 in the YANG models (both CLI and NETCONF layers).

Documentation

Updated document:

Device installation | Frinx Docs
../../user-guide/network-management-protocols/uniconfig-installing/#device-installation

API

  • connection-manager - removed leaf max-connection-attempts:
grouping mount-node-input-fields {
    uses node-connection;

-    leaf max-connection-attempts {
-        type uint32;
-        description "Maximum number of connection attempts in southbound layer, before mounting process fails.
-                     Value of this field doesn't override value from the mount request - it is used only as
-                     a default value.";
-        default 1;
-    }
}

cli-topology - setting max-connection-attempts default value to 1:

grouping cli-node-keepalive-parameters {
    leaf max-connection-attempts {
        type uint32;
        default 1;
        description "Maximum number of connection attempts before connection initialization is marked as failed.
                     Value 0 disables this limit.";
    }

netconf-node-topology - setting max-connection-attempts default value to 1:

grouping netconf-node-default-connection-parameters {
    leaf max-connection-attempts {
        description "Maximum number of connection retries. Non positive value or null is interpreted as infinity.";
        config true;
        type uint32;
        default 1;
    }

# Fixed stuck UniConfig API because of interrupted SQL operation

Description

  • The default socket-read-timeout for the PostgreSQL driver is 0, which means UniConfig can wait forever for the result of a query. If the connection between UniConfig and PostgreSQL is dropped while a query is being executed, this blocks other UniConfig operations on the affected node.
  • Fixed by exposing the socketReadTimeout parameter and setting its default value to 20 seconds.

Configuration

Added socketReadTimeout to database connection settings (lighty-uniconfig-config.json):

"dbPersistence": {
    // Grouped settings related to database connection.
    "connection": {
        /*
        Timeout value used for socket read operations. If reading from the server takes longer than this value,
        the connection is closed. This can be used as both a brute force global query timeout and a method of
        detecting network problems. The timeout is specified in seconds and a value of 0 means that it is disabled.
        */
        "socketReadTimeout": 20
    }
}

# Fixed propagation of error on disabled templates

  • If templates are disabled, the user now receives a clear error message with a 400 status code. Previously, the request either failed with a parsing error or did not fail at all, with UniConfig silently ignoring the unknown data.

Example:

{
    "errors": {
        "error": [
            {
                "error-message": "Templates are disabled in the config!",
                "error-tag": "unknown-element",
                "error-type": "application"
            }
        ]
    }
}

# Fixed PKI authentication to NETCONF device (negative cases)

PKI authentication on a device - attempting to install a device that references a non-existing private key

Previously it failed with error:

DEBUG    uniconfig:uniconfig.py:297 response: 200 {"output":{"status":"fail","error-message":"Unified mountpoint 'versa' was not successfully created; connection-status: failed, status-message: cannot create southbound mountpoint (unable-to-connect). Maximum reconnection attempts reached. Cause : No more authentication methods available"}}

After the fix, it fails with an error message stating that a private key with the specified identifier does not exist.

PKI authentication on a device - registering a password-protected key with the netconf-keystore:add-keystore-entry RPC, but providing a wrong password

  • Fixed by validating the input password against the keystore.
  • If the password is invalid, UniConfig returns an error immediately and does not try to register the private key or use it later during the mounting process. A sketch of the RPC input is shown below.
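
For reference, a sketch of the add-keystore-entry RPC input used to register a password-protected key (field names follow the OpenDaylight netconf-keystore model and may differ between versions; the values are placeholders):

{
    "input": {
        "key-credential": [
            {
                "key-id": "device-private-key",
                "private-key": "<PEM-encoded private key>",
                "passphrase": "wrong-password"
            }
        ]
    }
}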

# NETCONF edit-config operation with insert attribute failed because of aggregated messages

When the insert attribute was used with the value before/after, there was a problem with the ordering of NETCONF messages in the aggregated message.

Fixed by ensuring that the list entry specified by the insert attribute is placed before the actual list entry in the edit-config message sent to the NETCONF server.