Data Export/Import

Introduction

Frinx DAEXIM (Data Export/Import) is a fork of ODL’s DAEXIM project and is part of FRINX UniConfig. DAEXIM can be used to export data from the data-store to the file-system or, in the reverse direction, to import data from the file-system into the data-store - it is effectively a tool for data backup. The ODL implementation only contains RPCs for importing or exporting whole data-stores, excluding selected root paths (blacklisted paths). FRINX has supplemented this implementation with two RPCs that also allow exporting only the data under selected paths (whitelisted paths) from the operational or configurational data-store.

Interaction with DAEXIM in FRINX UniConfig consists of initial configuration and RPCs (RESTCONF) that can be used directly for importing/exporting data. The following sections describe the DAEXIM configuration and the available RPCs.

Configuration

The DAEXIM directory is the only configuration parameter available in FRINX UniConfig. The string value of this parameter represents the path to the DAEXIM storage where exported JSON-encoded files are stored and from which DAEXIM backups are loaded.

By default, the DAEXIM directory is set to ‘daexim’ in the FRINX UniConfig root directory. It can be overridden by modifying the following section in the ‘config/lighty-uniconfig-config.json’ configuration file:

{
    "daexim": {
        "daeximDirectory": "custom/path/to/daexim"
    }
}
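A minimal sketch of preparing a custom DAEXIM directory and pointing the configuration at it. The installation path and the idea of writing the file from scratch are assumptions - in a real installation, merge the ‘daexim’ section into the existing ‘config/lighty-uniconfig-config.json’ instead of overwriting the file:

```shell
# Sketch only: UNICONFIG_HOME is a stand-in for the real installation root.
UNICONFIG_HOME="$(mktemp -d)"
mkdir -p "$UNICONFIG_HOME/backups/daexim" "$UNICONFIG_HOME/config"

# In a real installation, merge this section into the existing config file
# instead of replacing it wholesale.
cat > "$UNICONFIG_HOME/config/lighty-uniconfig-config.json" <<EOF
{
    "daexim": {
        "daeximDirectory": "$UNICONFIG_HOME/backups/daexim"
    }
}
EOF
echo "daexim directory: $UNICONFIG_HOME/backups/daexim"
```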

Note

Configuration of the ‘importOnInit’ flag is suppressed in FRINX UniConfig - it is always set to ‘false’. This functionality has not been enabled in FRINX UniConfig because the original DAEXIM implementation requires the initial-import flag to be set back to ‘false’ after the first initial-import task completes.

Exporting of data

RPC: Whitelisted export

The whitelisted export RPC can be used to export only selected subtrees from the operational or configurational data-store. Subtrees are identified by the ‘instance-identifier’ YANG built-in type (see https://tools.ietf.org/html/rfc6020#page-133 for information about the format of these paths). It is possible to export multiple subtrees from both the configurational and operational data-stores using one RPC request.

The following example shows how to export the ‘uniconfig’, ‘cli’, and ‘topology-netconf’ topology subtrees from the configurational data-store and the ‘xrnetconf’ node subtree (placed in the ‘unified’ topology) from the operational data-store.

curl --request POST 'http://127.0.0.1:8181/rests/operations/data-export-import:whitelisted-export' \
--header 'Content-Type: application/json' \
--data-raw '{
    "input": {
        "whitelisted-paths": [
            {
                "data-store": "config",
                "paths": [
                    "/network-topology:network-topology/topology[topology-id='\''uniconfig'\'']",
                    "/network-topology:network-topology/topology[topology-id='\''cli'\'']",
                    "/network-topology:network-topology/topology[topology-id='\''topology-netconf'\'']"
                ]
            },
            {
                "data-store": "operational",
                "paths": [
                    "/network-topology:network-topology/topology[topology-id='\''unified'\'']/node[node-id='\''xrnetconf'\'']"
                ]
            }
        ]
    }
}'

If the export job finishes successfully, the following output with result and reason is wrapped in the reply (if it fails, the result value is set to ‘false’ and the reason contains an error message):

{
    "output": {
        "reason": "Whitelisted export has been successfully completed.",
        "result": true
    }
}
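In scripts, the ‘result’ flag in the reply can be checked before proceeding. A sketch using a saved reply body (‘reply.json’ here stands in for the body returned by the curl call above):

```shell
# Stand-in for the reply body of the whitelisted-export call.
cat > reply.json <<'EOF'
{"output": {"reason": "Whitelisted export has been successfully completed.", "result": true}}
EOF

# Extract the boolean result with the Python standard library.
result=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["output"]["result"])' < reply.json)
if [ "$result" = "True" ]; then
  echo "export ok"
else
  echo "export failed" >&2
fi
```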

Explanation of the fields that are placed in input body:

  • whitelisted-paths - List of data-stores from which selected subtrees are exported. The ‘data-store’ leaf represents the key of this list.

  • data-store - Name of the data-store. Possible values are ‘config’ and ‘operational’ (for this reason, ‘whitelisted-paths’ can have at most 2 entries).

  • paths - Leaf-list that contains the paths of the subtrees to be exported. The format of these paths must follow the ‘instance-identifier’ built-in YANG type.

Repeated calling of the whitelisted export RPC performs the following rollback-aware task:

  1. All current whitelisted DAEXIM files for the specific data-store are backed up by renaming them (adding a prefix).

  2. The selected subtrees are exported to separate DAEXIM files.

    1. If the export task succeeds, the backed-up whitelisted DAEXIM files containing the prefix in the file name are removed.

    2. If the export task fails, all whitelisted DAEXIM files without the added prefix are removed and the backed-up whitelisted DAEXIM files are renamed back to their original names (the previously added prefix is removed).
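The rollback-aware sequence above can be sketched for one data-store as follows (the ‘backup_’ prefix and the file names are illustrative only, not the exact names DAEXIM uses):

```shell
DIR="$(mktemp -d)"                              # stand-in for the DAEXIM directory
echo '{}' > "$DIR/odl_filtered_config_111.json" # pretend this is a previous export

# 1. back up the current whitelisted files by renaming them (adding a prefix)
for f in "$DIR"/odl_filtered_config_*.json; do
  mv "$f" "$DIR/backup_$(basename "$f")"
done

# 2. write the new export; roll back from the backups on failure
if echo '{"metadata":{},"data":{}}' > "$DIR/odl_filtered_config_222.json"; then
  rm -f "$DIR"/backup_*                         # 2a. success: drop the backups
else
  rm -f "$DIR"/odl_filtered_config_*.json       # 2b. failure: remove partial output
  for f in "$DIR"/backup_*; do                  #     and restore the original names
    mv "$f" "$DIR/$(basename "$f" | sed 's/^backup_//')"
  done
fi
ls "$DIR"
```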

Note

If the input ‘whitelisted-paths’ list is empty or the input is not specified at all, no data is exported.

RPC: Simple export

The simple export RPC can be used to export a whole data-store except the data from specified modules in the specified data-stores (the black-list specification is not mandatory - in that case, both the configurational and operational data-stores are serialized into JSON). Each time this RPC is called, old DAEXIM files are removed and new files are written. There are exactly two DAEXIM files - one for the configurational data-store and a second one that represents the content of the operational data-store. It is not possible to feed this RPC with only specific paths that should be exported - the data-store is serialized to JSON starting at the data-tree root.

The following example shows how to back up the whole configurational data-store except the data from the ‘aaa-cert-mdsal’ and ‘cli-translate-registry’ modules. The operational data-store is not exported - for this purpose, the asterisk character is used in place of the ‘module-name’ leaf value (in this case, an empty JSON is actually written into the file: ‘{}’).

curl --request POST 'http://127.0.0.1:8181/rests/operations/data-export-import:simple-export' \
--header 'Content-Type: application/json' \
--data-raw '{
    "input": {
        "excluded-modules": [
            {
                "module-name": "aaa-cert-mdsal",
                "data-store": "config"
            },
            {
                "module-name": "cli-translate-registry",
                "data-store": "config"
            },
            {
                "module-name": "*",
                "data-store": "operational"
            }
        ]
    }
}'

If export task completes without any error, the following RPC output is returned:

{
    "output": {
        "result": true
    }
}
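After a successful run, the two backup files described later in the ‘Format of serialized data’ section should be present in the DAEXIM directory. A sketch of such a check (the directory and file contents below are stand-ins for what the RPC would have written):

```shell
DAEXIM_DIR="$(mktemp -d)"   # stand-in for the configured DAEXIM directory
# stand-ins for the files the RPC would have written:
echo '{"a":1}' > "$DAEXIM_DIR/odl_backup_config.json"
echo '{}'      > "$DAEXIM_DIR/odl_backup_operational.json"

for f in odl_backup_config.json odl_backup_operational.json; do
  [ -s "$DAEXIM_DIR/$f" ] || { echo "missing $f" >&2; exit 1; }
done
echo "backup files present"
```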

Explanation of the fields that are placed in input body:

  • excluded-modules - List of excluded modules. The key of the list is represented by the ‘module-name’ leaf.

  • module-name - Name of the module, or the ‘*’ character, which causes the whole data-store to be skipped during the serialization process.

  • data-store - Name of the data-store. Possible values are ‘config’ and ‘operational’.

Note

An empty input in the RPC request causes the whole configurational and operational data-stores to be exported.

Note

Data from the module named ‘data-export-import-internal’ (only in the operational data-store) is implicitly excluded from the serialization process.
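As the first note above says, an empty input exports both data-stores, so the request body is simply ‘{"input": {}}’. A quick sanity check that such a body is valid JSON before sending it (the actual curl call is unchanged from the example above):

```shell
BODY='{"input": {}}'
# json.tool exits non-zero on invalid JSON, so this doubles as a validity check
echo "$BODY" | python3 -m json.tool
```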

Importing of data

RPC: Whitelisted import

The whitelisted import RPC can be used to import selected or all DAEXIM backup files that have been created by a previous whitelisted export task. The data to be imported must exactly match the paths that were previously exported - importing partial subtrees from exported data is not supported for now. It is also not possible to use the whitelisted import RPC to import data that was exported using the simple export RPC (in other words, simple export/import is not compatible with whitelisted import/export). As with whitelisted export, it is possible to import multiple paths from the configurational and operational data-stores at once.

The next example shows how to import some of the previously exported configurational subtrees - the ‘cli’ and ‘topology-netconf’ topologies.

curl --request POST 'http://127.0.0.1:8181/rests/operations/data-export-import:whitelisted-import' \
--header 'Content-Type: application/json' \
--data-raw '{
    "input": {
        "whitelisted-paths": [
            {
                "data-store": "config",
                "paths": [
                    "/network-topology:network-topology/topology[topology-id='\''cli'\'']",
                    "/network-topology:network-topology/topology[topology-id='\''topology-netconf'\'']"
                ]
            }
        ]
    }
}'

If the import job finishes successfully, the following output with result and reason is wrapped in the reply (if it fails, the result value is set to ‘false’ and the reason contains an error message):

{
    "output": {
        "reason": "Whitelisted import has been successfully completed.",
        "result": true
    }
}

Explanation of the fields that are placed in input body:

  • whitelisted-paths - List of data-stores into which selected subtrees are imported. The ‘data-store’ leaf represents the key of this list.

  • data-store - Name of the data-store. Possible values are ‘config’ and ‘operational’ (for this reason, ‘whitelisted-paths’ can have at most 2 entries).

  • paths - Leaf-list that contains the paths of the subtrees to be imported.

It is also possible to import all available whitelisted DAEXIM files at once (from both the configurational and operational data-stores):

{
    "input": {
    }
}

If you would like to import all whitelisted DAEXIM files, but only into the configurational data-store, just send the following RPC request:

{
    "input": {
        "whitelisted-paths": [
            {
                "data-store": "config"
            }
        ]
    }
}

Note

Be aware that importing a whole topology also triggers mounting of all devices that were stored by DAEXIM.

Note

Although it is possible, it is not recommended to import operational data into the data-store. Operational data should capture the current state of the system - it usually does not make sense to restore such data from a backup.

RPC: Immediate import

Immediate import is used to import all DAEXIM files that were exported by the simple export RPC (DAEXIM files created by the whitelisted export RPC are not loaded). There is no option to select which data to import, as there is in the simple export RPC input.

The next example shows how to invoke the immediate import task. The ‘clear-stores’ option is set to ‘all’, which causes removal of all data from both data-stores before loading the data from the JSON files. The other possible leaf values are ‘none’ (no data is deleted) and ‘data’ (only data in data-stores for which data files are supplied and do not contain empty JSON is deleted).

curl --request POST 'http://127.0.0.1:8181/rests/operations/data-export-import:immediate-import' \
--header 'Content-Type: application/json' \
--data-raw '{
    "input": {
        "clear-stores": "all"
    }
}'

If import task completes without any error, the following RPC output is returned:

{
    "output": {
        "result": true
    }
}
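Since ‘clear-stores’ accepts only the three values described above, a small guard before building the request body can catch typos early (sketch only):

```shell
CLEAR_STORES="all"   # one of: all, none, data
case "$CLEAR_STORES" in
  all|none|data) valid=yes ;;
  *)             valid=no  ;;
esac
echo "clear-stores=$CLEAR_STORES valid=$valid"
```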

Format of serialized data

Exported data is encoded in JSON format, so it is readable and easily editable. The structure of the serialized data depends on the RPC used for exporting.

Simple export RPC

  • Two files are created, one per data-store - ‘odl_backup_config.json’ and ‘odl_backup_operational.json’.

  • Each of the files contains one JSON document that represents the root of the data-tree.

  • An empty JSON document in a file represents a skipped data-store whose data has not been exported - the ‘*’ character was used in place of the excluded module name. Such a data file is also skipped during the import process.

  • The JSON data is squashed - new-lines and other redundant whitespace characters are removed.

Formatted example of exported data (shortened output):

{
  "network-topology:network-topology": {
    "topology": [
      {
        "topology-id": "cli",
        "node": [
          {
            "node-id": "xrcli",
            "node-extension:reconcile": false,
            "cli-topology:host": "192.168.1.214",
            "cli-topology:transport-type": "ssh",
            "cli-topology:dry-run-journal-size": 150,
            "cli-topology:username": "cisco",
            "cli-topology:password": "cisco",
            "cli-topology:journal-size": 150,
            "cli-topology:port": 22,
            "cli-topology:device-version": "5.3.4",
            "cli-topology:device-type": "ios xr",
            "cli-topology:command-timeout": 120,
            "cli-topology:connection-lazy-timeout": 30,
            "cli-topology:connection-establish-timeout": 60
          }
        ]
      }
    ]
  }
}
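Because the exported files are squashed into a single line, it can be handy to pretty-print a copy for inspection, e.g. with the Python standard library (the file content below is a shortened stand-in for a real export):

```shell
# stand-in for a real exported file (single-line, squashed JSON)
printf '%s' '{"network-topology:network-topology":{"topology":[{"topology-id":"cli"}]}}' \
  > odl_backup_config.json
# pretty-print it for reading
python3 -m json.tool odl_backup_config.json
```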

Whitelisted export RPC

  • Multiple files can be created for each data-store - one JSON file is created per instance-identifier path and data-store. The common prefix for data exported from the configurational data-store is ‘odl_filtered_config_’ and the common prefix for data exported from the operational data-store is ‘odl_filtered_operational_’. After this prefix, an integer is placed (it is computed as a hash of the supplied instance-identifier).

  • The first container, which is always placed first in the JSON, is identified by the ‘metadata’ name. This header contains the serialized YANG instance identifier (‘yiid’) of the exported data and a list of modules (‘usedModules’) that are required for successful importing of this DAEXIM file. A module is described by its ‘namespace’ and optionally its ‘revision’. The list of modules is derived from the modules used in the YANG instance identifier plus the modules used in the exported data.

  • The second part of the JSON file starts with the ‘data’ identifier. This container encapsulates the exported data. If no data exists on the selected path, the ‘data’ container is not written into the JSON file.

  • The JSON data is also squashed - new-lines and other redundant whitespace characters are removed.

Formatted example of exported data:

{
  "metadata": {
    "yiid": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='topology-netconf']",
    "usedModules": [
      {
        "namespace": "urn:opendaylight:netconf-node-topology",
        "revision": "2015-01-14"
      },
      {
        "namespace": "urn:TBD:params:xml:ns:yang:network-topology",
        "revision": "2013-10-21"
      }
    ]
  },
  "data": {
    "network-topology:topology": [
      {
        "topology-id": "topology-netconf",
        "node": [
          {
            "node-id": "xrnetconf",
            "netconf-node-topology:host": "192.168.1.216",
            "netconf-node-topology:password": "cisco",
            "netconf-node-topology:username": "cisco",
            "netconf-node-topology:dry-run-journal-size": 180,
            "netconf-node-topology:port": 22,
            "netconf-node-topology:keepalive-delay": 0,
            "netconf-node-topology:tcp-only": false
          }
        ]
      }
    ]
  }
}
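The ‘metadata’ header makes it possible to inspect a whitelisted DAEXIM file without importing it. A sketch that reads the ‘yiid’ out of such a file (the file name and content are illustrative, shortened stand-ins):

```shell
# stand-in for a whitelisted DAEXIM file
cat > odl_filtered_config_sample.json <<'EOF'
{"metadata":{"yiid":"/network-topology:network-topology","usedModules":[]},"data":{}}
EOF
# extract the exported instance identifier from the metadata header
yiid=$(python3 -c 'import json; print(json.load(open("odl_filtered_config_sample.json"))["metadata"]["yiid"])')
echo "exported path: $yiid"
```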