dssrefresh - DSS 6 | Data Source Solutions Documentation


dssrefresh

Usages

dssrefresh [-Rurl] [-options]... hub chn

Description

Command dssrefresh copies tables (available in a channel) from source location(s) to target location(s). The source must be a database location, but the targets can be databases or file locations. For more information, see the Refresh concept page.

For consumption-based licensing, Data Source Solutions offers a five-day troubleshooting window per table for performing free Refreshes in DSS. This window starts on the day you first perform a Refresh on a table in a given month. Any subsequent Refreshes done outside this window count towards paid MAR.

Refreshing from a source location is supported only on certain location types. For the list of supported source location types, see section Refresh and Compare in Capabilities.

Command dssrefresh corresponds to the Refresh Data into Target dialog in the User Interface.

The effects of dssrefresh can be customized by defining various actions in the channel. For example, defining action Restrict with parameter RefreshCondition allows only certain rows of a table to be refreshed. Parameter Context can be used with option -C to enable such restrictions dynamically. Another form of customization is to employ SQL views: Refresh can read data from a view in the source database, and row-wise refresh can also select from a view in the target database, rather than a real table, when comparing the incoming changes.

If row-wise dssrefresh connects different DBMS types, an ambiguity can arise from certain data type coercions. For example, DSS coercion maps an empty string from other DBMSs into a null in an Oracle varchar. If Ingres location ing contains an empty string and Oracle location ora contains a null, should DSS report these tables as the same or different? Command dssrefresh allows both behaviors by applying the sensitivity of the 'write' location, not the 'read' location specified by option -r. This means that row-wise refreshing from location ing to location ora will report the tables as identical, whereas row-wise refreshing from ora to ing will report them as different.
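The write-side sensitivity rule can be sketched in Python. This is an illustrative model only, not DSS code; the helper name is invented:

```python
def values_equal(read_val, write_val, write_is_oracle):
    """Compare one column value pair using the sensitivity of the
    'write' location: Oracle coerces an empty varchar string to NULL,
    so when the write side is Oracle, '' and NULL compare as equal."""
    if write_is_oracle:
        normalize = lambda v: None if v == "" else v
        return normalize(read_val) == normalize(write_val)
    return read_val == write_val

# ing ('') refreshed into ora (NULL): Oracle sensitivity -> identical
print(values_equal("", None, write_is_oracle=True))    # True
# ora (NULL) refreshed into ing (''): Ingres sensitivity -> different
print(values_equal(None, "", write_is_oracle=False))   # False
```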

Options

This section describes the options available for command dssrefresh.

Parameter Description
-amax_slices_per_tbl

Set the number of slices the table will be divided into. The default number of slices is 1, i.e., the table will be processed as a single piece. The value can be set to any positive integer.

This option cannot be combined with option -S.

In the User Interface, this option corresponds to the Number of Slices option.

This option is no longer supported since v6.3.0/0.

-Arows_per_slice

Set the number of rows per slice.

The default number of rows per slice is 10000000 (ten million). Requires option -a.

In the User Interface, this option corresponds to the Rows per slice option.

This option is no longer supported since v6.3.0/0.

-b Run in the background: do not wait for the Refresh to complete.
-Bslice_meths

Methods for slice suggestion (option -a).

Valid values for slice_meths are:

  • c: Suggest from last compare row count. Inspects previous Compare events and tries to find the Source_Rows_Selected result for the current table. Then uses this value to suggest the slicing.
  • C: Repeat the last compare slicing. Repeats slicing from the last Compare job.
  • r: Suggest from last refresh row count. Inspects previous Refresh events and tries to find the Source_Rows_Selected result for the current table. Then uses this value to suggest the slicing.
  • R: Repeat the last refresh slicing. Repeats slicing from the last Refresh job.
  • s: Suggest from DBMS statistics. Gets the row count from the DBMS_STATS Oracle package.

Several -Bslice_meths instructions can be supplied together, e.g., -BCs.

In the User Interface, this option corresponds to the Slicing Suggestions dialog.

This option is no longer supported since v6.3.0/0.

-ccreateopts

Create new tables. Only 'basic' tables are created, based on the information in the channel. A basic table just has the correct column names and data types without any extra indexes, constraints, triggers, or table spaces.

Valid values for createopts are:

  • b: Create basic (absent) tables only (mandatory).

  • e: Keep the existing structure.

  • f: Force recreation (always recreate) of tables.

  • k: Create an index (unique key or non-unique index). If the original table does not have a unique key, then a non-unique index is created instead of a unique key.

  • o: Only create tables, do not Refresh any data into them.

  • p: Preserve (keep) existing data on recreating.

  • r: Recreate (drop and create) if the column names do not match.

Several -ccreateopts instructions can be supplied together, e.g., -cbkr, which instructs dssrefresh to create new tables in the target database if they do not already exist and re-create them if they exist but have the wrong column information.

Action DbObjectGeneration with parameter RefreshTableCreateClause can be used to add extra SQL to the Create Table statement which DSS will generate.

In the User Interface, this option corresponds to different target table creation configurations and Advanced Table Creation Options.

-Ccontext

Enable context.

This option controls whether actions defined with parameter Context are effective or are ignored. For more information, see the Refresh and Compare Contexts concept page.

Defining an action with parameter Context can have different uses. For example, if action Restrict with parameters RefreshCondition="{id}>22" Context=qqq is defined, then normally all data will be refreshed, but if context qqq is enabled (-Cqqq), then only rows where id>22 will be refreshed. Variables can also be used in the restrict condition, such as "{id}>{dss_var_min}". This means that dssrefresh -Cqqq -Vmin=99 will only Refresh rows with id>99.
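As a sketch of how such a variable substitution resolves, consider the following. This is illustrative only; the function below is invented and is not part of DSS:

```python
import re

def resolve_condition(condition, variables):
    """Expand {dss_var_name} placeholders in a RefreshCondition string
    using variables supplied via -Vname=value; other {..} substitutions
    (e.g. {id}) are left untouched."""
    def sub(match):
        return variables[match.group(1)]
    return re.sub(r"\{dss_var_(\w+)\}", sub, condition)

# dssrefresh -Cqqq -Vmin=99 with RefreshCondition="{id}>{dss_var_min}"
print(resolve_condition("{id}>{dss_var_min}", {"min": "99"}))  # {id}>99
```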

Action ColumnProperties with parameter Context can also be defined. This can be used to define CaptureExpression parameters which are only activated if a certain context is supplied. For example, to define a Bulk Refresh context where SQL expressions are performed on the source database (which would slow down capture) instead of the target database (which would slow down Bulk Refresh).

In the User Interface, this option corresponds to the Contexts option.
-d

Remove (drop) scripts and scheduler jobs & job groups generated by the previous dssrefresh command.


-D Duplicate the last Refresh event. This option is used for repeating a Refresh operation, using the same arguments. Other command-line options supplied to dssrefresh -D will overwrite those from the duplicated event.
-e Automatically make a duplicate of the refresh event when it is done.
-Etime

Schedule time(s) time for the refresh job. Valid values for time are:

  • date_str_z: run the refresh job at a specific time once. Valid time formats for date_str_z are:
    • YYYY-MM-DD [HH:MM:SS] (in local time)

    • YYYY-MM-DDTHH:MM:SS+TZD

    • YYYY-MM-DDTHH:MM:SSZ

    • today

    • now[±SECS]

    • an integer (seconds since 1970-01-01 00:00:00 UTC)

  • str: a cron-style string; schedules the refresh job to run repeatedly at specific times.

Cannot be combined with option -e.

For specific usages, see Examples.

In the User Interface, this option corresponds to the Scheduling Options.
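The one-shot time formats accepted by -E can be modeled with a small Python sketch. This parser is an assumption for illustration, not DSS's actual implementation; cron-style repeat strings are out of scope:

```python
from datetime import datetime, timedelta

def parse_schedule_time(value, now=None):
    """Parse the one-shot -E time formats into a datetime."""
    now = now or datetime.now()
    if value == "today":
        return now.replace(hour=0, minute=0, second=0, microsecond=0)
    if value.startswith("now"):
        # now[±SECS]: offset in seconds relative to the current time
        offset = int(value[3:]) if len(value) > 3 else 0
        return now + timedelta(seconds=offset)
    if value.isdigit():
        # an integer: seconds since 1970-01-01 00:00:00 UTC
        return datetime(1970, 1, 1) + timedelta(seconds=int(value))
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d", "%Y-%m-%dT%H:%M:%S%z"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            pass
    raise ValueError(f"unrecognized -E time: {value}")

print(parse_schedule_time("now+30", now=datetime(2022, 4, 12, 12, 0, 0)))
# 2022-04-12 12:00:30
```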
-f

Fire database triggers/rules while applying SQL changes for Refresh.

Normally for Oracle and SQL Server, DSS disables any triggers on the target tables before the Refresh and re-enables them afterward. On Ingres, the Refresh avoids firing database rules using statement set no rules. Option -f prevents this disabling, so if Refresh does an insert statement it could fire a trigger. Note, however, that Refresh often uses a bulk-load method to load data, in which case database triggers will not be fired anyway. Other ways to control trigger firing are described in Managing Recapturing Using Session Names. For integration jobs into Ingres and SQL Server, action Integrate with parameter NoTriggerFiring can also be used.

In the User Interface, this option corresponds to the Disable Triggers option.
-Ffkops

Behavior for foreign key constraints in the target database that either reference or are referenced by a table being refreshed.

Valid values for fkops are:

  • i: Ignore foreign key constraints. Normally this would cause foreign key constraint errors. This cannot be combined with other letters.

  • x (default): Disable all such constraints before Refresh and re-enable them at the end. If the DBMS does not support disable/re-enable syntax (e.g. Ingres), then constraints are instead dropped before Refresh and recreated at the end. Note that for online refresh (option -q) without a select moment supplied (option -M), the actual re-enabling of disabled foreign key constraints is not done by the Refresh itself but is delayed until the end of the next cycle of integration.

If option -F is not supplied, then by default all foreign-key constraints are disabled before Refresh and re-enabled afterward (value x), unless DSS has no capability for foreign keys at all (value i).


-ggran

Granularity of refresh in database locations. For more information, see Refresh Types.

Valid values for gran are:

  • b (default): Bulk refresh using bulk data load.

  • r: Row-by-row refresh of tables.

In the User Interface, this option corresponds to the Bulk Load and Repair Granularity options.
-iisolate_when

Define when to start isolation for table refresh. Isolated table refresh allows integrate jobs to continue for tables that are not part of the isolated refresh. The integrate jobs for the isolated tables will resume once the refresh is complete.

Valid values for isolate_when are:

  • now: Start isolation immediately.

  • at_start: Start isolation when the refresh starts. This is useful for scheduled refreshes that will not start immediately.

-Isrange

The refresh event performs only a subset of the slices implied by the -S (table slices) option.

This option is only allowed with option -S.

Value srange should be a comma-separated list of one of the following:

  • N: Perform only slice number N. Note that slices are numbered starting from zero.
  • N-M: Perform slices N to M inclusive.
  • N-: Perform slices from N onwards.
  • -M: Perform slices from the first slice up to and including slice M.
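The srange forms above can be sketched as a small parser. This is illustrative Python, not DSS code:

```python
def expand_srange(srange, total):
    """Expand an -I srange string into a sorted list of slice numbers,
    for a refresh with `total` slices (numbered from zero)."""
    wanted = set()
    for part in srange.split(","):
        if "-" not in part:                     # N
            wanted.add(int(part))
        else:
            lo, hi = part.split("-")
            lo = int(lo) if lo else 0           # -M: from the first slice
            hi = int(hi) if hi else total - 1   # N-: to the last slice
            wanted.update(range(lo, hi + 1))
    return sorted(wanted)

print(expand_srange("0,2-4,8-", total=10))  # [0, 2, 3, 4, 8, 9]
```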
-Jtask

Job chaining. After the refresh job is completed, start the capture and/or integrate jobs.

Valid values for task are:

  • cap: start capture job
  • integ: start integrate job

-lloc

Target location to refresh to. This means that data will be written into location loc.

If this option is not supplied then all other locations except the source location are considered as target location(s).

Several -lloc instructions can be supplied together.

The source (read location) is specified with option -r.

Valid values for loc are:

  • locname: Only location locname.

  • l1-l2: All locations that fall alphabetically between l1 and l2 inclusive.

  • !locname: All locations except locname.

  • !l1-l2: All locations except for those that fall alphabetically between l1 and l2 inclusive.

    The character '!' can be treated as a special (reserved) character in certain shells. In such cases, use single quotes (' ') or a backslash (\) when specifying the location(s) to be excluded. For example, dssrefresh -r mysrc -l '!myloc' myhub mychn or dssrefresh -r mysrc -l \!myloc myhub mychn

  • pattern: All locations matching the specified pattern. Pattern matching can be done using the special symbols *, ? or [characters], where '*' is any sequence of characters, '?' matches any character (exactly one), and '[]' matches a selection of characters. For example:

    • 'loc*' matches location names starting with 'loc'.
    • '*loc' matches location names ending with 'loc'.
    • 'loc?' matches location names 'loc1', 'loc2', 'loc3', but not 'loc12'.
    • 'a[c0-9]' matches location names with the first letter 'a' and the second letter 'c' or a digit.
    • 'a*|b*' Multiple patterns may be specified. In this case, the pattern matches location names starting with 'a' and 'b'.
  • @filename: All locations listed in file filename (a .txt file containing location names, one per line).

In the User Interface, this option corresponds to the Locations option.
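The pattern rules above behave like shell-style globbing. A minimal Python sketch, using fnmatch as an approximation of DSS's matching (which may differ in detail):

```python
from fnmatch import fnmatchcase

def match_locations(pattern, names):
    """Filter location names with -l style patterns; '|' separates
    alternative patterns, per the documented rules."""
    pats = pattern.split("|")
    return [n for n in names if any(fnmatchcase(n, p) for p in pats)]

names = ["loc1", "loc12", "myloc", "ac", "a7", "b9"]
print(match_locations("loc*", names))     # ['loc1', 'loc12']
print(match_locations("loc?", names))     # ['loc1']
print(match_locations("a[c0-9]", names))  # ['ac', 'a7']
print(match_locations("a*|b*", names))    # ['ac', 'a7', 'b9']
```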
-mmask
Since v6.2.5/6

Mask (ignore) some differences between the tables that are being refreshed.

Valid values of mask can be:

  • d: Skip deleting rows that are present in the target but not in the source.

  • i: Skip inserting rows that are present in the source but not in the target.

  • u: Skip updating rows where the primary key exists in both source and target, but non-key column values differ.

The values/letters can be combined, for example -mid means ignore inserts and deletes. If a difference is ignored, then the verbose option (-v) will not generate SQL for it and dssrefresh will not rectify it.
The -m option can only be used with row-wise granularity (option -gr).
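The effect of the mask letters on a row-wise refresh can be sketched as follows. This is an illustrative Python model of the documented semantics; all names are invented:

```python
def plan_changes(source, target, mask=""):
    """Compute the row-wise refresh operations between two tables keyed
    by primary key, then drop those suppressed by the -m mask letters."""
    ops = []
    for key in source.keys() - target.keys():
        ops.append(("insert", key))           # in source, not in target
    for key in target.keys() - source.keys():
        ops.append(("delete", key))           # in target, not in source
    for key in source.keys() & target.keys():
        if source[key] != target[key]:
            ops.append(("update", key))       # key in both, values differ
    suppressed = {{"i": "insert", "d": "delete", "u": "update"}[c] for c in mask}
    return sorted(op for op in ops if op[0] not in suppressed)

src = {1: "a", 2: "b", 4: "d"}
tgt = {1: "a", 2: "x", 3: "c"}
print(plan_changes(src, tgt))        # [('delete', 3), ('insert', 4), ('update', 2)]
print(plan_changes(src, tgt, "id"))  # [('update', 2)]
```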

-Mselect_moment

Select data from each table from the same consistent moment in time.

Valid values for select_moment are:

  • time: Flashback query with select … as of timestamp.
    Valid formats are YYYY-MM-DD [HH:MM:SS] (in local time), YYYY-MM-DDTHH:MM:SS+TZD, YYYY-MM-DDTHH:MM:SSZ, today, now[±SECS], or an integer (seconds since 1970-01-01 00:00:00 UTC). Note that if a symbolic time like -Mnow is supplied, then a new "SCN time" will be retrieved each time the refresh job is run (not only when the dssrefresh command is called). So if dssrefresh -Mnow is run on Monday, and the refresh job it creates starts running at 10:00 on Tuesday and runs again at 10:00 on Wednesday, then the first refresh will do a flashback query (for all tables) with an SCN corresponding to Tuesday at 10:00, and the second job run will use a flashback query with an SCN corresponding to Wednesday at 10:00.

  • scn=val: Flashback query with select … as of scn. Value val is an Oracle SCN number, either in decimal or in hex (when it starts with 0x or contains hex digits).

  • dss_tx_seq=val: Value from column dss_tx_seq is converted back to an Oracle SCN number (by dividing by 65536) and used for flashback query with select … as of scn. Value is either in decimal or in hex (when it starts with 0x or contains hex digits).

  • snapshot (SQL Server only): Select data from a source database using SQL snapshot isolation level, which requires enabling ALLOW_SNAPSHOT_ISOLATION database option in SQL Server.

This parameter only affects the selects of the leftmost (source) database, not any selects on the rightmost (target) database.

In the User Interface, this option corresponds to the Select moment with Oracle flashback query option.
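The dss_tx_seq conversion described above (division by 65536, with decimal or hex input) can be sketched as:

```python
def tx_seq_to_scn(val):
    """Convert a dss_tx_seq value back to an Oracle SCN by dividing
    by 65536. The value is treated as hex when it starts with 0x or
    contains hex digits, else as decimal (per the documented rule)."""
    s = str(val)
    if s.lower().startswith("0x") or any(c in "abcdefABCDEF" for c in s):
        n = int(s, 16)
    else:
        n = int(s)
    return n // 65536

print(tx_seq_to_scn("0x10000"))  # 1
print(tx_seq_to_scn(131072))     # 2
```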
-n
Since v6.1.5/2

Add the timekey truncate record as the first record of each table that is refreshed from the source location. This implies all previous timekey records for this table can be ignored by the system or application that consumes the data in the target location.

This option is only supported if the target is a file or Kafka location.

This option can only be used for tables with a TimeKey column.

This option cannot be used in combination with option -gr (row-by-row granularity).

In the User Interface, this option corresponds to the Add a Truncate Record Before First Rows option.

-pjob_quota Sets the job_quota refresh job group attribute. It defines the number of jobs, job_quota, that can run simultaneously.

-Pparallel_sessions

Perform Refresh for different tables in parallel using parallel_sessions sub-processes. The refresh will start by processing parallel_sessions tables in parallel; when the first of these is finished the next table will be processed, and so on.

In the User Interface, this option corresponds to the Parallel Sessions option.
-qpurge

Online refresh of data from a database that is continuously being changed. This requires that capture is enabled on the source database. The integration jobs are automatically suspended while the online refresh is running, and restarted afterward. The target database is not yet consistent after the online refresh has finished. Instead, it leaves instructions so that when the replication jobs are restarted, they skip all changes that occurred before the refresh and perform special handling for changes that occurred during the refresh. This means that after the next replication cycle consistency is restored in the target database. If the target database had foreign key constraints, then these will also be restored.

Valid values for purge are:

  • wo:

    Write only. Changes before the online refresh should only be skipped on the write side (by the integrate job), not on the read side (by the capture job). If changes are being replicated from the read location to multiple targets, then this value will avoid skipping changes that are still needed by the other targets.

  • rw:

    Read/Write. Changes before the online refresh should be skipped both on the read side (by the capture job) and on the write side (by the integrate job). There are two advantages to skipping changes on the capture side: performance (those changes will not be sent over the network) and avoiding some replication errors (e.g. those caused by an alter table statement). The disadvantage of skipping changes on the capture side is that these changes may be needed by other replication targets. If they are needed, then these other integration locations require a new 'online' refresh, but without -qrw; otherwise the original targets will need yet another refresh.

  • no:

    No skipping. Changes that occurred before the refresh are not skipped; only special handling is activated for changes that occurred during the refresh. This is useful for the online refresh of a context-sensitive restriction of data (dssrefresh -Ccontext and action Restrict with parameters RefreshCondition and Context).

Internally, the online refresh uses 'control files' to send instructions to other replication jobs (see command dsscontrol). These files can be viewed using command dssrouterview with option -s.

Online refresh (with option -q) can give errors if duplicate rows are actually changed during the online refresh (see parameter NoDuplicateRows in action TableProperties).

In the User Interface, this option corresponds to the Online refresh controls to affect replication of changes that occurred before and during refresh option.

-rloc

Source location to refresh from. This means that data will be read from location loc and written to the other location(s).

In the User Interface, this option corresponds to the Locations option.

-Rurl


Remote hub server. Access the hub server running on a remote machine, via the REST interface.

This option is required for remote CLI access. When using this option, command dsslogin should be run first, for authentication.

-s

Schedule invocation of a refresh script by leaving a refresh job in the SUSPEND state. Without this option, the default behavior is to start the refresh job immediately.

The refresh job can be invoked using the dssstart command. For example, executing the command dssstart -u -w hub channel-refr-src-tgt unsuspends the jobs (moves them to the RUNNING state) and instructs the scheduler to run them. Output from the jobs is copied to the dssstart command's stdout, and the command finishes when all jobs have finished. Jobs created are cyclic, which means that after they have run they go back to the PENDING state again. They do not have a trig_delay attribute, which means that once they are complete they stay in the PENDING state without being retriggered.
-Ssliceexpr

Refresh large tables using Slicing. Value sliceexpr can be used to split the table into multiple slices.

The column used to slice a table must be 'stable', i.e., values in it should not change while the job is running. For example, customer_id is a stable column, while last_login is not. Otherwise, a row could 'move' from one slice to another while the job is running. As a result, the row could be processed in two slices (causing errors) or no slices (causing data loss). If the source database is Oracle, this problem can be avoided using a common select moment (option -M).

In the User Interface, this option corresponds to the Slicing section.

For more information on slicing limitations, see Slicing Limitations.

Value sliceexpr must have one of the following forms:
  • col%num

    In this slicing method (Modulo Slicing), DSS groups the data set by performing a modulo operation using the values from a column of your choice (e.g. mycol).

    If -Smycol%3 is supplied then the conditions for the three slices are:

    mod(round(abs(coalesce(mycol, 0)), 0), 3)= 0
    mod(round(abs(coalesce(mycol, 0)), 0), 3)= 1
    mod(round(abs(coalesce(mycol, 0)), 0), 3)= 2

    Note that the use of extra SQL functions (e.g. round(), abs() and coalesce()) ensures that slicing also handles fractions, negative numbers, and NULLs. Modulo slicing can only be used on a column with a numeric data type.

    For Modulo slicing, parameters SliceCountCondition and SliceSeriesCondition in action Restrict must not be defined.

  • col<b1[<b2]… [<bN]
    In this slicing method (Boundary Slicing), DSS groups the data set depending on the boundaries defined for the chosen column (e.g. mycol).

    If N boundaries are defined then N+1 slices are implied.

    If -Smycol<10<20<30 is supplied then the conditions for the four slices are:

    mycol <= 10
    mycol > 10 and mycol <= 20
    mycol > 20 and mycol <= 30
    mycol > 30 or mycol is null

    Strings can be supplied by adding quotes around boundaries, -Smycol<'x'<'y'<'z'.

    For very large tables consider the DBMS query execution plan. If the DBMS decides to 'walk' an index (with a lookup for each matched row) but this is not optimal (i.e. a 'serial-scan' of the table would be faster) then either use DBMS techniques ($DSS_SQL_SELECT_HINT allows Oracle optimizer hints) or consider modulo slicing (col%num) instead.

    Gathering column histogram statistics is required for this functionality to work. This can be done by calling the dbms_stats.gather_table_stats stored procedure.

    Examples
    1. Gathers statistics including column histograms, for table 'table_name', using all table rows, for all columns, and a maximum of 254 histogram buckets (therefore up to 254 slice boundaries can be suggested).

      exec dbms_stats.gather_table_stats('schema_name', 'table_name', estimate_percent=>100, method_opt=>'for all columns size 254');
    2. Gathers statistics including column histograms, for table 'table_name', using all table rows, for all indexed columns, and default number of histogram buckets.

      exec dbms_stats.gather_table_stats('schema_name', 'table_name', estimate_percent=>100, method_opt=>'for all indexed columns');
    3. Gathers statistics including column histograms, for table 'table_name', using 70% of table rows, for column 'table_column', and a maximum of 150 histogram buckets (therefore up to 150 slice boundaries can be suggested).

      exec dbms_stats.gather_table_stats('schema_name', 'table_name', estimate_percent=>70, method_opt=>'for columns table_column size 150');
    4. Gathers statistics including column histograms, for table 'table_name', for all columns, and a maximum of 254 histogram buckets. This is an obsolete way to generate statistics, and far fewer options are supported.

      analyze table table_name compute statistics for all columns size 254;

    For Boundary slicing, parameters SliceCountCondition and SliceSeriesCondition in action Restrict must not be defined.

  • num

    In this slicing method (Count slicing), DSS groups the data set by performing a modulo operation on any column, including the ones with string or date values.

    The number of each slice is assigned to substitution {dss_slice_num}, which must be mentioned in action Restrict with parameter SliceCountCondition defined for the sliced table. Substitution {dss_slice_total} is also assigned the total number of slices.
  • val1[;val2]…
    In this slicing method (Series Slicing), DSS groups the data set based on particular values, typically distinct data groups.

    Each slice has its value assigned directly to substitution {dss_slice_value}, which must be mentioned in action Restrict with parameter SliceSeriesCondition defined for the sliced table.

  • In all of the above slicing methods, the slice expression can be prefixed with a table name specifier table.sliceexpr to apply slicing only to a specific table. The specified table must be in scope of this refresh operation. For example, the expression mytable.mycol%5 applies modulo slicing to column mycol of table mytable.

    If you are specifying distinct slice expressions for multiple tables, then option -S must be supplied for each table specification (e.g. -S mytable1.mycol%10 -S mytable2.mycol%5).
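The modulo and boundary slice conditions shown above can be generated mechanically. The following Python sketch reproduces the documented SQL fragments for illustration only:

```python
def modulo_conditions(col, num):
    """Generate the WHERE conditions for -Scol%num modulo slicing."""
    expr = f"mod(round(abs(coalesce({col}, 0)), 0), {num})"
    return [f"{expr} = {i}" for i in range(num)]

def boundary_conditions(col, bounds):
    """Generate the WHERE conditions for -Scol<b1[<b2]... boundary
    slicing: N boundaries imply N+1 slices, with NULLs falling into
    the last slice."""
    conds = [f"{col} <= {bounds[0]}"]
    for lo, hi in zip(bounds, bounds[1:]):
        conds.append(f"{col} > {lo} and {col} <= {hi}")
    conds.append(f"{col} > {bounds[-1]} or {col} is null")
    return conds

print(modulo_conditions("mycol", 3))
print(boundary_conditions("mycol", [10, 20, 30]))
```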

-ttbl

Only refresh the specified table(s) tbl.

Several -ttbl instructions can be supplied together.

Valid values for tbl are:

  • tblname: Only refresh table tblname.

  • t1-t2: Refresh all table codes that fall alphabetically between t1 and t2 inclusive.

  • !tblname: Refresh all table names except tblname.

  • !t1-t2: Refresh all table codes except for those that fall alphabetically between t1 and t2 inclusive.

    The character '!' can be treated as a special (reserved) character in certain shells. In such cases, use single quotes (' ') or a backslash (\) when specifying the table(s) to be excluded. For example, dssrefresh -r mysrc -t '!mytbl' myhub mychn or dssrefresh -r mysrc -t \!mytbl myhub mychn

  • pattern: Refresh all tables matching the specified pattern. Pattern matching can be done using the special symbols *, ? or [characters], where '*' is any sequence of characters, '?' matches any character (exactly one), and '[]' matches a selection of characters. For example:

    • 'tbl*' matches table names starting with 'tbl'.
    • '*tbl' matches table names ending with 'tbl'.
    • 'tbl?' matches table names 'tbl1', 'tbl2', 'tbl3', but not 'tbl12'.
    • 'a[c0-9]' matches table names with the first letter 'a' and the second letter 'c' or a digit.
    • 'a*|b*' Multiple patterns may be specified. In this case, the pattern matches table names starting with 'a' and 'b'.
  • @filename: Refresh all tables listed in file filename (a .txt file containing table names, one per line).
In the User Interface, this option corresponds to the Tables option.
-Ttsk

Specify an alternative name for a refresh task to be used for naming scripts and jobs. The task name must begin with an 'r'.

The default task name is refr, so without this option, the Refresh jobs and scripts are named chn-refr-l1-l2.
-u
Since v6.1.5/2

Perform upsert refresh. This option merges the source rows into the target without first deleting or truncating the target rows. Similar to Burst Integrate, the data is initially loaded into burst tables named tbl__rb, and then the burst table is merged with the base table. Changes are applied as inserts or updates, similar to Resilient Integrate.

Deletes or key updates are not covered by this type of refresh. Rows that are deleted on the source will still exist in the target. Rows for which the key has been updated in the source will exist with both the old and new key in the target.

This option is only supported for target locations that support the Burst Integrate method.

This option cannot be used for tables with a TimeKey column.

This option cannot be used in combination with option -gr (row-by-row granularity).

In the User Interface, this option corresponds to the Merge into Target (No Delete/Truncate on Target) option.
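The upsert semantics (inserts and updates applied, deletes left untouched) can be modeled with a small sketch. This is illustrative Python, not the actual burst-table mechanism:

```python
def upsert_refresh(source, target):
    """Merge source rows into the target without deleting target rows
    first: existing keys are updated, new keys inserted, and keys absent
    from the source (i.e. deletes) are left untouched."""
    merged = dict(target)
    merged.update(source)   # applies inserts and updates only
    return merged

tgt = {1: "old", 2: "keep_me"}       # key 2 was deleted on the source
src = {1: "new", 3: "inserted"}
print(upsert_refresh(src, tgt))      # {1: 'new', 2: 'keep_me', 3: 'inserted'}
```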

-v

Verbose. This option creates binary diff files containing individual differences detected.

The diff files can be viewed using command dssrouterview. Section Analyzing Diff File explains how to view and interpret the contents of a diff file.

This option must be used in combination with option -gr (row-by-row granularity).

In the User Interface, this option corresponds to the Keep Difference Files option.

-Vname=value

Supply variable for refresh restrict condition.

This should be supplied if a RefreshCondition parameter of action Restrict contains string {dss_var_name}. This string is replaced with value.

In the User Interface, this option corresponds to the Variables option.
-wprereads File prereaders per table. Define the number of prereader subtasks per table while performing direct file compare. This option is only allowed if the source or target is a file location.

Examples

This section provides examples of using the dssrefresh command.

Example 1. Bulk refresh

The following command can be used for bulk refresh of table order from location src to location tgt:

dssrefresh -r src -l tgt -t order myhub mychannel
Example 2. Row-by-row refresh

For row-by-row refresh (option -gr) from location src to location tgt, use the following command:

dssrefresh -r src -l tgt -gr myhub mychannel
Example 3. Schedule refresh
  • The following command is to schedule a bulk refresh at a specific time (option -E).

    dssrefresh -r src -l tgt -gb -E 2022-04-12T12:50:53+0300 myhub mychannel
    
  • The following command schedules a row-by-row refresh to repeat on the 1st day of each month at 10:30 a.m. (option -E).

    dssrefresh -r src -l tgt -gr -E '30 10 1 * *' myhub mychannel
    
Example 4. Isolate Bulk refresh

The following command can be used for isolate bulk refresh of table order from location src to location tgt:

dssrefresh -i now -r src -l tgt -t order myhub mychannel
Example 5. Scheduled Isolate refresh

The following command is to schedule an isolate bulk refresh at a specific time:

dssrefresh -i at_start -r src -l tgt -t order -gb -E 2022-04-12T12:50:53+0300 myhub mychannel