# Action Reference
This section describes Data Source Solutions (DSS) actions and their parameters. Actions in DSS define the behavior of replication. When a replication channel is created, at least two actions, Capture and Integrate, must be defined on the source and target locations respectively to activate replication.
| Parameter | Argument | Description |
| --- | --- | --- |
| AddTablePattern | patt | Add new tables to channel if they match. |
| IgnoreTablePattern | patt | Ignore new tables which match pattern. |
| CaptureSchema | db_schema | Database schema for matching tables. |
| IntegrateSchema | db_schema | Generate schema for target location(s). |
| OnEnrollBreak | policy | Applies a policy that controls how the capture job handles a break in the enroll information for an existing table. |
| OnAddColumnWithDefault | policy | Applies a policy to customize behavior when AdaptDDL detects new columns with default values. |
| OnPreserveAlterTableFail | policy | Applies a policy that controls how the capture job handles a failure while performing ALTER TABLE on the target table. |
| RefreshOptions | refr_opts | Configure options for adapt's refresh of target. |
| OnDropTable | policy | Applies a policy that controls the replication behavior if a DROP TABLE is done to a replicated table. |
| KeepExistingStructure | | Preserve old columns in the target, and do not reduce data type sizes. |
| KeepOldRows | | Preserve old rows in target during recreate. |
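
The pattern parameters above typically follow shell-style wildcards, with ignore patterns taking precedence over add patterns. A minimal sketch of that matching logic (the helper name and the wildcard semantics are illustrative assumptions, not a DSS API):

```python
import fnmatch

def table_action(name, add_patterns, ignore_patterns):
    """Decide whether a newly detected table is enrolled into the channel.

    Illustrative only: ignore patterns take precedence over add patterns,
    and matching uses shell-style wildcards.
    """
    if any(fnmatch.fnmatch(name, p) for p in ignore_patterns):
        return "ignore"
    if any(fnmatch.fnmatch(name, p) for p in add_patterns):
        return "add"
    return "skip"

print(table_action("orders_2024", ["orders*"], ["*_tmp"]))  # add
print(table_action("orders_tmp", ["orders*"], ["*_tmp"]))   # ignore
```
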
| Parameter | Argument | Description |
| --- | --- | --- |
| Command | path | Call OS command during replication jobs. |
| DbProc | dbproc | Call database procedure dbproc during replication jobs. |
| UserArgument | str | Pass argument str to each agent execution. |
| ExecOnHub | | Execute agent on hub instead of location's machine. |
| Order | int | Specify order of agent execution. |
| Context | context | The action is applied only if this context matches the context (option -C) specified in Refresh or Compare. |
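
The Order and Context parameters above interact: agents run in ascending order, and agents whose context does not match the job's are skipped. A small sketch of that selection logic (the dictionary-based agent format is an assumption for illustration):

```python
def plan_agents(agents, job_context=None):
    """Return agent commands in execution order.

    Agents whose 'context' is set and differs from the job's context are
    skipped; the rest run in ascending 'order'. The dict format is an
    illustrative assumption, not a DSS structure.
    """
    runnable = [a for a in agents if a.get("context") in (None, job_context)]
    return [a["command"] for a in sorted(runnable, key=lambda a: a.get("order", 0))]

agents = [
    {"command": "load_stats", "order": 2},
    {"command": "notify", "order": 1},
    {"command": "prep_refresh", "order": 0, "context": "refr"},
]
print(plan_agents(agents))          # ['notify', 'load_stats']
print(plan_agents(agents, "refr"))  # ['prep_refresh', 'notify', 'load_stats']
```
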
| Parameter | Argument | Description |
| --- | --- | --- |
| IgnoreSessionName | sess_name | Ignore changes performed by sessions with name sess_name. |
| Coalesce | | Coalesce consecutive changes on the same row into a single change. |
| NoBeforeUpdate | | Only capture the new values for updated rows. |
| NoTruncate | | Do not capture truncate table statements. |
| AugmentIncomplete | col_type | Capture job must select missing values for columns of type col_type. |
| IgnoreCondition | sql_expr | Ignore changes that satisfy expression. |
| IgnoreUpdateCondition | sql_expr | Ignore update changes that satisfy expression. |
| HashBuckets | int | Number of hash buckets used to improve parallelism of captured tables. |
| HashKey | col_list | Hash capture table on specific key columns. |
| DeleteAfterCapture | | Delete file after capture, instead of capturing recently changed files. |
| Pattern | pattern | Only capture files whose names match pattern. |
| IgnorePattern | pattern | Ignore files whose names match pattern. |
| IgnoreUnterminated | pattern | Ignore files whose last line does not match pattern. |
| IgnoreSizeChanges | | Changes in file size during capture are not treated as an error. |
| AccessDelay | secs | Delay read for secs seconds to ensure writing is complete. |
| UseDirectoryTime | | Check the timestamp of the parent directory, because a Windows move does not change the file's modification time. |
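
Coalescing (the Coalesce parameter above) folds consecutive changes to the same row into one net change before they are shipped. A sketch of the idea, assuming a simple key/after-image change format:

```python
def coalesce(changes):
    """Fold consecutive changes on the same row key into one net change,
    keeping the last after-image. The change format is an assumption
    for illustration."""
    out = []
    for ch in changes:
        if out and out[-1]["key"] == ch["key"]:
            out[-1]["after"] = ch["after"]   # absorb the newer change
        else:
            out.append(dict(ch))
    return out

stream = [
    {"key": 1, "after": "v1"},
    {"key": 1, "after": "v2"},   # coalesced into the previous change
    {"key": 2, "after": "x"},
]
print(coalesce(stream))  # [{'key': 1, 'after': 'v2'}, {'key': 2, 'after': 'x'}]
```
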
| Parameter | Argument | Description |
| --- | --- | --- |
| TreatCollisionAsError | | Do not resolve collisions automatically. |
| TimestampColumn | col_name | Exploit timestamp column col_name for collision detection. |
| AutoHistoryPurge | | Delete history table row when no longer needed for collision detection. |
| DetectDuringRefresh | col_name | During row-wise refresh, discard updates if the target timestamp is newer. |
| Context | context | The action is applied only if this context matches the context (option -C) specified in Refresh or Compare. |
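
Timestamp-based collision handling (TimestampColumn, DetectDuringRefresh) boils down to comparing the incoming change's timestamp with the one already on the target row. A minimal sketch of that rule:

```python
from datetime import datetime

def resolve_collision(incoming_ts, target_ts):
    """Timestamp-rule sketch: discard an incoming change when the target row
    already carries a newer timestamp; otherwise apply it. The function name
    is illustrative, not a DSS API."""
    return "discard" if target_ts > incoming_ts else "apply"

print(resolve_collision(datetime(2024, 5, 2), datetime(2024, 5, 1)))  # apply
print(resolve_collision(datetime(2024, 5, 1), datetime(2024, 5, 2)))  # discard
```
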
| Parameter | Argument | Description |
| --- | --- | --- |
| CaptureOnly | | Only capture database sequences, do not integrate them. |
| IntegrateOnly | | Only integrate database sequences, do not capture them. |
| Name | seq_name | Name of database sequence in the DSS repository tables. |
| Schema | db_schema | Schema which owns database sequence. |
| BaseName | seq_name | Name of sequence in database if it differs from name in DSS. |
| Parameter | Argument | Description |
| --- | --- | --- |
| Name | name | Name of environment variable. |
| Value | value | Value of environment variable. |
| Context | context | The action is applied only if this context matches the context (option -C) specified in Refresh or Compare. |
| Parameter | Argument | Description |
| --- | --- | --- |
| Method | method | Method of writing or integrating changes into the target location. |
| BurstCommitFrequency | freq | Frequency of commits. |
| Coalesce | | Coalesce consecutive changes on the same row into a single change. |
| CoalesceTimekey | | Causes coalescing on TimeKey channels when writing to a database target. |
| ReorderRows | mode | Control order in which changes are written to files. |
| Resilient | mode | Resilient integrate for inserts, updates and deletes. |
| OnErrorSaveFailed | | Write failed row to fail table. |
| DbProc | | Apply changes by calling integrate database procedures. |
| TxBundleSize | int | Bundle small transactions for improved performance. |
| TxSplitLimit | int | Split very large transactions to limit resource usage. |
| NoTriggerFiring | | Disable database triggers during integrate. |
| SessionName | sess_name | Integrate changes with special session name. |
| Topic | expression | Name of the Kafka topic. Strings/text or expressions can be used as the Kafka topic name. |
| MessageKey | expression | Expression to generate user defined key in a Kafka message. |
| MessageKeySerializer | format | Encodes the generated Kafka message key in a string or Kafka Avro serialization format. |
| MessageHeaders | key:value | Add custom headers to the Kafka messages. |
| OnDeleteSendTombstone | | Convert DELETE operations into Kafka tombstone messages. |
| RenameExpression | expression | Expression to name new files, containing brace substitutions. |
| ComparePattern | patt | Perform direct file compare. |
| ErrorOnOverwrite | | Error if a new file has same name as an existing file. |
| MaxFileSize | size | Limit each XML file to size bytes. |
| Verbose | | Report name of each file integrated. |
| TableName | apitab | API name of table to upload attachments into. |
| KeyName | apikey | API name of attachment table's key column. |
| CycleByteLimit | int | Max amount of routed data (compressed) to process per integrate cycle. |
| JournalRouterFiles | | Move processed router files to journal directory on hub. |
| JournalBurstTable | | Keep track of changes in the burst table during Burst Integrate. |
| Delay | N | Delay integration of changes for N seconds. |
| Context | context | The action is applied only if this context matches the context (option -C) specified in Refresh or Compare. |
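
Parameters such as Topic, MessageKey, and RenameExpression accept expressions with brace substitutions that are expanded per row. A sketch of what such an expansion might look like (the `{column}` syntax here is an assumption; DSS's actual expression grammar may differ):

```python
import re

def render_expression(expr, row):
    """Expand {column} brace substitutions against a row's values.
    Illustrative only: the substitution syntax is an assumption."""
    return re.sub(r"\{(\w+)\}", lambda m: str(row[m.group(1)]), expr)

print(render_expression("orders-{region}", {"region": "eu"}))              # orders-eu
print(render_expression("{tbl}/{yyyy}.csv", {"tbl": "t1", "yyyy": 2024}))  # t1/2024.csv
```
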
| Parameter | Argument | Description |
| --- | --- | --- |
| BaseName | tbl_name | Name of the table in the database, if it differs from the name in the DSS repository tables. |
| Absent | | Exclude the table from being replicated/integrated into the target, even though it is defined in the channel. |
| NoDuplicateRows | | Replication table cannot have duplicate rows. |
| Schema | schema | Database schema which owns table. |
| CoerceErrorPolicy | | Defines a policy to handle type coercion errors. |
| CoerceErrorType | | Defines which types of coercion errors are affected by CoerceErrorPolicy. |
| SapUnpackErrorPolicy | policy | Defines a policy to handle type coercion errors during SapUnpack. |
| PackedInside | | Name of the SAP database table that holds the data for the pool or cluster table being unpacked. |
| TrimWhiteSpace | | Remove trailing whitespace from varchar. |
| TrimTime | policy | Trim the time part when converting from Oracle and SQL Server date. |
| MapEmptyStringToSpace | | Convert between empty varchar and Oracle varchar space. |
| MapEmptyDateToConstant | date | Convert between constant date (dd/mm/yyyy) and Ingres empty date. |
| CreateUnicodeDatatypes | | On table creation use Unicode data types, e.g. map varchar to nvarchar. |
| DistributionKeyLimit | int | Maximum number of columns in the implicit distribution key. |
| DistributionKeyAvoidPattern | patt | Avoid putting given columns in the implicit distribution key. |
| CharacterMapping | rules | Specify the replacement rules for unsupported characters. |
| MapBinary | policy | Specify how binary data is represented on the target side. |
| MissingRepresentationString | str | Insert value str into string data type columns whose value is missing/empty during integration. |
| MissingRepresentationNumeric | str | Insert value str into numeric data type columns whose value is missing/empty during integration. |
| MissingRepresentationDate | str | Insert value str into date data type columns whose value is missing/empty during integration. |
| PartitionByDate | | Enables partitioning by date for Google BigQuery tables. |
| BQClusterKeys | col_name | Creates Google BigQuery clustered tables. |
| TransientTable | | Creates Snowflake transient tables. |
| Context | context | The action is applied only if this context matches the context (option -C) specified in Refresh or Compare. |
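
Several of the parameters above (TrimWhiteSpace, MissingRepresentationString) describe per-value transformations applied during integration. A sketch of how such a varchar coercion might behave (the default `<MISSING>` marker is an invented placeholder, not a DSS default):

```python
def coerce_varchar(value, trim_whitespace=True, missing_repr="<MISSING>"):
    """Sketch of varchar coercion during integration: trailing whitespace is
    trimmed (TrimWhiteSpace) and missing/empty values are replaced by a
    representation string (MissingRepresentationString). Defaults are
    illustrative assumptions."""
    if value is None or value == "":
        return missing_repr
    return value.rstrip() if trim_whitespace else value

print(coerce_varchar("acme  "))  # acme
print(coerce_varchar(None))      # <MISSING>
```
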
| Parameter | Argument | Description |
| --- | --- | --- |
| Command | path | Path to script or executable performing custom transformation. |
| CommandArguments | userarg | Value(s) of parameter(s) for transform (space separated). |
| SapUnpack | | Unpack the SAP pool, cluster, and long text table (STXL). |
| ExecOnHub | | Execute transform on hub instead of location's machine. |
| Parallel | n | Distribute rows to multiple transformation processes. |
| Context | context | The action is applied only if this context matches the context (option -C) specified in Refresh or Compare. |
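
The Command/CommandArguments parameters describe handing rows to an external program. One common way to implement such a transform is to pipe rows through the command's stdin/stdout; the line-per-row protocol below is an assumption for illustration, not the DSS wire format:

```python
import subprocess

def transform_rows(rows, command, args=()):
    """Pipe rows through an external transform command, one row per line,
    and return the transformed rows. Illustrative sketch only."""
    proc = subprocess.run(
        [command, *args],
        input="\n".join(rows),
        capture_output=True,
        text=True,
        check=True,
    )
    return proc.stdout.splitlines()

# Example: an identity transform using the POSIX `cat` command.
print(transform_rows(["row1", "row2"], "cat"))  # ['row1', 'row2']
```
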