Action Reference - DSS 6 | Data Source Solutions Documentation


Action Reference

This section describes Data Source Solutions (DSS) actions and their parameters. Actions in DSS define the behavior of replication. When a replication channel is created, at least two actions, Capture and Integrate, must be defined on the source and target locations respectively to activate replication.

AdaptDDL

| Parameter | Argument | Description |
| --- | --- | --- |
| AddTablePattern | patt | Add new tables to the channel if they match. |
| IgnoreTablePattern | patt | Ignore new tables which match the pattern. |
| CaptureSchema | db_schema | Database schema for matching tables. |
| IntegrateSchema | db_schema | Generate schema for target location(s). |
| OnEnrollBreak | policy | Policy controlling the capture job's behavior for an existing table when there is a break in the enroll information. |
| OnAddColumnWithDefault | policy | Policy customizing behavior when AdaptDDL detects new columns with default values. |
| OnPreserveAlterTableFail | policy | Policy controlling the capture job's behavior for an existing table when ALTER TABLE fails on the target table. |
| RefreshOptions | refr_opts | Configure options for adapt's refresh of the target. |
| OnDropTable | policy | Policy controlling the replication behavior when DROP TABLE is performed on a replicated table. |
| KeepExistingStructure | | Preserve old columns in the target, and do not reduce data type sizes. |
| KeepOldRows | | Preserve old rows in the target during recreate. |
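
AddTablePattern and IgnoreTablePattern together decide whether a newly detected table is enrolled in the channel. As an illustration only (not the DSS implementation), the add/ignore decision behaves like shell-style glob matching, sketched here in Python with hypothetical pattern values:

```python
from fnmatch import fnmatch

def should_enroll(table, add_patterns, ignore_patterns):
    """Enroll a new table if it matches an AddTablePattern
    and is not excluded by any IgnoreTablePattern."""
    if any(fnmatch(table, p) for p in ignore_patterns):
        return False
    return any(fnmatch(table, p) for p in add_patterns)

# Hypothetical patterns: enroll order_* tables except order_audit.
print(should_enroll("order_lines", ["order_*"], ["order_audit"]))  # True
print(should_enroll("order_audit", ["order_*"], ["order_audit"]))  # False
```

Note that ignore patterns take precedence, which matches the usual semantics of paired add/ignore filters.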

AgentPlugin

| Parameter | Argument | Description |
| --- | --- | --- |
| Command | path | Call an OS command during replication jobs. |
| DbProc | dbproc | Call database procedure dbproc during replication jobs. |
| UserArgument | str | Pass argument str to each agent execution. |
| ExecOnHub | | Execute the agent on the hub instead of the location's machine. |
| Order | int | Specify the order of agent execution. |
| Context | context | The action is only effective if the context matches the context (option -C) defined in Refresh or Compare. |
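
The Order parameter matters when several agents are attached to the same location: lower values run first. A minimal Python sketch of that ordering (the agent callables are hypothetical stand-ins for Command or DbProc invocations):

```python
def run_agents(agents):
    """Run agent plugins in ascending Order.
    Each agent is an (order, callable) pair."""
    results = []
    for order, agent in sorted(agents, key=lambda a: a[0]):
        results.append(agent())
    return results

# Hypothetical agents: validation must run before notification.
agents = [(20, lambda: "notify"), (10, lambda: "validate")]
print(run_agents(agents))  # ['validate', 'notify']
```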

Capture

| Parameter | Argument | Description |
| --- | --- | --- |
| IgnoreSessionName | sess_name | Capture changes directly from the DBMS logging system. |
| Coalesce | | Coalesce consecutive changes on the same row into a single change. |
| NoBeforeUpdate | | Only capture the new values for updated rows. |
| NoTruncate | | Do not capture truncate table statements. |
| AugmentIncomplete | col_type | Capture job must select for column values. |
| IgnoreCondition | sql_expr | Ignore changes that satisfy the expression. |
| IgnoreUpdateCondition | sql_expr | Ignore update changes that satisfy the expression. |
| HashBuckets | int | Hash structure to improve parallelism of captured tables. |
| HashKey | col_list | Hash the capture table on specific key columns. |
| DeleteAfterCapture | | Delete file after capture, instead of capturing recently changed files. |
| Pattern | pattern | Only capture files whose names match the pattern. |
| IgnorePattern | pattern | Ignore files whose names match the pattern. |
| IgnoreUnterminated | pattern | Ignore files whose last line does not match the pattern. |
| IgnoreSizeChanges | | Changes in file size during capture are not considered an error. |
| AccessDelay | secs | Delay reading for secs seconds to ensure writing is complete. |
| UseDirectoryTime | | Check the timestamp of the parent directory, as a Windows move does not change the modification time. |
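
The Coalesce parameter collapses a burst of changes to one row into a single change before routing. A rough Python sketch of that collapsing, assuming each change is a (row_key, values) pair; only consecutive changes on the same key are merged:

```python
def coalesce(changes):
    """Collapse consecutive changes on the same row key into one
    change carrying the latest values."""
    out = []
    for key, values in changes:
        if out and out[-1][0] == key:
            out[-1] = (key, values)   # keep only the newest change
        else:
            out.append((key, values))
    return out

stream = [(1, "a"), (1, "b"), (2, "x"), (1, "c")]
print(coalesce(stream))  # [(1, 'b'), (2, 'x'), (1, 'c')]
```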

CollisionDetect

| Parameter | Argument | Description |
| --- | --- | --- |
| TreatCollisionAsError | | Do not resolve collisions automatically. |
| TimestampColumn | col_name | Exploit timestamp column col_name for collision detection. |
| AutoHistoryPurge | | Delete history table rows when no longer needed for collision detection. |
| DetectDuringRefresh | colname | During row-wise refresh, discard updates if the target timestamp is newer. |
| Context | context | The action is only effective if the context matches the context (option -C) defined in Refresh or Compare. |
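
With TimestampColumn, collision resolution reduces to comparing timestamps: an incoming change is discarded when the target row already carries a newer timestamp, which is also the rule DetectDuringRefresh applies during row-wise refresh. A sketch of that comparison (the tie-breaking choice here is an assumption):

```python
def resolve_collision(incoming_ts, target_ts):
    """Timestamp-based collision rule: apply the incoming change only
    if it is at least as new as the target row's timestamp; otherwise
    discard it, so the newer target version wins."""
    return "apply" if incoming_ts >= target_ts else "discard"

print(resolve_collision(10, 5), resolve_collision(3, 5))  # apply discard
```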

ColumnProperties

| Parameter | Argument | Description |
| --- | --- | --- |
| Name | col_name | Name of the column in the DSS_COLUMN repository table. |
| DatatypeMatch | data_type | Data type used for matching instead of Name. |
| BaseName | col_name | Database column name, if it differs from the name in the DSS_COLUMN repository table. |
| Extra | | Column exists in the base table but not in the DSS_COLUMN repository table. |
| Absent | | Column does not exist in the base table. |
| CaptureExpression | sql_expr | SQL expression for the column value when capturing or reading. |
| CaptureExpressionType | | Type of mechanism used by the Capture, Refresh, and Compare jobs to evaluate the value in parameter CaptureExpression. |
| IntegrateExpression | sql_expr | SQL expression for the column value when integrating. |
| ExpressionScope | expr_scope | Operation scope for expressions. |
| CaptureFromRowId | | Capture values from the table's DBMS row-id. |
| TrimDatatype | int | Reduce the width of the data type when selecting or capturing changes. |
| Key | | Add the column to the table's replication key. |
| SurrogateKey | | Use the column instead of the regular key during replication. |
| DistributionKey | | Distribution key column. |
| SoftDelete | | Convert deletes to an update of this column to 1. Value 0 means not deleted. |
| TimeKey | | Convert all changes to inserts, using this column for the time dimension. |
| IgnoreDuringCompare | | Ignore values in the column during compare and refresh. |
| Datatype | data_type | Data type in the database, if it differs from the DSS_COLUMN repository table. |
| Length | int | String length in the database, if it differs from the length in the DSS repository tables. |
| Precision | int | Precision in the database, if it differs from the precision in the DSS repository tables. |
| Scale | int | Integer scale in the database, if it differs from the scale in the DSS repository tables. |
| Nullable | | Nullability in the database, if it differs from the nullability in the DSS repository tables. |
| Context | context | The action is only effective if the context matches the context (option -C) defined in Refresh or Compare. |
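
SoftDelete changes the shape of delete operations: instead of removing the row, the integrate side updates a flag column to 1 (0 meaning not deleted). A Python sketch of that conversion; the column name is_deleted is purely illustrative, as the real column is whichever one carries the SoftDelete parameter:

```python
def apply_soft_delete(op, row, flag_col="is_deleted"):
    """SoftDelete conversion: a delete becomes an update setting the
    flag column to 1; other operations carry 0 (not deleted)."""
    row = dict(row)  # do not mutate the caller's row
    if op == "delete":
        row[flag_col] = 1
        return ("update", row)
    row[flag_col] = 0
    return (op, row)

print(apply_soft_delete("delete", {"id": 7}))
# ('update', {'id': 7, 'is_deleted': 1})
```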

DbObjectGeneration

| Parameter | Argument | Description |
| --- | --- | --- |
| NoCaptureInsertTrigger | | Inhibit generation of the capture insert trigger. |
| NoCaptureUpdateTrigger | | Inhibit generation of the capture update trigger. |
| NoCaptureDeleteTrigger | | Inhibit generation of the capture delete trigger. |
| NoCaptureDbProc | | Inhibit generation of capture database procedures. |
| NoCaptureTable | | Inhibit generation of capture tables. |
| NoIntegrateDbProc | | Inhibit generation of integrate database procedures. |
| IncludeSqlFile | file | Include SQL file. |
| IncludeSqlDirectory | dir | Search directory for the include SQL file. |
| BurstTableStorage | | Storage for the integrate burst table creation statement. |
| RefreshTableStorage | | Storage for the base table creation statement during Refresh. |
| CaptureTableCreateClause | sql_expr | Clause for the trigger-based capture table creation statement. |
| StateTableCreateClause | sql_expr | Clause for the state table creation statement. |
| BurstTableCreateClause | sql_expr | Clause for the integrate burst table creation statement. |
| FailTableCreateClause | sql_expr | Clause for the fail table creation statement. |
| HistoryTableCreateClause | sql_expr | Clause for the history table creation statement. |
| RefreshTableCreateClause | sql_expr | Clause for the base table creation statement during refresh. |
| RefreshTableGrant | | Execute a grant statement on the base table created during Refresh. |
| BurstTableSchema | schema | Define a schema for storing burst tables, overriding the default schema configuration. |
| StateTableSchema | schema | Define a schema for storing state tables, overriding the default schema configuration. |
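
The *CreateClause parameters append a user-supplied clause to the DDL that DSS generates for its internal tables. Conceptually (the table name, columns, and SQL shape here are illustrative, not DSS output):

```python
def build_create_table(table, columns, create_clause=""):
    """Build a CREATE TABLE statement and append an optional
    user-supplied clause, as the *CreateClause parameters do."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    stmt = f"CREATE TABLE {table} ({cols})"
    if create_clause:
        stmt += " " + create_clause
    return stmt

# Hypothetical burst table with an appended storage clause.
print(build_create_table("orders__b", [("id", "int")], "TABLESPACE burst_ts"))
```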

DbSequence

| Parameter | Argument | Description |
| --- | --- | --- |
| CaptureOnly | | Only capture database sequences; do not integrate them. |
| IntegrateOnly | | Only integrate database sequences; do not capture them. |
| Name | seq_name | Name of the database sequence in the DSS repository tables. |
| Schema | db_schema | Schema which owns the database sequence. |
| BaseName | seq_name | Name of the sequence in the database, if it differs from the name in DSS. |

Environment

| Parameter | Argument | Description |
| --- | --- | --- |
| Name | name | Name of the environment variable. |
| Value | value | Value of the environment variable. |
| Context | context | The action is only effective if the context matches the context (option -C) defined in Refresh or Compare. |

FileFormat

| Parameter | Argument | Description |
| --- | --- | --- |
| Xml | | Transform rows from/into XML files. |
| Csv | | Transform rows from/into CSV files. |
| Avro | | Transform rows into Apache Avro format. Integrate only. |
| Json | | Transform rows into JSON format. The content of the file depends on the value of parameter JsonMode. This parameter only has an effect on the integrate location. |
| Parquet | | Read and write files in Parquet format. |
| Compact | | Write compact XML tags like `<r>` and `<c>` instead of `<row>` and `<column>`. |
| Compress | algorithm | Compress/uncompress while writing/reading. |
| Encoding | encoding | Encoding of the file. |
| HeaderLine | | First line of the file contains column names. |
| FieldSeparator | str_esc | Field separator. |
| LineSeparator | str_esc | Line separator. |
| QuoteCharacter | str_esc | Character to quote a field with, if the field contains separators. |
| EscapeCharacter | str_esc | Character to escape the quote character with. |
| FileTerminator | str_esc | File termination at end-of-file. |
| NullRepresentation | esc_str | String representation for columns with a NULL value. |
| JsonMode | mode | Style used to write rows in JSON format. |
| BlockCompress | codec | Compression codec for Avro and Parquet. |
| AvroVersion | version | Version of the Apache Avro format. |
| PageSize | | Parquet page size in bytes. |
| RowGroupThreshold | | Maximum row group size in bytes for Parquet. |
| ParquetVersion | version | Category of data types used to represent complex data in Parquet format. |
| BeforeUpdateColumns | prefix | Merge the 'before' and 'after' versions of a row into one. |
| BeforeUpdateColumnsWhenChanged | | Add the prefix (defined in BeforeUpdateColumns) only to columns whose values were updated. |
| ConvertNewlinesTo | style | Write files with UNIX or DOS style newlines. |
| CaptureConverter | path | Run files through a converter before reading. |
| CaptureConverterArguments | userarg | Arguments to the capture converter. |
| IntegrateConverter | path | Run files through a converter after writing. |
| IntegrateConverterArguments | userarg | Arguments to the integrate converter program. |
| Context | context | The action is only effective if the context matches the context (option -C) defined in Refresh or Compare. |
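
Several of the CSV parameters map directly onto options of a standard CSV writer. A Python sketch, under assumptions of its own rather than the DSS writer, of how FieldSeparator, QuoteCharacter, and NullRepresentation interact; a field containing the separator is quoted automatically:

```python
import csv
import io

def write_csv(rows, field_sep=",", quote_char='"', null_repr=""):
    """Write rows with a configurable field separator, quote character,
    and string representation for NULL (None) values."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=field_sep, quotechar=quote_char)
    for row in rows:
        writer.writerow([null_repr if v is None else v for v in row])
    return buf.getvalue()

# Semicolon separator; NULL becomes \N; the field "x;y" gets quoted.
print(write_csv([["a", None, "x;y"]], field_sep=";", null_repr="\\N"))
```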

Integrate

| Parameter | Argument | Description |
| --- | --- | --- |
| Method | method | Method of writing or integrating changes into the target location. |
| BurstCommitFrequency | freq | Frequency of commits. |
| Coalesce | | Enable coalescing of consecutive changes on the same row into a single change. |
| CoalesceTimekey | | Cause coalescing on TimeKey channels when writing to a database target. |
| ReorderRows | mode | Control the order in which changes are written to files. |
| Resilient | mode | Resilient integration for inserts, updates, and deletes. |
| OnErrorSaveFailed | | Write failed rows to the fail table. |
| DbProc | | Apply changes by calling integrate database procedures. |
| TxBundleSize | int | Bundle small transactions for improved performance. |
| TxSplitLimit | int | Split very large transactions to limit resource usage. |
| NoTriggerFiring | | Enable/disable database triggers during integration. |
| SessionName | sess_name | Integrate changes with a special session name. |
| Topic | expression | Name of the Kafka topic. You can use strings/text or expressions as the Kafka topic name. |
| MessageKey | expression | Expression to generate a user-defined key in a Kafka message. |
| MessageKeySerializer | format | Encode the generated Kafka message key in a string or Kafka Avro serialization format. |
| MessageHeaders | key:value | Add custom headers to Kafka messages. |
| OnDeleteSendTombstone | | Convert DELETE operations into Kafka tombstone messages. |
| RenameExpression | expression | Expression to name new files, containing brace substitutions. |
| ComparePattern | patt | Perform direct file compare. |
| ErrorOnOverwrite | | Error if a new file has the same name as an existing file. |
| MaxFileSize | size | Limit each XML file to size bytes. |
| Verbose | | Report the name of each file integrated. |
| TableName | apitab | API name of the table to upload attachments into. |
| KeyName | apikey | API name of the attachment table's key column. |
| CycleByteLimit | int | Maximum amount of routed data (compressed) to process per integrate cycle. |
| JournalRouterFiles | | Move processed router files to the journal directory on the hub. |
| JournalBurstTable | | Keep track of changes in the burst table during Burst Integrate. |
| Delay | N | Delay integration of changes for N seconds. |
| Context | context | The action is only effective if the context matches the context (option -C) defined in Refresh or Compare. |
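
TxBundleSize and TxSplitLimit pull in opposite directions: small transactions are grouped to cut commit overhead, while oversized ones are split to bound resource usage. A conceptual Python sketch in which lists of rows stand in for transactions; the exact bundling rules DSS applies are not specified here:

```python
def bundle_transactions(txs, bundle_size, split_limit):
    """Split any transaction larger than split_limit rows into chunks,
    then bundle consecutive chunks up to bundle_size rows each."""
    pieces = []
    for tx in txs:
        for i in range(0, len(tx), split_limit):
            pieces.append(tx[i:i + split_limit])
    bundles, current = [], []
    for piece in pieces:
        if current and len(current) + len(piece) > bundle_size:
            bundles.append(current)
            current = []
        current = current + piece
    if current:
        bundles.append(current)
    return bundles

print(bundle_transactions([[1, 2], [3], [4, 5, 6, 7, 8]], 4, 3))
# [[1, 2, 3], [4, 5, 6], [7, 8]]
```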

Restrict

| Parameter | Argument | Description |
| --- | --- | --- |
| CaptureCondition | sql_expr | Restrict during capture. |
| IntegrateCondition | sql_expr | Restrict during integration. |
| RefreshCondition | sql_expr | Restrict during refresh and compare. |
| CompareCondition | sql_expr | Restrict during compare. |
| RefreshJoinCondition | sql_expr | Filter rows during refresh. |
| CompareJoinCondition | sql_expr | Filter rows during compare. |
| SliceCountCondition | sql_expr | SQL expression that determines which rows are included in a specific slice during Refresh or Compare when slicing by count. |
| SliceSeriesCondition | sql_expr | SQL expression that determines which rows are included in a specific slice during Refresh or Compare when slicing by series. |
| HorizColumn | col_name | Horizontally partition the table based on the value in col_name. |
| HorizLookupTable | tbl_name | Join the partition column with a horizontal lookup table. |
| DynamicHorizLookup | | Changes to the lookup table also trigger replication. |
| AddressTo | addr | Only send changes to locations specified by the address. |
| AddressSubscribe | addr | Get a copy of any changes sent to a matching address. |
| SelectDistinct | | Filter duplicate records during Refresh or Compare. |
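
SliceCountCondition typically works as a modulo-style predicate so that slice_count parallel Refresh or Compare jobs each see a disjoint share of the rows. A Python sketch of such a predicate; the modulo scheme is an assumption for illustration, not the documented expression:

```python
def slice_condition(row_key, slice_num, slice_count):
    """Count-based slicing: each row key falls into exactly one of
    slice_count slices, so slices are disjoint and cover all rows."""
    return row_key % slice_count == slice_num

# Ten row keys split into three disjoint slices.
slices = [[r for r in range(10) if slice_condition(r, s, 3)] for s in range(3)]
print(slices)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```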

Scheduling

| Parameter | Argument | Description |
| --- | --- | --- |
| CaptureStartTimes | times | Trigger the capture job at specific times, rather than continuous cycling. |
| CaptureOnceOnStart | | Capture job runs for one cycle after being triggered. |
| IntegrateStartAfterCapture | | Trigger the integrate job only after the capture job routes new data. |
| IntegrateStartTimes | times | Trigger the integrate job at specific times, rather than continuous cycling. |
| IntegrateOnceOnStart | | Integrate job runs for one cycle after being triggered. |
| LatencySLA | threshold | Threshold for the latency. |
| TimeContext | times | Time range during which the LatencySLA is active. |
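
CaptureStartTimes and IntegrateStartTimes replace continuous cycling with triggers at fixed times. A Python sketch of picking the next trigger, with times expressed as minutes-of-day for simplicity (the actual format of the times argument is not reproduced here):

```python
def next_start(now, start_times):
    """Return the next trigger time after `now` from a list of
    start times (minutes-of-day); wrap to the earliest time tomorrow."""
    upcoming = [t for t in sorted(start_times) if t > now]
    return upcoming[0] if upcoming else min(start_times)

# Hypothetical schedule: triggers at 01:00, 02:00, and 03:00.
print(next_start(130, [60, 120, 180]))  # 180
print(next_start(200, [60, 120, 180]))  # 60 (wraps to next day)
```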

TableProperties

| Parameter | Argument | Description |
| --- | --- | --- |
| BaseName | tbl_name | Name of the table in the database, if it differs from the name in the DSS repository tables. |
| Absent | | Exclude a table (which is available in the channel) from being replicated/integrated into the target. |
| NoDuplicateRows | | Replication table cannot have duplicate rows. |
| Schema | schema | Database schema which owns the table. |
| CoerceErrorPolicy | | Defines a policy to handle type coercion errors. |
| CoerceErrorType | | Defines which types of coercion errors are affected by CoerceErrorPolicy. |
| SapUnpackErrorPolicy | policy | Defines a policy to handle type coercion errors during SapUnpack. |
| PackedInside | | Name of the SAP database table that holds the data for the pool or cluster table being unpacked. |
| TrimWhiteSpace | | Remove trailing whitespace from varchar. |
| TrimTime | policy | Trim time when converting from Oracle and SQL Server date. |
| MapEmptyStringToSpace | | Convert between empty varchar and Oracle varchar space. |
| MapEmptyDateToConstant | date | Convert between a constant date (dd/mm/yyyy) and the Ingres empty date. |
| CreateUnicodeDatatypes | | On table creation, use Unicode data types, e.g. map varchar to nvarchar. |
| DistributionKeyLimit | int | Maximum number of columns in the implicit distribution key. |
| DistributionKeyAvoidPattern | patt | Avoid putting given columns in the implicit distribution key. |
| CharacterMapping | rules | Specify replacement rules for unsupported characters. |
| MapBinary | policy | Specify how binary data is represented on the target side. |
| MissingRepresentationString | str | Insert value str into string data type column(s) if the value is missing/empty in the respective column(s) during integration. |
| MissingRepresentationNumeric | str | Insert value str into numeric data type column(s) if the value is missing/empty in the respective column(s) during integration. |
| MissingRepresentationDate | str | Insert value str into date data type column(s) if the value is missing/empty in the respective column(s) during integration. |
| PartitionByDate | | Enable partitioning by date for Google BigQuery tables. |
| BQClusterKeys | col_name | Create Google BigQuery clustered tables. |
| TransientTable | | Create Snowflake transient tables. |
| Context | context | The action is only effective if the context matches the context (option -C) defined in Refresh or Compare. |
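
The three MissingRepresentation* parameters substitute a configured value per data-type family when an incoming value is missing or empty. A Python sketch of that substitution; the family tags and default values below are illustrative, not DSS configuration:

```python
def fill_missing(row, defaults):
    """Replace missing/empty values with the configured representation
    for that column's data-type family (string, numeric, or date).
    `row` maps column name to a (family, value) pair."""
    out = {}
    for col, (family, value) in row.items():
        out[col] = defaults[family] if value in (None, "") else value
    return out

defaults = {"string": "?", "numeric": "0", "date": "1970-01-01"}
row = {"name": ("string", ""), "qty": ("numeric", 5), "ts": ("date", None)}
print(fill_missing(row, defaults))
# {'name': '?', 'qty': 5, 'ts': '1970-01-01'}
```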

Transform

| Parameter | Argument | Description |
| --- | --- | --- |
| Command | path | Path to the script or executable performing the custom transformation. |
| CommandArguments | userarg | Value(s) of parameter(s) for the transform (space separated). |
| SapUnpack | | Unpack the SAP pool, cluster, and long text tables (STXL). |
| ExecOnHub | | Execute the transform on the hub instead of the location's machine. |
| Parallel | n | Distribute rows to multiple transformation processes. |
| Context | context | The action is only effective if the context matches the context (option -C) defined in Refresh or Compare. |
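
Parallel n splits the row stream across n transformation processes. A common way to do this, and only an assumption about how DSS distributes rows, is to key the split so that all changes for the same row land in the same process:

```python
def distribute(rows, n, key=hash):
    """Spread rows over n transform buckets (one per process).
    Keying the split keeps every change for one row in the same bucket,
    which preserves per-row ordering across parallel workers."""
    buckets = [[] for _ in range(n)]
    for row in rows:
        buckets[key(row) % n].append(row)
    return buckets

# Integer row keys with an identity key function, split two ways.
print(distribute([0, 1, 2, 3, 4], 2, key=lambda r: r))  # [[0, 2, 4], [1, 3]]
```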