Importing in a multithreaded environment
To import data on multiple threads, configure the following options:
For information about these options, see Data Import command-line utility options.
Tips for running the Data Import Utility in a multithreaded environment
When running the Data Import utility in a multithreaded environment, remember these tips:
- You must include a form name in the Data Import utility command. (If you provide a mapping file that includes a form name, the form name in the command is optional.)
- If any data files in the specified data directory contain data for a different form, the Data Import utility cannot resolve the difference; it imports only the data for the specified form.
- If you specify a mapping file and the specified form uses that mapping file for all data files, the Data Import utility imports all file types (.arx, .csv, .xml, and .ascii).
- For .arx and .xml files, if you run the Data Import utility without a form name and a mapping file, data from each file is imported to its respective form, so make sure that the data files are not interdependent. For example, when you import Group form and User form data, users belong to groups: the User form data depends on the Group form data, and the Data Import utility cannot resolve this dependency.
For CSV and ASCII files, you must include a form name in the command because these files do not include form information.
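The file-type rules above can be summarized in a short sketch. This is illustrative only, not part of the Data Import utility; the function name is hypothetical, and the extensions are the ones listed in this document:

```python
# Which data-file types require a form name on the command line,
# per the rules above (illustration only).
FORM_AWARE = {".arx", ".xml"}       # these formats embed form information
FORM_AGNOSTIC = {".csv", ".ascii"}  # these formats carry raw data only

def form_name_required(filename: str) -> bool:
    """Return True when the import command must name the target form."""
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    if ext in FORM_AGNOSTIC:
        return True   # CSV/ASCII files do not include form information
    if ext in FORM_AWARE:
        return False  # .arx/.xml files identify their own forms
    raise ValueError(f"unsupported data file type: {ext}")
```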
- The Data Import utility does not validate workflow on forms where data is imported.
- For AR System 7.1.00 and later versions, the Data Import utility uses bulk APIs (unless the -e option is used). If a bulk API call fails on its first attempt, the entire operation is rolled back; the utility then retries the entries up to the last completed entry with the bulk API and imports the remaining records by using individual APIs.
- Options that you specify to handle duplicate records (-D), bad records, and multiple matches (-t) are common to all of the threads.
There is no explicit command-line option for bad-records handling. By default, the Data Import utility skips the bad records.
In an .armx mapping file, you can specify bad-records handling as follows:
In an .arm mapping file, you can specify:
<datahandling badrecords="SKIP" duplicaterecords="GEN_NEW_ID" stripleading="false" striptrailing="false" transactionSize="0" truncate="false"/>
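To inspect such settings programmatically, you can parse the element with a standard XML library. This is a sketch only: in practice the element would come from a mapping file, but here it is inlined for illustration, using the attribute values shown above:

```python
import xml.etree.ElementTree as ET

# Read the bad-record and duplicate-record settings back out of a
# <datahandling> element like the one shown above.
fragment = (
    '<datahandling badrecords="SKIP" duplicaterecords="GEN_NEW_ID" '
    'stripleading="false" striptrailing="false" '
    'transactionSize="0" truncate="false"/>'
)
settings = ET.fromstring(fragment)
print(settings.get("badrecords"))        # SKIP
print(settings.get("duplicaterecords"))  # GEN_NEW_ID
```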
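The bulk-API fallback described in the tips above can be sketched in plain Python. This is a hedged illustration of the described behavior, not the utility's actual implementation; `bulk_import` and `import_one` are hypothetical stand-ins for the AR System bulk and per-record import calls:

```python
# Sketch: attempt a bulk import; on failure, the whole operation is
# rolled back, so retry the previously completed entries in bulk and
# import the remaining records individually.
def import_batch(records, bulk_import, import_one, last_completed=0):
    try:
        bulk_import(records)  # first attempt: a single bulk call
        return len(records)
    except Exception:
        # Re-import the entries up to the last completed one in bulk,
        # then fall back to individual calls for the rest.
        if last_completed:
            bulk_import(records[:last_completed])
        imported = last_completed
        for record in records[last_completed:]:
            try:
                import_one(record)
                imported += 1
            except Exception:
                pass  # a bad record is skipped, matching the default
        return imported
```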