Contents
- Overview
- Symptoms
- Affected Versions
- Investigation & Troubleshooting Workflow
- Mitigations and Configuration Checks
- If the Issue Persists
- Status of a Permanent Fix
- Frequently Asked Questions
Overview
Large Data Management Module (DMM) imports in Pivotal Client may show incorrect completion percentages (for example 33%, 38%, or 37%) and may display the message "An error occurred during the import. Verify the log file" even when records are actually updated successfully.
In some environments, imports may also stall near completion (for example, stuck at 97% on a specific record) without an explicit error. This behavior was reported in Pivotal Client 6.6.3.10 and 6.6.4.11, and appears related to large-run progress/logging and/or COM+ timeout/transaction handling rather than data not being imported.
Symptoms
- Progress/completion percentage is incorrect or varies between runs for the same data (e.g., 33% on one run and 38% on another).
- The client shows the error message: "An error occurred during the import. Verify the log file".
- ImportLog.txt indicates updates only for a subset of rows (e.g., shows Modified for records 1...<n>), even though the source file contains many more rows (example observed: log entries stopping around record ~3,740 while the source file contained ~11,000+ rows).
- After some configuration changes, imports may stall near completion (e.g., stuck at 97% on record #<record_number>) with no explicit error dialog.
Affected Versions
- Reported/observed: Pivotal Client 6.6.3.10 and 6.6.4.11
- Not reproduced during engineering testing: Pivotal Client 6.6.5.7.355 (upgrading may help, but should be validated in your environment, especially if you have previously encountered regressions in other 6.6.5.x builds).
Solution
What this typically indicates
- The import operation may be completing (records are updated) while progress calculation and/or logging fails to reflect the full run.
- For very large imports, the behavior may be influenced by COM+ timeouts/transaction handling and/or environment-specific performance (database size, locking, concurrent load).
Investigation & Troubleshooting Workflow
1) Capture evidence from a single run (keep items from the same import)
Collect the following artifacts for one affected import run:
- The exact source file used for the import (or a sanitized copy preserving row count/structure).
- The generated ImportLog.txt from that same run.
- Screenshots (or a screen recording) showing:
  - Import configuration steps
  - Start time → completion/stall
  - The completion percentage shown
  - The exact message "An error occurred during the import. Verify the log file" (if shown)
Why this matters: Comparing the source row count to what ImportLog.txt reports is key to determining whether this is a logging/progress defect versus a data processing failure.
2) Confirm whether records were actually updated
If ImportLog.txt appears incomplete:
- Select several rows that are missing from the log.
- Verify whether the corresponding records in Pivotal were updated with the imported values.
If updates exist but logs/progress are incomplete, treat it as a logging/progress problem (not necessarily a failed import).
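If you want to automate the comparison between the source file and the log, a minimal Python sketch along these lines can count source rows and Modified entries. The log line format (one line per record containing the word "Modified") and a CSV source file are assumptions; adjust both to match your actual files.

```python
import csv
import re

def compare_import_log(source_csv, import_log, encoding="utf-8"):
    """Compare the source file's data-row count to the number of records
    ImportLog.txt reports as Modified.

    Assumptions (adjust for your environment):
    - the source is a CSV with one header row;
    - the log writes one line per record containing the word 'Modified'.
    """
    # Count data rows in the source file (subtract the header row).
    with open(source_csv, newline="", encoding=encoding) as f:
        source_rows = sum(1 for _ in csv.reader(f)) - 1

    # Count log lines that report a modified record.
    modified = 0
    with open(import_log, encoding=encoding, errors="replace") as f:
        for line in f:
            if re.search(r"\bModified\b", line):
                modified += 1

    print(f"Source data rows:   {source_rows}")
    print(f"Logged as Modified: {modified}")
    print(f"Not in log:         {source_rows - modified}")
    return source_rows, modified
```

A large "Not in log" count combined with spot-checked records that *were* updated points at the logging/progress defect rather than a failed import.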
3) Reproduce with a small control import
Run a small import (for example, ~100 rows):
- If it completes at 100% and logs all rows, that strongly suggests the problem is size/time/performance related.
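To build the control file, you can copy the header plus the first ~100 data rows from the full source. A minimal sketch, assuming a CSV source (file names are placeholders):

```python
import csv

def make_control_file(source_csv, control_csv, rows=100):
    """Copy the header plus the first `rows` data rows of the full
    source file into a small control file for a test import."""
    with open(source_csv, newline="") as src, \
         open(control_csv, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        writer.writerow(next(reader))  # header row
        for i, row in enumerate(reader):
            if i >= rows:
                break
            writer.writerow(row)
```

Using real rows from the affected file (rather than synthetic data) keeps the control run comparable to the failing one.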
Mitigations and Configuration Checks
Mitigation A (workaround): Split large imports into smaller batches
When large imports (e.g., ~10,000+ rows) show partial completion, false errors, or logging issues:
- Split the source file into multiple files (example: ~500 records per file).
- Run the imports sequentially.
- Confirm each ImportLog.txt contains all records for its batch and reaches 100%.
- No officially documented maximum number of records per DMM import was identified in the scenario described.
- This is a productivity-impacting workaround, but it was the only consistently successful mitigation observed.
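The splitting can be done mechanically so that every batch file repeats the header row. This sketch assumes a CSV source and placeholder output names; adapt it to whatever format your DMM import consumes:

```python
import csv
import os

def split_csv(source_csv, out_dir, batch_size=500):
    """Split a large import file into sequential batch files of at most
    `batch_size` data rows each, repeating the header in every file."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    with open(source_csv, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)
        batch, index = [], 1
        for row in reader:
            batch.append(row)
            if len(batch) == batch_size:
                paths.append(_write_batch(out_dir, index, header, batch))
                batch, index = [], index + 1
        if batch:  # final, possibly short, batch
            paths.append(_write_batch(out_dir, index, header, batch))
    return paths

def _write_batch(out_dir, index, header, rows):
    # Zero-padded names keep the batches in sequential order.
    path = os.path.join(out_dir, f"import_batch_{index:03d}.csv")
    with open(path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(rows)
    return path
```

Import the resulting files one at a time, checking each run's ImportLog.txt before starting the next.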
Mitigation B: Verify the DMM server task transaction setting (Toolkit)
A suspected trigger is wrapping the entire import in a COM+ transaction with a default timeout.
In Pivotal Toolkit:
- Navigate to: Server Tasks → List of Server Tasks
- Open: DMM Import Utility Root
- Ensure Execute Uses Transaction is unchecked
- Save changes
- Restart the PBS COM+ application / PBS services (per your standard procedure)
If this setting is already unchecked, continue to the next mitigation.
Mitigation C: Increase PBS COM+ timeout and restart PBS
If large imports appear to time out or stall:
- Review your PBS COM+ timeout configuration.
- Increase the timeout (example tested: from 60 seconds to 600 seconds).
- Restart PBS after applying the change.
- Retest an import and observe:
- whether progress reaches 100%
  - whether ImportLog.txt fully reflects all records
  - whether the import stalls (e.g., stuck at 97%)
Important: In the documented investigation, increasing timeout did not fully resolve the issue (imports could still stall), but it remains a relevant diagnostic step.
If the Issue Persists After the Above
If the import still misreports progress/logging or stalls (for example, stuck at 97% with no error), collect additional environment-specific diagnostics to help isolate root cause:
- Provide a screen recording of the full import run (start → stall) plus the exact source file used.
- Gather environment context that affects performance/timeouts:
- Approximate database size
- Whether other users/processes are active during the import
Status of a Permanent Fix
- Testing on a newer client build (6.6.5.7.355) did not reproduce the issue. Upgrading to a newer version may help, but should be validated in a non-production environment first.
- No universally confirmed fix was established in the reported investigation; batching large imports remains the most reliable workaround to keep imports and logs consistent.
Frequently Asked Questions
- 1. How can I tell if I’m experiencing this same issue?
- You see "An error occurred during the import. Verify the log file" and/or the progress bar reports a partial percentage (e.g., 33–38%) even though spot-checking shows records were updated. Another indicator is ImportLog.txt listing only a subset of records (e.g., Modified entries stop well before the source row count).
- 2. Does this mean my data was not imported?
- Not necessarily. In the documented cases, records were updated even when the progress/logging was incomplete. Validate by checking several records that are missing from ImportLog.txt to confirm whether their imported fields were updated.
- 3. Is there a maximum number of records DMM can import?
- No official limit was identified in the investigation. However, behavior strongly suggested a practical threshold in some environments. As a workaround, splitting large imports into smaller batches (e.g., ~500 records) consistently completed with accurate logs.
- 4. What configuration should I check first if large imports fail or misreport progress?
- In Pivotal Toolkit, check the DMM Import Utility Root server task and ensure Execute Uses Transaction is unchecked. If you change it, restart the PBS COM+ application/PBS services before retesting.
- 5. I increased the PBS COM+ timeout but the import still stalls (for example, stuck at 97%). What should I do next?
- If the stall persists after timeout changes (including increasing the timeout from 60 seconds to 600 seconds), collect a full screen recording (start → stall), the exact source file used, and the ImportLog.txt from the same run. Also document environment factors such as database size and concurrent activity during the import.
Priyanka Bhotika