Hello Everyone,
I am reading from a text file into a Transformer, where I default a couple of columns to null (SetNull()) and one other column to CurrentTimestamp(), and the output then flows to a Teradata API stage that does upserts. On any given day, the text file contains no more than 500 records.
On average, the job spends about 3 to 5 hours in the step "Logging Delayed Metadata" before it eventually gets to "Requesting Delayed Metadata".
I looked at the runtime stats in Teradata, and there is no significant CPU or I/O consumption for this job.
I'm having a hard time understanding where it is spinning its wheels for those 3 to 5 hours.
Please shed some light and let me know if you need more information on this.
Thanks
TD API Load Performance
Thanks for that assistance.
Yes, as stated, I already checked the distribution and unbalanced AMPs. It is about as evenly distributed as it can be.
The table is a SET table, and the API stage is doing the upserts based on the key.
Yes, I have been working with the DBAs on this one, and it does not seem to be an obvious one.
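For reference, a keyed upsert in Teradata is typically expressed as an atomic UPDATE ... ELSE INSERT, which is roughly the form the API stage's generated upsert SQL takes; the table and column names below are hypothetical:

```sql
-- Hypothetical table/columns; sketch of a Teradata atomic upsert
UPDATE tgt_table
SET    col_a   = :col_a,
       load_ts = :load_ts
WHERE  key_col = :key_col
ELSE INSERT INTO tgt_table (key_col, col_a, load_ts)
VALUES (:key_col, :col_a, :load_ts);
```

Comparing the stage's generated SQL against a form like this can help rule out the SQL itself as the bottleneck.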
Thanks
Hi,
I hope you know the difference between a SET table and MULTISET table.
A SET table does not allow two rows to be exactly the same, so whenever an insert happens, duplicate-row checking happens in the background.
How many columns does the table have? If there are many columns, the duplicate-row checking takes more time.
Why don't you convert the table to a MULTISET table and try running your job?
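If you try that route, a minimal sketch of the conversion (hypothetical names; validate row counts before swapping the tables, and note that a UNIQUE primary index on a SET table also avoids the full-row duplicate check):

```sql
-- Hypothetical names: build a MULTISET copy, then swap it in
CREATE MULTISET TABLE my_table_ms AS my_table WITH DATA;
RENAME TABLE my_table TO my_table_old;
RENAME TABLE my_table_ms TO my_table;
```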
HTH
--Rich