Teradata Enterprise Stage

Post questions here relating to DataStage Enterprise/PX Edition in such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

DS_FocusGroup
Premium Member
Posts: 197
Joined: Sun Jul 15, 2007 11:45 pm
Location: Prague

Teradata Enterprise Stage

Post by DS_FocusGroup »

Hi,

Just a few general questions pertaining to the Teradata Enterprise (TDE) stage in the parallel edition; this seems more like a bug than a logical error. I developed a job that reads data from a source and has the layout:
Oracle ---> Transformer ---> Sort ---> Aggregator ---> TDE. I sort the incoming records in ascending order on the grouping keys and specify the sort option in the Aggregator stage. I am trying to load about 12 million records from Oracle into Teradata. In the target stage I set the write mode to append. The job fails, and the error relates to the scratch disk being full. The strange thing I do not understand is that when I change the option from append to truncate in the target stage, with the same job design and the same number of records, the job runs fine without any space issues. Is there a logical reason for this?
JoshGeorge
Participant
Posts: 612
Joined: Thu May 03, 2007 4:59 am
Location: Melbourne

Post by JoshGeorge »

When you change the option from append to truncate in the target stage, aren't you making it easier for DataStage? By truncating the target table you satisfy the main criterion for invoking FastLoad directly (remember: for Teradata FastLoad the target table must be empty). If you can, post more details on where you are seeing "Scratch disk being full" and the exact error messages.
Joshy George
DS_FocusGroup
Premium Member
Posts: 197
Joined: Sun Jul 15, 2007 11:45 pm
Location: Prague

Post by DS_FocusGroup »

Yes, it does invoke the FastLoad utility, but I do not think that should have any direct effect on the scratch disk space, which is, correct me if I am wrong, used by DataStage for temporary work; in this case for the sorting and aggregation. If I understand correctly, a work table is created in the database and the data is then fast-loaded from there, which should not consume DataStage's own resources as such.
The error is generated in the Director log, where it says "Scratch Disk Full". I think that whatever the option on the target table, this error pertains to running out of the disk space that DataStage uses for temporary calculations and storage of data, no?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Please post the exact error message (all of it) rather than assuming.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
throbinson
Charter Member
Posts: 299
Joined: Wed Nov 13, 2002 5:38 pm
Location: USA

Post by throbinson »

A work table is created and loaded when the Write Mode is Append, because the FastLoad utility can only load an empty table. Append uses FastLoad to get the data into a temporary work table and then does an INSERT ... SELECT from the work table into the target table. With Truncate, FastLoad goes directly to the target table.
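To make the mechanism concrete, here is a rough sketch of what Append amounts to, shown as Teradata SQL inside a BTEQ session run from the shell. The table names, logon details, and the use of BTEQ are all illustrative; the stage generates its own work-table names and DDL internally:

    bteq <<'EOF'
    .LOGON tdpid/dsuser,password;
    -- FastLoad can only fill an empty table, so Append stages the rows
    -- in a work table first...
    CREATE TABLE target_wt AS target_table WITH NO DATA;
    -- ...(FastLoad populates target_wt here)...
    -- ...then moves them to the real target with plain SQL.
    INSERT INTO target_table SELECT * FROM target_wt;
    DROP TABLE target_wt;
    .LOGOFF;
    EOF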
I share your confusion and would like to see the exact message and job design.

I would like to say that the Aggregator (using the Sort method?) is creating a lot of temporary files in your scratch space, but all of that would happen regardless of the DB write method. Could it be that one DB write method is slower than the other, consequently causing data to be buffered to disk? Just enough to blow out your scratch space?
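If buffering to disk is the suspect, the buffering environment variables govern when back-pressure spills to scratch. A hedged sketch, showing the default values rather than a tuned recommendation:

    # Per-buffer RAM before DataStage starts spilling to scratch disk (bytes).
    export APT_BUFFER_MAXIMUM_MEMORY=3145728
    # Buffering policy; FORCE_BUFFERING can help reproduce spill behaviour.
    export APT_BUFFERING_POLICY=AUTOMATIC_BUFFERING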
keshav0307
Premium Member
Posts: 783
Joined: Mon Jan 16, 2006 10:17 pm
Location: Sydney, Australia

Post by keshav0307 »

Do you have enough space on your scratch disk and resource disk?
What is the total size of the 12 million records?
The scratch space must be at least 2.5 times the size of your source data if you are using any stage that sorts.
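A quick way to check, as a sketch assuming a Unix engine (the scratch path and the 500-byte row width are examples, not measurements):

    # List the resource disk and scratchdisk entries in the current config file
    grep -E 'scratchdisk|resource disk' $APT_CONFIG_FILE
    # Check free space on each listed path; as a rough rule of thumb,
    # 12 million rows x 500 bytes/row is about 6 GB, x 2.5 is about 15 GB.
    df -k /opt/ibm/ds/scratch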
DS_FocusGroup
Premium Member
Posts: 197
Joined: Sun Jul 15, 2007 11:45 pm
Location: Prague

Post by DS_FocusGroup »

Well, I think it is not about the scratch disk here, or maybe it is. The thing I am trying to figure out is how the options in the target DB stage can affect this, i.e. with append it fails and with truncate it runs fine. If temporary work is being done on the scratch disk, it is for the Sort and Aggregator stages, so the error about the scratch disk running out of space should appear whether the target option is set to append or truncate.
nishadkapadia
Charter Member
Posts: 47
Joined: Fri Mar 18, 2005 5:59 am

Post by nishadkapadia »

A few options which could be tried (if not already):

You could try breaking the job up, writing to a sequential file as the target, with combinable operators disabled; see the sketch below.
Alternatively, check at the OS level whether another job was already running when the various options were tried.

Or, as suggested, post the exact error message.
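For the combinability test, a minimal sketch; the project and job names are hypothetical, and the variable can equally be set as a job parameter or in the Administrator:

    # Run each operator as its own process, so Director shows which
    # stage is actually filling the scratch disk.
    export APT_DISABLE_COMBINATION=True
    dsjob -run -jobstatus MyProject MyJob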