Search found 97 matches

by videsh77
Tue Apr 19, 2011 4:15 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to run a job efficiently?
Replies: 3
Views: 2630

Re: How to run a job efficiently?

I have a parallel job; it's a plain one-to-one mapping. The source is Oracle (Oracle Connector) and the target is DB2 (DB2 Bulk Load). My problem is that the source has 2.36 billion records and I have to optimize the job so that it finishes within 5-6 hours. Please suggest. My experience with such high volum...
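(For a rough sense of scale, the required throughput can be worked out with simple arithmetic; the 5-hour window and the partition count below are assumptions, not figures from the post:)

    # required sustained throughput, assuming the full 2.36 billion rows
    # must land within a 5-hour window (18,000 seconds)
    echo $(( 2360000000 / (5 * 3600) ))       # ~131,111 rows/sec overall
    # with, say, 8 partitions running in parallel
    echo $(( 2360000000 / (5 * 3600) / 8 ))   # ~16,388 rows/sec per partition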
by videsh77
Mon May 31, 2010 11:04 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Speed up ETL Process in Datastage job
Replies: 12
Views: 12445

Another possibility to improve the load performance, though not using DataStage, is one we used on a database other than Oracle: have your load files exported per partition. As others recommended, drop the constraints & indexes on the table; this can happen at the same time as your load files are ...
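(A minimal shell sketch of that idea against a DB2 target; the index, table and file names are purely illustrative and the exact LOAD options depend on the environment:)

    # drop secondary indexes up front so the load does not have to maintain them
    db2 connect to MYDB
    db2 "DROP INDEX ix_sales_cust"

    # load each per-partition export file (names are illustrative)
    for f in /data/export/sales_part_*.del; do
        db2 "LOAD FROM $f OF DEL INSERT INTO sales NONRECOVERABLE"
    done

    # recreate the index once all partitions are loaded
    db2 "CREATE INDEX ix_sales_cust ON sales (customer_id)"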
by videsh77
Thu May 27, 2010 4:04 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Expect to abort job after ...
Replies: 8
Views: 4624

I have adopted the path suggested by chulette, and I have modified the ExecSH subroutine to raise a fatal error in case a script happens to run into failure.

This worked for me ...
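(For that modification to fire, the called script has to surface failure through a non-zero exit status; a minimal sketch with a hypothetical script, not the poster's actual code:)

    #!/bin/sh
    # cleanup_stage.sh - illustrative script called via the modified ExecSH routine
    set -e    # stop at the first failing command

    rm -f /data/stage/offending_rows.txt
    cp /data/landing/extract.dat /data/stage/extract.dat

    # any failure above exits non-zero, which the modified routine
    # treats as a fatal error and so aborts the job
    exit 0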

Thanks all for your contribution.
by videsh77
Wed May 26, 2010 3:43 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Expect to abort job after ...
Replies: 8
Views: 4624

chulette - Are you referring to writing a shell script which checks the file size and calls a DS routine, or writing a DS routine itself as ExecSH? Gaurav - We already have that solution elsewhere, but the limitation with this approach is that you will never realize which entries are offending, because after very f...
by videsh77
Tue May 25, 2010 11:38 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Expect to abort job after ...
Replies: 8
Views: 4624

Expect to abort job after ...

In our job design, we are filtering out offending entries from the DB after the previous completion of the ETL operation. We are expected to direct these filtered entries to a sequential file. If this sequential file is non-empty at the end of the job, is there any way we can abort the job, so further s...
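(One way to do this from an after-job ExecSH call is a shell test on the file; a sketch with a hypothetical path, relying on the modified ExecSH raising a fatal error on a non-zero exit:)

    #!/bin/sh
    # fail_if_rejects.sh - return failure when the reject file has content
    REJECT_FILE=/data/stage/offending_entries.txt

    if [ -s "$REJECT_FILE" ]; then
        echo "Reject file is non-empty: $(wc -l < "$REJECT_FILE") offending entries" >&2
        exit 1    # non-zero status so the calling routine can abort the job
    fi
    exit 0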
by videsh77
Tue May 25, 2010 4:35 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Want to suppress warnings for a job.
Replies: 5
Views: 3786

I have handled the nulls after they are retrieved from the database. While running the select, for whichever columns were nullable, I applied coalesce (a DB2 function) and replaced the null with a space.
This suppressed the warning we received.
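(A sketch of that kind of select via the DB2 command line; table and column names are illustrative:)

    # replace NULLs with a single space at extraction time so downstream
    # stages never see a null in these columns
    db2 "SELECT cust_id,
                COALESCE(cust_name, ' ') AS cust_name,
                COALESCE(cust_city, ' ') AS cust_city
         FROM customer"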

Thanks all for your valuable comments.
by videsh77
Tue May 25, 2010 1:02 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Want to suppress warnings for a job.
Replies: 5
Views: 3786

Want to suppress warnings for a job.

Hi. After job execution, we are getting the following warnings - APT_CombinedOperatorController,3: Field '<fieldname>' from input dataset '0' is NULL. Record dropped. In my job I have the following sequence - DB2 EE Stage -> Transformer -> Seq File. As the number of log entries exceeds the view limit, there is no ind...
by videsh77
Thu May 20, 2010 3:27 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Aborting job for warning on the DB insert / update...
Replies: 3
Views: 2231

For a DB EE stage, one can attach a reject link, which could indicate an insert or update SQL failure. On this reject link, having a transformer with a constraint set to abort after 1 row would ensure the job gets aborted. But my problem is still not solved, as I cannot use the DB EE stage, since the target database I ...
by videsh77
Wed May 19, 2010 5:21 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Aborting job for warning on the DB insert / update...
Replies: 3
Views: 2231

Hi Ajay

We are not calling this DS job using a sequencer. We are calling the DS job using dsjob through Control-M.

Anyway, is there a possibility of trapping a warning from one particular stage using a sequencer?

Thanks & regards,
Vikram.
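(When the job is started with dsjob, a warning limit can be passed on the command line so that warnings alone take the run down; the project and job names below are placeholders, not necessarily how their Control-M job was configured:)

    # run with a warning limit of 1 and wait for completion;
    # dsjob returns a non-zero exit status the scheduler can act on
    dsjob -run -warn 1 -jobstatus MYPROJECT MyLoadJob
    echo "dsjob exit status: $?"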
by videsh77
Wed May 19, 2010 3:11 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Aborting job for warning on the DB insert / update...
Replies: 3
Views: 2231

Aborting job for warning on the DB insert / update...

We came across a situation where we insert / update to the database; if that operation fails, we receive a warning in the job execution log. Even though the insert / update did not work as expected, the DS job finished successfully. But the negative effect of this is seen in further execution. What we ...
by videsh77
Fri Mar 05, 2010 2:45 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Stopping the job - Diffce ?
Replies: 4
Views: 2679

Thanks it helps...
by videsh77
Wed Mar 03, 2010 10:08 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Stopping the job - Diffce ?
Replies: 4
Views: 2679

Stopping the job - Diffce ?

What is the difference between stopping a job through DS Director and using kill -9 on the DS job's process?

I have noticed that kill -9 does not necessarily work always.
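(The two ways of stopping, side by side as shell commands; the project, job name and pid are placeholders:)

    # controlled stop - goes through the engine, which logs the stop request
    # and lets the job update its status records
    dsjob -stop MYPROJECT MyLoadJob

    # SIGKILL goes straight to the process; the engine gets no chance to
    # update job status, which is why the job can be left in a bad state
    JOB_PID=12345    # pid of the job's running process (illustrative)
    kill -9 "$JOB_PID"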
by videsh77
Mon Dec 31, 2007 2:03 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: orchadmin diskinfo Vs df -m
Replies: 0
Views: 1137

orchadmin diskinfo Vs df -m

Hi. After I ran 'orchadmin diskinfo -a', it showed information for all nodes, including the total disk space for each. But this does not match what I see when I check using the df command on UNIX; df shows more disk space allocated for a node than orchadmin does. Is there any re...
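(To compare the two views directly, run both against the same resource-disk path from the configuration file; the path below is only an example:)

    # DataStage's view of the disks defined in the APT configuration file
    orchadmin diskinfo -a

    # operating-system view of the same filesystem, in MB
    df -m /dsdata/node1/resource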
by videsh77
Tue Nov 13, 2007 7:33 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: FLTABSZ setting has any relation with DSJE_TIMEOUT?
Replies: 4
Views: 3652

This means FLTABSZ has no relation to the timeout error, as the sequential files which are read are not big; some of them are empty. So for this DataStage job, which is run in multiple instances reading these sequential files, which setting can solve this problem of timeouts? As we have many ...
by videsh77
Tue Nov 13, 2007 1:40 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: FLTABSZ setting has any relation with DSJE_TIMEOUT?
Replies: 4
Views: 3652

Hi Richdhan. We get the error mentioned when the job is invoked by the scheduling component. Interestingly, in DS Director we don't get the log of this message, as this was a timeout. Moreover, I could see a large number of jobs trying to read files from the disk. Also, which environment variable best define...