Search found 19 matches

by lakshya
Fri Mar 03, 2006 8:18 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: ETL errors while running batch schedule
Replies: 13
Views: 10566

Hi All - Thanks for your responses on the topic. The issue is resolved now. We increased the number of processes allowed per user on the UNIX box from the existing 500 to a higher limit, which was sufficient to handle all the processes kicked off by the ETLs. The batch finished s...
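For reference, the per-user process limit the post refers to can be checked from the shell. A minimal sketch (the 500/higher values above are site-specific; option support varies slightly by shell):

```shell
# Show the maximum number of processes this user may start.
# DataStage parallel jobs fork one process per operator per node,
# so a group of jobs kicking off together can exhaust a low limit fast.
ulimit -u
```

Comparing this value against a rough count of concurrent operators across the group helps size the limit before the batch runs.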
by lakshya
Tue Feb 28, 2006 11:48 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: ETL errors while running batch schedule
Replies: 13
Views: 10566

ETL errors while running batch schedule

Hi - We are getting the following errors while running our batch schedule. Our batch runs group-wise based on the dependencies, where we have a bunch of jobs that kick off at the same time. The jobs run fine when they are run individually, but as a group they start throwing all the different errors mentioned...
by lakshya
Tue Feb 14, 2006 3:21 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Write to dataset failed: File too large
Replies: 14
Views: 7126

Hi All - Thank you very much for your inputs on this issue. At last the jobs are able to create datasets with a size of more than 2 GB. Earlier we had changed the ulimit settings to the maximum for the ID through which we run our jobs, but the jobs kept aborting with the same error. The jobs were being passed wi...
by lakshya
Fri Feb 10, 2006 12:36 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Write to dataset failed: File too large
Replies: 14
Views: 7126

Hi - I ran the job after adding "ulimit -a" in a before-job subroutine and am getting the following output: XXX..BeforeJob (ExecSH): Executed command: ulimit -a *** Output from command was: *** time(seconds) unlimited file(blocks) 4194303 data(kbytes) 131072 stack(kbytes) 32768 memory(kbytes) 65536 cor...
by lakshya
Fri Feb 10, 2006 11:30 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Write to dataset failed: File too large
Replies: 14
Views: 7126

Arndw-

Can you please help me with where/how to add the "ulimit -a" external command to my job, to make sure that the background process is getting the same limitations?
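In DataStage, the Before-job subroutine of a job can be set to ExecSH with a command string, and whatever that command prints lands in the job log. Logging the limits there shows what the job's own process actually inherits, which may differ from an interactive login shell. A minimal sketch:

```shell
# Set the job's Before-job subroutine to ExecSH with this command value.
# The output appears in the job log, so you can compare the limits the
# engine process inherits against what you set interactively.
ulimit -a
```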

Thanks
by lakshya
Fri Feb 10, 2006 11:08 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Write to dataset failed: File too large
Replies: 14
Views: 7126

There are no processes hanging for the userid in question.
by lakshya
Fri Feb 10, 2006 10:58 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Write to dataset failed: File too large
Replies: 14
Views: 7126

The dataset gets deleted from the nodes as soon as the job aborts. If it completes, it writes to the processing folder.
by lakshya
Fri Feb 10, 2006 10:50 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Write to dataset failed: File too large
Replies: 14
Views: 7126

Hi Arndw-

Yes! That has been done after the limits were changed.

Thanks
by lakshya
Fri Feb 10, 2006 10:31 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Write to dataset failed: File too large
Replies: 14
Views: 7126

Write to dataset failed: File too large

Hi - One of our jobs is aborting when the dataset size reaches over 2 GB, throwing the following error: CpyRecs,0: Write to dataset failed: File too large The error occurred on Orchestrate node node2 (hostname XXX) We have got the limits changed to the maximum for the userid through which we are running our j...
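A 2 GB ceiling is the classic symptom of the per-process file-size ulimit: the file(blocks) value of 4194303 seen later in this thread is, if blocks are 512 bytes, just under the 2 GB mark. The limit has both a soft and a hard value, and what matters is the soft limit the engine's job process inherits. A quick shell check (block units vary by shell, e.g. 512-byte vs 1024-byte):

```shell
# Soft limit: what new processes inherit by default.
ulimit -Sf
# Hard limit: the ceiling the soft limit can be raised to;
# only root can raise the hard limit itself.
ulimit -Hf
# Note: writing dataset segments past 2 GB also requires large-file
# support on the filesystem holding the dataset's data files.
```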
by lakshya
Tue May 10, 2005 11:03 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: Error getting value from a routine
Replies: 2
Views: 1400

Error getting value from a routine

Hi all, I have a routine that gets a timestamp from a sequential file, and this timestamp is used in the next activity, where we compare it with another field that is a timestamp. The routine is returning a value MaxTS = 2005-04-29 15:41:46.000000, but I am getting the following error. AX..JobContr...
by lakshya
Tue Apr 05, 2005 9:15 pm
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: SQLFetch: Error retrieving results from server
Replies: 2
Views: 1508

Hi,

There was a data type mismatch from the source to the target field, and I had overlooked it. I got it right.

Thanks
by lakshya
Tue Apr 05, 2005 7:46 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: SQLFetch: Error retrieving results from server
Replies: 2
Views: 1508

SQLFetch: Error retrieving results from server

Hi all, I am running a job with update statements and am getting the following warnings. I have set the warning limit to 50, and hence the job is getting aborted. Any suggestions? SQLFetch: Error retrieving results from server [IBM][CLI Driver] CLI0112E Error in assignment. SQLSTATE=22005 Thanks in...
by lakshya
Tue Mar 29, 2005 8:23 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: capture warning message
Replies: 15
Views: 7234

Hi

Hi all, The job in a sequence has warnings only... and the sequence finishes successfully. Can I attach an E-Mail notification activity to the end of the sequence and, instead of getting all the details about all the jobs, have only the warning messages e-mailed? From the documentation I have read...
by lakshya
Wed Mar 23, 2005 8:16 pm
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: convert date from (mm-dd-yyyy) to (yyyy-mm-dd)
Replies: 2
Views: 1483

convert date from (mm-dd-yyyy) to (yyyy-mm-dd)

Hi all,

What's the easiest way to convert a date from mm-dd-yyyy to yyyy-mm-dd form?
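In a Server job this is typically done with Iconv/Oconv in a transformer stage; outside DataStage, the same rearrangement is a one-liner, since only the field order changes. A minimal sketch in shell (reorders fields only, no date validation):

```shell
# Rearrange mm-dd-yyyy into yyyy-mm-dd by splitting on "-".
to_iso() {
  echo "$1" | awk -F- '{ printf "%s-%s-%s\n", $3, $1, $2 }'
}

to_iso "03-23-2005"   # prints 2005-03-23
```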


Thanks in advance
by lakshya
Sun Mar 20, 2005 10:12 pm
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: capture warning message
Replies: 15
Views: 7234

Hi

The actual problem is... I have a lot of sequences to run, and each sequence in turn has multiple jobs. If a job fails, the sequence aborts... when there is a warning, it just moves on to run the next job in the sequence. As there are over 200 jobs in the sequences, I don't want to go to each and every log to ...
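One way to avoid opening 200 logs by hand: dump each job's log to a text file (the `dsjob -logsum` client command can produce such a summary) and filter for warnings. A sketch using a hypothetical exported file name:

```shell
# job.log is a hypothetical export of one job's log; the real file
# would come from something like: dsjob -logsum <project> <job> > job.log
printf 'INFO: starting job\nWARNING: 3 rows rejected\nINFO: job finished\n' > job.log

# Keep only the warning entries, with their line numbers.
grep -n 'WARNING' job.log
```

Looping this over the job list and mailing the concatenated output covers the "only the warnings, not everything" requirement.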