Search found 40 matches
- Fri Sep 26, 2008 11:05 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance Tune Teradata Load
- Replies: 4
- Views: 2462
Re: Performance Tune Teradata Load
The Enterprise stage will be much faster; in fact, it may take only a few minutes or even less. I hope this is just a truncate-and-load to the table.
- Tue Jan 08, 2008 12:01 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Sun Solaris & Datastage 7.5 Problems
- Replies: 10
- Views: 7375
Hi Ray, here is what I found out in our environment:
etlt01:/home/c6262cn $ swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d1 85,1 16 65553776 65553776
/dev/vx/dsk/swap_dg/swapvol 292,53000 16 141408240 141408240
etlt01:/home/c6262cn $
Do you think mounting swap on /tmp will solve this issue? ...
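For anyone reading the `swap -l` output above: Solaris reports sizes in 512-byte blocks, so the two devices can be converted to gigabytes with a short sketch (the parsing below assumes exactly that block size and the two device lines quoted above):

```python
# Sketch: interpret the `swap -l` output quoted above.
# Assumption: Solaris `swap -l` reports sizes in 512-byte blocks.
SWAP_L_OUTPUT = """\
/dev/md/dsk/d1 85,1 16 65553776 65553776
/dev/vx/dsk/swap_dg/swapvol 292,53000 16 141408240 141408240
"""

BLOCK_SIZE = 512  # bytes per block on Solaris


def swap_totals(output: str) -> tuple:
    """Return (total_gb, free_gb) summed over all swap devices."""
    total_blocks = free_blocks = 0
    for line in output.strip().splitlines():
        fields = line.split()
        total_blocks += int(fields[-2])  # 'blocks' column
        free_blocks += int(fields[-1])   # 'free' column
    to_gb = lambda blocks: blocks * BLOCK_SIZE / 1024 ** 3
    return to_gb(total_blocks), to_gb(free_blocks)


total_gb, free_gb = swap_totals(SWAP_L_OUTPUT)
print(f"total swap: {total_gb:.1f} GB, free: {free_gb:.1f} GB")
# → total swap: 98.7 GB, free: 98.7 GB
```

So the box has roughly 99 GB of configured swap, all of it free at the moment this output was captured.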
- Thu Jan 03, 2008 12:44 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Sun Solaris & Datastage 7.5 Problems
- Replies: 10
- Views: 7375
- Thu Jan 03, 2008 12:13 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Sun Solaris & Datastage 7.5 Problems
- Replies: 10
- Views: 7375
I am sorry to answer this so late. My point here is that DataStage on Sun Solaris always takes much more space compared to DataStage on AIX, which is not expected. I also think all these lookups and joins are not candidates for sparse lookup, since the input has a huge number of records. I wou...
- Fri Dec 28, 2007 9:39 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Sun Solaris & Datastage 7.5 Problems
- Replies: 10
- Views: 7375
I suspected that the lookup might have taken all the available space, so I replaced every lookup with a Join stage, and the result is the same. Yes, the job runs fine if I run it using a 4-node or a 2-node configuration. Space utilization for the 4-node config is 60 GB; the 8-node run fails, and we have 130 GB of space. Th...
- Fri Dec 28, 2007 9:26 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: sub sequence aborted but the main sequence didn't
- Replies: 7
- Views: 4198
Re: sub sequence aborted but the main sequence didn't
Major,
Check whether you have selected the following property at the job sequence properties level:
Automatically Handle Activities that fail.
- Fri Dec 28, 2007 9:19 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Sun Solaris & Datastage 7.5 Problems
- Replies: 10
- Views: 7375
Hi Arnd, thanks for the additional information on this issue. Here is what we gathered during this process: Sun Solaris runs 752 processes on 8 nodes; AIX runs 733 processes on 8 nodes. I thought swap space utilization was directly proportional to the number of processes executed in a job, so I created...
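The "swap proportional to process count" idea can be put in back-of-envelope form. The per-process reservation below is purely a hypothetical illustration value, not a measured DataStage figure:

```python
# Rough back-of-envelope sketch of the "swap proportional to process count" idea.
# The per-process reservation is a hypothetical illustration value.
def estimated_swap_gb(num_processes: int, mb_reserved_per_process: float) -> float:
    """Estimate total swap reservation in GB for a parallel job."""
    return num_processes * mb_reserved_per_process / 1024


# Process counts reported above for the same job on 8 nodes:
solaris_procs, aix_procs = 752, 733

# Assuming (hypothetically) ~50 MB reserved per process:
print(f"Solaris: ~{estimated_swap_gb(solaris_procs, 50):.1f} GB")  # → ~36.7 GB
print(f"AIX:     ~{estimated_swap_gb(aix_procs, 50):.1f} GB")      # → ~35.8 GB
```

Note that with 752 vs 733 processes the two platforms come out nearly equal under this model, which suggests the Solaris/AIX difference lies in how much each process (or the OS) reserves, not in the process count itself.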
- Thu Aug 16, 2007 3:29 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Fatal Error: Fork failed - Previous posts didn't help much
- Replies: 8
- Views: 6071
I am sorry for the delayed response
I am sorry for the delayed response. I was trying to pinpoint this issue, so I created a job with 128 stages: 1) the job ran successfully with 4 nodes; 2) the job failed if I ran it with 8 nodes. While monitoring, it used nothing but the number of processes. IBM suggested we increase the swap from 10 G...
- Mon Aug 13, 2007 1:33 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Fatal Error: Fork failed - Previous posts didn't help much
- Replies: 8
- Views: 6071
Thanks
Thank you for coming up with various solutions. I am sorry; it is not 300 or more operators, it is actually the number of processes. We will go down the path of splitting the job if we don't find a solution. All we are trying to do here is this: we get a file, which is the source data for 6 tables, and this comes wit...
- Mon Aug 13, 2007 10:13 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Fatal Error: Fork failed - Previous posts didn't help much
- Replies: 8
- Views: 6071
Fatal Error: Fork failed - Previous posts didn't help much
Hi all, I tried all the available posts regarding the error I am getting, and I couldn't find one that solves this problem. We get this error when we have a job with 300 or more operators in it: node_node7: Fatal Error: Unable to start ORCHESTRATE process on node node7 (etld01): APT_PMPlayer::APT_P...
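As a rough sketch of why doubling the node count can push a job past OS fork limits: in a PX job each operator typically starts one player process per logical node (operator combining reduces this), plus a section leader per node and a conductor. The formula below is an upper-bound illustration, not IBM's exact accounting:

```python
# Upper-bound sketch of PX process count: one player per operator per node,
# plus one section leader per node and one conductor process.
def estimated_processes(operators: int, nodes: int) -> int:
    """Upper-bound estimate of OS processes for a parallel job."""
    return operators * nodes + nodes + 1


print(estimated_processes(300, 4))  # → 1205
print(estimated_processes(300, 8))  # → 2409
```

Going from 4 to 8 nodes roughly doubles the process count, which is consistent with a fork failure appearing only at the higher node count.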
- Thu Jul 26, 2007 4:36 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Teradata syntax
- Replies: 5
- Views: 2866
- Tue Jul 24, 2007 3:54 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Aggregator - Partitioning
- Replies: 12
- Views: 6295
Design 1: Transformer stage => Remove Duplicates stage => Aggregator stage. Grouping on TIME_ID, RCM_CD, SUB_ACT_CD, QUAL_ID, PI_ID; count on REGIST_NUM. Running on 2 nodes: incorrect counts; a count resides on each node per group. Design 2: Transformer stage => Sort stage => Remove Duplicates stage => Aggrega...
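The "count resides on each node per group" symptom is exactly what key-based partitioning fixes. A plain-Python illustration of the principle (not DataStage itself): with round-robin partitioning a group's rows are spread over the nodes, so each node's aggregate is only a partial count; hash-partitioning on the grouping key keeps each group on one node.

```python
# Why per-node counts come out wrong without key-based partitioning.
from collections import Counter

rows = ["g1", "g1", "g1", "g2", "g2", "g2"]  # grouping-key values

# Round-robin partitioning onto 2 "nodes": the same key lands on both nodes,
# so each node emits a partial count per group.
round_robin = [rows[i::2] for i in range(2)]
partial = [Counter(p) for p in round_robin]
print(partial)

# Hash partitioning on the grouping key: every row of a key goes to the same
# node, so each per-node aggregate is the correct total for its groups.
hashed = [[r for r in rows if hash(r) % 2 == n] for n in range(2)]
correct = [Counter(p) for p in hashed]
print(correct)
```

In the job itself the equivalent fix is to hash-partition (and sort) on the five grouping columns going into the Remove Duplicates and Aggregator stages instead of leaving the partitioning on Auto.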
- Thu Jul 19, 2007 1:16 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job Parameter in Shared Container
- Replies: 1
- Views: 1299
Re: Job Parameter in Shared Container
The parameters defined in the job cannot be used directly inside a shared container.
Define the parameters in the shared container, then map each job parameter to the corresponding shared container parameter in the properties of the shared container stage in the job.
Hope this helps...
- Wed Jul 18, 2007 12:50 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How is this schema getting into this automatically
- Replies: 0
- Views: 900
How is this schema getting into this automatically
I have a job which reads from a Teradata database and passes the data to a Peek stage. Here is the DDL for the table: CREATE SET TABLE TEST_DB.test1 ,NO FALLBACK , NO BEFORE JOURNAL, NO AFTER JOURNAL, CHECKSUM = DEFAULT ( col1 VARCHAR(20) CHARACTER SET LATIN NOT CASESPECIFIC, col2 CHAR(4) CHARACTER SET LATIN NOT C...
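If the question is where the record schema in the Peek output comes from: the Teradata stage derives it from the table definition at run time. For a DDL like the one above, the usual type mapping would produce an Orchestrate schema roughly like the following sketch (assuming the columns are nullable, since the DDL shows no NOT NULL constraints; this is illustrative, not the stage's literal output):

```
record (
    col1: nullable string[max=20];
    col2: nullable string[4];
)
```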
- Tue Jul 17, 2007 11:32 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Trim function
- Replies: 9
- Views: 4528
Re: Trim function
I have a field that I am bringing in from a flat file. I am using the TrimB and TrimF functions, but when I do a length in the database, it includes a place for the space/blank. How can I make it include only real data and not spaces? Check the flat file to see if you are getting spaces or unprinta...
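One way to see what is really padding the field is to dump it with repr(). In the sketch below, Python's rstrip(" ") stands in for a space-only trim (an analogy for TrimB, not the DataStage function itself); tabs, carriage returns, NULs, and non-breaking spaces survive such a trim and keep the stored length long:

```python
# Hidden padding that a space-only trim leaves behind.
# rstrip(" ") removes only real spaces, as a stand-in for a space-only TrimB.
samples = ["data  ", "data\t", "data\r", "data\x00", "data\xa0"]

for field in samples:
    trimmed = field.rstrip(" ")
    print(repr(field), "->", repr(trimmed), f"len={len(trimmed)}")
# Only the first sample shrinks to length 4; the invisible characters
# in the others keep the length at 5.
```

If the file does contain such characters, they have to be removed explicitly (for example with Convert, or a trim on that specific character) before the length stored in the database will reflect only real data.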