Search found 101 matches
- Sun Jan 22, 2006 10:53 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: sequencer
- Replies: 3
- Views: 1829
sequencer
Hi All, I have a sequencer with 6 dimensions, two facts, etc.: job1---->job2----> job3---->job4----> sequencer(ALL)-------->Fact job5---->job6----> All six dimensions are independent and all run independently, but when one of the jobs is aborted and I run the sequence, it runs f...
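The "All" trigger behaviour asked about above can be sketched outside DataStage in plain Python (job names and the executor setup are hypothetical): the fact job is gated on every independent dimension job finishing OK, so a single aborted dimension blocks the whole gate.

```python
# Minimal sketch of a sequencer "All" trigger: independent dimension jobs
# run in parallel; the fact job fires only if every one of them succeeds.
from concurrent.futures import ThreadPoolExecutor

def run_job(name, fail=False):
    if fail:
        raise RuntimeError(f"{name} aborted")
    return f"{name} finished OK"

def run_sequence(dimension_jobs):
    # dimensions are independent, so they may run concurrently
    with ThreadPoolExecutor() as pool:
        futures = {n: pool.submit(run_job, n, fail) for n, fail in dimension_jobs}
    results = {}
    for name, fut in futures.items():
        try:
            results[name] = fut.result()
        except RuntimeError:
            results[name] = "aborted"
    # "All" trigger: the downstream fact job runs only on all-OK
    if all(r != "aborted" for r in results.values()):
        results["fact"] = run_job("fact")
    return results

print(run_sequence([("dim1", False), ("dim2", True)]))
```

This mirrors why the sequence stalls when one dimension aborts: the aborted job must be reset and rerun before the "All" condition can ever be satisfied.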
- Thu Jan 05, 2006 6:12 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: change capture
- Replies: 6
- Views: 1778
Hi
Hi,
I am using hash partitioning for both the before and after datasets. I used a sort stage with the same partitioning keys for the sort and the change capture, and the metadata is the same for both. Although the data has not changed, I am still getting updates.
Please suggest what the cause may be.
Thanks
- Wed Jan 04, 2006 4:56 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: change capture
- Replies: 6
- Views: 1778
all values
Thanks for the reply,
But I have used the option explicit keys and all values, and the before and after table definitions are loaded from the same metadata, that is, the target table metadata.
Even then, would the metadata cause the problem?
Please suggest.
Thanks
- Wed Jan 04, 2006 10:18 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: change capture
- Replies: 6
- Views: 1778
change capture
Hi, I am using SCD type 2 with the change capture stage, which works fine with the sample data, but when I try to load the real data, the update fails. Although the records in the before and after sets are similar, it still reports an update, that is change_code=3. I created two tables with 5 r...
- Wed Dec 28, 2005 12:42 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: change capture
- Replies: 2
- Views: 1237
change capture
Hi, I want to find the records which were deleted from the source but are still in the target, and set the delete flag to yes for those records. I used the change capture stage, so I am getting the deleted records, but all fields are null since I am getting data from the source, which is my after data set, and there I don't ha...
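The comparison Change Capture performs can be sketched in plain Python (a hypothetical `id` key column; the usual change codes are 0=copy, 1=insert, 2=delete, 3=edit). It also shows why deleted rows arrive with non-key fields null: only the key survives from the before set.

```python
# Minimal sketch of change-capture logic over before/after record sets.
def change_capture(before, after, key="id"):
    b = {r[key]: r for r in before}
    a = {r[key]: r for r in after}
    out = []
    for k, row in a.items():
        if k not in b:
            out.append({**row, "change_code": 1})   # insert
        elif row != b[k]:
            out.append({**row, "change_code": 3})   # edit
        # identical rows (copy, code 0) are typically dropped
    for k in b:
        if k not in a:
            # a delete carries only the key; other fields come back null
            out.append({key: k, "change_code": 2})  # delete
    return out

before = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
after  = [{"id": 1, "name": "a"}, {"id": 3, "name": "c"}]
print(change_capture(before, after))
```

To populate the remaining columns of a deleted row (for setting a delete flag), the usual pattern is to look the key back up against the target after change capture, e.g. with a lookup or join stage.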
- Wed Dec 14, 2005 12:53 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: error
- Replies: 2
- Views: 1624
error
Thanks Kenneth. As this is happening quite often, every 5 minutes, I am saving the job under another name and compiling. As I have to load the data by afternoon, I am loading against a table which has nulls; although I used the option in the modify stage, column name = Handle_Null(column name, "Null"), it's g...
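For reference, the modify stage Specification property usually takes a form like the one below (column names are placeholders; check the exact function casing against your release's documentation):

```
out_column = handle_null(in_column, "NULL")
```

Note the replacement value must be compatible with the column's data type, or the conversion is silently ignored at runtime.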
- Wed Dec 14, 2005 10:08 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: job being monitored error
- Replies: 11
- Views: 3854
job being monitored error
Hi,
Recently, whenever I try to compile a job, I often get the message "job is being monitored" and I am unable to compile the job.
Please suggest; we have a critical deadline and I am stuck in between.
Thanks
- Tue Dec 13, 2005 11:55 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: nls error
- Replies: 1
- Views: 1092
nls error
Hi, We had a problem with DataStage metadata when we installed it without NLS. The jobs, when compiled, gave the error 'NLS has to be enabled'. So I reinstalled the server with NLS enabled, but I am still getting the error: The NLS character map <ASCL_ASCII> is specified, but NLS_LANG is not set;...
- Mon Nov 14, 2005 11:35 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: how to run a failed job
- Replies: 3
- Views: 1247
- Mon Nov 14, 2005 11:01 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: how to run a failed job
- Replies: 3
- Views: 1247
how to run a failed job
hi all, I am new to this forum; please help me out with a few questions. Suppose there are 20 jobs in a sequencer and the 4th job fails; how can I handle it through a unix command? I know it is DSRUNJOB, but can you elaborate, and how do I reset it and rerun the job again? Waiting for the reply, thanks, K
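A hedged sketch with the `dsjob` command-line client (DataStage 7.x-style flags; `myproj` and `job4` are placeholder names, and DSRunJob is the equivalent call in the BASIC API). An aborted job must be reset before it can be rerun:

```shell
# Check the job's current status
dsjob -jobinfo myproj job4

# Reset the aborted job so it becomes runnable again
dsjob -run -mode RESET -wait myproj job4

# Rerun the job normally and wait for completion
dsjob -run -mode NORMAL -wait myproj job4

# dsjob's exit status reflects the job's finishing state
echo $?
```

A sequence's controlling script can test the exit status after each `dsjob -run` and branch to the reset-and-rerun path on failure.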
- Fri Oct 28, 2005 9:47 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Migrating DS 6 to 7.5
- Replies: 2
- Views: 2649
Migrating DS 6 to 7.5
Hi all,
1: If I want to store intermediate data, which stage should I use in parallel jobs: data set, file set, or sequential file?
2: How do I migrate jobs from version 6 to 7.5 and change them from parallel to server?
Thanks in advance,
kw