Search found 63 matches
- Tue Feb 21, 2006 4:56 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: delete records from table
- Replies: 2
- Views: 1320
delete records from table
Hello All, Apologies if this topic was covered earlier; I searched for answers before posting but didn't find any. I want to know how to delete records from a table using a DataStage job. I mean I have a table and I want to delete some data based on some filter conditions. Source and targe...
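A common answer is to run a plain DELETE with the filter condition through the target database stage's user-defined SQL. A minimal sketch of that statement, using an in-memory SQLite database as a stand-in (the table and column names here are made up for illustration):

```python
import sqlite3

# Stand-in database; in a real job this SQL would go into the target
# database stage. Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "OPEN"), (2, "CANCELLED"), (3, "CANCELLED")])

# The delete itself: a parameterised DELETE carrying the filter condition
cur.execute("DELETE FROM orders WHERE status = ?", ("CANCELLED",))
deleted = cur.rowcount
conn.commit()

remaining = cur.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(deleted, remaining)  # 2 rows deleted, 1 row left
```

The same WHERE clause that drives the delete is what a job would otherwise use as a transformer constraint, so pushing it down to the database is usually simpler than staging the rows.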
- Fri Feb 10, 2006 4:01 pm
- Forum: Site/Forum
- Topic: Thanks for the 10,000 posts Ray
- Replies: 10
- Views: 7849
- Thu Feb 09, 2006 12:23 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Equal Usage of CPU
- Replies: 6
- Views: 1460
- Wed Feb 08, 2006 3:25 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Equal Usage of CPU
- Replies: 6
- Views: 1460
Equal Usage of CPU
Hello All,
We have an 8-CPU production machine where both EDW and DWH jobs will run. Can someone tell me how to do load balancing so that the two environments, EDW and DWH, share the same amount of CPU?
Any ideas will be appreciated.
Thanks
- Wed Feb 01, 2006 10:46 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Help in fixing Errors in Routine
- Replies: 5
- Views: 1465
- Wed Feb 01, 2006 12:35 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Help in fixing Errors in Routine
- Replies: 5
- Views: 1465
Help in fixing Errors in Routine
Hello All, I need some help in fixing this routine. The purpose of the routine is to get the job status, job info, start time, end time and link counts. I am calling it as an after-job routine in the same job. The code is as follows: $INCLUDE DSINCLUDE JOBCONTROL.H JobVar=trim(field(InputArg,",",1))...
- Thu Jul 28, 2005 1:40 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Date Calculations
- Replies: 1
- Views: 980
Date Calculations
Hello All, Can I know how to calculate an age by subtracting two dates? For example, (2004-12-01) - (2000-12-01) should give me the result 4 years, and I also want to round the number down if I get any decimals. For example, if the result is 5.75 I want to take it as 5, and if the result is less tha...
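In a DataStage routine this would be done with date conversions, but the rounding-down logic itself is simple. A sketch in Python (the example dates are the ones from the post):

```python
from datetime import date

def age_in_years(start: date, end: date) -> int:
    """Whole years between two dates, rounding down (so 5.75 years -> 5)."""
    years = end.year - start.year
    # If the anniversary hasn't been reached yet in the end year,
    # count one year less.
    if (end.month, end.day) < (start.month, start.day):
        years -= 1
    return years

print(age_in_years(date(2000, 12, 1), date(2004, 12, 1)))  # 4
print(age_in_years(date(2000, 12, 1), date(2006, 9, 1)))   # 5 (5.75 rounds down)
```

Comparing (month, day) tuples avoids floating-point day arithmetic entirely, which is what makes the truncation exact.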
- Wed Jul 06, 2005 2:57 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: lookup Functionality
- Replies: 1
- Views: 768
lookup Functionality
Hello All,
Does everyone agree that a hash lookup without any constraint is like a left outer join, and with the constraint Not(refLink.NOTFOUND) it is an equi join?
Your comments are appreciated
-rcil
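The equivalence can be modelled with a plain dictionary standing in for the hash file (the keys and rows below are made up):

```python
# A hash file lookup modelled as a Python dict. Data is illustrative only.
ref = {"A1": "alpha", "B2": "beta"}            # reference (hash file)
stream = [("A1", 10), ("B2", 20), ("C3", 30)]  # primary input rows

# No constraint: every input row flows through; unmatched keys get
# empty lookup columns -- left outer join behaviour.
left_outer = [(k, v, ref.get(k, "")) for k, v in stream]

# Constraint Not(refLink.NOTFOUND): only matched rows survive --
# equi (inner) join behaviour.
equi = [(k, v, ref[k]) for k, v in stream if k in ref]

print(len(left_outer))  # 3 -- all input rows kept
print(len(equi))        # 2 -- unmatched C3 dropped
```

The only difference between the two results is whether the unmatched row is kept with empty reference columns or filtered out, which is exactly the left-outer versus equi-join distinction.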
- Sat Jul 02, 2005 6:34 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Multiple Transformers?
- Replies: 1
- Views: 892
Multiple Transformers?
Hello All, If I have 6 hash file lookups to be performed in a job, how good is it to use 6 transformers, one per lookup, versus one transformer for all the lookups? Which is the better way and what is the difference? I usually use one transformer for all 6 hash lookups but on...
- Wed Jun 15, 2005 2:46 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: How to append the data????
- Replies: 4
- Views: 1383
Re: How to append the data????
Raj,
You can use the UNIX cat command to concatenate all 5 files into one by executing the command with ExecSH in a before- or after-job routine, based on your requirement.
Hope that helps.
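For reference, the effect of `cat file1 ... file5 > combined` can be sketched in Python as well (file names below are made up; a real before-job routine would just shell out to cat):

```python
import os
import shutil
import tempfile

# Create five small stand-in files (names are hypothetical)
tmpdir = tempfile.mkdtemp()
parts = []
for i in range(1, 6):
    path = os.path.join(tmpdir, f"part{i}.txt")
    with open(path, "w") as f:
        f.write(f"line from file {i}\n")
    parts.append(path)

# Concatenate them into one output file, same effect as
#   cat part1.txt part2.txt ... part5.txt > combined.txt
combined = os.path.join(tmpdir, "combined.txt")
with open(combined, "wb") as out:
    for path in parts:
        with open(path, "rb") as src:
            shutil.copyfileobj(src, out)

line_count = sum(1 for _ in open(combined))
print(line_count)  # 5
```

Copying in binary mode preserves the files byte-for-byte, which matters if the inputs have trailing delimiters or carriage returns you want left intact.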
- Mon May 16, 2005 10:52 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Continuous Delimiters
- Replies: 5
- Views: 1479
Thanks for all the help. The problem I have is that I am not able to read my source file through DataStage. The file is in the format below, with a carriage return (^M) at the end of each line, and the rows do not all have the same number of columns: 123;abc;42001;10.00;56789;;;;^M 128;bdc;32111;12.00;^M 890...
- Sun May 15, 2005 11:22 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Continuous Delimiters
- Replies: 5
- Views: 1479
Continuous Delimiters
Hello All, I need help in reading an input flat file which is in the format below: 123;000456;32.11;89765;;;;; I'd like to write it to two different files, one taking out the extra semicolons and the second populating 0 into those extra columns. Result: 1) 123;000456;32.11;89765 2)123;000456;32.11;89765;0;...
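Both outputs can be derived from one split of the line. A Python sketch of the two transformations, using the sample row from the post (stripping `\r` also handles the ^M carriage returns mentioned in the follow-up):

```python
line = "123;000456;32.11;89765;;;;;\r\n"
fields = line.rstrip("\r\n").split(";")  # rstrip drops the ^M too

# 1) Drop the trailing empty columns
trimmed = fields[:]
while trimmed and trimmed[-1] == "":
    trimmed.pop()
print(";".join(trimmed))   # 123;000456;32.11;89765

# 2) Pad the empty columns with 0 instead. Note a trailing ';' makes
#    split() produce one more empty field -- adjust if the record
#    layout says the final ';' is a terminator rather than a separator.
padded = [f if f != "" else "0" for f in fields]
print(";".join(padded))
```

Deciding whether the last semicolon is a field separator or a line terminator changes the padded column count by one, so it is worth confirming against the file's record layout first.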
- Wed May 11, 2005 4:16 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: DataStage Case Conditions
- Replies: 1
- Views: 1043
DataStage Case Conditions
Hello All, I am using Teradata as my source stage, which is not case specific, meaning a query like where col1 = 'ABC' will get you both ABC and abc unless you use the CASESPECIFIC function for an exact-case match. The question I have here is, when I try to use the Teradata stage and write use...
- Wed May 11, 2005 11:56 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Improve Seq Stage Performance
- Replies: 5
- Views: 2189
Re: Improve Seq Stage Performance
Thank you for the inputs. As the hash file size limit is 2GB, and in the UAT environment I have 24 million records (it could be more in production), will the hash file handle something this big?
thanks
- Wed May 11, 2005 10:43 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Improve Seq Stage Performance
- Replies: 5
- Views: 2189
Improve Seq Stage Performance
Hello All, I have a total of three DS jobs. The first two are extracts from the database, each joining 5 tables with a total of 40 columns, and in the third job I am sorting and concatenating those two tab-delimited output files using ExecSH as a before routine, and in the DS job I split into fo...