Search found 28 matches

by abhi989
Tue Nov 27, 2007 10:29 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: insert update record bounce going into DB2 enterprise stage
Replies: 2
Views: 1748

insert update record bounce going into DB2 enterprise stage

I have a parallel job in which I generate a default record through a Row Generator stage and insert it into a DB2 Enterprise stage. My problem is that if a record already exists, I do not want to update it; I want that record to be ignored due to a change of business rules, and the upsert will set the record value to somethin...
by abhi989
Thu Nov 15, 2007 10:06 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: sort going through remove duplicate error
Replies: 1
Views: 2395

sort going through remove duplicate error

I have a PX job that reads from a database table, sorts on two keys, removes duplicates on one of those keys, and inserts the records into the database. The current job sorts ascending on the first key and descending on the second key. Based on new requirements, I have to sort in descending order on...
by abhi989
Mon Oct 22, 2007 7:14 am
Forum: IBM® InfoSphere DataStage Server Edition
Topic: add_to_heap() - Unable to allocate memory hashfile error
Replies: 2
Views: 1848

My fault! I forgot to search; I found lots of links on this topic. I will read them over. Thank you.
by abhi989
Mon Oct 22, 2007 12:02 am
Forum: IBM® InfoSphere DataStage Server Edition
Topic: add_to_heap() - Unable to allocate memory hashfile error
Replies: 2
Views: 1848

add_to_heap() - Unable to allocate memory hashfile error

I am loading a sequential file to a hashfile and I am getting the following warning message:

"add_to_heap() - Unable to allocate memory"

Does anyone know what is causing this warning, and how I would fix it?

Thanks
by abhi989
Wed Sep 26, 2007 11:07 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Sequence Performance Query
Replies: 4
Views: 1916

You also need to consider that if you fire this many jobs at the same time, the CPU might run out of processing power and start to choke (take the hardware into account). Also, logging information into the job log can be done by calling a custom routine that doe...
by abhi989
Tue Sep 25, 2007 2:25 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Improving Job Performance
Replies: 5
Views: 2903

In a server job: by doing most of the work in your source query (at the database level, if the source is a database); by going to Job Properties > Performance and tuning the row buffers; by eliminating redundancy; by combining stages where possible; and by creating indexes at the database level (if updating larg...
by abhi989
Tue Sep 25, 2007 2:20 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Job Locked
Replies: 3
Views: 2210

You can also do it through the DataStage shell. I wouldn't recommend it if you are not familiar with this process, as you could potentially corrupt the job!
by abhi989
Tue Sep 25, 2007 2:12 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: DSSendMail - Attachment - DataStage Server 7.0
Replies: 7
Views: 5624

Two ways you can do it. 1) Through unix: add an Execute Command activity stage in a sequence, set up as follows. Under Command: /bin/ls. Under Parameters: directory_where_the_file_is; mail -s 'subject' email_address_you_want_to_mail_to < full_path_to_the_location_of_the_file. 2) Add a Notification Activity st...
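The key idiom in option 1 is redirecting the file into mail's standard input so it becomes the message body. Since mail flags vary by unix flavor and the address and file names above are placeholders, this sketch demonstrates the same redirection with wc instead of actually sending mail:

```shell
# Placeholder report file (hypothetical contents).
printf 'row 1\nrow 2\n' > report.txt

# With mail this would be:
#   mail -s 'Job report' user@example.com < report.txt
# The same '< file' redirection feeds any command's stdin:
wc -l < report.txt    # counts the report's lines: 2
```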
by abhi989
Tue Sep 25, 2007 1:56 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: warning
Replies: 4
Views: 2547

You have number(10) defined in your source column, and you have Decimal 38,10 defined in DataStage. Changing Decimal 38,10 to just 38 (removing the scale value) should fix the problem.
by abhi989
Tue Sep 25, 2007 1:50 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: special characters in sequential file
Replies: 3
Views: 2665

You can use cat -tv file_name to see if you have null padding (^@: if you see this symbol, it means you have null padding). If you do, you can use the following to get rid of it: add a transformer between the input_col and the output_col, set input_col nullable to yes, and output_col null...
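The detection and cleanup described above can also be done outside DataStage; a minimal sketch, assuming a POSIX shell with cat and tr (the file names padded.txt and clean.txt are placeholders):

```shell
# Create a sample file containing NUL padding (placeholder data).
printf 'abc\000\000def\n' > padded.txt

# 'cat -v' renders each NUL byte as ^@, confirming the padding.
cat -v padded.txt                       # abc^@^@def

# 'tr -d' deletes every NUL byte, producing a clean copy.
tr -d '\000' < padded.txt > clean.txt
cat clean.txt                           # abcdef
```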
by abhi989
Tue Sep 25, 2007 1:41 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: ABOUT HASH PARTITIONING
Replies: 5
Views: 2637

Hash is a key-based partitioning algorithm. It can be used with any data type for the key value. The bytes (or characters) making up the key are processed through a function that yields a positive integer called a hash value. This number is divided by the number of partitions and the remainder i...
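The hash-then-modulo step described above can be illustrated with a toy shell sketch. This is not DataStage's actual hash function; cksum's CRC stands in for it, and the keys and partition count are made up:

```shell
nparts=4                     # number of partitions (illustrative)
for key in CA NY TX CA; do
  # cksum yields a positive integer derived from the key's bytes.
  h=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
  # The remainder after dividing by the partition count selects the
  # partition, so identical keys always land in the same partition.
  echo "$key -> partition $(( h % nparts ))"
done
```

Note that the two CA rows map to the same partition number, which is exactly the property that lets key-based joins and duplicate removal work after partitioning.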
by abhi989
Thu Aug 30, 2007 8:15 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Max Multiple instance
Replies: 2
Views: 1653

This depends on what other processes you are going to run simultaneously when you run your multiple-instance job.
by abhi989
Thu Aug 30, 2007 8:11 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Job Creating Table Lock
Replies: 4
Views: 1671

One approach could be to send one set of records from the transformer to the DB2 table and dump the other set into a sequential file, then take the file and process it against the DB2 table in a second job.
by abhi989
Thu Aug 30, 2007 6:41 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: WriteHash() - Write failed error when loading hashfile
Replies: 5
Views: 3081

Disabling 'stage write cache' produces the same result.
by abhi989
Thu Aug 30, 2007 4:56 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: WriteHash() - Write failed error when loading hashfile
Replies: 5
Views: 3081

WriteHash() - Write failed error when loading hashfile

Hi everyone, I have a job in my production environment which loads around 40 million records from a sequential file to a hashfile. It works fine in production. When I imported the same job into my QA environment, it gave me the following error after loading around 95% of the records. Error: JobDs155SeqToA...