Replace existing rows



chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

sheema wrote: OK, but when I select only key1 as the key in the OraOCI stage and run the job, I get the warning

ORA-01008: not all variables bound
I would bet that's because you are not letting the stage generate the SQL, which would let it automatically 'adjust' to your changes. You are using 'user-defined SQL', which is causing your problem.
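
Purely as an illustration of why ORA-01008 shows up (placeholder table and column names, not the actual statement your job generates): if the SQL still carries a bind placeholder for every original key but the link now supplies only key1, the extra placeholder never gets a value.

    -- statement written when key1 and key2 were both marked as keys
    DELETE FROM target_tab WHERE key1 = :1 AND key2 = :2
    -- the link now supplies only key1, so :2 is never bound and
    -- Oracle raises ORA-01008: not all variables bound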
-craig

"You can never have too many knives" -- Logan Nine Fingers
sheema
Premium Member
Posts: 204
Joined: Sat Jul 22, 2006 10:20 am

Post by sheema »

No, I am using the update action 'Replace existing rows completely', but I am still getting the warning
'Not all variables bound'.
What could be the reason?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

In that case, it sounds like a bug. There is a patch on the IBM Information Integration Solutions Portal that may be what you need:
User-defined SQL in Oracle OCI Stage ALL DataStage 7.5.1 05/26/2005 e73423_ALL_751.zip

Description:
Oracle OCI stage (ORAOCI9) incorrectly binds the lookup column values to the SQL statement which is being executed. This results in the Oracle database error "ORA-1008: Not all variables bound" [ECASE 73423].

Also included in this patch is a fix for the SQL Builder in the Oracle OCI stage, which incorrectly converted a "WHERE" clause to a "HAVING" clause [ECASE 74270].

Compatibility Notes:
This zip archive contains the DataStage Client patch for the SQL Builder, and the server plug in packages for all DataStage 7.5.1 server platforms.

Install Notes:
Follow the instructions in the readme contained within this patch to complete the installation. The client patch installation must be completed on every DataStage client computer.
Or call your support provider and request it.
-craig

"You can never have too many knives" -- Logan Nine Fingers
sheema
Premium Member
Posts: 204
Joined: Sat Jul 22, 2006 10:20 am

Post by sheema »

Thanks.

Now I see that 'Replace existing rows completely' does not serve my purpose. For example, say I have the following 3 rows already existing in the Oracle table:

1001,abc
1001,def
1001,rst

whereas I have these 2 rows in the source file:

1001,abc
1001,def

I need to design a job so that only the records
1001,abc
1001,def remain in the Oracle table. The row 1001,rst should be deleted.
How do I do this in a single job?
urshit_1983
Participant
Posts: 73
Joined: Wed Jun 28, 2006 3:27 pm
Location: NJ

Post by urshit_1983 »

Please correct me if I am wrong, but this could work:

Seqfile-----------TX1------------Temp_Table----------TX2-------Target_table


In Temp_Table do an insert, and in Target_Table do 'Delete existing rows' WHERE temp_table.column1 = target_table.column1 AND temp_table.column2 != target_table.column2.
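
As a rough SQL sketch of the intent of that delete (remove target rows whose column1 appears in the temp table but whose column1/column2 pair does not; the names are just the placeholders from the diagram above):

    DELETE FROM target_table t
     WHERE t.column1 IN (SELECT s.column1 FROM temp_table s)
       AND NOT EXISTS (SELECT 1
                         FROM temp_table s
                        WHERE s.column1 = t.column1
                          AND s.column2 = t.column2)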
"Nobody is expert in Everything,
But Everybody is expert in Something."
rachitha
Participant
Posts: 30
Joined: Wed Jun 28, 2006 10:53 am

Post by rachitha »

sheema wrote:Thanks.

Now I see that 'Replace existing rows completely' does not serve my purpose. For example, say I have the following 3 rows already existing in the Oracle table:

1001,abc
1001,def
1001,rst

whereas I have these 2 rows in the source file:

1001,abc
1001,def

I need to design a job so that only the records
1001,abc
1001,def remain in the Oracle table. The row 1001,rst should be deleted.
How do I do this in a single job?
Hi Sheema,

It seems that you need only the current source file data in the target table. Try the following option; I hope this works for your requirement.
First truncate the target table, then insert the rows from the source file.
Implement this in DataStage by giving two links to the target ODBC stage:
one: to truncate the target table
two: to insert the current rows from the source file
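
In plain SQL, the two links amount to something like this (target_table is a placeholder name):

    TRUNCATE TABLE target_table
    INSERT INTO target_table (key1, key2) VALUES (:1, :2)

Note, though, that this throws away the rows for every key, not just the keys present in the current file - which is the issue raised in the next reply.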
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Truncating the table won't work, because you will end up losing the other keys which are not in your current list. Now it's time to accept the existing design. One improvement you can make: there is no need to aggregate to get the distinct list of keys; you can use the 'Delete existing rows only' option with the single key marked in it.
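
With only key1 marked as the key, the delete the stage generates for each incoming row is roughly this (a sketch - the exact generated text may differ):

    DELETE FROM target_table WHERE key1 = :1

Duplicate key1 values just re-issue the same delete, which is why the separate aggregation step becomes unnecessary.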
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
pbatchu
Charter Member
Posts: 20
Joined: Thu Aug 17, 2006 11:53 am
Location: Boise

Post by pbatchu »

I think you can do this by making multiple jobs.

1. Create a dataset as the target.
2. Get the distinct first-column values from the dataset and delete records from the target table using the criteria (WHERE targettable.column = datasetfirstcolumn).

3. Insert the values from dataset into target table.

- Pavan Batchu
rachitha
Participant
Posts: 30
Joined: Wed Jun 28, 2006 10:53 am

Post by rachitha »

Update Action: Clear the table, then insert rows

Will this work?
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

rachitha wrote:Update Action: Clear the table, then insert rows

Will this work?
It again deletes the whole table and inserts the new rows.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
sheema
Premium Member
Posts: 204
Joined: Sat Jul 22, 2006 10:20 am

Post by sheema »

Yes, I do not want to delete the whole table and insert the rows.
I want to delete the rows depending on key1 and insert the rows with key1 and key2 as keys.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Build two jobs - one to get the unique values of Key1 from the data and issue the deletes; the second job can then load the data.
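
In SQL terms the two jobs boil down to something like this (placeholder table name; the first statement runs once per distinct Key1 value, the second for every row of the source file):

    -- job 1: delete by Key1 only
    DELETE FROM target_table WHERE key1 = :1
    -- job 2: insert the full rows
    INSERT INTO target_table (key1, key2) VALUES (:1, :2)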
-craig

"You can never have too many knives" -- Logan Nine Fingers
sheema
Premium Member
Posts: 204
Joined: Sat Jul 22, 2006 10:20 am

Post by sheema »

OK Craig, when I did it earlier in 2 separate jobs it worked fine. I was trying to implement it in a single job. Do you think it is a better idea to do it in 2 different jobs?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Your mixture of key fields makes a single-job solution... messy at best. I'm sure one could come up with a way (I've done things like this in the past), but you're much better off with a two-job solution. IMHO.
-craig

"You can never have too many knives" -- Logan Nine Fingers
sheema
Premium Member
Posts: 204
Joined: Sat Jul 22, 2006 10:20 am

Post by sheema »

Thanks, Craig.