Replace simple PX jobs with server jobs (DS 7.5.2)

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

kjaouhari
Participant
Posts: 325
Joined: Thu Mar 16, 2006 10:22 am

Replace simple PX jobs with server jobs (DS 7.5.2)

Post by kjaouhari »

Hello all !

I have to replace PX jobs with server jobs. The PX jobs take a long time, even though they are very simple: each one just copies a table from one Oracle database to another Oracle database.

My question is: what is the best choice for replacing these PX jobs? I should point out that we don't have PX, just DS Server.

And do you think server jobs can run more quickly than PX jobs?

Thanks in advance !
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Server jobs start up more quickly than parallel jobs, even parallel jobs on a single-node configuration. But I don't think this is your issue. How are you loading the rows into the target table? That is, what write rule are you following? If they are all new (insert) rows, prefer to use a bulk loader.
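As a hypothetical illustration of the bulk-load route suggested above, a minimal SQL*Loader control file for an insert-only load might look like this (the data file, table, and column names are invented placeholders):

```sql
-- Minimal SQL*Loader control file; all names below are hypothetical.
LOAD DATA
INFILE 'source_extract.dat'
APPEND
INTO TABLE TARGET_TABLE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(COL1, COL2, COL3)
```

It would be invoked along the lines of `sqlldr userid=user/pass@target_db control=load.ctl direct=true`; direct-path loading (`direct=true`) bypasses conventional INSERT processing, which is why it tends to be much faster for insert-only loads.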
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Re: Replace simple px jobs by server jobs DS 7.5.2

Post by chulett »

kjaouhari wrote:My question is: what is the best choice for replacing these PX jobs?
A DBA? It sounds like a simple exp and imp could be a better choice if there are no transformations going on. Or, as Ray notes, sqlldr.
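For completeness, the exp/imp route might look roughly like the sketch below. The credentials, connect strings, and table name are all placeholders, and a DBA would choose the appropriate options for the real tables:

```shell
# Hypothetical export from the source instance and import into the target.
# All usernames, passwords, connect strings and table names are placeholders.
exp scott/tiger@source_db tables=BIG_TABLE file=big_table.dmp direct=y

imp scott/tiger@target_db tables=BIG_TABLE file=big_table.dmp ignore=y
```

Direct-path export (`direct=y`) skips the SQL evaluation layer on the way out, and `ignore=y` lets the import load into a pre-existing table instead of failing on the CREATE.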
-craig

"You can never have too many knives" -- Logan Nine Fingers
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne

Post by vmcburney »

We get so many server-job-to-parallel-job questions that this might be the first conversion in the other direction. Are you completely dropping Parallel Edition? There is a summary in the DataStage tip "What do you mean I need to optimise small jobs?". You can add a config-file job parameter to restrict a small-volume parallel job to just one node, or you can replace it with a server job to reduce job start-up time.

The Oracle change capture stage may help you synchronize these tables more efficiently.
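The "restrict to one node" idea above works by pointing the job's APT_CONFIG_FILE parameter at a single-node configuration file. A minimal sketch, with a hypothetical hostname and resource paths:

```
{
  node "node1"
  {
    fastname "etl_host"
    pools ""
    resource disk "/data/datasets" { pools "" }
    resource scratchdisk "/data/scratch" { pools "" }
  }
}
```

With only one node defined, the parallel engine starts a single player process per stage, which cuts the start-up overhead that dominates small-volume parallel jobs.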
kjaouhari
Participant
Posts: 325
Joined: Thu Mar 16, 2006 10:22 am

Post by kjaouhari »

Today, a parallel job can take 6 hours to load tables from Oracle to Oracle.
We want to replace the PX jobs with DS server jobs. In fact, there is no transformation between the source and the target.
The tables are very big and contain a lot of fields...
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

It is almost certain that the fastest solution will involve Oracle to Oracle utilities, and not use DataStage at all.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

In other words, as noted before - pawn it off on your DBAs. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne

Post by vmcburney »

Consider change data capture or replication between the databases, so you don't have to move all the data every day. Neither of these is in the standard DataStage product, though both are available as options for IBM Information Server along with DataStage 8.
Chuah
Participant
Posts: 46
Joined: Thu May 18, 2006 9:13 pm
Location: Melbourne

Post by Chuah »

kjaouhari wrote:Today, a parallel job can take 6 hours to load tables from Oracle to Oracle.
We want to replace the PX jobs with DS server jobs. In fact, there is no transformation between the source and the target.
The tables are very big and contain a lot of fields...
Hi,
If you can drop and re-create the target tables, then you can write a simple SQL script to:
1. Drop the target tables
2. CREATE TABLE AS SELECT * FROM <SOURCE TABLE>
Add the appropriate tablespace options if you must.

That's pretty fast, and if your tables are partitioned, even better.
This way you don't land the data on disk first and then write it to the database.
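The two steps above might be sketched as follows. The schema, table, tablespace, and database-link names are all hypothetical; since the source sits in a different Oracle instance, this assumes a database link exists between the two:

```sql
-- Hypothetical drop-and-rebuild of one target table.
-- target_schema, source_schema, target_ts and source_link are placeholders.
DROP TABLE target_schema.big_table;

CREATE TABLE target_schema.big_table
  TABLESPACE target_ts
  NOLOGGING
AS SELECT * FROM source_schema.big_table@source_link;
```

Because the copy happens entirely inside the database, no rows pass through a client tool, and NOLOGGING (where the recovery policy allows it) avoids most redo generation during the rebuild.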

Chin