
Posted: Tue Jun 29, 2010 4:28 pm
by kduke
Your requirement is to load into a table. You also need start and end times. I think modifying my job is the easiest. Shell scripts are nice if you are comfortable with them. Calling a routine needs to take place within the framework of a job.

Posted: Tue Jun 29, 2010 9:20 pm
by adityavarma
Duke,

I have taken your job and modified it, and it is working fine, but the issue is that we need to implement the same in parallel.

In your job, the Sequential File stage reads the output of dsjob -report and writes it to an XML file; the XML Input stage then reads that data, which is loaded into a table.
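
For reference, a minimal sketch of the kind of command that produces that XML report (the project name, job name, and file path here are placeholders):

Code:

    # Write the job's run report in XML format so a Sequential File
    # stage can read it. "dsproject" and "MyJob" are placeholder names.
    $DSHOME/bin/dsjob -report dsproject MyJob XML > /tmp/MyJob_report.xml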

But in a parallel job I am unable to find an alternative to the XML Input stage, as that stage is not supported in parallel.

Can you please suggest how to proceed further?

Posted: Tue Jun 29, 2010 11:19 pm
by ray.wurlod
adityavarma wrote: we need to implement the same in parallel
Why?!!

You are processing ONE ROW. Where's the sense in using parallel execution technology for that?!!

Resist stupid requirements!

Posted: Tue Jun 29, 2010 11:41 pm
by adityavarma
Ray,

I am sorry, my mistake; I should have stated the reason for using PX jobs.

The production environment is supported by IBM, and they will not be supporting server jobs.

Posted: Wed Jun 30, 2010 1:30 am
by ray.wurlod
You are the customer. Server jobs are - and will remain - part of the product. You are within your rights to demand that IBM support server jobs. Particularly since so few things go wrong with server jobs!

Posted: Wed Jun 30, 2010 6:11 am
by chulett
I fail to see how any vendor can refuse to support their own product. Not something I would stand for. :?

Posted: Wed Jun 30, 2010 1:05 pm
by kduke
By the way, it is one row per partition per stage for PX jobs. You need to be careful: either aggregate across all the partitions for PX, or add the partition ID to the grain.
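
To illustrate, a hypothetical sketch (the table and column names are made up): if each partition writes its own row, a query against the table has to roll the partitions up, something like:

Code:

    -- Hypothetical table and column names.
    -- Collapse the per-partition rows to one row per job and stage.
    SELECT   job_name,
             stage_name,
             SUM(row_count)  AS total_rows,
             MIN(start_time) AS start_time,
             MAX(end_time)   AS end_time
    FROM     job_run_stats
    GROUP BY job_name, stage_name;

Alternatively, add a partition ID column to the key so each partition's row is distinct at the finer grain.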