Hi,
If I have two job activities in a sequence, the first being a parallel job and the second a server job, and the output of the first job is the input to the second:
Do we need to convert the .ds (dataset, for example) to a sequential file before passing it on to the server job, or will the server job be able to read the .ds files directly?
I can't test it right now because I haven't got Parallel Extender installed.
And one more thing: I read in the documentation that we can't use Unix commands such as mv or rm on datasets. Does that mean we can't use any Unix commands on them, such as cat?
Regards
parallel and server job in a sequence
A dataset consists of a descriptor (the file you actually refer to in the job), which contains such things as the schema layout and links to the physical data files. You can move the descriptor file around at will; it is very small. You cannot move the actual data files around without modifying the descriptor.
Server jobs can only read sequential files. You could, in theory, write a server job to binary-read the data files once you know their layout, but I cannot see that the effort would be worth it. To save disk I/O on very big files, you can write a PX job that reads the dataset and writes it to a named pipe, and a server job that reads that named pipe.
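The named-pipe hand-off above relies only on standard Unix FIFOs. Here is a minimal shell sketch of the mechanism, with made-up file names; in a real setup the PX job would be the writer and the server job's Sequential File stage would be the reader:

```shell
#!/bin/sh
# Hypothetical illustration of the PX-to-server named-pipe hand-off.
PIPE=/tmp/px_to_server.fifo
rm -f "$PIPE"
mkfifo "$PIPE"

# Writer side: stands in for the PX job writing rows to the pipe.
# Runs in the background, since opening a FIFO for write blocks
# until a reader opens the other end.
printf 'row1\nrow2\n' > "$PIPE" &

# Reader side: stands in for the server job reading the pipe as a
# sequential file. It sees EOF when the writer closes its end.
RESULT=$(cat "$PIPE")
wait
rm -f "$PIPE"
echo "$RESULT"
```

Because the data streams through the pipe and is never landed on disk, this avoids writing and re-reading a large intermediate file.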
Parallel datasets are stored within the PX workspaces. You must use PX commands to browse or list those datasets. If you need a Server job to access PX datasets, you must pull out the datasets and place them where the Server job can see them.
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
A server job doesn't have a Dataset stage, so a sequential file will be the only option.
You may need an Orchestrate dump of the dataset into a sequential file, either in the after-job subroutine of the PX job or in the before-job subroutine of the server job.
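Such a dump can be done with Orchestrate's administration utility, orchadmin. A hedged command sketch, assuming a working PX environment (orchadmin on the PATH, APT_CONFIG_FILE set); the paths here are hypothetical and the exact flags vary by release, so check orchadmin's own help:

```shell
# Hypothetical after-job command: dump a PX dataset to a flat file.
# daily_load.ds and the target path are made up for illustration.
orchadmin dump /data/work/daily_load.ds > /data/work/daily_load.txt
```

The resulting flat file can then be read by the server job's Sequential File stage.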
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
Once you install your client, you will be able to get the tutorial parallel jobs. Regarding the Orchestrate manuals, you can find them on the IBM or Ascential site.
Or do a search; I recall ray.wurlod has provided a direct link to the Orchestrate guide.
If you can't reach it, post back.
Log in twice to this link to find another link to the site.
http://dsxchange.com/viewtopic.php?t=92 ... 257fd966e7