All objects in the Job become PLUG objects.
Moderators: chulett, rschirm, roy
Hi,
I have come across a particular issue and would like to know if anyone has faced something similar.
The issue is that all the objects in the job are converted to Plug objects, and I would like to know the reason behind this conversion.
Also, I cannot save a job using the "Save as" option, and parallel jobs get converted to server jobs.
I have contacted the DataStage support people, and their first investigation points towards maximum disk space, inodes, and directory limits within the project.
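For anyone who wants to check those first two themselves: free space and free inodes on the filesystem holding the project can be inspected from the shell. This is only a sketch; the path is a stand-in for your real project directory, and it assumes a Linux-style `df` (on Solaris the inode report is `df -F ufs -o i` rather than `df -i`).

```shell
#!/bin/sh
# Check free space and free inodes for the filesystem that holds
# the DataStage project. PROJECT_DIR is a stand-in; point it at
# your real project directory (e.g. /opt/datastage/Projects/MyProj).
PROJECT_DIR=/tmp

# Free kilobytes on the filesystem.
df -k "$PROJECT_DIR"

# Free inodes (Linux-style df; on Solaris use: df -F ufs -o i).
df -i "$PROJECT_DIR"
```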
Currently the maximum disk space is about 26 GB.
The point of concern is that support suggested the maximum number of jobs in the project should be around 1000-1200.
Is there some limit on the maximum number of jobs in a project?
What is the reason behind the objects being converted to Plug objects?
Thanks a lot for your time.
regards
Pankaj
Failures push you towards Success.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
That normally happens when the client machine is running out of memory. Close some other windows and/or re-boot. In particular, it is NOT a problem on the server. You have sent your DataStage support people on a wild goose chase.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi Ray,
Thanks for the reply, though I am not sure it is a client problem: when I tried opening the same job on another system, the same Plug objects appeared again. If it were a client issue, I should still have been able to open the job on a different system. Can you please help?
IBM Support came back with limiting the number of jobs in the project to 1000-1200, which I believe is not a good thing; I wonder if this could be argued.
Please help.
Thanks in advance
Regards
Pankaj
Failures push you towards Success.
Might be worth asking them why.
Certainly the client memory gets hit hard when uploading all of the jobs (which happens, for example, for the drop down list in a Job activity in a job sequence or, more generally, when refreshing the Repository view).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Dear all,
When I took the Plug object problem to support, this is what they came up with:
"My investigation reveals that there was some sort of corruption of the job during a 'rename' or 'save as'. The most likely reason is that /tmp space ran low or one of the important processes died unexpectedly."
They also informed us that we should look at limiting the number of jobs to 1000-1200 per project, which is the main cause of concern.
Now, as I understand from one of my earlier posts asking about the maximum number of jobs in a project, DS_JOBOBJECTS is the file that stores all the design information about the jobs, and as long as that file stays within 2.2 GB on a 32-bit system, I can have as many jobs as I need. (My current DS_JOBOBJECTS size is just 0.2 GB.) I am now counting the number of jobs that I have.
viewtopic.php?p=200980#200980
I am trying to relate the number of jobs to the file size; would that be the right thing to do? Any other perspectives?
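For what it's worth, the on-disk size of DS_JOBOBJECTS can be checked from the shell (a sketch; the project path in the usage comment is hypothetical). A dynamic (type 30) hashed file is actually a directory holding DATA.30 and OVER.30, so `du` sums both parts:

```shell
#!/bin/sh
# Print the on-disk size (KB) of the DS_JOBOBJECTS hashed file in a
# DataStage project. A dynamic (type 30) hashed file is a directory
# containing DATA.30 and OVER.30, so du -sk covers both parts.
jobobjects_size_kb() {
    project_dir=$1
    du -sk "$project_dir/DS_JOBOBJECTS" | awk '{print $1}'
}

# Usage (the path is a stand-in for your real project directory):
# jobobjects_size_kb /opt/datastage/Projects/MyProject
```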
Let me know
Thanks all for your replies.
Failures push you towards Success.
On Solaris that limit is 32K subdirectories in a directory. That would allow in excess of 5000 jobs. Do you really have that many?
Code:
SELECT COUNT(*) FROM DS_JOBS WHERE NAME NOT LIKE '\\%';
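If it helps, the subdirectory count can also be compared against that per-directory limit straight from the shell (a sketch; the project path in the usage comment is hypothetical, and it assumes a `find` that supports `-mindepth`/`-maxdepth`, as GNU and BSD find do):

```shell
#!/bin/sh
# Count subdirectories directly under a DataStage project directory
# and compare the result against the 32K (32767) per-directory limit
# that some filesystems impose.
count_subdirs() {
    find "$1" -mindepth 1 -maxdepth 1 -type d | wc -l
}

# Usage (stand-in path):
# n=$(count_subdirs /opt/datastage/Projects/MyProject)
# test "$n" -lt 32767 && echo "under the limit" || echo "at risk"
```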
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I've seen that limit hit here with just over 3,000 jobs, because of all the other type 30 (Dynamic) files created. The temporary solution was to change all LOG and STATUS hashed files to type 2, then go about making the project smaller. Too many CopyOfCopyOfCopy... jobs out there.
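Before resizing anything, it can be useful to see how many of those files there actually are. A dynamic (type 30) hashed file shows up on disk as a directory containing DATA.30, so the candidates can be listed from the shell (a sketch; the project path in the usage comment is hypothetical, and the actual conversion to a static type is done with the UniVerse RESIZE command, not shown here):

```shell
#!/bin/sh
# List the dynamic (type 30) RT_LOG* and RT_STATUS* hashed files in a
# DataStage project. A type 30 file is a directory containing DATA.30,
# so those are the candidates for conversion to a static file type.
list_dynamic_logs() {
    project_dir=$1
    for d in "$project_dir"/RT_LOG* "$project_dir"/RT_STATUS*; do
        [ -f "$d/DATA.30" ] && echo "$d"
    done
}

# Usage (stand-in path):
# list_dynamic_logs /opt/datastage/Projects/MyProject | wc -l
```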