job freezing
Moderators: chulett, rschirm, roy
Hi people.
I have a job that joins several tables with a large amount of data and writes a sequential file in parallel. It runs fine in the development environment, but it does not run in the certification or production environments.
In those environments, the job freezes after extracting some of the tables. It doesn't abort and doesn't finish; it just freezes.
Can anyone help me? :(
Please excuse my English...
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Welcome aboard! :D
You really haven't given enough information.
The obvious question: what's different between development (where it works) and the other environments (where it doesn't)?
How many rows are processed before the job "freezes"? Do any of these get written into the sequential file?
What stage types exist in the job? Are you doing the joins in the database or in the DataStage job? How much sorting are you doing? Are there any warnings logged in the certification environment? Are there any null values in the data and, if so, how are you handling them?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
[quote="ray.wurlod"]Welcome aboard! :D
You really haven't given enough information.
The obvious question; what's different between development (where it works) and other environments (where it doesn't)?
How many rows are processed before the job "freezes"? Do any of these get written into the sequential file?
What stage types exist in the job? Are you doing the joins in the database or in the DataStage job? How much sorting are you doing? Are there any warnings logged in the certification environment? Are there any null values in the data and, if so, how are you handling them?[/quote]
Hi Ray,
Thanks for your help.
The answers:
There are no differences between the environments, except for the amount of data;
The job "freezes" after roughly 1,500 rows;
Stages: DB2, Sequential File, Transformer, Join, Aggregator;
The join is done in the DataStage job;
Pre-sort in the sequential file;
No warnings in any environment.
I discovered the problem is in a Transformer stage; the design is more or less like this:
SEQ -->> TRANSF -->> TRANSF -->> JOIN...
The data stops at the first TRANSF, but I don't know why...
Thanks
Wahil,
Since the problem is difficult to localize, can you take out the DB2 write stage and test to see whether it might be causing your problems in the non-development environments? Are you writing to the same DB2 instance as in development? Can you monitor the database locks, or have the DBA do this while your job is running?
Try this!
This is a well-known problem in PX 7.5. Try the following:
1. Add environment variable from Reporting-->$APT_JOB_NOMON to your job parameter list.
2. Change its value from False to True.
3. Recompile the job and run
You can also do this at the Sequencer level if you are running the job from a Sequencer, but make sure you only set it in ONE place. Do not set it in both the Job and the Sequencer.
Let me know if this works for you and I will explain how this parameter interacts with the job.
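As a sketch of the steps above, setting the variable from a shell wrapper around the dsjob client could look like this (the project and job names are placeholders, and passing it with -param assumes it was added to the job's parameter list in step 1):

```shell
# Sketch: make $APT_JOB_NOMON visible to the job at run time.
# "MyProject" and "MyParallelJob" are placeholder names, not from this thread.

export APT_JOB_NOMON=True      # the value the steps above suggest (default False)

# If the variable was added to the job's parameter list (step 1), it can
# also be overridden per run through the dsjob client:
# dsjob -run -param '$APT_JOB_NOMON=True' MyProject MyParallelJob

echo "APT_JOB_NOMON=$APT_JOB_NOMON"
```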
Cheers!
Pneuma.
416-828-4338
pneumalin@yahoo.com
Re: Try this!
Hi. Pneumalin,
we changed the job and it's not "freezing" anymore. Now we have another problem: "APT_CombinedOperatorController(1),1: Expected identifier; got: "0001"". We don't use the job sequencer.
I opened a case with Ascential, but so far nothing...
Thanks a lot!
Wagner
HSBC BRAZIL
Re: Try this!
Wagner,
wahil wrote: Hi, Pneumalin,
we changed the job and it's not "freezing" anymore. Now we have another problem: "APT_CombinedOperatorController(1),1: Expected identifier; got: "0001"". We don't use the job sequencer.
I opened a case with Ascential, but so far nothing...
Thanks a lot!
Wagner
HSBC BRAZIL
Glad to hear the freezing issue went away! However, the problem you mention above is not related to JOB_NOMON. You should take a look at the job design and test it until it works. You can export your job and send it to me if you want me to take a look!
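A standard PX debugging step for errors that name APT_CombinedOperatorController (not mentioned in the thread, so treat it as an assumption) is to disable operator combination, so the log names the real underlying stage instead of the combined controller:

```shell
# Sketch: disable operator combination so each stage runs as its own
# operator and error messages point at the actual stage.
export APT_DISABLE_COMBINATION=True

# Re-run the job with this set (e.g. via dsjob or the Director) and
# re-read the log: the failing operator should now be named directly.
echo "APT_DISABLE_COMBINATION=$APT_DISABLE_COMBINATION"
```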
Pneuma.
416-828-4338
pneumalin@yahoo.com
Re: Try this!
Great, pneumalin.
Thanks a lot! :D
The problem "APT_CombinedOperatorController(1),1: Expected identifier; got: "0001"" was also resolved. What we did: we simply deleted some stages and re-created them...
Regards
Wagner HSBC BRASIL