Job compile issue
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
Job compile issue
I have a job with quite a few Join stages and four or five Transformer stages.
This is necessary since columns generated in one transformer are used to derive columns in the subsequent transformer stage.
I've been experiencing a peculiar problem when I go to compile this job: it never compiles the first time. I get an error saying:
##F IIS-DSEE-TFCM-00005 11:52:54(011) <main_program> Fatal Error: Added field has duplicate identifier(): APT_TOMainloop (JxXXX.Tr_CI_001)
This is clearly not true, since when I recompile immediately everything is fine. The only irritant is that I cannot compile this particular job along with the others in the project, so I cannot make full use of the 'multiple job compile' function. What's bizarre is that it always compiles the second time around.
Anyone faced a similar error?
Tony
BI Consultant - Datastage
With something like this I would suspect some potential metadata corruption causing the issue.
Are you doing a regular compile on your recompile or are you doing a force compile? See if the error repeats each time with a force compile.
If you haven't already, export the job, delete it, and then reimport it.
If that doesn't help, then start narrowing the issue down by removing a stage at a time.
Check your join stages to make sure that every non-key column has a unique name.
Mike
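If the job reliably compiles on the second attempt, one pragmatic interim measure while you investigate is to script a retry around the compile step. The sketch below is generic; the `dscc` command-line compiler invocation in the comment is an assumption about your client installation, not confirmed syntax, so check your own client version for the exact binary and flags.

```shell
# compile_with_retry: run a compile command and retry once if the first
# attempt fails, matching the "fails first time, succeeds second" symptom.
compile_with_retry() {
    if "$@"; then
        return 0
    fi
    echo "first compile attempt failed; retrying once..." >&2
    "$@"
}

# Hypothetical usage (dscc flags vary by client version and are an
# assumption here):
# compile_with_retry dscc /h dshost /u user /p passwd myproject /j JxXXX
```

This doesn't fix the underlying metadata problem, but it restores an automated batch compile until the root cause is found.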
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
Mike wrote:With something like this I would suspect some potential metadata corruption causing the issue. Are you doing a regular compile on your recompile or are you doing a force compile? See if the error repeats each time with a force compile.
Will try this ASAP.
Mike wrote:If you haven't already, export the job, delete it, and then reimport it.
This won't solve the problem, since after each modification I export the job from our development environment to our test environment and it never compiles on the first try.
Mike wrote:If that doesn't help, then start narrowing the issue down by removing a stage at a time. Check your join stages to make sure that every non-key column has a unique name.
I have many other jobs with multiple joins where the columns are probably not unique. In my experience, though, DataStage gives a warning in that case; compilation does not actually fail, and here it fails only the once.
Tony
BI Consultant - Datastage
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
Does anyone have any tips for this?
I have a new problem in the same job: after managing to compile the job,
I've noticed that the partitioning I've set upstream in the transformer stage is not preserved.
The next stage is a Pivot Enterprise stage, which needs incoming data sorted and hash partitioned. So to prevent warnings I have to clear the partitioning on the previous transformer stage.
However, on saving the job, compiling it, closing it and reopening it, I see that the upstream partitioning reverts to Default (Propagate).
This is troublesome since the job runs with a warning, and my sequencer which calls this job stops right away.
Tony
BI Consultant - Datastage
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
I vaguely recall seeing this behaviour. Try opening the job, making the change, and saving the job. Then, without compiling, open the job again, make the change again if necessary, and save. Don't make any other change to the job.
You can try opening the job a third time to check that the change has stuck, or just compile it.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Premium Member
- Posts: 425
- Joined: Sat Nov 19, 2005 9:26 am
- Location: New York City
- Contact:
TonyInFrance,
One of the developers on our team faced a variant of this issue. Making a copy of the job, deleting the original, renaming the copy to a completely different name, compiling, and then renaming it back to the original name fixed the issue.
Give it a minute or two after you delete the original job... When the XMETA database is busy (in our case it lives in a very busy shared database), it takes some time to reflect the changes.
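The "give it a minute or two" step can be made less hit-and-miss by polling until the repository no longer lists the deleted job before reimporting. The polling helper below is generic; `dsjob -ljobs <project>` does list the jobs in a project, but the project and job names in the usage comment are placeholders.

```shell
# wait_until: poll a check command until it succeeds or the retry budget
# runs out. Interval defaults to 5 seconds (override with WAIT_INTERVAL).
wait_until() {
    tries=$1; shift
    while [ "$tries" -gt 0 ]; do
        if "$@"; then
            return 0
        fi
        tries=$((tries - 1))
        sleep "${WAIT_INTERVAL:-5}"
    done
    return 1
}

# job_gone: true once the job no longer appears in the project's job list.
job_gone() {
    ! dsjob -ljobs "$1" | grep -qx "$2"
}

# Hypothetical usage: wait up to about a minute, then reimport the copy.
# wait_until 12 job_gone myproject JxXXX && echo "safe to reimport"
```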
Julio Rodriguez
ETL Developer by choice
"Sure we have lots of reasons for being rude - But no excuses
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
ray.wurlod wrote:I vaguely recall seeing this behaviour. Try opening the job, making the change, and saving the job. Then, without compiling, open the job again, make the change again if necessary, and save. Don't make any other change to the job. You can try opening the job a third time to check that the change has stuck, or just compile it.
This worked for one of the jobs. I had two quasi-identical jobs (i.e. same logic) with different filters, and while saving a couple of times without compiling and then compiling once and for all worked for the first copy, it didn't work for the second.
JRodriguez wrote:One of the developers in our team faced a variant of this issue. By making a copy of the job, deleting the original, rename the copy to a complete different name and compile, then rename it to the original name...fixed the issue. Give a minute or two after you delete the original job...
This didn't work either: I created a copy of the job and deleted the original, but the new job showed the same symptoms.
The workaround I ended up using was to insert a Copy stage between the transformer and the Pivot Enterprise stage. The upstream partitioning stays at Default (Propagate), since it refuses to remain at Clear, and in the new Copy stage I clear the partitioning so that the data reaching the Pivot Enterprise stage is partitioned as required.
Tony
BI Consultant - Datastage