Non integer value returned for field. Unable to set to Null

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

sumesh.abraham
Participant
Posts: 153
Joined: Thu May 11, 2006 1:52 am
Location: Bangalore

Post by sumesh.abraham »

I am using a BASIC Transformer stage in a parallel job. One column is defined as Integer, and its value is mapped from a job parameter. The column is not nullable, and an integer value will always be passed to it.

I am seeing intermittent failures with this job, and the error is
Non integer value returned for field id. Unable to set to Null.
Has anyone faced this issue? Any thoughts on what could be causing it?
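For what it's worth, the engine's complaint boils down to a strict integer parse failing on the parameter value, with no null fallback allowed. A minimal Python analogy of that strictness (the function name and behaviour are my own illustration, not DataStage code):

```python
def parse_int_param(value, default=None):
    """Illustrative analogy only: a strict integer parse of a job-parameter
    string, rejecting anything a non-nullable Integer column could not hold."""
    s = str(value).strip()
    try:
        return int(s)  # rejects '', '12.5', 'abc' with ValueError
    except ValueError:
        if default is None:
            # mirrors the job abort: no null allowed, no fallback supplied
            raise ValueError(f"Non integer value returned: {value!r}")
        return default

print(parse_int_param("42"))        # 42
print(parse_int_param("12.5", 0))   # falls back to 0
```

If the parameter can legitimately arrive empty or non-numeric, guarding the derivation (or the parameter default) along these lines avoids relying on the engine's null handling.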
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Are you SURE it's not nullable? Something in the stage wants to set the field to null (perhaps because of a failed lookup?). It also seems to be detecting non-integer values, so your assertion that there will always be an integer passed to it is looking fairly shaky too.

Check in the job log (or detailed status view) to determine the value passed to the parameter.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.

Post by sumesh.abraham »

Thanks for the reply.

The column is non-nullable. I believe the problem happens when the value of the job parameter is assigned to the column in the column derivation. Are there any known issues with the BASIC Transformer stage in such cases?

Post by ray.wurlod »

I am not aware of any. But there are unknown unknowns ...

Create a reproducible case and submit it to your official support provider.

Post by sumesh.abraham »

Thanks for the inputs.

The issue is now happening consistently, with strangely different errors.

Sometimes the job fails at one or both BASIC Transformer stages with the error
Unable to run job - -2.
In other cases it fails at either BASIC Transformer stage with the error
Non-Integer value returned for field vendor_product_id. Unable to set to null.
The only difference I can see between the two BASIC Transformer stages is the column definition: it is Decimal 5 in one and Integer 5 in the other (my bad - they should have been the same data type).

There is no node map constraint defined on either BASIC Transformer stage. Is there a requirement that both BASIC Transformer stages run on local nodes? I checked the configuration file: there are 4 nodes, and the fastname for each node shows the same server. I am thinking of constraining both BASIC Transformer stages to node1 via a node pool to see how the job runs.

Appreciate your inputs.
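The node-pool idea above can be sketched in the parallel configuration file. This is only an illustrative fragment - the pool name, fastname, and resource paths are my own placeholders, not our actual file:

```
{
  node "node1"
  {
    fastname "etlserver"
    pools "" "bt_pool"  /* extra pool reserved for the BASIC Transformer stages */
    resource disk "/ds/data" {pools ""}
    resource scratchdisk "/ds/scratch" {pools ""}
  }
  node "node2"
  {
    fastname "etlserver"
    pools ""
    resource disk "/ds/data" {pools ""}
    resource scratchdisk "/ds/scratch" {pools ""}
  }
}
```

Each BASIC Transformer stage would then name bt_pool in its node pool constraint (Stage > Advanced), so only node1 runs those operators.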

Post by ray.wurlod »

I don't believe running in one node will have any effect in this case. But by all means try it. Just run using a single node configuration file - you don't need to set up node pools, particularly in an SMP environment. Or compile in trace mode and force sequential execution by that means.
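Ray's suggestion amounts to pointing $APT_CONFIG_FILE at a one-node configuration file for the run, e.g. (fastname and paths are placeholders):

```
{
  node "node1"
  {
    fastname "etlserver"
    pools ""
    resource disk "/ds/data" {pools ""}
    resource scratchdisk "/ds/scratch" {pools ""}
  }
}
```

With this file the whole job runs on a single node, with no node pools to set up.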

Post by sumesh.abraham »

Thanks for the inputs.

Our DataStage Admin found a huge number of locks on the job, which seems to have caused it to fail. Once the locks were cleared, the job ran fine, and the issue did not happen thereafter.

This led me to recheck the job design. All the job does is read a file, do lookups against tables, call a few server routines for business-logic checks, and then load a few tables. The Admin pointed out that the job locks are created when the BASIC Transformer stages run. Is anyone aware of similar cases where jobs with BASIC Transformer stages cause locks to be created? I am inclined to review the server routines called from the BASIC Transformer and see whether I can achieve the same logic in a parallel Transformer so that this issue does not pop up again.

Appreciate your inputs here.

Post by ray.wurlod »

Locks in Information Server, locks in DataStage or locks in the database?

What kind of locks?

Post by sumesh.abraham »

Sorry for not being clear! The locks were in DataStage. When the job failed, there were a few mutex errors too.

Post by ray.wurlod »

Mutexes (mutually exclusive semaphores) are simply how locks are implemented on your operating system. It looks like you had some timeouts, and whatever caused them is in the past. That makes current diagnosis pretty much impossible.