Not Null Columns of Dataset showing as Nullable

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rohit_mca2003
Participant
Posts: 41
Joined: Wed Oct 08, 2008 9:19 am

Not Null Columns of Dataset showing as Nullable

Post by rohit_mca2003 »

Hi,

I am writing data to a Dataset that has a few columns defined as Not Null. All the values are populating properly in the dataset.

But when I look at the dataset schema in Data Set Management, it shows 2 columns as NULLABLE, even though I declared those columns as NOT NULL.

Runtime Column Propagation is also OFF for both the dataset and the job.

Could you please help me resolve this issue?

Regards,
Rohit
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Check your work, double check, and have a colleague check it. I can't see how this could happen. What exact version of DataStage?

Maybe have the janitor look at it as well (old joke, but a good one).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
guilherme007
Participant
Posts: 4
Joined: Thu May 31, 2007 6:53 am

Similar Problem...

Post by guilherme007 »

I am having a similar problem, using DataStage 7.5.2, in a parallel job.

In this job I am using RCP, and the only fields defined are those in the SQL query and in the schema file, which I use to define the columns of the output sequential file.

The job is only this:
======================================
ORACLE_STAGE -> TRANSFORMER -> SEQUENTIAL_FILE
======================================

The problem I am having is that *one* field that is definitely not null in Oracle cannot be written to the output file (DataStage says it is null, but it is not). This field is a string in Oracle and I want it as a number in the output file, so I am converting it with the Oracle TO_NUMBER function. (I am testing with a single row that I am sure is OK; the query returns correctly in PL/SQL, but not in DataStage.)

I also tried writing it as a string (without the TO_NUMBER function), and then it works, but with a wrong number! I really don't understand this.

I have already tried a lot of things to solve this, and I don't know what to look for in the documentation. I am starting to wonder if this is a bug in RCP when used with a schema on sequential files.

If I write to a new file without the SCHEMA option, using only RCP, the problematic field is written correctly, so the SCHEMA (which I set directly in the properties of the output file) seems to be related to the problem!

The field's definition in the schema is this:
FIELD :nullable string[11];

If someone has an idea of how to solve this problem, I would be very grateful!

Thanks in advance,

Guilherme
girija
Participant
Posts: 89
Joined: Fri Mar 24, 2006 1:51 pm
Location: Hartford

Post by girija »

You can use the IsValid function in the transformer to check whether the value is a valid number. If it is not, try assigning a default value. If you are using Oracle's TO_NUMBER, use NVL(TO_NUMBER(TRIM(Field)), <default value>).
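Girija's Oracle-side suggestion, written out as a sketch (the table and column names here are made up for illustration, with 0 as the default):

====================================
SELECT NVL(TO_NUMBER(TRIM(my_field)), 0) AS my_field_num
  FROM my_table;
====================================

Note that NVL only covers the case where the trimmed value is NULL; if the string is non-numeric, TO_NUMBER raises ORA-01722 ("invalid number") rather than returning NULL, so truly dirty data still needs the IsValid check on the DataStage side.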
guilherme007
Participant
Posts: 4
Joined: Thu May 31, 2007 6:53 am

Solved

Post by guilherme007 »

Hi Girija,

I just solved the problem, and I opened a question on another related subject:
viewtopic.php?p=321253#321253

The c_format was causing the problem: DataStage was expecting an int64 instead of an int32 in the schema file. The error shown in this case is:
"FIELD is null but no null export handling is defined"

The solution is below:

From
====================================
FIELD :nullable int32 { c_format='%+012d',width=12,default=0 };
====================================
To
====================================
FIELD :nullable int64 { c_format='%+012llu',width=12,default=0 };
====================================


Thanks very much for your answer; I will test whether your solution works for me as well.

Guilherme