When is a NULL not a NULL?
Hi All
Weird things are afoot - or at least I think they are weird.
I have a job which reads an input XML file and extracts several Seq files.
Subsequent jobs then process these sequential files. So far so good.
However, I have a situation where a record in the XML file does not come with a particular value, so this value is set to NULL. This is what should happen. The subsequent job then takes the sequential file with the NULL value and loads it into an Oracle DB using an OCI stage.
Now the fun part: the NULL value is part of the key, and in the DB the column is set to not accept null values. Yet when I view the record it shows no data. I did a SELECT LENGTH(fieldname) on the offending record and it came back with a length of 1 - clearly not empty. So I tried to find out the ASCII value of the character using SELECT ASCII(fieldname), and it came back with 0, which as far as I know is NULL?
This is reinforced as another job then reads that table with the "NULL" value and tries to insert it elsewhere and it is rejected for being NULL.
Does any of this make sense? Does anybody know of a way of determining what the offending value is, preferably via some oracle command ... the temporary sequential files are long gone.
Any help is appreciated.
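For what it's worth, the behaviour described above (length 1, ASCII value 0) is consistent with the field holding a single NUL character rather than a true SQL NULL. A minimal Python sketch of the distinction (Python here only stands in for the Oracle results; the variable names are illustrative):

```python
# A single NUL character (code point 0) is a real one-character string,
# whereas None (Python's stand-in for SQL NULL) is the absence of a value.
nul_field = chr(0)       # one-character string containing NUL

print(len(nul_field))    # 1 -- matches SELECT LENGTH(fieldname)
print(ord(nul_field))    # 0 -- matches SELECT ASCII(fieldname)
print(nul_field == "")   # False: NUL is not the empty string
```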
Re: When is a NULL not a NULL?
ShaneMuir wrote: Does anybody know of a way of determining what the offending value is, preferably via some oracle command ...
If you want to check whether a particular column is NULL in Oracle, do the following. Say you need to check if ColumnA is NULL:
Code:
Select dump(ColumnA) from TableA;
If you have the string 'NULL' you get a result like 'Typ=1 Len=4: 78,85,76,76'
If you have a space you get a result like 'Typ=1 Len=1: 32'
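The same byte-level inspection can be mimicked outside Oracle. A hedged Python sketch that produces DUMP()-style output (the format string approximates Oracle's, it is not the exact implementation, and `dump_like` is an illustrative name):

```python
def dump_like(value):
    """Rough imitation of Oracle's DUMP(): show length and byte codes.
    Returns None for a missing value, as DUMP(NULL) returns NULL."""
    if value is None:
        return None
    codes = ",".join(str(b) for b in value.encode("latin-1"))
    return f"Typ=1 Len={len(value)}: {codes}"

print(dump_like("NULL"))   # Typ=1 Len=4: 78,85,76,76  (the string 'NULL')
print(dump_like(" "))      # Typ=1 Len=1: 32           (a space)
print(dump_like(chr(0)))   # Typ=1 Len=1: 0            (a NUL character)
print(dump_like(None))     # None                      (a true NULL)
```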
Hope that helps!
Narasimha Kade
Finding answers is simple, all you need to do is come up with the correct questions.
-
- Premium Member
- Posts: 210
- Joined: Wed Feb 16, 2005 7:17 am
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Char(0) is not NULL, it is NUL. It is a valid character from Control Set 0.
NULL is "the value that means unknown" and is different in different databases. Internally in DataStage it is represented using Char(128). When reading/writing sequential files you specify how NULL is represented.
Len(Char(0)) returns 1.
Len(@NULL) returns 0, which is sometimes a useful fact.
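An illustrative Python analogue of those facts (this is not DataStage BASIC; `None` stands in for @NULL):

```python
nul = chr(0)      # Char(0): the NUL character, a real one-byte value
print(len(nul))   # 1 -- like Len(Char(0))

null = None       # stand-in for @NULL, "the value that means unknown"
print(0 if null is None else len(null))  # 0 -- like Len(@NULL)

marker = chr(128) # Char(128): DataStage's internal NULL representation
print(ord(marker))  # 128
```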
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod wrote:Char(0) is not NULL, it is NUL. It is a valid character from Control Set 0.
Ok then - now for my next question: how did this NUL value end up in the data? It wasn't parsed in the XML (there was no value for this field), so the XMLInput stage should have replaced that value with a SQL NULL (as "Replace empty values with NULLs" is selected).
ray.wurlod wrote: When reading/writing sequential files you specify how NULL is represented.
In all target/source sequential stages, on the Format tab the Default NULL string field remains empty, as does the NULL string column on the Columns tab.
So somewhere this NUL value has magically appeared. Quite obviously I have missed something.
Maybe the "Replace empty value with NULL" should read "Replace empty value with NUL"? Please continue to investigate, and share your results. If nothing else you will create the reproducible cases demanded by IBM support.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod wrote: Maybe the "Replace empty value with NULL" should read "Replace empty value with NUL"? Please continue to investigate, and share your results. If nothing else you will create the reproducible cases demanded by IBM support.
Ok, a bit more digging and testing and I have more information.
I have found the job where the NULL vs NUL is occurring. The job has the following structure:
Code:
SeqFile ---> Trf1 ---> HashFile ---> Trf2 ---> DB
1. The HashFile stage actually takes 2 inputs from Trf1, one for header, one for transaction. The key for the hash file is made up of 4 fields. The field containing the null value is one of these key fields.
2. The output from the hash file is actually 1 stream (header) and 2 references (transaction, depending on type).
3. No transformation is done on the field containing the null value. It is passed through as is.
The sequential files had the following results:
From Trf1: Field = NULL - i.e. null was returned, so obviously the value parsed was NULL. I also did another test, If IsNull(field) Then 'NULL' Else 'Not NULL', and the value returned was 'NULL'.
From Trf2: Field = 128 - the value read from the hash file was char(128) which is SQLNULL (according to the conversion table)
When loaded into the Oracle DB it seems that the char(128) is interpreted as char(0)?
Can anyone shed any light on whether this is normal behaviour? Especially around the operation of the Hash file, as I thought that a NULL value could not be part of the hash key. It looks like NULL is inserted into that key field, but this field is then read out as char(128). Does this sound right?
Given that 128 is 10000000 in binary it may be that Oracle, set to use ASCII (which, technically, is a seven-bit encoding) ignores the high-order bit. I'm just guessing here, of course.
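That guess can at least be checked arithmetically: stripping the eighth bit from 128 does yield 0. A quick sketch (note this only shows the arithmetic is consistent with the guess, not that this is actually where the conversion happens):

```python
# If some layer forces data through 7-bit ASCII by masking off the
# high-order bit, Char(128) collapses to Char(0) -- the NUL character.
internal_null = 128                  # DataStage's internal NULL marker
print(format(internal_null, "08b"))  # 10000000
print(internal_null & 0x7F)          # 0
print(chr(internal_null & 0x7F) == chr(0))  # True
```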
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod wrote: Given that 128 is 10000000 in binary it may be that Oracle, set to use ASCII (which, technically, is a seven-bit encoding) ignores the high-order bit. I'm just guessing here, of course.
Actually I am more interested in the treatment of a NULL in a HashFile where that NULL is part of the key. Will this be interpreted as @NULL.STR when entered, as opposed to null?
ShaneMuir wrote: Actually I am more interested in the treatment of a NULL in a HashFile where that NULL is part of the key. Will this be interpreted as @NULL.STR when entered, as opposed to null?
After trawling the forum, I think I have found part of my answer.
I always thought that you couldn't enter NULL into a key field in a hash file, but other posts suggest that you can when it is part of a composite key. According to the documentation, the internal representation of NULL is char(128) or @NULL.STR, so I am assuming that in the hash file it would be stored as char(128)? This seems to be confirmed, as when I read the hash file to a sequential file and use the Seq function it returns 128.
The weird part is that when loaded it becomes char(0).
And why does it only do this on key fields? I have plenty of other non-key fields with null values read from the hash file, and these come out as NULL.