load Problem from sequential file
Hi all,
I am trying to load data into an Oracle database from a sequential file. One particular column shows no value when I view the table in TOAD, while all the other columns load properly. The strange part is that length(<column name>) on that column returns the correct value. I tried setting NLS_LANG as an environment variable in the job, using the same value as the database's NLS_LANG setting, but that did not help. The column has a datatype of VARCHAR2(4000).
Can anybody help with this?
Regards,
Anupam Mukherjee.
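As a side note, this combination of symptoms (length() returns a value, but the viewer shows nothing) is exactly what you would see if the stored value were front-padded with ASCII NUL (0x00) bytes, which is what a later reply in this thread found. A minimal Python sketch of the effect, using an illustrative value:

```python
# Illustrative only: a value front-padded with ASCII NUL (0x00) bytes.
# Many GUI tools render such a string as blank because they stop at the
# first NUL, yet the length is still reported correctly.
value = "\x00" * 19 + "TRACK"   # 24 characters total

print(len(value))               # 24  -> length() says data is there
print(value.lstrip("\x00"))     # TRACK -> the actual payload
```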
No, there is actually no null handling happening; it's just a direct column mapping. The database has an NLS setting of American_America.WE8MSWIN1252. I have tried this as an environment variable setting for the job and it's not working. There are no warnings; the only problem is that the data cannot be viewed. Any suggestion or feedback on this would be a great help.
Regards,
Anupam Mukherjee.
Please correct me if I am wrong: you are not able to see the value of one column through TOAD, but length() on that column gives the correct value. If so, try to view the data from the SQL prompt.
Also view the data from the sequential file in DataStage Designer (Sequential File --> View Data) and check that column to make sure the data is actually there.
Hi oacvb,
Yes, that's the problem. There is data in the sequential file. When I check the data via View Data in the Oracle OCI stage, some of the rows show only partially (only 4 characters appear even though the length is 16), and others not at all. If I view the data at the SQL prompt, I see exactly what View Data in the Oracle OCI stage shows. In TOAD nothing shows at all.
Regards,
Anupam Mukherjee.
Get your DBA to perform a raw export of some of the rows from this table so you can see precisely what's stored in them.
Create another DataStage job that selects from the table and writes to a text file. This will help you to establish whether storage is correct and the fault is in the viewer(s), or somewhere else.
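To inspect such an export at the byte level, a small hex-dump helper makes any NUL padding immediately visible. A sketch in Python (the sample value is hypothetical):

```python
def hexdump(field: bytes, n: int = 24) -> str:
    """Return the first n bytes of a field as space-separated hex."""
    return " ".join(f"{b:02x}" for b in field[:n])

# A hypothetical NUL-padded field: 0x00 bytes show up plainly here,
# whereas a text viewer would render them as nothing at all.
print(hexdump(b"\x00" * 3 + b"TRACK"))   # 00 00 00 54 52 41 43 4b
```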
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Anupam_M wrote: If this might help: I tried loading the data to a sequential file and it loaded perfectly, and I can see all the data. It's only when I load it to Oracle that I can't see it, although the load itself is fine.
Your export is from Oracle to a sequential file again, isn't it? If so, that shows the data is loaded correctly into the table; it's just a problem with the view. Check the display options in DataStage.
Impossible doesn't mean 'it is not possible'; it actually means 'NOBODY HAS DONE IT SO FAR'.
We are using the Server edition, and we ran into this same issue this morning. If you do a "select dump(column) from table" you can see what is actually stored in the column.
Our job is defined as follows:
DS job column definition as VARCHAR 4000, Oracle column definition as VARCHAR2(4000), Update action: Update existing rows only.
The sequential file we are loading into the table looks okay. After the job runs, the column appears empty in PL/SQL Developer. When we do a dump of the column we see the following:
Typ=1 Len=24: 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,84,82,65,67,75
The first 19 bytes are ASCII NUL characters (which differ from Oracle NULLs; a dump of a NULL column returns just NULL).
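Those byte values decode directly; the last five bytes are the real payload:

```python
# Byte values copied from the DUMP() output above.
dump_bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
              0, 0, 0, 0, 0, 0, 84, 82, 65, 67, 75]

payload = bytes(dump_bytes).lstrip(b"\x00")
print(len(dump_bytes))           # 24, matching Len=24
print(payload.decode("ascii"))   # TRACK
```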
The following situations resulted in the column containing the correct result:
1) DS job column definition as VARCHAR 500, Oracle column definition as VARCHAR2(500), Update action: Update existing rows only.
2) DS job column definition as VARCHAR 500, Oracle column definition as VARCHAR2(4000), Update action: Update existing rows only.
3) DS job column definition as VARCHAR 3999, Oracle column definition as VARCHAR2(4000), Update action: Update existing rows only.
Our conclusion is that it's a DataStage issue: the update works correctly whether the Oracle column definition is 4000 or less, but once the DataStage column definition is 4000, the update pads the front of the column with ASCII NULs.
By the way, inserting with a DS job column definition of VARCHAR 4000 works fine; it only seems to be an issue on an update!
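Until the root cause is resolved, one defensive workaround is to strip NUL padding from the value, either in a transform before the update or on read-back. A general-purpose sketch, not specific to DataStage:

```python
def strip_nul_padding(value: str) -> str:
    """Remove leading and trailing ASCII NUL (0x00) padding from a value."""
    return value.strip("\x00")

print(strip_nul_padding("\x00" * 19 + "TRACK"))   # TRACK
print(strip_nul_padding("TRACK"))                 # unchanged: TRACK
```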