Issues with metadata copied from server job
Moderators: chulett, rschirm, roy
Hi there,
I have a server job that reads from a sequential file and processes the data. Recently we got EE at our site and we are now trying to convert this particular job to a parallel job (for performance reasons).
I saved the metadata/schema definition from the server job and used the same in the parallel job. When I try to view the data (or run the job), it gives errors like "delimiter not seen, at offset XX".
This is a delimited file and I have all the settings correct. We have many Char fields in this file, and even though I have specified it as a DELIMITED file, when it encounters a Char field it scans through the full declared length in characters and only then looks for the delimiter.
I have changed the definition to VarChar and it started behaving properly, but I don't think that's really a solution, is it?
This is also happening for decimal fields.
Is there anything (a setting?) I am missing?
Thanks in advance.
- Participant
- Posts: 3593
- Joined: Thu Jan 23, 2003 5:25 pm
- Location: Australia, Melbourne
Do you have Char fields that are empty? Even though the file is delimited, the parallel job could be expecting some type of blank padding character. Parallel jobs are more particular about metadata than server jobs. Sounds like changing it to VarChar is the way to go.
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn:Vincent McBurney LinkedIn
Yes, we do have Char fields with no values. Even when we have non-empty values, if the length of the value doesn't match the length specified, we get errors.
We can safely change Char to VarChar, but what about the decimal fields? They give similar errors.
I am not sure if anybody else has run into this situation.
Thank you.
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
Server jobs have no data types.
Parallel jobs are strongly typed. As already noted, a Char(x) column (string[x] in a schema) must have precisely x characters.
A decimal data type value must have no more than the correct number of scale digits (to the right of the decimal point).
All numeric data types must be in range. For example, int8 must be between -128 and +127, and uint8 must be between 0 and 255.
And so on. What exactly is the error for mismatched decimals?
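To illustrate the distinction in schema terms (the field names here are made up for the example), a Char column imports as a fixed-length bounded string, while a VarChar column imports as a variable-length string, and a decimal carries its precision and scale:

```
// Hypothetical schema fragment: fixed Char vs VarChar vs decimal
record {final_delim=end, delim=','} (
    code:string[5];        // Char(5): the import expects exactly 5 characters
    name:string[max=20];   // VarChar(20): anything up to 20 characters is fine
    amount:decimal[7,2];   // precision 7, exactly 2 digits after the decimal point
)
```

With string[5], the import reads exactly five characters before it even looks for the delimiter, which is why short or empty values trigger "delimiter not seen"; string[max=20] accepts any length up to the bound.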
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Decimal and Numeric have a fixed number of decimal places and can therefore be represented exactly in the machine using scaling (within the bounds of machine precision).
Real and Double have an arbitrary number of decimal places and no guarantee of accurate storage at the limits of precision can be made. That's why your "solution" worked.
Better would be to force your data to x decimal places early in the job, perhaps using a Modify stage with decimal_from_dfloat (or decimal_from_decimal) transformation.
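As a sketch of that suggestion (the column names amount and amount_raw are assumptions for illustration), a Modify stage specification that forces a dfloat input to a fixed-scale decimal might look like:

```
// Hypothetical Modify stage specification: force the value to 2 decimal places
amount:decimal[7,2] = decimal_from_dfloat(amount_raw)
```

Placing this early in the job means every downstream stage sees a value that already conforms to the declared decimal[7,2] type.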
Thanks a lot for the info, Ray.
I am reading from a delimited sequential file that contains decimal numbers.
I have the length defined as 5, but as I can't control what arrives in the input file, if a value has fewer than 5 digits it starts complaining. What would be the best way to deal with this case?
I will check whether I can read that value as VarChar and use a Modify stage to convert it to decimal in the stream. What do you think of this approach?
Thanks again!
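The approach described above could be sketched like this (column names are illustrative, not from the original job): import the column as a variable-length string so the Sequential File stage never rejects short values, then convert it downstream:

```
// Hypothetical: import as VarChar in the Sequential File stage...
record {delim=','} (
    amount_str:string[max=10];
)

// ...then convert in a Modify stage specification:
amount:decimal[7,2] = decimal_from_string(amount_str)
```

The benefit is that the import itself cannot fail on length; any genuinely bad values surface at the conversion in the Modify stage, where they can be handled or rejected deliberately.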