Rounding off using schema files
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
Rounding off using schema files
Morning / Afternoon / Evening folks...
So I have this schema file as follows:
record {delim=';',final_delim= 'end_of_record' } (
COETF:nullable string[max=3] { null_field=' '};
...
...
MKRDU:nullable decimal[11,2] {null_field=''};
)
The values I am coming across in the flat file are decimals with 3 digits after the decimal point, so my job truncates them and keeps just 2 places.
Is there any way to ensure that the entire value is read and rounded off to 2 decimal places?
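To make the distinction between truncating and rounding concrete, here is a quick generic-shell illustration (not DataStage itself, just the arithmetic the question is about):

```shell
# A 3-decimal value like the ones in the flat file
val=123.456
# Truncation (what the decimal[11,2] import does): keep only the
# first two digits after the point
truncated="$(echo "$val" | cut -d'.' -f1).$(echo "$val" | cut -d'.' -f2 | cut -c1-2)"
# Rounding to 2 places (the desired behaviour): printf rounds
rounded=$(printf '%.2f' "$val")
echo "$truncated $rounded"
```

Here truncation yields 123.45 while rounding yields 123.46, which is exactly the discrepancy being asked about.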
Thanks & regards
Tony
Tony
BI Consultant - Datastage
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
In that case I can't use a generic job to process and integrate multiple flat files into their respective datasets. I've got RCP activated, and my job consists of a Sequential File input stage, a Column Import stage (which uses the schema file) and a Data Set stage.
This job is called around 100 times, and each time the sequential file name, schema file name and dataset name are passed as parameters.
If I have to use a Modify stage after the Column Import I would have to declare the name of the column to modify, right? This column is probably present in only one of the 100 files I'm processing. Won't this be a problem?
Tony
BI Consultant - Datastage
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Actually, you can take advantage of one of the more obscurely documented features of the Modify stage: entire specifications can be job parameters. So even if the column that needs rounding is dynamic, you can manage this by supplying the specification as a job parameter.
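For illustration only (check the exact conversion name and rounding options against the Parallel Job Developer Guide for your version), the job parameter might carry a specification along these lines, using the MKRDU column from the original schema:

```
MKRDU:decimal[11,2] = decimal_from_decimal[round_inf](MKRDU)
```

A different specification string can then be passed on each of the ~100 invocations, so the column name never has to be hard-coded in the job design.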
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
Thanks Ray, I will try this.
But I hope that if this column is not present at all it won't be an irritant.
This is because among my 100-odd files some have decimals (let's call them Dec001, Dec002, Dec003, etc.) and others don't have them at all, or don't need rounding off.
Tony
BI Consultant - Datastage
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
It purely depends on how you build your schema files. Manually? Automatically? Using a Unix script? An Excel macro? You could build a configuration file like this:

Code:
<schema file name 1>|<modify spec>
<schema file name 2>|<modify spec>

Then retrieve the modify spec with a command like the one below and pass it when calling the job:

Code:
grep <schema file name 1> <your conf file> | cut -d'|' -f2

You can also build your own script to generate the modify spec by parsing your schema file. Something like (not tested):

Code:
grep decimal <your schema file name> > tempfile
while read line; do
    field=$(echo "$line" | cut -d':' -f1)
    intPart=$(echo "$line" | cut -d'[' -f2 | cut -d',' -f1)
    decPart=$(echo "$line" | cut -d',' -f2 | cut -d']' -f1)
    if [ "$decPart" -le 2 ]; then
        continue
    else
        <create your spec here using $field and $intPart and your requirements>
    fi
done < tempfile

Of course you can do something much cleaner.
[edit]
I forgot to mention that you might need to modify your schema file first, otherwise your field will be truncated before import. Maybe import using an unbounded varchar and then convert it with the right rounding function (see a nice summary of the Modify stage)?
[/edit]
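As a quick sanity check of that cut-based parsing, here is the pipeline applied to the MKRDU line from the original schema file (plain shell; the variable names are the script's own):

```shell
# One decimal line from the schema file in the first post
line="MKRDU:nullable decimal[11,2] {null_field=''};"
field=$(echo "$line" | cut -d':' -f1)                    # column name
intPart=$(echo "$line" | cut -d'[' -f2 | cut -d',' -f1)  # precision
decPart=$(echo "$line" | cut -d',' -f2 | cut -d']' -f1)  # scale
echo "$field $intPart $decPart"
```

This extracts MKRDU, 11 and 2, so a line with scale greater than 2 would fall through to the spec-generation branch of the script.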