
Rounding off using schema files

Posted: Wed Jun 19, 2013 6:41 am
by TonyInFrance
Morning / Afternoon / Evening folks...

So I have this schema file as follows:

record {delim=';', final_delim='end_of_record'} (
COETF:nullable string[max=3] {null_field=' '};
...
...
MKRDU:nullable decimal[11,2] {null_field=''};
)

The value I'm coming across in the flat file is a decimal, but it has 3 digits after the decimal point.

Thus my job is truncating this value, keeping just 2 decimal places.

Is there any way to ensure that the entire value is read and rounded off to 2 decimal places?

Thanks & regards

Tony

Posted: Wed Jun 19, 2013 5:25 pm
by ray.wurlod
Not in the schema file. That is a transformation; you would specify it in a Modify or Transformer stage.

Posted: Thu Jun 20, 2013 3:31 am
by TonyInFrance
In that case I can't use a generic job to process and integrate multiple flat files into their respective datasets. I've got RCP activated and my job consists of a Sequential File input stage, a Column Import stage (which uses the schema file) and a Data Set stage.
This job is called around 100 times, and each time the sequential file name, schema file name and dataset name are passed as parameters.

If I have to use a Modify stage after the Column Import I would have to declare the name of the column to modify, right? This column is probably present in only one of the 100 files I'm processing. Won't this be a problem?

Posted: Thu Jun 20, 2013 3:54 am
by ray.wurlod
Actually you can take advantage of one of the more obscurely documented features of the Modify stage: entire specifications can be job parameters. So, even if the name of the column that needs rounding is dynamic, you can manage this by supplying the specification as a job parameter.
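
For example (a sketch, not tested: pModifySpec is a hypothetical parameter name; decimal_from_decimal and its r_type rounding option are documented Modify conversions, but check the exact syntax against your version's Parallel Job Developer's Guide), the parameter could carry a specification such as:

Code: Select all

MKRDU = decimal_from_decimal[round_inf](MKRDU)

The Modify stage's Specification property would then contain nothing but the job parameter reference, e.g. #pModifySpec#.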

Posted: Thu Jun 20, 2013 4:50 am
by TonyInFrance
Thanks Ray. I will try this.
But I hope that when this column is not present at all it won't be an irritant.
This is because among my 100-odd files some have decimals (let's call them Dec001, Dec002, Dec003, etc.) and others don't have them at all or don't need rounding off.

Posted: Thu Jun 20, 2013 9:15 am
by eph
Hi,

You could create a configuration file at the same time you create the schema file. That way, you can parse your configuration before calling the job and pass in the right modify specification.

Eric

Posted: Thu Jun 20, 2013 9:17 am
by TonyInFrance
Please elaborate a bit, Eric. I'm not sure I understand your solution, but it seems super interesting.

Posted: Thu Jun 20, 2013 9:46 am
by eph
It purely depends on how you build your schema files. Manually? Automatically? Using a Unix script? An Excel macro? Then you can build something like:

Code: Select all

<schema file name 1>|<modify spec>
<schema file name 2>|<modify spec>
Then, with a command like the one below, retrieve the modify spec and use it when calling the job:

Code: Select all

grep <schema file name 1> <your conf file> | cut -d'|' -f2
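
Wired into the job call, that could look like this (a sketch; pSchemaFile and pModifySpec are hypothetical parameter names, and the other parameters your job takes are omitted):

Code: Select all

SPEC=$(grep <schema file name 1> <your conf file> | cut -d'|' -f2)
dsjob -run -param pSchemaFile=<schema file name 1> -param pModifySpec="$SPEC" <project> <job name>
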
You can also build your own script to generate the modify spec by parsing your schema file.

Something like (not tested):

Code: Select all

grep decimal <your schema file name> > tempfile
while read line; do
    field=$(echo "$line" | cut -d':' -f1)                    # column name, before ':'
    entPart=$(echo "$line" | cut -d'[' -f2 | cut -d',' -f1)  # precision, between '[' and ','
    decPart=$(echo "$line" | cut -d',' -f2 | cut -d']' -f1)  # scale, between ',' and ']'
    if [ "$decPart" -le 2 ]; then
        continue
    fi
    <create your spec here using $field and $entPart and your requirements>
done < tempfile
Of course you can do something much cleaner :)
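
For instance, a single awk pass could generate the specs (a sketch, not tested; it assumes schema lines shaped like the MKRDU one above, and that decimal_from_decimal[round_inf] is the conversion you want):

Code: Select all

awk -F'[][,:]' '/decimal/ && $4 > 2 {
    printf "%s = decimal_from_decimal[round_inf](%s)\n", $1, $1
}' <your schema file name>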

[edit]
I forgot to mention that you might need to modify your schema file first, otherwise the field will be truncated before import. Maybe import it as an unbounded varchar and then convert it with the right rounding function (the documentation has a nice summary of the modify conversions)?
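
For instance (a sketch, not tested; whether decimal_from_string accepts the r_type rounding option may depend on your version, so treat the [round_inf] part as an assumption to verify), the schema could declare the field as an unbounded string:

Code: Select all

MKRDU:nullable string {null_field=''};

and the modify spec passed to the job could then convert and round it:

Code: Select all

MKRDU = decimal_from_string[round_inf](MKRDU)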
[/edit]