I am reading a comma-delimited file using the Sequential File stage. I defined one column with length 20, but sometimes the data for that column is longer than 20 characters, say 25. The job still runs, silently truncating the remaining 5 characters. Is there any way to check the length of the field dynamically?
The reason I want to do this is that I want to check the lengths of all fields that are defined as VarChar. I could easily do this through a function if I could make the check dynamic.
Unfortunately, there is no built-in or documented way to do this dynamically. The column information is stored in the job details and (if used) in the table definitions file; neither location is documented or described by Ascential.
Instead of "hacking" that information out of the repository, could you approach it from another direction? What about making your first data line a dummy one in which each column is filled with unused data denoting the maximum length? You could then use a stage variable to store each maximum column length and compare subsequent column widths against it.
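To make the dummy-first-row idea concrete, here is a minimal Python sketch of the logic (not DataStage code): the first line of the file is a filler row sized to each column's maximum width, the widths are remembered in a list (standing in for stage variables), and every subsequent row is compared against them. The file contents below are invented for illustration.

```python
import csv
import io

# Hypothetical input: the first line is a dummy row whose filler values
# define each column's maximum width (col1 = 10, col2 = 5).
sample = io.StringIO(
    "XXXXXXXXXX,XXXXX\n"
    "shortval,ok\n"
    "this_is_far_too_long,ok\n"
)

reader = csv.reader(sample)
# Derive the maximum widths from the dummy row (the "stage variables").
max_widths = [len(cell) for cell in next(reader)]

# Flag every cell in the real data that exceeds its column's width.
oversized = []
for row_num, row in enumerate(reader, start=2):
    for col, (cell, limit) in enumerate(zip(row, max_widths)):
        if len(cell) > limit:
            oversized.append((row_num, col, cell))

print(max_widths)   # [10, 5]
print(oversized)    # row 3, column 0 exceeds 10 characters
```

The advantage of this approach is that the job never needs to read the repository: the limits travel with the data itself.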
Upstream of the Transformer stage, define the column to be larger, say VarChar(255). Downstream of the Transformer stage, define it to be the desired size. Use a constraint expression in the Transformer stage to direct rows where the column is of the appropriate size onto one output link, and use a reject or alternative output link to capture the rows with an oversized column.
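As a sketch of what that routing does (in Python rather than a Transformer, with made-up row data), the constraint `Len(in.code) <= 20` sends conforming rows down one link and everything else to a reject link; the 20-character limit mirrors the original poster's column definition:

```python
TARGET_LEN = 20  # the downstream VarChar(20) definition

# Invented sample rows read through the wide VarChar(255) definition.
rows = [
    {"code": "A" * 18},   # fits the target size
    {"code": "B" * 25},   # oversized, should be rejected
]

good_link, reject_link = [], []
for row in rows:
    # Equivalent of the Transformer constraint: Len(in.code) <= 20
    if len(row["code"]) <= TARGET_LEN:
        good_link.append(row)
    else:
        reject_link.append(row)

print(len(good_link), len(reject_link))  # 1 1
```

Nothing is silently truncated this way: the oversized rows survive on the reject link where they can be reported or repaired.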
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod wrote:.... Use a constraint expression in the Transformer stage to direct rows where it is of the appropriate size onto one output link, and use a reject or alternative output link to capture the rows with an oversized column.
Ray,
that's fine and dandy when you are dealing with one or two columns of data that I need to use a constraint expression on where the
ArndW wrote:Unfortunately, there is no builtin or documented way to do this dynamically. The column information is stored in the job details and (if used) in the table definitions file. Neither location is documented or described by Ascential.
Instead of "hacking" that information from the repository, could you approach it from another direction? What about making your first data line a dummy one which has each column filled with unused data that denotes the maximum length. Then you could use a stage variable to store the maximum column length and use that to compare subsequent column widths.
Arnd,
I like your solution, but it might require too much rework on the DS side. People on my current project don't feel that comfortable with DS; they would rather add some additional SQL scripts to solve the problem. That may not be that bad, but I am trying to avoid adding another 2 hours to a job stream that already runs for 6-7 hours.
ray.wurlod wrote:How do you propose to source "(whatever the size of the target field is)"?
Ray,
that IS the challenge. I will have the target data structure in the metadata definition; the challenge I am facing is to use that definition in the conditional statement that needs to go inside the Transformer.
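One way to picture the goal is a single metadata-driven check instead of a hand-written constraint per column. The Python sketch below assumes the target definitions can be exported into a simple mapping of column name to declared VarChar length (the column names and lengths are invented); one generic function then covers every VarChar column:

```python
# Hypothetical export of the target metadata: column name -> VarChar length.
metadata = {"cust_name": 20, "city": 15, "country_cd": 2}

def oversized_columns(row, meta):
    """Return the columns whose value exceeds its declared length."""
    return [col for col, limit in meta.items()
            if len(row.get(col, "")) > limit]

# Invented sample row: only cust_name breaks its 20-character limit.
row = {"cust_name": "A" * 25, "city": "Springfield", "country_cd": "US"}
print(oversized_columns(row, metadata))  # ['cust_name']
```

The same idea in DataStage terms would be a routine that takes the column list and lengths as arguments, so the Transformer needs only one call rather than one expression per column.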