
How to read .dtl files

Posted: Tue Oct 17, 2006 7:10 am
by rajan.n
Hi friends,

I am creating a detail file from the before-job subroutine ExecSH. The command I am running is:

#$DSPATH#/dsjob -report #$DSPROJECT# #JOBNAME# DETAIL >#$DSLOGDIR#/#JOBNAME#.detail

Since this is my first attempt, I am not sure whether this is correct or not.

Can anybody tell me how to read that file in the same job that creates it through the routine? I want to parse the data into different tables/files.

Thanks much in advance.

Posted: Tue Oct 17, 2006 8:49 am
by ray.wurlod
To read the file specify #$DSLOGDIR#/#JOBNAME#.detail in the Filename property.

Do whatever you like with its rows in the job design. Multiple output files require multiple output links, probably from a Transformer stage.

Posted: Tue Oct 17, 2006 8:57 am
by rajan.n
Hi Ray,
Thanks.
What I understand is that to read the .dtl file we use a Sequential File stage, giving it the file name. OK, fine.
When we read XML, we use the XML Input stage to split the single column into multiple individual columns, right? Do we have any such stage for accessing this .dtl file, or can we access it directly through the Transformer?
Once again, thanks for the reply.


Posted: Tue Oct 17, 2006 7:22 pm
by ray.wurlod
The Sequential File stage reads a line at a time. You would typically pass this line into a Transformer stage and have as many outputs from that as there are different line types. You can derive whatever data you need from each input line on its own output link, and then send it wherever you will. Constrain the Transformer stage to operate in sequential mode or in a single-node node pool - there's no point splitting five or six lines into parallel operation.
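As a rough sketch of that approach (in Python rather than DataStage syntax, purely my own illustration), this routes each line type of a `dsjob -report ... DETAIL` output to its own bucket, the way a Transformer with one output link per line type would:

```python
# Route each report line to a per-line-type bucket, as a Transformer with
# one output link (and constraint) per line type would.
report_lines = [
    "**************************************************",
    "STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file",
    "Generated: 2006-10-18 00:09:32",
    "Job start time=2006-09-28 08:56:26",
    "Job end time=2006-09-28 08:56:44",
]

outputs = {"job": [], "generated": [], "start": [], "end": [], "other": []}
for line in report_lines:
    if line.startswith("STATUS REPORT FOR JOB"):
        outputs["job"].append(line)
    elif line.startswith("Generated:"):
        outputs["generated"].append(line)
    elif line.startswith("Job start time"):
        outputs["start"].append(line)
    elif line.startswith("Job end time"):
        outputs["end"].append(line)
    else:
        outputs["other"].append(line)  # asterisk separators, blanks, etc.
```

Each `elif` branch plays the role of one output link's constraint; the final `else` is the "reject" link for lines you do not care about.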

Posted: Wed Oct 18, 2006 12:47 am
by rajan.n
Hi Ray, I have linked the single column coming from the Sequential File stage to a Transformer, and linked that same column to all the columns I have in the other transformations.

seq->trans->4 other trans | remaining job.


But my job is aborting for several reasons; one is below. Can you please guide me on what to do to fix this?

ERROR :

T2,0: Un-handled conversion error on field "STAGENAME " from source type "string[max=255]" to destination type "decimal[10,0]":
source value="
**************************************************
STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file
Generated: 2006-10-18 00:09:32
Job start time=2006-09-28 08:56:26
Job end time=2006-09-28 08:56:44
Job elapsed time=00:00:18
Job status=2 (Fin"; the result is non-nullable and there is no handle_null to specify a default value.

There are columns in the other transformers that have this decimal type, but how will the single column be divided into my required fields?
If you have any documents regarding this, can you please forward them to me?
Thanks much once again for your early response.

Posted: Wed Oct 18, 2006 5:54 am
by Kirtikumar
Are your Sequential File stage properties set to read the complete report for one job as a single row, or are you reading the report line by line?

If you are reading the whole report from the sequential file as a single row with a single column, then you will have to use a combination of the substring and Index() functions to find the values for your target columns. E.g., the column value is

Code: Select all

 
************************************************** 
STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file 
Generated: 2006-10-18 00:09:32 
Job start time=2006-09-28 08:56:26 
Job end time=2006-09-28 08:56:44 
Job elapsed time=00:00:18 
Job status=2 (Fin
Then find the character position of the text Job start time= in the column using Index(), add the length of the search string (i.e. Job start time=) to it, and extract the next 19 characters.
So target column JobStartTime in transformer should have derivation as -
InputCol[Index(InputCol, 'Job start time=') + 15, 19] and it will give you the value 2006-09-28 08:56:26.
In the same way you can get the values for the other columns. But for this to work, the most important thing is that the details for one job must be read as a single row with a single column.
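A rough Python sketch of that substring-plus-Index idea (my own illustration, not DataStage code; the `extract` helper is hypothetical), applied to the single-column report value:

```python
# The whole report arrives as one value in one column; pull each target
# field out by locating its label and taking a fixed-length slice.
content = (
    "**************************************************\n"
    "STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file\n"
    "Generated: 2006-10-18 00:09:32\n"
    "Job start time=2006-09-28 08:56:26\n"
    "Job end time=2006-09-28 08:56:44\n"
    "Job elapsed time=00:00:18\n"
)

def extract(label: str, length: int) -> str:
    # Index(content, label) + Len(label) in DataStage terms,
    # then take the next `length` characters.
    pos = content.find(label) + len(label)
    return content[pos : pos + length]

job_start = extract("Job start time=", 19)
job_end = extract("Job end time=", 19)
print(job_start)  # 2006-09-28 08:56:26
```

The fixed length of 19 works here because the timestamps have a constant `YYYY-MM-DD HH:MM:SS` width.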

Posted: Wed Oct 18, 2006 7:28 am
by rajan.n
Hi, thanks a lot for your reply Kirtikumar.
Yes, my Sequential File stage reads the report as a single column,

and as you suggested I have entered the substring expression:

Read1.CONTENT[Index(Read1.CONTENT, 'Job start time=') + 15, 19]

but this says the expression is not valid (shown in red).
My input column is Read1.CONTENT. Can you tell me where the error is, or where I am making a mistake?
Thank you once again,
rajan.


Posted: Wed Oct 18, 2006 9:26 am
by ArndW
The Index() function takes 3 parameters; your expression supplies only 2.
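For illustration only (Python, not DataStage), here is a minimal emulation of the three-argument Index(string, substring, occurrence) and the corrected substring extraction; `ds_index` is a hypothetical helper name:

```python
def ds_index(s: str, sub: str, occurrence: int) -> int:
    """Emulate DataStage Index(): 1-based position of the nth occurrence, 0 if absent."""
    pos = -1
    for _ in range(occurrence):
        pos = s.find(sub, pos + 1)
        if pos == -1:
            return 0
    return pos + 1

report = "Job start time=2006-09-28 08:56:26"
# Equivalent of Read1.CONTENT[Index(Read1.CONTENT, 'Job start time=', 1) + 15, 19]
start = ds_index(report, "Job start time=", 1) + 15  # skip past the 15-char label
value = report[start - 1 : start - 1 + 19]           # DataStage substrings are 1-based
print(value)  # 2006-09-28 08:56:26
```

The third argument (the occurrence number, here 1) is what was missing from the rejected expression.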

Posted: Wed Oct 18, 2006 10:29 am
by rajan.n
Hi, can anybody give me the code for extracting the first line? Since the whole file reads as a single column, I need to extract each field, and this is the problem I am facing.
Thanks much in advance.


Posted: Wed Oct 18, 2006 4:57 pm
by ray.wurlod
Is your line (record) terminator property set correctly?

Posted: Tue Oct 24, 2006 2:24 pm
by splayer
Can anyone tell me what .dtl files are?

Posted: Tue Oct 24, 2006 3:46 pm
by ray.wurlod
.dtl = "detail". You obviously haven't installed the Read My Mind stage! :lol:

The first two lines can be extracted using Field() functions with ":" as the delimiter. Using stage variables allows you to preserve values through other lines.

svJobName:

Code: Select all

If Left(Line,21) = "STATUS REPORT FOR JOB" Then Field(Line, ":", 2, 1) Else svJobName
svJobGenerated:

Code: Select all

If Left(Line,10) = "Generated:" Then Field(Line, ":", 2, 99) Else svJobGenerated
Others can be extracted with Field() functions with "=" as the delimiter.
svJobStart:

Code: Select all

If Left(Line,14) = "Job start time" Then Field(Line, "=", 2, 1) Else svJobStart
Constraint expressions should eliminate empty/blank lines and lines starting with a few asterisks. Output only when the "Job status" line is processed.
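A line-by-line Python sketch of this approach (my own illustration, not actual stage-variable syntax; the `field` helper is a rough stand-in for BASIC's Field()), preserving values across lines and emitting one record when the "Job status" line arrives:

```python
def field(line: str, delim: str, start: int, count: int = 1) -> str:
    """Rough emulation of BASIC Field(): delimited substrings, 1-based."""
    parts = line.split(delim)
    return delim.join(parts[start - 1 : start - 1 + count])

lines = [
    "**************************************************",
    "STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file",
    "Generated: 2006-10-18 00:09:32",
    "Job start time=2006-09-28 08:56:26",
    "Job end time=2006-09-28 08:56:44",
    "Job elapsed time=00:00:18",
    "Job status=2 (Fin",  # truncated as quoted in the thread
]

records = []
job_name = generated = job_start = ""  # the "stage variables"
for line in lines:
    if line[:21] == "STATUS REPORT FOR JOB":
        job_name = field(line, ":", 2).strip()
    elif line[:10] == "Generated:":
        generated = field(line, ":", 2, 99).strip()
    elif line[:14] == "Job start time":
        job_start = field(line, "=", 2)
    elif line[:10] == "Job status":
        # The output link's constraint fires only on the "Job status" line.
        records.append((job_name, generated, job_start))
```

Note the `Field(Line, ":", 2, 99)` trick for the Generated line: the timestamp itself contains ":" delimiters, so taking 99 fields from position 2 keeps the whole remainder intact.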

Posted: Tue Oct 24, 2006 3:51 pm
by splayer
Where can I read up more about them? I did a search on all the PDFs and could not find any information on them. I know what header, detail and trailer files are and what they mean. I was just wondering whether there was a new file type that DataStage uses called .dtl that I was not aware of.

Posted: Tue Oct 24, 2006 3:55 pm
by ray.wurlod
It's not DataStage that creates *.dtl files, it's rajan.n (the original poster). This is why the RMM stage was indicated.

Posted: Wed Oct 25, 2006 6:16 am
by DSguru2B
Ray, is the RMM stage part of the IBM Integration Server? :wink: