How to read .dtl files

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rajan.n
Premium Member
Posts: 96
Joined: Mon Oct 09, 2006 7:47 am

How to read .dtl files

Post by rajan.n »

Hi friends,

I am creating a detail file from the before-job subroutine ExecSH. The command I am giving as input is:

#$DSPATH#/dsjob -report #$DSPROJECT# #JOBNAME# DETAIL >#$DSLOGDIR#/#JOBNAME#.detail
Since this is my first attempt, I am not sure whether this is correct or not.
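For illustration only, with hypothetical values substituted for the parameters (the paths and project name below are made up), the command would resolve to something like:

Code: Select all

/apps/datastage/bin/dsjob -report MyProject cntrct_actl_hist_idl_file DETAIL >/apps/datastage/logs/cntrct_actl_hist_idl_file.detail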

Can anybody help me with how to read that file in the same job which creates it through the routine? I want to parse that data into different tables/files.

Thanks much in advance.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

To read the file specify #$DSLOGDIR#/#JOBNAME#.detail in the Filename property.

Do whatever you like with its rows in the job design. Multiple output files require multiple output links, probably from a Transformer stage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
rajan.n
Premium Member
Posts: 96
Joined: Mon Oct 09, 2006 7:47 am

Post by rajan.n »

Hi Ray,

Thanks. What I understand is that to read the .dtl file we have to use a Sequential File stage, giving it the file name. OK, fine.

When we read XML we use the XML Input stage to split the single column into individual (multiple) columns, right? Do we have any such stage for accessing this .dtl file, or can we access it directly through the Transformer?

Once again, thanks for the reply.

ray.wurlod wrote:To read the file specify #$DSLOGDIR#/#JOBNAME#.detail in the Filename property.

Do whatever you like with its rows in the job design. Multiple output files require multiple output links, p ...
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The Sequential File stage reads a line at a time. You would typically pass this line into a Transformer stage and have as many outputs from that as there are different line types. You can derive whatever data you need from each input line on its own output link, and then send them wherever you will. Constrain the Transformer stage to operate in sequential mode or in a single-node node pool - there's no point splitting five or six lines into parallel operation.
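As a sketch only (assuming the single input column is named Line), the constraint on each output link might be something like:

Code: Select all

Left(Line, 14) = "Job start time"
with the end-time link constrained by Left(Line, 12) = "Job end time", the status link by Left(Line, 10) = "Job status", and so on - one output link per line type.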
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
rajan.n
Premium Member
Posts: 96
Joined: Mon Oct 09, 2006 7:47 am

Post by rajan.n »

Hi Ray, I have linked the single column coming from the Sequential File stage to a Transformer, and linked that same column to all of the columns I have in the different downstream Transformers.

seq->trans->4 other trans | remaining job.


But my job is aborting for several reasons; one is shown below. Can you please guide me on what needs to be done to fix this?

ERROR:

T2,0: Un-handled conversion error on field "STAGENAME " from source type "string[max=255]" to destination type "decimal[10,0]":
source value="
**************************************************
STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file
Generated: 2006-10-18 00:09:32
Job start time=2006-09-28 08:56:26
Job end time=2006-09-28 08:56:44
Job elapsed time=00:00:18
Job status=2 (Fin"; the result is non-nullable and there is no handle_null to specify a default value.

There are columns in the other Transformers which have this decimal type, but how will the single column get divided into my required fields?
If you have any documents regarding this, can you please forward them to me?
Thanks much once again for your quick response.
ray.wurlod wrote:The Sequential File stage reads a line at a time. You would typically pass this line into a Transformer stage and have as many outputs from that as there are different line types. You can derive wha ...
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

Are your Sequential File stage properties set to read the complete report of one job as a single row, or are you reading each line of the report from the sequential file separately?

If you are reading the whole report from the sequential file as a single row with a single column, then you will have to use a combination of the substring and Index functions to find the values of your target columns. E.g. the column value is

Code: Select all

 
************************************************** 
STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file 
Generated: 2006-10-18 00:09:32 
Job start time=2006-09-28 08:56:26 
Job end time=2006-09-28 08:56:44 
Job elapsed time=00:00:18 
Job status=2 (Fin
Then find the character position of the text Job start time= in the column using Index, add the length of the search string (i.e. Job start time=, which is 15 characters) to it, and extract the next 19 characters.
So the target column JobStartTime in the Transformer should have a derivation like
InputCol[Index(InputCol, 'Job start time=') + 15, 19] and it will give you the value 2006-09-28 08:56:26.
In the same way you can get the values for the other columns. But to use this, the most important thing is that the details for one job should be read as a single row with a single column.
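For example, following the same pattern (just a sketch - JobEndTime is a hypothetical target column, and the third argument to Index is the occurrence number), the end time could be extracted with:

Code: Select all

InputCol[Index(InputCol, 'Job end time=', 1) + 13, 19]
Here 13 is the length of the search string Job end time= and 19 is the length of the timestamp.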
Regards,
S. Kirtikumar.
rajan.n
Premium Member
Posts: 96
Joined: Mon Oct 09, 2006 7:47 am

Post by rajan.n »

Hi Kirtikumar, thanks a lot for your reply.
Yes, my Sequential File stage reads the report as a single column,

and as you suggested I have entered the substring expression:

Read1.CONTENT[Index(Read1.CONTENT, 'Job start time=') + 15, 19]

but the Transformer says this is not a valid expression (shown in red).
My input column is Read1.CONTENT. Can you tell me where the error is, or where I am making a mistake?
Thank you once again,
rajan.




Kirtikumar wrote:Are your Sequential File stage properties set to read the complete report of one job as a single row, or are you reading each line of the report from the sequential file separately?

If you are reading the whole report from the sequential file as a single row with a single column, then you will have to use a combination of the substring and Index functions to find the values of your target columns. E.g. the column value is

Code: Select all

 
************************************************** 
STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file 
Generated: 2006-10-18 00:09:32 
Job start time=2006-09-28 08:56:26 
Job end time=2006-09-28 08:56:44 
Job elapsed time=00:00:18 
Job status=2 (Fin
Then find the character position of the text Job start time= in the column using Index, add the length of the search string (i.e. Job start time=, which is 15 characters) to it, and extract the next 19 characters.
So the target column JobStartTime in the Transformer should have a derivation like
InputCol[Index(InputCol, 'Job start time=') + 15, 19] and it will give you the value 2006-09-28 08:56:26.
In the same way you can get the values for the other columns. But to use this, the most important thing is that the details for one job should be read as a single row with a single column.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

The INDEX() function takes 3 parameters.
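For example (a sketch only, reusing the column name Read1.CONTENT from the earlier post), the derivation with the occurrence argument supplied would be:

Code: Select all

Read1.CONTENT[Index(Read1.CONTENT, 'Job start time=', 1) + 15, 19]
The third argument (1) asks for the first occurrence of the search string.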
rajan.n
Premium Member
Posts: 96
Joined: Mon Oct 09, 2006 7:47 am

Post by rajan.n »

Hi, can anybody give me the code for extracting the first line? Since
the whole file is read as a single column, I need to extract each field, and that is the problem I am facing.
Thanks much in advance.



Kirtikumar wrote:Are your Sequential File stage properties set to read the complete report of one job as a single row, or are you reading each line of the report from the sequential file separately?

If you are reading the whole report from the sequential file as a single row with a single column, then you will have to use a combination of the substring and Index functions to find the values of your target columns. E.g. the column value is

Code: Select all

 
************************************************** 
STATUS REPORT FOR JOB: cntrct_actl_hist_idl_file 
Generated: 2006-10-18 00:09:32 
Job start time=2006-09-28 08:56:26 
Job end time=2006-09-28 08:56:44 
Job elapsed time=00:00:18 
Job status=2 (Fin
Then find the character position of the text Job start time= in the column using Index, add the length of the search string (i.e. Job start time=, which is 15 characters) to it, and extract the next 19 characters.
So the target column JobStartTime in the Transformer should have a derivation like
InputCol[Index(InputCol, 'Job start time=') + 15, 19] and it will give you the value 2006-09-28 08:56:26.
In the same way you can get the values for the other columns. But to use this, the most important thing is that the details for one job should be read as a single row with a single column.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Is your line (record) terminator property set correctly?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
splayer
Charter Member
Posts: 502
Joined: Mon Apr 12, 2004 5:01 pm

Post by splayer »

Can anyone tell me what .dtl files are?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

.dtl = "detail". You obviously haven't installed the Read My Mind stage! :lol:

The first two lines can be extracted using Field() functions with ":" as the delimiter. Using stage variables allows you to preserve values through other lines.

svJobName:

Code: Select all

If Left(Line,21) = "STATUS REPORT FOR JOB" Then Field(Line, ":", 2, 1) Else svJobName
svJobGenerated:

Code: Select all

If Left(Line,10) = "Generated:" Then Field(Line, ":", 2, 99) Else svJobGenerated
Others can be extracted with Field() functions with "=" as the delimiter.
svJobStart:

Code: Select all

If Left(Line,14) = "Job start time" Then Field(Line, "=", 2, 1) Else svJobStart
Constraint expressions should eliminate empty/blank lines and lines starting with a few asterisks. Output only when the "Job status" line is processed.
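As a sketch of that last piece (assuming a single output link and the stage variables above), the output link constraint could be as simple as:

Code: Select all

Left(Line, 10) = "Job status"
with the output column derivations set to svJobName, svJobGenerated, svJobStart and so on, so that one row is written per report once all of its values have been captured.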
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
splayer
Charter Member
Charter Member
Posts: 502
Joined: Mon Apr 12, 2004 5:01 pm

Post by splayer »

Where can I read up more about them? I did a search on all the PDFs and could not find any information on them. I know what header, detail and trailer files are and what they mean. I was just wondering if there was a new file type that DataStage uses called .dtl that I was not aware of.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

It's not DataStage that creates *.dtl files, it's rajan.n (the original poster). This is why the RMM stage was indicated.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Ray, is the RMM stage part of the IBM Integration Server? :wink:
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
Post Reply