All the rows are written into the log - how to avoid?

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.


vij
Participant
Posts: 131
Joined: Fri Nov 17, 2006 12:43 am

All the rows are written into the log - how to avoid?

Post by vij »

Hi,

I face two problems:

1. Every row read from the input file is written to the log, even though I have not used a Peek stage. The log entries read: "<name of the stage>.<link name>_Peek,0: <the column names and values>".

I had added the environment variable APT_DISABLE_COMBINATION as a job parameter; I have since disabled it, but even then the log displays all the records, which in turn makes the run take longer.
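For reference, I had added it as a job-level environment variable parameter, roughly like this (quoting from memory, so the exact form may be off):

    $APT_DISABLE_COMBINATION=True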

2. I cannot open a job in a category to which I have access. The error message I get is:

This item has no design time information.

Has the job become corrupted, or has it been deleted?

Please can anyone help me solve these problems?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

1) The Peek stage has properties that determine how many rows will be logged. Amend these.
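For what it's worth, the stage compiles down to the osh peek operator, which by default logs 10 records per partition; the flooding you describe is what you get when it is asked for every record instead. A rough sketch of the relevant options, from memory - check the generated OSH for your own job, as option names can differ by version:

    peek
      -nrecs 10   # records to log per partition ("Number of Records" in the GUI)
      -name       # print column names alongside the values
      # -all      # logs every record - if this is in effect, the log floods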

2) This job may have had only its executables exported, so that the import will have no design-time information. Or it may be that the job has become corrupted. If you have an export of the job design, try importing from that.
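If you do have a design export, it can be re-imported over the broken job. With the Windows client the export and import can also be scripted - a sketch only, and the switches vary by release, so check the documentation for your version:

    dscmdexport /H=server /U=user /P=password projectname C:\exports\jobdesign.dsx
    dscmdimport /H=server /U=user /P=password projectname C:\exports\jobdesign.dsx

Here "server", "user", "password", "projectname" and the .dsx path are all placeholders.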
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.