Limit fetch in Universe Query

I have a job where I need to read the log file from DataStage and write it to a text file. Is there a way in UniVerse SQL that I can fetch only the first 1000 rows from the RT_LOGxxx file?
Currently I have a counter in a Transformer and stop writing to the text file when the counter reaches 1000. This works, but it still has to read the entire log file.
I need to put a limit because there are cases where a database issue makes the log file grow to more than 2GB, and I do not want to read the entire log when the first 100 or so entries will tell me what went wrong.
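(A rough sketch of that kind of limit, assuming a Server job Transformer and using the built-in @INROWNUM system variable in place of the explicit counter: an output link constraint of

@INROWNUM <= 1000

passes only the first 1000 rows to the text file, but the Transformer still has to read every row from the log.)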
Thanks
- Participant
- Posts: 13
- Joined: Tue Nov 08, 2005 9:43 am
Use the Limits tab of the job (when you run it) and set "Stop stages after N rows".
-or-
>SELECT * FROM VOC FIRST 5;
NAME.......... TYPE DESC..........................
STAT           V    Verb - Produce the total and
                    average of values in named
                    field in a file
MONETARY       K    Keyword - NLS locale category
NLS.MAP.TABLES F    F
OPTIM.SCAN     K    Keyword - SET.SQL Environment
SET.MODE       V    Verb - Display/modify the mode
                    of a file.

Sample of 5 records listed.
>
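The FIRST clause limits a UniVerse SQL SELECT to the leading rows, as the VOC sample above shows, so the same idea should apply to the job's log table. A sketch, assuming the log file for the job in question is RT_LOG123 (a hypothetical name; substitute the RT_LOGnnn number of your own job) and that the statement is run at the UniVerse shell prompt in the project directory:

>SELECT * FROM RT_LOG123 FIRST 1000;

That stops after 1000 records are returned instead of scanning the whole file, so a runaway 2GB log never has to be read end to end.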
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
If your log grows to 2GB it will become corrupted and unusable. If this is likely, you need to resize that particular log table to use 64-bit internal addressing before it gets corrupted.
This may not be a universal panacea; the log table will also get corrupted if a write to it is blocked because the disk is full, and that becomes more likely as the theoretical maximum size rises to 1PB or more.
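A sketch of that resize, assuming the log table in question is RT_LOG123 (a hypothetical name) and that your engine release accepts the 64BIT keyword on RESIZE; the asterisks are intended to retain the file's current type, modulo and separation, so check the command reference for your version before running it:

>RESIZE RT_LOG123 * * * 64BIT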
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.