Hi,
I am trying to extract the source table information of a job from the UniVerse repository. I know this information can be pulled from Reporting Assistant, but I am attempting to extract it in a sequencer before a job runs, to verify that the source tables used in the job have been loaded. That way, if the job changes tomorrow and more tables are added as lookups, there won't be a need to maintain a separate list of tables to check, since the information is extracted at run time. This needs to be accomplished at run time. I am trying to decipher the DS_JOBOBJECTS table for this. The job uses DB2 tables via the DB2 plug-in stage. Any pointers would be greatly appreciated.
Thanks.
SOURCE TABLE INFO OF A JOB - UNIVERSE
This SQL seems to do what I want. I'm not yet sure it is foolproof or the best way to do it; some more testing needs to be done. I reverse engineered it from a very similar SQL query posted on here that figures out the hashed file usage in a project.
Any comments?
Code:
select
   EVAL DS_JOBOBJECTS."@RECORD<14,1>" AS FILE_NAME,
   DS_JOBS.NAME AS JOB_NAME FMT '35L',
   DS_JOBOBJECTS.NAME AS LINK_NAME FMT '35L',
   DS_JOBOBJECTS.OLETYPE AS OLE_TYPE FMT '35L'
from
   DS_JOBOBJECTS,
   DS_JOBS
where
   DS_JOBOBJECTS.OLETYPE in ('CCustomOutput', 'CCustomInput')
   and DS_JOBOBJECTS.OBJIDNO = DS_JOBS.JOBNO
group by
   FILE_NAME,
   JOB_NAME,
   LINK_NAME,
   OLE_TYPE
;
Tables with DS_... names contain only design-time information; they contain nothing about when/whether the job has been run and how successfully. You can certainly obtain the values of the Table link property with your query, but not whether they have been loaded. That would probably need something like a call to DSGetLinkInfo() - and you probably should check the job log as well (DSGetLog... functions) to verify that the load was successful, that no rows were rejected.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi Ray,
Yes, we capture every job statistic after its run, such as which table was loaded and whether the load was successful. Once I get the tables used in a job, it will be fairly easy to figure out if the source table loads were successful. The only missing piece is that the query above does not give me the tables used in user-defined SQL. I don't see how those tables can be captured at the DataStage level. As far as I know, user-defined SQL is stored internally in the repository more like a plain string, without any placeholder for table names.
Thanks.
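One possible fallback, once the user-defined SQL string has been pulled out of the repository: scrape the table names from the string itself. A rough sketch in Python (my own illustration, not DataStage code; the regex is deliberately naive and will miss the second and later names in a comma-separated FROM list):

```python
import re

def extract_tables(sql):
    """Pull the identifiers that follow FROM or JOIN in a SQL string.

    Deliberately naive: misses comma-separated FROM lists beyond the
    first name, and names with unusual quoting inside subqueries.
    """
    pattern = re.compile(r"\b(?:FROM|JOIN)\s+([A-Za-z_][\w.#$]*)",
                         re.IGNORECASE)
    seen, tables = set(), []
    for name in pattern.findall(sql):
        key = name.upper()
        if key not in seen:  # de-duplicate, keep first-seen order
            seen.add(key)
            tables.append(name)
    return tables
```

Run against the statement text extracted from DS_JOBOBJECTS, this would at least catch straightforward SELECT ... FROM ... JOIN ... statements.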
It can be retrieved, but you have to figure out (= hack) which particular fields in the DS_JOBOBJECTS record store the user-defined INSERT and UPDATE statements. Then you can use the EVAL "@RECORD<field#>" technique that has shown up occasionally here.
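For anyone poking at the raw record outside the engine: the @RECORD<field,value> addressing is just dynamic-array extraction, with fields separated by field marks (char 254) and values by value marks (char 253). A small Python sketch of the mechanics (the field number 14 in the query above is specific to that hack; the field numbers here are illustrative only):

```python
FM = chr(254)  # UniVerse field mark
VM = chr(253)  # UniVerse value mark

def record_extract(record, field, value=0):
    """Mimic UniVerse @RECORD<field,value> on a dynamic-array string.

    Positions are 1-based, as in UniVerse; value=0 returns the whole
    field. Out-of-range positions return '' in UniVerse style.
    """
    fields = record.split(FM)
    if not 1 <= field <= len(fields):
        return ""
    if value == 0:
        return fields[field - 1]
    values = fields[field - 1].split(VM)
    if not 1 <= value <= len(values):
        return ""
    return values[value - 1]
```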
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.