select colname from tabname where update_dt < '2012-09-30'
Every time we need to extract data for a different date, we have to edit the contents of this query file, which is exactly what we are trying to avoid. We tried to parameterize it using
DataStage should resolve the parameter if the query is entered directly in a stage. Make sure that the spelling and casing of the job parameter reference in your SQL query exactly match the job parameter defined on the Job Properties Parameters page, and that the value supplied for that parameter is in a legal format.
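For instance, assuming a job parameter named pAsOfDate (the name is illustrative), the user-defined SQL in the stage would use the `#...#` reference syntax:

```sql
select colname from tabname where update_dt < '#pAsOfDate#'
```

At run time DataStage substitutes the parameter's value before the query is sent to the database.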
HOWEVER, DataStage is not able to resolve parameter references in files.
If it were really necessary to use an SQL file, I would prefer that an upstream job update the SQL file.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
We have multiple tables in the source that we want to load to the target, and we are doing this with a single RCP job. The structures of the tables all differ, and there are also many decryptions we need to perform in the SELECT query. We achieve this by using query files located on our server: every time we need to load a particular table, all we have to do is pass the name of its file.
Jerome
Data Integration Consultant at AWS
Connect With Me On LinkedIn
Life is really simple, but we insist on making it complicated.
Referring to Ray's post: you need to generate the text of the SQL file in a job that runs before your actual load job.
When you generate your SQL file at runtime, you can change the processing of your job by modifying the metadata being used, without ever touching the job itself. Generating SQL - even including complex column derivations, multiple join conditions, where-clauses, etc. - is actually not that difficult. The result is a file you can easily test either within or outside of DataStage. You may need some user-defined tables to hold the metadata necessary for your case.
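As a minimal sketch of that idea (the metadata file layout and all names below are invented for illustration; in practice the metadata would live in user-defined tables):

```shell
# Sketch only: metadata file holds one "table|column" pair per line.
cat > table_meta.txt <<'EOF'
tabname|colname
tabname|update_dt
EOF

TABLE='tabname'

# Collect the columns registered for this table into a comma-separated list.
COLS=$(awk -F'|' -v t="$TABLE" '$1 == t { c = c sep $2; sep = ", " } END { print c }' table_meta.txt)

# Emit the SELECT statement into the SQL file the load job will read.
echo "select ${COLS} from ${TABLE} where update_dt < '2012-09-30'" > extract_query.sql
cat extract_query.sql
```

Changing what is loaded then only requires editing the metadata file, never the job.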
"It is not the lucky ones are grateful.
There are the grateful those are happy." Francis Bacon
I have in the past created generic jobs of this kind to move data from Oracle to MySQL: five jobs for 837 tables. I read the metadata from the Oracle system tables (you could use DESCRIBE) to determine which of the five generic jobs to use. The one that handled BLOBs contained the functions I mentioned.
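The Oracle side of such a metadata lookup might be a dictionary query along these lines (ALL_TAB_COLUMNS is a standard Oracle view; the table name in the filter is illustrative):

```sql
select column_name, data_type
  from all_tab_columns
 where table_name = 'TABNAME'
 order by column_id;
```

The returned data types tell you whether a table needs the BLOB-capable job or one of the simpler generic jobs.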
Pardon my persistence, but how exactly do I generate the contents of an SQL file?
BI-RMA wrote:When generating your SQL-file at runtime You can change processing of your job by modifying the metadata being used without ever touching the job. Generating SQL - even including complex column-derivations, multiple join-conditions, where-clauses, etc
Can you please elaborate a bit more? Thank you
jerome_rajan wrote:Pardon my persistence but how exactly do I generate the contents of an SQL file?
You just need a pre-processing step to take your parameter value and incorporate it into the SQL, much like building dynamic SQL in PL/SQL. Where does the value come from? A small Server job could source or create it, build the string in a Transformer, and then write it out to your SQL file. Brain-dead example:
For example, you could use an Execute Command activity in a sequence that runs an echo command with a redirection operator to overwrite the file. In the activity's Parameters field, include the literal text and the job parameter reference(s).
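In plain shell terms, the command the activity runs amounts to something like this (the variable here is a stand-in; inside the sequence the value would come from a job parameter reference such as #pAsOfDate#, a name chosen for illustration):

```shell
# Stand-in for the job parameter value supplied at run time.
AS_OF_DATE='2012-09-30'

# Overwrite the query file with the parameterized SELECT.
echo "select colname from tabname where update_dt < '${AS_OF_DATE}'" > extract_query.sql
cat extract_query.sql
```

Each run of the sequence regenerates the file before the load job reads it, so the date never has to be edited by hand.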