Does Before-SQL work only for DML?
Hi
Does Before-SQL in the DB2 Connector stage work only for DML? We are trying to create a table in the Before-SQL and it throws an error saying the table does not exist. It seems that, before executing the Before-SQL, the stage checks that the table exists and that the metadata matches.
Please clarify.
Thanks
Or just pre-create the table before the first run. Not sure anything else makes sense. You don't want to set up something you have to remember to change after the first run, or be dropping and rebuilding a table when (I imagine) a simple truncate would suffice.
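For example, with the table pre-created once by the DBA, the Before-SQL reduces to a truncate that is safe to run every cycle. A minimal sketch (schema and table names here are hypothetical, not from the thread):

```sql
-- One-time DDL, run outside the job:
-- CREATE TABLE ETL.STG_WORK (CUST_ID INTEGER NOT NULL, LOAD_TS TIMESTAMP) IN TS_WORK;

-- Before SQL in the connector; DB2 LUW requires the IMMEDIATE keyword
TRUNCATE TABLE ETL.STG_WORK IMMEDIATE;
```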
And unless I am mis-remembering, that "Before SQL" would run once on every node so be careful there in a multi-node environment.
-craig
"You can never have too many knives" -- Logan Nine Fingers
SwathiCH - The problem is that whenever the DDL changes, the job will fail the first time for the same reason; moreover, the idea is to avoid having to involve the DBA in this case.
Craig - We do have the option of running once or once per node.
Ray - Table Action works just fine, but the job will fail the next time it runs, saying the table already exists. The workaround is to drop the table in After-SQL, but we would like to keep the table until the next cycle runs, for debugging purposes.
Looks like we will just have to live with dropping the table manually whenever we change its definition.
Table Action = Replace will both drop and create the table, without checking whether metadata matches the existing table. It does not fail if the table does not already exist.
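The exact SQL the connector generates is internal to the stage, but conceptually (names hypothetical) Replace behaves like the following, with the "undefined name" error from the drop swallowed rather than surfaced:

```sql
-- Drop the previous version of the table; a "table not found" error is ignored
DROP TABLE ETL.STG_WORK;

-- Recreate it from the stage's column definitions, with no metadata check
-- against any pre-existing table
CREATE TABLE ETL.STG_WORK (
  CUST_ID  INTEGER    NOT NULL,
  LOAD_TS  TIMESTAMP
);
```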
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
OK... that was the only scenario that made sense to me for dropping and recreating a table each run in case it had changed. Otherwise you have to update the metadata in the job and redeploy it anyway, at which point we would just include the full DDL with the job migration package. Table drops also lose all of the grants - a problem for us since we never connect as the owner, never mind the fact that we would be shot if we tried to drop a table.
As long as it works for you, that's the important part.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Just thought I'd share what I found out.
REPLACE was failing on the first run because it expects the table to already be there. So I selected REPLACE, set "Generate SQL statement at run time" to No, and used a stored procedure to drop the table so the drop does not fail when the table is not there.
Bottom line: if we specify the SQL ourselves, the connector checks the metadata; if we let it generate the SQL, it does not. The issue with that approach is that we cannot specify the tablespace when creating the table.
Craig - Yes. We only have one tablespace where we can create or drop tables.
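A minimal sketch of such a drop procedure (schema, procedure, and table names are hypothetical; SQLSTATE 42704 is DB2's "undefined name" condition):

```sql
CREATE OR REPLACE PROCEDURE ETL.DROP_IF_EXISTS (IN p_table VARCHAR(128))
LANGUAGE SQL
BEGIN
  -- Swallow SQLSTATE 42704 ("undefined name") so a missing table is not an error
  DECLARE CONTINUE HANDLER FOR SQLSTATE '42704' BEGIN END;
  EXECUTE IMMEDIATE 'DROP TABLE ETL.' || p_table;
END
```

The Before-SQL could then issue `CALL ETL.DROP_IF_EXISTS('STG_WORK')` followed by the full CREATE TABLE statement, including any IN &lt;tablespace&gt; clause the generated DDL cannot supply.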
I make a lot of use of this functionality loading to Teradata (replace the work table then populate it using FastLoad). It does not fail when the table does not exist. Must be a database-specific thing. Check also any "fail on error" properties that might exist in the Connector stage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.