Does Before-SQL work only for DML?

Post by kaps »

Hi

Does Before-SQL in the DB2 Connector stage work only for DML statements? We are trying to create the table in Before SQL, and the job throws an error saying the table does not exist. It seems that, before executing the Before-SQL, the stage checks that the table exists and that its metadata matches.

Please clarify.

Thanks

Post by SwathiCh »

You are correct.

The DB2 Connector stage verifies the table and its structure before it even tries to run the Before SQL.

Have the DBA create the table before the first run, and add a DROP TABLE statement before the CREATE statement; from the second run onwards you will be fine.
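
As a rough sketch only, the Before SQL could look something like this (the table, column and tablespace names are placeholders, not from your job):

-- on the very first run the DROP will fail because the table does not exist yet;
-- either pre-create the table once or relax the connector's fail-on-error setting for Before SQL
DROP TABLE MYSCHEMA.STG_ORDERS;
CREATE TABLE MYSCHEMA.STG_ORDERS (
    ORDER_ID  INTEGER        NOT NULL,
    ORDER_DT  DATE,
    AMOUNT    DECIMAL(15,2)
) IN TS_STAGE;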
--
Swathi Ch

Post by chulett »

Or just pre-create the table before the first run; not sure anything else makes sense. You don't want to set up something you have to remember to change after the first run, or be dropping and rebuilding a table when (I imagine) a simple truncate would suffice.
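
As a sketch, with a placeholder table name, the Before SQL against a pre-created table could then be as simple as:

-- DB2 expects TRUNCATE to be the first statement in its unit of work
TRUNCATE TABLE MYSCHEMA.STG_ORDERS IMMEDIATE;
-- or, where TRUNCATE is not an option, a plain (logged) DELETE FROM MYSCHEMA.STG_ORDERS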

And unless I am mis-remembering, that "Before SQL" would run once on every node so be careful there in a multi-node environment.
-craig

"You can never have too many knives" -- Logan Nine Fingers

Post by ray.wurlod »

You can create the table via the Table Action property in the Connector stage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.

Post by kaps »

SwathiCh - The problem is that whenever there is a change in the DDL, the job will fail on the first run for the same reason; moreover, the idea here is to avoid having to go through the DBA.

Craig - We do have the option of running once or once per node.

Ray - Table Action works just fine, but the job will fail the next time it runs, saying the table already exists. The workaround is to drop the table in After-SQL, but we would like to keep the table until the next cycle runs, for debugging purposes.

Looks like we just have to live with dropping the table manually whenever we make changes to it.

Post by chulett »

kaps wrote: Looks like we just have to live with dropping the table manually whenever we make changes to it.
So... this is an RCP-enabled job?
-craig

"You can never have too many knives" -- Logan Nine Fingers

Post by ray.wurlod »

Table Action = Replace will both drop and create the table, without checking whether metadata matches the existing table. It does not fail if the table does not already exist.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.

Post by kaps »

Ray - It worked. You're the man... I should have checked it after your first answer. Sorry, I was lethargic...

Craig - No, it's not an RCP-enabled job, but it still checks the metadata. Ray's suggestion works best for us.

Thanks all for the ideas...

Post by chulett »

OK... that was the only scenario that made sense to me for dropping and recreating a table on each run in case it had changed. Otherwise you have to update the metadata in the job and redeploy it anyway, and we would be including the full DDL with the job migration package. Table drops also lose all of the grants, which would be a problem for us since we never connect as the owner, never mind the fact that we would be shot if we tried to drop a table. :wink:

As long as it works for you, that's the important part.
-craig

"You can never have too many knives" -- Logan Nine Fingers

Post by kaps »

Just thought I'd share what I found out.

REPLACE was failing on the first run because it expects the table to be there. So I selected REPLACE, set "Generate SQL statement at run time" to No, and, to drop the table, used a stored procedure so that it does not fail when the table is not there.
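
For reference, the drop procedure is roughly along the lines below; it is only a sketch, and the procedure and table names are placeholders rather than our actual objects.

-- when creating this through the DB2 CLP, use an alternate statement terminator (e.g. @)
CREATE OR REPLACE PROCEDURE ETL.DROP_TABLE_IF_EXISTS (IN P_TABLE VARCHAR(256))
LANGUAGE SQL
BEGIN
  -- SQLSTATE 42704 = "undefined name": ignore it so the drop behaves like "drop if exists"
  DECLARE CONTINUE HANDLER FOR SQLSTATE '42704' BEGIN END;
  EXECUTE IMMEDIATE 'DROP TABLE ' || P_TABLE;
END

The Before SQL then just calls it, for example:

CALL ETL.DROP_TABLE_IF_EXISTS('MYSCHEMA.STG_ORDERS');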

Bottom line: if we specify the SQL ourselves, the stage checks the metadata, and if we let it generate the SQL, it does not. The issue with this approach is that we cannot specify the tablespace when the table is created.

Craig - Yes. We only have one tablespace where we can create or drop the table.

Post by ray.wurlod »

I make a lot of use of this functionality loading to Teradata (replace the work table, then populate it using FastLoad). It does not fail when the table does not exist, so it must be a database-specific thing. Also check any "fail on error" properties that might exist in the Connector stage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.

Post by kaps »

Yes, it failed because of the Fail on Error property; when I set that to No, it just throws a warning message, which I think I can demote to informational. We are working around the tablespace issue!

Thanks for the help.