SQL data type '%1' defined on column %2 is not supported.

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

sbass1
Premium Member
Posts: 211
Joined: Wed Jan 28, 2009 9:00 pm
Location: Sydney, Australia

SQL data type '%1' defined on column %2 is not supported.

Post by sbass1 »

Hi,

Scenario:

We do a data load. We get word the source files provided were incorrect. We get new source files. We want to delete appropriate records from the target tables prior to the (re)load (i.e. any records in the target tables matching the effective date(s) in the new source files).

Some details:

The metadata for records to be deleted will be stored in a utility table in the database.

The SQL to delete the appropriate records will be custom, user generated SQL for each table. The SQL will be static, i.e. not parameter driven.

The deletions must be done in a particular order due to FK constraints.

Each deletion must run to completion before the next one begins.
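The requirements above (utility-table-driven SQL, FK-safe ordering, each delete running to completion before the next) can be sketched outside DataStage. This is a minimal illustration using Python's sqlite3 as a stand-in for the real SQL Server connection; all table and column names here are hypothetical:

```python
import sqlite3

# Hypothetical sketch: deletes driven by a utility table, executed strictly in
# order (child table before parent) so FK constraints are never violated.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE Parent (SK INTEGER PRIMARY KEY, EffectiveDate TEXT);
CREATE TABLE Child  (SK INTEGER PRIMARY KEY,
                     ParentSK INTEGER REFERENCES Parent(SK),
                     EffectiveDate TEXT);
-- utility table: which tables to purge, with what SQL, in what order
CREATE TABLE DeleteControl (RunOrder INTEGER, TableName TEXT, DeleteSql TEXT);
INSERT INTO Parent VALUES (1, '2009-01-31'), (2, '2009-02-28');
INSERT INTO Child  VALUES (10, 1, '2009-01-31'), (20, 2, '2009-02-28');
INSERT INTO DeleteControl VALUES
  (1, 'Child',  'DELETE FROM Child  WHERE EffectiveDate = ''2009-01-31'''),
  (2, 'Parent', 'DELETE FROM Parent WHERE EffectiveDate = ''2009-01-31''');
""")

# fetch the static, user-generated SQL, then run each statement to completion
stmts = [row[0] for row in conn.execute(
    "SELECT DeleteSql FROM DeleteControl ORDER BY RunOrder").fetchall()]
for stmt in stmts:
    conn.execute(stmt)
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM Parent").fetchone()[0]
print(remaining)  # 1
```

Because the child-table delete runs first, the parent-row delete never trips the FK constraint.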

Job Approaches:

1) Put all the SQL (hundreds or thousands of lines) in a single DRS stage (before or after tab?) to do the deletions. Dummy up some sequential file "input" and "output" to get the job to compile.

2) Write a stored procedure(s) and call that.

3) Use a separate DRS stage for each target table affected. Put sequential stages (/dev/null) between each one to ensure that each "parent" stage runs to completion before the next "child" stage starts.

My initial thoughts were to use #3. I threw together a quick test job as follows:

Code:

seq file 1 --> DRS --> seq file 2
seq file 1:

Output tab:
Stage uses filter commands
File = /dev/null
Filter = printf "Running next stage...\n"
Link column: Dummy, varchar 50

DRS:

Input tab:
User-defined SQL
Columns: Dummy, varchar 50
SQL: SELECT 'Running next stage...'

Output tab:
Select <column list> FROM <source table>
I might change this to "TOP 100" or DISTINCT, whatever helps debugging most when using "View link data" in Designer

seq file 2:

/dev/null
Append to existing file
Columns: <column list> FROM <source table>

Questions:

1) What is your recommendation as best practice to approach this problem?

2) Regardless of #1, I'm experiencing the problem "SQL data type '%1' defined on column %2 is not supported." The problem occurs in the custom, user generated SQL in the Input tab of the DRS stage.

What does this cryptic message mean, and how do I get rid of it? Is it documented in the doc set anywhere? Note that every "View <linkname> data" RMB selection works - it just fails when I submit the job.

I did see viewtopic.php?t=88850&highlight=SQL+dat ... +supported. before posting, and tried the various parameter marker styles from that post (:1, :?, ?), but couldn't get any of them to work.

Thanks,
Scott
Some people are the Michael Jordan's of Datastage. I'm more like Muggsy Bogues :-)
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

What database? And just as an FYI and AFAIK, the DRS stage always uses the ODBC style parameter markers which are unadorned positional question marks - no colons or numbers or anything else.
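The "unadorned positional question marks" Craig describes are the ODBC "qmark" parameter style. As a quick illustration (Python's sqlite3 happens to use the same qmark style, so it stands in here for an ODBC driver; the table is illustrative):

```python
import sqlite3

# ODBC-style parameter markers: bare positional ? placeholders,
# no :1 / :name adornment. Values are bound positionally at execute time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TargetTable (SK INTEGER, EffectiveDate TEXT)")
conn.execute("INSERT INTO TargetTable VALUES (?, ?)", (1, '2009-01-31'))
conn.execute("INSERT INTO TargetTable VALUES (?, ?)", (2, '2009-02-28'))

# delete by effective date, with the date supplied as a positional parameter
conn.execute("DELETE FROM TargetTable WHERE EffectiveDate = ?", ('2009-01-31',))

count = conn.execute("SELECT COUNT(*) FROM TargetTable").fetchone()[0]
print(count)  # 1
```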
-craig

"You can never have too many knives" -- Logan Nine Fingers
sbass1
Premium Member
Posts: 211
Joined: Wed Jan 28, 2009 9:00 pm
Location: Sydney, Australia

Post by sbass1 »

chulett wrote:What database? And just as an FYI and AFAIK, the DRS stage always uses the ODBC style parameter markers which are unadorned positional question marks - no colons or numbers or anything else.
Sorry...SQL Server 2005.

Perhaps the issue will go away once I write the "real" SQL, which will be something like this (pseudocode):

Code:

/* use a Common Table Expression (supported in SQL Server 2005+, rather like an inline view) to make the coding easier */

;
WITH DeleteMe AS (
   SELECT 
      A.SK
   FROM
      dbo.TargetTable A
   INNER JOIN
      dbo.SourceDataDateList B
   ON
      A.SourceSystem = B.SourceSystem
      and A.EffectiveDate = B.EffectiveDate
      and A.SomeOtherKeys = B.SomeOtherKeys
)

DELETE FROM dbo.TargetTable
WHERE SK IN (SELECT SK FROM DeleteMe)
But, I'm still going to be fighting the "SQL data type '%1' defined on column %2 is not supported." error unless I can work out how to get DS to accept this SQL in the user-generated section of the Input tab and be at peace with the column definitions in that tab.
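For what it's worth, the CTE-driven delete pattern itself is sound once the CTE is referenced through a subquery (a bare `WHERE TargetTable.SK = DeleteMe.SK` is not valid SQL). A runnable sketch, again using sqlite3 (which also accepts WITH on DELETE) with illustrative table names:

```python
import sqlite3

# Sketch of the CTE-driven delete: the CTE finds the surrogate keys to
# remove, and the DELETE consumes it via an IN subquery.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TargetTable (SK INTEGER, SourceSystem TEXT, EffectiveDate TEXT);
CREATE TABLE SourceDataDateList (SourceSystem TEXT, EffectiveDate TEXT);
INSERT INTO TargetTable VALUES (1, 'SYS1', '2009-01-31'),
                               (2, 'SYS1', '2009-02-28');
INSERT INTO SourceDataDateList VALUES ('SYS1', '2009-01-31');
""")

conn.execute("""
WITH DeleteMe AS (
    SELECT A.SK
    FROM TargetTable A
    INNER JOIN SourceDataDateList B
        ON  A.SourceSystem  = B.SourceSystem
        AND A.EffectiveDate = B.EffectiveDate
)
DELETE FROM TargetTable
WHERE SK IN (SELECT SK FROM DeleteMe)
""")

left = [row[0] for row in conn.execute("SELECT SK FROM TargetTable")]
print(left)  # [2]
```

Only the row whose effective date matches the new source-file dates is removed; everything else survives.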
Some people are the Michael Jordan's of Datastage. I'm more like Muggsy Bogues :-)
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

You could try switching the DRS stage to ODBC, or switching to the ODBC stage itself; as far as I know it's the only one that won't care whether the parameter markers (or lack thereof) match up to the columns in the stage.

I personally know the ODBC stage supports this as I've taken advantage of that fact in the past.
-craig

"You can never have too many knives" -- Logan Nine Fingers
sbass1
Premium Member
Posts: 211
Joined: Wed Jan 28, 2009 9:00 pm
Location: Sydney, Australia

Post by sbass1 »

chulett wrote:You could try switching the DRS stage to ODBC or switching to the ODBC stage itself, it's the only one I'm aware of that will not care if the parameter markers (or lack thereof) do not match up to the columns in the stage.

I personally know the ODBC stage supports this as I've taken advantage of that fact in the past.
Thanks Craig!!! :D I'll certainly file this one in the back of my brain. I do like the before and after tabs of the DRS stage, however. Something the ODBC stage lacks.

Right now my test job is working. I won't mark this resolved until I get the actual jobs working, but the problem is now less "urgent" (on this all-volunteer forum :wink: )

Note that the separate ODBC stages are functionally no different from just whacking the entire SQL into a single stage. But it graphically "compartmentalizes" the code to a single target table, which may have some maintenance advantages.

Any further comments on #1 above, "What is your recommendation as best practice to approach this problem?" are still most welcome.

Thanks,
Scott
Some people are the Michael Jordan's of Datastage. I'm more like Muggsy Bogues :-)
jcthornton
Premium Member
Posts: 79
Joined: Thu Mar 22, 2007 4:58 pm
Location: USA

Post by jcthornton »

You asked for best practices. From this example, I can think of two:

1. I hope you don't take this as flippant. I don't mean it that way. Outside ETL, put some process in place to ensure that bad data is not delivered to you for loading.

I don't know how the source data was produced and sent, but in a production environment, it makes sense for the process that produces the source data to include checks that prevent issues like this.

How, specifically, to prevent that problem is going to be highly dependent upon what is producing the data.

2. To make it easier to clean up this type of mistake, adding a batch number to your target tables can help significantly. As always, there are multiple strategies for creating and using batch ids.

If you had batch ids, then you could have a single job that takes the batch id and a table name in as parameters and deletes the records that were in error. To fully automate it, you could build a sequence that performs the deletes on all tables.

Now, batch ids do help with this, but they can do a lot more for you too.
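The single parameterized cleanup job Jack describes might look something like this. A minimal sketch, again with sqlite3 standing in for the database and with entirely hypothetical table names:

```python
import sqlite3

# Hypothetical batch-id cleanup: one parameterized delete per table, so a
# single job (or a sequence looping over tables) can purge a bad load.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE FactSales   (SK INTEGER, BatchId INTEGER);
CREATE TABLE DimCustomer (SK INTEGER, BatchId INTEGER);
INSERT INTO FactSales   VALUES (1, 100), (2, 101);
INSERT INTO DimCustomer VALUES (1, 100), (2, 101);
""")

def delete_batch(conn, table, batch_id):
    """Delete all rows of one bad batch; returns the number of rows removed.
    The table name comes from trusted job metadata, not user input."""
    return conn.execute(
        "DELETE FROM %s WHERE BatchId = ?" % table, (batch_id,)).rowcount

# purge the bad batch from each table, children before parents
deleted = sum(delete_batch(conn, t, 101) for t in ("FactSales", "DimCustomer"))
print(deleted)  # 2
```

In DataStage terms, the table name and batch id would arrive as job parameters, and a sequence would invoke the job once per affected table in FK-safe order.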



I have never done this, nor heard of it being done, so I cannot call it a best practice (hence, no number). If your load process is a predictable batch that covers everything, you could perform a database backup at the end of each batch. Then, if a bad batch gets loaded, you restore the prior backup and then run the good batches until you are caught up.
Jack Thornton
----------------
Spectacular achievement is always preceded by spectacular preparation - Robert H. Schuller
sbass1
Premium Member
Posts: 211
Joined: Wed Jan 28, 2009 9:00 pm
Location: Sydney, Australia

Post by sbass1 »

This is resolved. Thank you to Craig for the ODBC vs. DRS stage "trick", and Jack for his comments on best practice architectural approaches.
Some people are the Michael Jordan's of Datastage. I'm more like Muggsy Bogues :-)