Search found 296 matches

by throbinson
Mon Oct 27, 2008 3:11 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Dataset Management Utility Doesn't Work
Replies: 7
Views: 3648

Yes. The path where you want the data to go is what you need to put into the resource disk entry of each node in the Config File. You've got a 2-node SMP currently writing to the same place on both nodes. Assuming there is no contention when writing to the same actual physical disk, this shouldn't be a...
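By way of illustration only, a minimal two-node configuration file sketch with a separate resource disk path per node; the fastname and the /data/node* paths are placeholders, not values taken from the post:

  {
    node "node1"
    {
      fastname "dshost"
      pools ""
      resource disk "/data/node1/datasets" {pools ""}
      resource scratchdisk "/data/node1/scratch" {pools ""}
    }
    node "node2"
    {
      fastname "dshost"
      pools ""
      resource disk "/data/node2/datasets" {pools ""}
      resource scratchdisk "/data/node2/scratch" {pools ""}
    }
  }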
by throbinson
Mon Oct 27, 2008 1:27 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Dataset Management Utility Doesn't Work
Replies: 7
Views: 3648

The Data File Path in the config file? That is not correct. You need to point to the descriptor file. It is this file which will contain the config file paths to the resource disks for the dataset. The descriptor file will provide orchadmin on the command line or the Dataset Management tool in Desig...
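As a hedged illustration of working from the descriptor file rather than the data files (the dataset path here is hypothetical, and orchadmin subcommand names can vary by release, so treat these as assumptions):

  # report the schema and the data files the descriptor points to
  orchadmin describe /proj/datasets/customer.ds
  # remove the descriptor and its data files together
  orchadmin rm /proj/datasets/customer.ds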
by throbinson
Mon Oct 27, 2008 5:34 am
Forum: General
Topic: MQ Series Invoke DataStage Job.
Replies: 20
Views: 10396

Maybe I'm mis-reading the requirement. I've not done MQ in a while. Why not define a job and have it constantly run waiting for a message from the queue? As soon as a message arrives, it will be delivered to the job and processed. When you want to stop the job, schedule another DataStage job to inse...
by throbinson
Fri Oct 24, 2008 1:33 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: dynLUT files in /tmp
Replies: 1
Views: 942

They are the memory mapped files used when the lookup data in a job runs out of physical memory and starts getting swapped to virtual memory. The directory is configurable via the scratchdisk resource defined in the config file. Currently it appears to be /tmp or is missing altogether and the...
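For instance, a hedged sketch of the scratchdisk line inside a node definition that would move these spill files off /tmp; the /dsscratch path is a placeholder:

  resource scratchdisk "/dsscratch/node1" {pools ""}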
by throbinson
Fri Oct 24, 2008 1:14 pm
Forum: General
Topic: MQ Series Invoke DataStage Job.
Replies: 20
Views: 10396

Using a shell script is not a good plan. There is no need.
Use the MQ Stage. Why use a shell script? Did you need this to be realtime? No problem. Read the documentation.
by throbinson
Mon Oct 20, 2008 9:12 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Sequential file
Replies: 13
Views: 3717

If your files are not fixed width, then do a search on this: APT_IMPEXP_ALLOW_ZERO_LENGTH_FIXED_NULL.
The solution will be to do two things.
1. Add that env variable to your Project and set it to True
2. Define the default for any field that contains a NULL to "" (empty) at the line level, as in the sketch below.
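For point 2, a minimal schema-style sketch under assumed column names; the null_field='' property is what maps a NULL to an empty (zero-length) value on the flat file:

  record {final_delim=end, delim=','}
  (
    CUST_ID: int32;
    CUST_NAME: nullable string[max=30] {null_field=''};
  )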
by throbinson
Fri Oct 17, 2008 3:06 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Does Teradata API Stage Support Datatype - DATE
Replies: 4
Views: 3072

DATE!!! But no, the data pulled is not correct. It is as if DataStage converts it to an internal number but is using the wrong date format mask in doing so, therefore the OCONV back is not the correct date.
by throbinson
Fri Oct 17, 2008 8:22 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Does Teradata API Stage Support Datatype - DATE
Replies: 4
Views: 3072

Does Teradata API Stage Support Datatype - DATE

Teradata API Stage - DLL teradata.so, Version 1.2.4. DATE fields are ANSIDATE FORMAT yyyy-mm-dd. We routinely change the Table definition metadata of any DATE field to CHAR(10) because this appears to be the only way to handle a date field via the Teradata API stage. Can this be true or are we missi...
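As a hedged illustration of that CHAR(10) workaround on the database side (the table and column names are made up), Teradata's nested CAST with a FORMAT phrase is one common way to hand the date to the API stage as character data:

  SELECT CAST(CAST(order_date AS FORMAT 'YYYY-MM-DD') AS CHAR(10)) AS order_date_chr
  FROM   sales_orders;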
by throbinson
Mon Oct 13, 2008 1:44 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: XML - Zero output count
Replies: 5
Views: 1728

Could you explain what function against the data is being performed in the XSLT that cannot be done in the Transformer against parsed XML fields? I'm curious.
by throbinson
Tue Oct 07, 2008 5:01 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Automate metadata creation
Replies: 5
Views: 1921

I may be mis-reading this post but it seems like you have written an Excel macro to create a .dsx file for your table definitions. You are then asking why these .dsx files do not import into DataStage. If this is correct then the answer is easy: your reverse engineering (hack) of the DataStage export/im...
by throbinson
Mon Oct 06, 2008 5:32 am
Forum: General
Topic: Teradata Multiload--TENACITY Settings
Replies: 1
Views: 1873

Compare the Starting Job... Director log entries from the QA and Prod job runs to verify that the two parms are identically defined and passed to the job in both environments.
by throbinson
Wed Oct 01, 2008 5:27 am
Forum: General
Topic: DataSatage Job Names Convention
Replies: 4
Views: 1795

Each Job name must be unique within the Project, not just within its category. I think I read that you can't name it ROOT either.
by throbinson
Fri Sep 19, 2008 11:00 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: prevent file creation
Replies: 3
Views: 1485

There is kind of a neato hack that would work but it would involve some heavy derivation lifting and most likely is NOT worth it. Replace the Sequential File stage with a Folder stage with a column for the file name. This would create a "late binding" effect in that the file would not be...
by throbinson
Thu Sep 18, 2008 5:41 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Problem With parsing the XML file.
Replies: 7
Views: 1852

What is the XPath for the Period repetition key?
by throbinson
Tue Sep 16, 2008 7:20 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to auto-generate metadata?
Replies: 8
Views: 4603

You could build a DataStage job to query the Oracle system tables for the source table. When a change is detected the job would create the proper schema files. Another DataStage job, RCP enabled, would read the source Oracle table and write out the fields to the just generated schema file in a Seque...
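A hedged sketch of the detection step, assuming a hypothetical table name; ALL_TAB_COLUMNS is the Oracle dictionary view such a job could query to spot column changes and drive the schema-file generation:

  SELECT column_name, data_type, data_length, nullable
  FROM   all_tab_columns
  WHERE  table_name = 'CUSTOMER'
  ORDER BY column_id;

The RCP-enabled reader job would then pick up a generated schema file along these (illustrative) lines:

  record (
    CUST_ID: int32;
    CUST_NAME: nullable string[max=50];
  )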