Long time coming, I know. Long story. Sorry.
No, the command by itself does not work. I did have to change the routine being compiled, and the .h file within the routine, to get it to compile without -D_LARGE_FILES and -qlonglong, but when I added the options back I got all the same errors.
Search found 7 matches
- Wed Oct 06, 2010 2:39 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: BuildOPs and large files
- Replies: 2
- Views: 1635
- Wed Oct 06, 2010 2:24 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Handling nulls in BuildOps
- Replies: 2
- Views: 1773
Figured out handling nulls in BuildOps
BuildOp Noob mistake
If you look at the generated code, the process defines a method for you called <fieldname>_null(). It returns true if the field is null, false if it is not.
Two days trying to figure out something that was already done for me ... UGH!
- Wed Oct 06, 2010 10:30 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Handling nulls in BuildOps
- Replies: 2
- Views: 1773
Handling nulls in BuildOps
I am having problems handling nulls in a BuildOp I have created. My input schema has four columns, one of which, column_value, is defined as column_value:nullable string[max=255] {null_field='',prefix=2}. I am trying to move this column to my output schema, to a column with the same name, which is de...
- Sat Nov 07, 2009 2:54 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: BuildOPs and large files
- Replies: 2
- Views: 1635
BuildOPs and large files
I am trying to use a BuildOp called MultipleFileWriter in a parallel job on an AIX server. It works a lot like the Folder stage in a server job: it takes two columns, the first being a file name and the second being a record, and writes multiple files. Unfortunately, it does not seem to be large fil...
- Fri Jun 22, 2007 9:25 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Pivot stage giving very poor performance
- Replies: 8
- Views: 3438
I have written wrapped stages that pivot tall and flat and run in parallel mode. I wonder what you consider good performance. By wrapping a simple C routine I can pivot a flat fixed file, 1.65MM records, 24 columns into 14.9MM rows (no blank columns) with key_id, column number, and value on an AIX b...
- Thu Jun 07, 2007 12:50 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Reverse Pivot
- Replies: 9
- Views: 2585
The databases I work with are deep, not wide, so I do this a lot. Doing this in a parallel manner with a generic routine is difficult. I have done this with a C routine plug-in. The trick is to get all the columns for the same record into the same partition. I do this by hash/sort partitioning on the...
- Mon May 21, 2007 2:36 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Pivot stage question
- Replies: 13
- Views: 4985
Pivoting
I am a newbie at this, and this is my first post, so please bear with me. I do a tremendous amount of pivoting in the work I do, and have not been very impressed with the Pivot stage. I have 2000+ input formats that I have to standardize into one of a few standard formats. These input formats range from 3...