Search found 51 matches
- Mon Mar 11, 2013 12:42 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Vertica connection performance
- Replies: 7
- Views: 6885
Hi Buzzy, how much data did you use while performing these tests? We had a huge volume, 15-16 million rows to start with. It took around 12 minutes for the DataStage job to write to the sequential file and then another 1-2 minutes for the COPY command to load into the tables, using a 4 virtual node config...
- Thu Mar 07, 2013 4:39 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Vertica connection performance
- Replies: 7
- Views: 6885
Sure, Buzzy... this connector you are using, is it provided by Vertica itself?
Yeah, I also thought the extra I/O would cause a problem, but we did face some issues while using the ODBC connector to Vertica, especially if you need to truncate the table before loading.
Do let me know your results.
- Thu Mar 07, 2013 3:38 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Vertica connection performance
- Replies: 7
- Views: 6885
Re: Vertica connection performance
I was also using Vertica with DataStage version 8.7. The way we tackled the loads was to create a load-ready, pipe-delimited sequential file and then use Vertica's COPY command to load the data. Believe me, ODBC can never give you that performance. You can try this approach and comp...
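The file-then-COPY approach described above can be sketched as follows. This is a minimal illustration, not the poster's actual script: the host, user, table, and file path are hypothetical, and it assumes Vertica's `vsql` command-line client is available on the ETL host.

```python
import subprocess

def build_vsql_copy(table, data_file, delimiter="|"):
    """Build a vsql invocation that bulk-loads a delimited file via COPY.

    DIRECT writes straight to disk storage (good for large batches);
    ABORT ON ERROR fails the whole load instead of skipping bad rows.
    """
    copy_sql = (
        f"COPY {table} FROM LOCAL '{data_file}' "
        f"DELIMITER '{delimiter}' DIRECT ABORT ON ERROR;"
    )
    # -c runs a single SQL statement; host/user below are placeholders
    return ["vsql", "-h", "vertica-host", "-U", "dbadmin", "-c", copy_sql]

cmd = build_vsql_copy("stage.sales_fact", "/tmp/sales.dat")
print(cmd[-1])
# On a host with vsql installed, the load itself would be:
# subprocess.run(cmd, check=True)
```

The point of the two-step design is that COPY bypasses row-by-row ODBC inserts entirely, which is where the performance gain comes from.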
- Wed Oct 24, 2012 10:37 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: junk data issue using sequential stage
- Replies: 9
- Views: 2744
Got the root cause of this issue - the file schema that I loaded into the target Sequential File stage was imported from a mainframe copybook (it had level numbers, groups, etc.). I think somehow the column definitions were the culprit behind the junk display of data, as the definition was for an EBCDI...
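The copybook mix-up above is easy to reproduce: the same byte values mean entirely different characters in EBCDIC and in a single-byte ASCII-compatible encoding, so a schema imported for EBCDIC data renders ASCII data (or vice versa) as junk. A small sketch using Python's built-in `cp037` EBCDIC codec:

```python
# "ABC123" encoded in EBCDIC (code page 037)
raw = bytes([0xC1, 0xC2, 0xC3, 0xF1, 0xF2, 0xF3])

as_ebcdic = raw.decode("cp037")    # correct interpretation
as_latin1 = raw.decode("latin-1")  # same bytes read as single-byte "ASCII-ish" text

print(as_ebcdic)  # ABC123
print(as_latin1)  # accented junk, nothing like the real data
```

In other words, the bytes were never corrupted; only the declared character set in the column definitions was wrong.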
- Tue Oct 23, 2012 11:53 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: junk data issue using sequential stage
- Replies: 9
- Views: 2744
The input to the job is a sequential pipe-delimited file, which I can read properly at the UNIX prompt. So this rules out the possibility of EBCDIC data. Moreover, the input consists of integer and decimal values and the transformations are also simple, so I doubt any non-readable ASCII character c...
- Tue Oct 23, 2012 8:22 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: junk data issue using sequential stage
- Replies: 9
- Views: 2744
I am using vim <file name> in UNIX. Normally, for an ASCII file, we can see the actual data, but in this case I am seeing the entire data as junk. Following is an example: 201210|^@Z|^@^@'^S|^@^@^@^\|^@^@^@~| |^@^@^@^@|^@^@^@^@| |^@^@^@^@| | |^@^@^@^A|^@^B|^@^@^@^@|^@^@^@^@|^@^@^@^@|^@^@^@^@|^@^@^@^@|^@...
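The `^@` sequences vim shows are NUL bytes (0x00), a strong hint that binary or packed numeric fields ended up in a supposedly text file. A quick hypothetical check (the function name and sample bytes are illustrative, not from the original post) is to count NUL and other control bytes in the file:

```python
import os
import tempfile

def scan_for_binary(path, sample=4096):
    """Count NUL bytes and other non-printable control bytes in a file.

    A nonzero NUL count in a 'text' file means binary fields are present
    (vim displays NUL as ^@). Tab, LF, and CR are allowed as normal text.
    """
    with open(path, "rb") as f:
        data = f.read(sample)
    nuls = data.count(0)
    ctrl = sum(1 for b in data if b < 32 and b not in (9, 10, 13))
    return nuls, ctrl

# Demo with bytes resembling the snippet above ('^@' = 0x00, '^S' = 0x13)
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"201210|\x00Z|\x00\x00'\x13|\n")
    name = tmp.name
print(scan_for_binary(name))
os.unlink(name)
```

An equivalent shell-side check would be `od -c <file>`, which prints control bytes explicitly instead of vim's caret notation.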
- Tue Oct 23, 2012 6:21 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: junk data issue using sequential stage
- Replies: 9
- Views: 2744
junk data issue using sequential stage
Hi, I am facing an issue while creating a sequential pipe-delimited file using a DataStage job. My job design is: seq stg -> trfm -> seq stg. The target is pipe delimited. The job runs fine without any warning; even when I view data from the View Data utility in the Sequential File stage, I can see it properly, but when...
- Wed Aug 29, 2012 4:54 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance Impact on using ODBC to extract and load
- Replies: 3
- Views: 1774
Performance Impact on using ODBC to extract and load
Hi All, we are planning to use the ODBC Connector stage for extraction and loading of data from a Vertica database. We are using DS version 8.7. My concern is that we will be extracting a huge volume of data, ~270 million rows with around 30-40 columns, so job performance would take a big hit. Can...
- Fri Jul 13, 2012 1:45 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Lookup Operator Error
- Replies: 4
- Views: 2716
- Thu Jul 12, 2012 11:53 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Lookup Operator Error
- Replies: 4
- Views: 2716
There is no OCCURS clause for the key column. In fact, there are only 2 columns in the file. Also, I tried reading the file in a separate job, wrote it to a dataset, and used the dataset for the lookup. But again it gave the same fatal error. The key column is a char column. It might have some junk c...
- Thu Jul 12, 2012 8:23 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Lookup Operator Error
- Replies: 4
- Views: 2716
Lookup Operator Error
Hi, I am getting the following error in my job: main_program: Syntax error: Error in "lookup" operator: Error in output redirection: Error in output parameters: Error in modify adapter: Error in binding: Could not find type: "subrec", line 330. I am doing a lookup in my job where the...
- Thu Jul 12, 2012 6:33 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Flattening a fixed width file
- Replies: 12
- Views: 5038
- Mon Jul 02, 2012 6:39 am
- Forum: General
- Topic: Monitor window in Director showing job stopped
- Replies: 6
- Views: 4462
- Mon Jul 02, 2012 5:48 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Parameterizing where clause in db2 connector
- Replies: 5
- Views: 4874
- Mon Jul 02, 2012 4:24 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Creating History records using SCD stage
- Replies: 2
- Views: 1342