Search found 100 matches
- Mon Aug 04, 2008 1:03 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: write hash file in v8.0
- Replies: 5
- Views: 2718
write hash file in v8.0
We have problems with hashed files in Server 8.0. DataStage is installed on Unix with RAID 5, and the jobs have performance problems: a job starts at about 3000 rows/sec, but when the hashed file reaches 260 MB it drops to 200 rows/sec; then, after about 10 seconds, it runs with good performance again. Is there some...
- Fri Jul 18, 2008 5:21 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: DB2API stage
- Replies: 2
- Views: 1330
- Thu Jun 19, 2008 7:19 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: xml input stage AIX
- Replies: 8
- Views: 3871
- Thu Jun 19, 2008 12:27 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: xml input stage AIX
- Replies: 8
- Views: 3871
- Wed Jun 18, 2008 6:25 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: xml input stage AIX
- Replies: 8
- Views: 3871
- Wed Jun 18, 2008 5:46 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: xml input stage AIX
- Replies: 8
- Views: 3871
xml input stage AIX
Hello, I have a problem reading XML files of up to 49 MB. My job looks like this: Folder Stage -> XML Input -> Transformer -> Seq File. The problem is that memory is exceeded: (Abnormal termination of DataStage. Fault type is 4. Layer type is BASIC run machine. Fault occurred in BASIC program DSP.ActiveRun...
- Wed May 21, 2008 6:33 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Date Extraction
- Replies: 8
- Views: 1783
- Wed May 21, 2008 6:21 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Job reads its own link counts
- Replies: 12
- Views: 6552
- Wed May 21, 2008 6:17 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: DS v8 Installation Problems with Oracle 10
- Replies: 3
- Views: 2626
- Wed May 21, 2008 6:07 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Date Extraction
- Replies: 8
- Views: 1783
- Wed May 21, 2008 6:04 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Big file process with Aggregator stage - New big problem !
- Replies: 12
- Views: 5800
A solution could be to group the 30000 files into n files of a prudent size, and then load those into Oracle. 1. To group the files you can use the Unix command cat file* > file_grouped. 2. I think it may be better to group in the database than in DataStage, so first load into a table and then do: insert into table2 select columkey...
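The grouping step described above can be sketched as a small shell session. The file names and directory are invented for illustration; the real case would have 30000 input files, and the subsequent load into Oracle (e.g. via SQL*Loader) is not shown here.

```shell
#!/usr/bin/env bash
# Sketch of step 1 from the post: concatenate many small files into one
# larger file with cat, so the loader deals with fewer, bigger files.
set -e
demo=$(mktemp -d)
cd "$demo"

# Stand-ins for the many small source files (names are hypothetical).
printf 'a,1\n' > file1
printf 'b,2\n' > file2
printf 'c,3\n' > file3

# The command suggested in the post: group all parts into one file.
cat file* > file_grouped

# The grouped file now holds every row from the parts.
wc -l file_grouped
```

With very large file counts, `cat file*` can exceed the shell's argument-length limit; `find . -name 'file*' -exec cat {} + > file_grouped` is a common workaround.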
- Wed May 21, 2008 4:40 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: converting multiple rows to single row!!
- Replies: 2
- Views: 1123
- Wed May 21, 2008 4:12 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: kill process ids
- Replies: 10
- Views: 4261
- Wed May 21, 2008 3:47 am
- Forum: General
- Topic: Dynamic metadata
- Replies: 11
- Views: 3638
- Wed May 21, 2008 2:41 am
- Forum: General
- Topic: Dynamic metadata
- Replies: 11
- Views: 3638
If you are using text files, you can try using a Seq file with only one column defined, then use substring functions to load the specific column values into variables, and then work with them depending on which file you are processing. Example: file column varchar 255; in the Transformer you can use variables: for file1...
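The idea above — read each record as a single wide column, then slice out fields with substring functions — can be sketched in shell as an analogue of the DataStage Transformer stage variables. The record layout and field offsets below are invented for illustration.

```shell
#!/usr/bin/env bash
# Hedged sketch: treat the whole record as one column, then carve out
# fields with substring expansion, as one would with substring functions
# in a Transformer. Offsets are hypothetical, not from the original post.
line='JOHNDOE   1985NY'

# bash ${var:offset:length} plays the role of the substring function.
name=${line:0:10}    # positions 1-10: name field
year=${line:10:4}    # positions 11-14: birth year
state=${line:14:2}   # positions 15-16: state code

echo "name=$name year=$year state=$state"
```

The per-file branching the post mentions ("depends on what file you are treating") would simply apply a different set of offsets for each known layout.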