Search found 8 matches
- Thu May 17, 2012 11:55 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: type conversion from type "raw[max=16]" to type "...
- Replies: 0
- Views: 905
type conversion from type "raw[max=16]" to type "...
I'd like to convert a varbinary to a bigint. The input data looks like {00 00 00 00 00 00 00 00 00 00 00 00 05 f5 e1 02}; the target value would be 100000002. Using a server job, the following transformation gives the correct result: Oconv(Oconv(%Arg1%, "MY"), "MCX"). I'd like to use ...
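As a quick sanity check of that conversion outside DataStage (a sketch in Python, not the parallel-job solution the poster is after): the 16-byte varbinary is just a big-endian integer, so decoding the hex bytes directly reproduces the expected value.

```python
# Sketch (not DataStage code): decode the post's 16-byte varbinary as a
# big-endian integer, which is what the nested Oconv calls compute.
raw = bytes.fromhex(
    "00000000000000000000000005f5e102"  # the {00 00 ... 05 f5 e1 02} bytes
)
value = int.from_bytes(raw, byteorder="big")  # big-endian assumed
print(value)  # 100000002
```

The last four bytes, 0x05F5E102, carry the whole value; the leading zeros contribute nothing.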
- Thu May 03, 2012 4:28 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Slow running job
- Replies: 15
- Views: 20603
I ended up opening a ticket with IBM after several runs with varying times. They took a look at the previous job logs and recommended changing the partitioning method on the Lookup Stage from AUTO to HASH. Test 1 -- Run job pointing to a SQL Server 2005 database on a Windows 2003 64bit Ent R2 SP 2 s...
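For readers unfamiliar with why the HASH recommendation helps: hash partitioning routes every row with the same key value to the same partition, so a partitioned lookup finds its matching reference rows locally instead of depending on whatever AUTO chose. A minimal illustration in Python (the function name and hashing choice are my own, not DataStage internals):

```python
# Illustration (not DataStage code): hash partitioning assigns a row to a
# partition based solely on its key, so matching stream and reference rows
# always land together. hashlib gives a hash that is stable across runs.
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Return a deterministic partition number for a key."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# A stream row and a reference row with the same key hash to the same partition:
print(partition_for("CUST-42", 4) == partition_for("CUST-42", 4))  # True
```

The point of the IBM recommendation is exactly this determinism: with AUTO, the engine may pick a method that leaves lookup data misaligned with the stream.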
- Mon Apr 30, 2012 12:30 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Slow running job
- Replies: 15
- Views: 20603
We completed a test on a new VM with a clean DataStage 8.7 install on Windows 2008 R2 (64-bit). For the job, an ODBC DSN was created using c:\Windows\SysWOW64\odbcad32.exe (Driver: SQL Server Native Client 10). Test 1 -- Run job pointing to a SQL Server 2005 database on a Windows 2003 64bit Ent R2 SP 2 server with ...
- Thu Apr 26, 2012 10:49 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Slow running job
- Replies: 15
- Views: 20603
Thanks for the suggestions. On the Insert (last stage in the job), the SQL Server Enterprise stage has an Insert Array Size = 20000 using a Write Method of Upsert. All the Lookups (SQL Server Enterprise stages) have a lookup type of Normal. In our last test, we modified the pagefile size on the TEST server...
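As background on the Insert Array Size setting mentioned above: the stage buffers rows and ships them to SQL Server in arrays of the configured size, cutting the number of round trips. A rough sketch of the batching idea in Python (illustrative only, not the stage's actual implementation):

```python
# Sketch (not the SQL Server Enterprise stage itself): rows are buffered
# and sent in chunks of `array_size`, so 50,000 rows with an array size of
# 20,000 cost 3 round trips instead of 50,000.
from typing import Iterable, Iterator, List

def batches(rows: Iterable[tuple], array_size: int = 20000) -> Iterator[List[tuple]]:
    """Yield rows in chunks of at most `array_size`."""
    buf: List[tuple] = []
    for row in rows:
        buf.append(row)
        if len(buf) == array_size:
            yield buf
            buf = []
    if buf:
        yield buf  # final partial chunk

print([len(b) for b in batches(((i,) for i in range(50000)))])  # [20000, 20000, 10000]
```

Larger arrays mean fewer round trips but more memory per flush, which is why the setting is worth tuning rather than maximizing.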
- Wed Apr 25, 2012 5:43 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Slow running job
- Replies: 15
- Views: 20603
Slow running job
A parallel job was re-written by one of the ETL developers to replace an old 7.5 job that had performance issues once we completed the upgrade to 8.7. The parallel job reads a flat file and does 28 lookups to SQL Server split across 7 Lookup Stages, then a transform stage, and then inserts rows to a SQL Server...
- Mon Jan 19, 2009 5:40 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Inconsistent job performance
- Replies: 7
- Views: 3871
Just to post the final findings on this issue. After many tests, performance monitoring, and explaining how DataStage works... a registry change (by the Tech folks) to 'fix' a previous FTP issue we were having has been found to cause problems with file caching, clearing memory caches and some othe...
- Sat Dec 20, 2008 9:34 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Inconsistent job performance
- Replies: 7
- Views: 3871
The jobs so far that have had performance issues use only flat files and hash files. We haven't had issues extracting from DB2 on z/OS, likely due to our job design (one job per table/query extracted to a flat file). We've been monitoring the read/write I/O on the various files and the processes aren'...
- Sat Dec 20, 2008 1:35 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Inconsistent job performance
- Replies: 7
- Views: 3871
Inconsistent job performance
Hi, we have a DataStage server (7.5x2) with 2 projects: SysTest and UAT. I have an issue where job performance is not consistent across these projects, although job design, job sequencing, file sizes, directory structure - everything appears to be the same. I've confirmed the projects are set up the sa...