Search found 307 matches
- Mon Dec 17, 2007 5:38 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Problems with job control using sequences that run jobs
- Replies: 35
- Views: 9947
Additional information
A friend suggested that I provide the structure of a job sequence to give you an idea of what I am doing with each table, so here goes: JobControl starts | BatchNumber job runs | Job Sequence starts | Check # of rows in Source | |----exit showing success if zero ---------------------------|---------...
- Mon Dec 17, 2007 4:59 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Restartability in DataStage 7.0
- Replies: 14
- Views: 5592
- Mon Dec 17, 2007 4:57 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Restartability in DataStage 7.0
- Replies: 14
- Views: 5592
- Mon Dec 17, 2007 4:54 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Problems with job control using sequences that run jobs
- Replies: 35
- Views: 9947
Problems with job control using sequences that run jobs
Hi all! Hopefully, this is a legitimate problem and not just one of those instances of me having another mental speedbump... :oops: Here's my situation: I am using Ken Bland's JobControl utility to run my data transfer process for 192 tables from our OLTP system to staging tables for each client's dat...
- Mon Dec 17, 2007 4:12 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Comparing Two Datasets
- Replies: 4
- Views: 1810
Depending on how much data is involved with each row in the sequential files, follow Arnd's suggestion about using the hashed file, but only put the PK fields in there, this will make your hashed file smaller and possibly speed things up as well. Once again, this depends on the amount of data you ar...
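The keys-only lookup described above can be sketched outside DataStage as well. Here is a minimal Python analogue of a PK-only hashed-file comparison, where file paths, delimiter, and key positions are all assumptions for illustration:

```python
import csv

def build_pk_lookup(path, key_cols, delimiter=","):
    """Load only the primary-key fields of the reference file into a set,
    the analogue of a hashed file that holds just the PK columns."""
    keys = set()
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter=delimiter):
            keys.add(tuple(row[i] for i in key_cols))
    return keys

def find_unmatched(path, key_cols, lookup, delimiter=","):
    """Stream the second file and collect rows whose keys are not in the lookup."""
    unmatched = []
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter=delimiter):
            if tuple(row[i] for i in key_cols) not in lookup:
                unmatched.append(row)
    return unmatched
```

Keeping only the key fields in the lookup keeps memory use proportional to key size rather than full row width, which is the same reason the PK-only hashed file stays small.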
- Mon Dec 17, 2007 4:07 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: SIMPLE DS job taking 90% CPU??
- Replies: 4
- Views: 1482
Just as an experiment, try landing the data instead of sending it directly from source to target in the same job. Send the source data to a sequential file and then put together a job to push the landed data to the target. If you are using the ODBC Stage, consider trying the RDBMS stage. I know it i...
- Mon Dec 10, 2007 9:57 am
- Forum: General
- Topic: DSD_SendEvent.B.
- Replies: 1
- Views: 1285
I haven't actually faced this problem, but it looks as though you need to add code that will provide better error control. Whether this is in the form of an additional stage in the sequence or a terminator stage or something, I'm not sure. Could you provide more information about how the sequence is...
- Mon Dec 10, 2007 9:44 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem in calling stored procedure
- Replies: 2
- Views: 1254
I have found issues with using the Run Stored Procedure stage as well. My solution has just been to use a before or after SQL call in a workable place. For example, if I need to run a procedure before I process another job then in that job, I will add the code that would normally call an SP from wit...
- Wed Nov 21, 2007 2:26 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: ExecCommand
- Replies: 17
- Views: 4190
- Wed Nov 21, 2007 1:46 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Commas in data value...How to fix the existing file ?
- Replies: 5
- Views: 1902
Is the field that has the problem consistently in the same position, or do multiple fields have the issue? If it is always the 17th field, for example, then you could use a little Perl script to find rows that have too many commas and then remove the comma after the beginning of that problem field. ...
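Assuming the stray comma always lands inside one known field (the 17th in the example above), the repair script could look something like this sketch in Python rather than Perl; the column count and field index are assumptions to be adjusted for the real file:

```python
EXPECTED_FIELDS = 20   # assumed total column count of the file
PROBLEM_FIELD = 16     # zero-based index of the 17th field

def repair_line(line, expected=EXPECTED_FIELDS, problem=PROBLEM_FIELD):
    """Remove the extra comma inside the problem field.

    A row with one comma too many splits into expected+1 parts, so we
    merge the problem field with the piece that follows it (dropping
    the spurious comma) until the field count is right again.
    """
    parts = line.rstrip("\n").split(",")
    while len(parts) > expected:
        parts[problem] = parts[problem] + parts[problem + 1]
        del parts[problem + 1]
    return ",".join(parts)
```

Rows that already have the expected field count pass through unchanged, so the script can safely be run over the whole file.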
- Tue Nov 20, 2007 3:07 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Schema file error
- Replies: 7
- Views: 6041
Just for grins, try it this way: record ( Col1:string[5] { delim='ws', quote=double }; Col2:string[5] { delim='ws', quote=double }; ) I don't know how finicky Parallel is on format of the file. I am assuming that the database defaults field definitions to nullable (if that's not the case then add it...
- Tue Nov 20, 2007 2:52 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Problem loading text starting with date
- Replies: 5
- Views: 1271
Something else you might try is landing the data. Rewrite the job to drop the data into a sequential file using pipe (|) delimiters (unless your data contains that character). Then load to your target from the sequential file. This should take care of the issue with your dates. I also believe it wil...
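As a rough illustration of the landing step, here is a small Python sketch that re-lands comma-delimited data as a pipe-delimited file; the file paths are placeholders, and the point is simply that embedded commas and date strings survive unchanged once the delimiter is one the data never contains:

```python
import csv

def land_as_pipe(src_path, dest_path):
    # Read the source with standard CSV rules (so quoted fields with
    # embedded commas parse correctly), then write it back out with a
    # pipe (|) delimiter for the downstream load job.
    with open(src_path, newline="") as src, \
         open(dest_path, "w", newline="") as dest:
        writer = csv.writer(dest, delimiter="|")
        for row in csv.reader(src):
            writer.writerow(row)
```

Note that fields containing commas no longer need quoting in the landed file, since the comma is not the delimiter there.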
- Tue Nov 20, 2007 2:40 pm
- Forum: DSXchange Testimonials
- Topic: Some things are just "what the doctor ordered"
- Replies: 8
- Views: 35673