Search found 39 matches
- Sat May 12, 2007 8:10 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Delimiters and MFF stage in Mainframe canvas
- Replies: 4
- Views: 2415
Thank you, Ray. The Delimited Flat File stage doesn't allow multiple record formats (header, data and trailer records), and the MFF stage doesn't allow delimiters!! Thanks for pointing out the documentation. I tried ASCII delimiter 168 in the delimiter column of the Delimited Flat File stage; it says it's invalid .. and...
- Fri May 04, 2007 12:44 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Delimiters and MFF stage in Mainframe canvas
- Replies: 4
- Views: 2415
Delimiters and MFF stage in Mainframe canvas
I am developing mainframe jobs in DS 390. I have a requirement to read an input file with this format: 1) Header - Data - Trailer; 2) Data records delimited by ASCII 168. Q1: Since the record has multiple formats, I decided to use the MFF stage, but it doesn't support delimited files, only fixed widt...
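As a side note, outside of DataStage the shape of this parsing problem can be sketched in a few lines: split each record on the ASCII 168 delimiter and route it by a record-type indicator. The file layout, the 1-byte type indicator, and the field names here are assumptions for illustration, not the poster's actual CFD.

```python
# Sketch (not DataStage): parse records whose fields are separated by
# ASCII 168 and which carry multiple layouts (header/data/trailer).
# The leading 1-byte record-type indicator is an assumption.
DELIM = chr(168)  # ASCII/latin-1 168 is the '¨' character

def classify(line):
    """Split a record on the delimiter and route it by its type byte."""
    fields = line.rstrip("\n").split(DELIM)
    kind = {"H": "header", "D": "data", "T": "trailer"}.get(fields[0], "unknown")
    return kind, fields

kind, fields = classify("D" + DELIM + "ACC001" + DELIM + "100.50")
print(kind, fields)  # data ['D', 'ACC001', '100.50']
```

A real multi-format reader would then map each `kind` to its own column definitions, which is exactly what the MFF stage's per-record-type metadata does for fixed-width files.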
- Thu Apr 19, 2007 12:56 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Multi format flat file - handling occurs clause
- Replies: 3
- Views: 3089
Yes, exactly. I had to apply a filter; I did, and it works correctly now. => Also I wanted to get the 20 rows in one output file (flattened) => and no. of occurs * no. of rows (like 6 occurs * 20 rows = 120) in the other output file (normalised). I had to use 2 separate jobs to achieve this. One with flatte...
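The flatten-versus-normalise arithmetic above (20 rows vs 6 * 20 = 120 rows) can be sketched outside DataStage like this. The record shape and field names are assumptions: 20 data records, each carrying an OCCURS 6 TIMES group of amounts.

```python
# Sketch of FLATTEN vs NORMALIZE handling of an OCCURS 6 TIMES group.
# Input shape is an assumption: 20 records, each with 6 occurrences.
records = [{"key": i, "amounts": [i * 10 + j for j in range(6)]}
           for i in range(20)]

def flatten(recs):
    # One output row per record: occurrences become amount_1..amount_6 columns.
    return [{"key": r["key"],
             **{f"amount_{j + 1}": a for j, a in enumerate(r["amounts"])}}
            for r in recs]

def normalize(recs):
    # One output row per occurrence: 20 records * 6 occurs = 120 rows.
    return [{"key": r["key"], "occurrence": j + 1, "amount": a}
            for r in recs for j, a in enumerate(r["amounts"])]

print(len(flatten(records)), len(normalize(records)))  # 20 120
```

This also shows why two jobs (or two output links with different handling) are needed: the two outputs have different grain, one row per record versus one row per occurrence.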
- Thu Apr 19, 2007 12:51 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Generating abends (Failures) from Datastage canvas in DS 390
- Replies: 3
- Views: 2503
- Fri Apr 13, 2007 11:37 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Generating abends (Failures) from Datastage canvas in DS 390
- Replies: 3
- Views: 2503
Generating abends (Failures) from Datastage canvas in DS 390
Is there a way to generate an abend or a sysout from DS 390 jobs?
I would like to check a condition in my transformer, and if it isn't
satisfied, I want my job to fail with an error message.
Is this possible in the DataStage canvas?
Please suggest
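For context, the general pattern behind "fail the job on a condition" (independent of how DS 390 exposes it) is a validation step that returns a nonzero code, which the surrounding JCL or scheduler treats as a failed step. This is only a sketch of that pattern; the return-code value 8 and the message wording are assumptions.

```python
# Sketch of the generic fail-on-condition pattern (not DataStage-specific):
# a step that reports a nonzero return code when validation fails, so the
# calling JCL/scheduler marks the step as failed.
import sys

def validate(row_count, expected):
    """Return 0 on success, a nonzero code (8 here, by convention) on failure."""
    if row_count != expected:
        print(f"ERROR: row count {row_count} != expected {expected}",
              file=sys.stderr)
        return 8
    return 0

rc = validate(120, 120)
print(rc)  # 0
```

A batch wrapper would typically pass this code to `sys.exit(rc)` so the step's condition code is visible to downstream JCL condition checks.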
- Thu Apr 12, 2007 8:22 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Multi format flat file - handling occurs clause
- Replies: 3
- Views: 3089
Multi format flat file - handling occurs clause
I am new to mainframe jobs... I have a similar situation. My mainframe source file has header, data and trailer records, so I am using a Multi Format Flat File stage and using constraints to identify the records, and it works fine. The data record has an occurs clause - 6 times - and I select flatten ...
- Thu Apr 12, 2007 11:56 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Multi Format Flatfile
- Replies: 9
- Views: 5382
And here are my CFDs...
* DATA EXTRACT RECORD
01 ABC-CLIENT-ACCOUNT.
   05 ABC-CONTROL-SECTION.
      10 ABC-SYSTEM-ID         PIC X(03).
      10 ABC-ACCOUNT-NUMBER    PIC X(12).
      10 ABC-NUM-OF-CLIENTS    PIC 9(02).
      10 ABC-LEVEL-3-4-IND     PIC 9(01).
         88 ABC-LEVEL-3-AMA    VALUE '3'.
         88 ABC-LEVEL-4-EQS    VALUE '4'.
      10 ABC-ACCOUNT-STATUS...
- Thu Apr 12, 2007 10:09 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Multi Format Flatfile
- Replies: 9
- Views: 5382
I am new to mainframe jobs... I have a similar situation. My mainframe source file has header, data and trailer records, so I am using a Multi Format Flat File stage and using constraints to identify the records, and it works fine. The data record has an occurs clause - 6 times - and I select flatten ...
- Tue Apr 10, 2007 2:43 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: multiple record structure flat file
- Replies: 16
- Views: 9805
Thanks again, Ray. Yes, this seems to be the easy, usual approach. But again, once I get the counts - processed and trailer counts - I don't have Job Control here; all I have is the canvas. How do I do the decision making, where I can compare and call the child job which actually does the record ...
- Tue Apr 10, 2007 1:44 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: multiple record structure flat file
- Replies: 16
- Views: 9805
Thanks, Ray. Yes, we can also use DSGetLinkInfo, I believe. But these techniques require the whole file to be processed once to get the record count. I have to get the counts first; if they don't match, I need to error out and not process the file. Is there any way to achieve this... Yes, this is mo...
- Mon Apr 09, 2007 11:59 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: multiple record structure flat file
- Replies: 16
- Views: 9805
We have an almost similar requirement and we are planning to use the Multi Format Flat File stage. Is it okay?? Also I have a requirement to check the record count, compare it with the trailer record, and abort if there's a mismatch. I am new to mainframe jobs; I can imagine using wc -l in unix and compare o...
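The trailer-count check described above is, in essence: count the detail records, read the declared count from the trailer, and fail on a mismatch. A minimal sketch, assuming a 1-byte record-type prefix ('H'/'D'/'T') and an 8-digit count in positions 2-9 of the trailer (both layout details are assumptions, not the poster's actual file):

```python
# Sketch: compare the detail-record count against the count declared
# in the trailer record. Record-type prefix and trailer layout are
# assumptions for illustration.
def check_counts(lines):
    """Return True when the 'D' record count matches the trailer's count."""
    data = sum(1 for ln in lines if ln.startswith("D"))
    trailer = next(ln for ln in lines if ln.startswith("T"))
    expected = int(trailer[1:9])  # 8-digit zoned count, positions 2-9
    return data == expected

sample = ["Hheader", "Dxx", "Dyy", "T00000002"]
print(check_counts(sample))  # True
```

In a batch setting the same check would run as a pre-step (the `wc -l` idea, minus the header/trailer lines), so the file is rejected before the main load processes it.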
- Mon Mar 12, 2007 12:42 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datasets location - Disk resource type in config file
- Replies: 5
- Views: 2656
thanks
Thanks, Sumit, much appreciated. Will get back if there are any queries.
- Wed Mar 07, 2007 9:00 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datasets location - Disk resource type in config file
- Replies: 5
- Views: 2656
Datasets location - Disk resource type in config file
Hi
We define a resource type "disk" with the location of datasets in the APT config file.
How is it different from, or the same as, the location we specify in the individual Data Set stages inside the job?
Any ideas?
Thanks,
Sankar
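For reference, the "resource disk" entries live per node in the APT configuration file and name the directories where the parallel engine writes a dataset's actual data segment files; the path given in a Data Set stage names only the small descriptor (.ds) file. A minimal sketch of a one-node configuration (host name and paths are assumptions):

```
{
  node "node1"
  {
    fastname "etlhost"
    pools ""
    resource disk "/data/datasets" {pools ""}
    resource scratchdisk "/data/scratch" {pools ""}
  }
}
```

So the two locations are different things: the stage's path is the descriptor the job refers to, while the resource disk directories are where the data itself lands.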
- Mon Mar 05, 2007 5:01 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Documentation on orchestrate
- Replies: 5
- Views: 2904
no luck yet
Thanks Kumar.
I am still not able to find it in the 7.1 documentation.
Any ideas?
Thanks,
Sankar
- Mon Mar 05, 2007 4:26 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Documentation on orchestrate
- Replies: 5
- Views: 2904
advpjdev.pdf
Hi,
I am not able to find the Advanced Parallel Job Developer's Guide in the product documentation (client software, Docs folder, or dsbooks.pdf).
Please let me know where it is available.
Many thanks,
Sankar