Execute PKZIP on mainframe

Posted: Mon Apr 28, 2014 8:43 am
by leathermana
What possible approaches are there to create a job or UNIX script that can ZIP an existing flat file on a mainframe z/OS system (all display format, no packed or binary data) in preparation for FTPing to our DataStage server? PKZIP is installed on the mainframe. We are required to do this strictly through DataStage, with no JCL jobs stored on the mainframe unless they are created and maintainable through DataStage. Is this doable?

Posted: Mon Apr 28, 2014 8:49 am
by FranklinE
I would need to know more about the reasons for the restrictions. What is your scheduling tool? Is it possible to use its command line to run PKZIP in z/OS? Do you have the DataStage for COBOL implementation?

I'm suspicious. When I see requirements like this, I wonder if those making them actually understand DataStage. Host systems have their utilities, and they run natively under standard implementations. My first choice here would be JCL; I wouldn't take it off the table.

Posted: Mon Apr 28, 2014 9:06 am
by qt_ky
Caution: We avoid using PKZip on the mainframe because we have seen it corrupt too many ZIP files.

Think about this too: If your DataStage server has access to the mainframe file, such as on a shared drive (not sure that's possible), then your job could just go ahead and read the file.

Posted: Mon Apr 28, 2014 9:20 am
by FranklinE
Eric, I didn't know that about PKZip. Thanks.

It leads to another question: what compression protocols are they using in z/OS? If they have a standard DASD setup, they should already be storing data in compressed formats.

Posted: Mon Apr 28, 2014 9:20 am
by leathermana
The reason for the restrictions is that at some point DataStage was sold to the funders as a complete do-all replacement for the pieced-together set of processes currently running, which is turning into a maintenance nightmare. My suggestion that ZIPping code could be tacked onto the existing (and to-continue-to-exist) JCL that extracts the data into a flat file has been rejected. That appears to them to be a compromise of their goal to eliminate the current piecemeal process. I don't really get it, since in my limited understanding that part of the code would be maintenance free, not needing to deal with any data structure changes, etc. I am just wondering if there is a way to do this, connecting to their system and executing a set of commands through a script ... ? Or .... ? We don't have the Mainframe job type available to us and don't know what its capabilities would be. As a shot in the dark I have tried seeing if executing commands through an FTP connection might work. Not that I've found so far.

Posted: Mon Apr 28, 2014 10:07 am
by leathermana
They have been using PKZIP for years, apparently with no corruption issues. We don't have shared-drive access to the mainframe, and FTPing the uncompressed test file takes 98 minutes, which is not acceptable in the timeframe we have to do several of these. I know nothing about the compression protocols or DASD, but I do know they have found it necessary to PKZIP their files in the current process for faster FTPing.

Posted: Mon Apr 28, 2014 10:56 am
by qt_ky
I should clarify that PKZIP on z/OS does not corrupt files every time. We have run into corruption multiple times, however, on large files. It could be that our PKZIP version is older or has a defect. If you plan to use it, test it first on a variety of file sizes.

I don't know if they compress anything on our mainframe or not; it's like a black box to me. Because of the problems we have seen, we transfer files off the mainframe uncompressed. Transfers take longer, but they also give us data that is not corrupted.

If you can trigger a compression command remotely, that would be better than a file-share scenario. If your DataStage server issues a zip command, then the DataStage server is where the compression processing takes place. That means the file would travel over the network to your server, get compressed in chunks, and the compressed bytes would travel back over the network again. Then you would FTP the file to your server, transferring the same data multiple times...

Posted: Mon Apr 28, 2014 11:07 am
by FranklinE
Alden, your first need is an ally on the mainframe side who does have the technical knowledge and access, and is willing to be sympathetic to the realistic limitations of DataStage rather than how it was previously "sold".

You're in a bad place, and have every right to be frustrated. I have the requisite mainframe expertise but no knowledge of the details of your host systems. It comes down to knowing the right questions to ask, and an insider is your only hope for effective help with that.

Good luck.

Posted: Tue Apr 29, 2014 8:51 am
by leathermana
Just found what I think will be the answer to my requirement. The FTP server on z/OS systems supports the command SITE FILETYPE=JES. This enables a file of JCL to be transferred to the mainframe via FTP and sent directly to JES (the Job Entry Subsystem) for execution. It also allows the job output to be retrieved. Here are some introductory resources for anyone finding this possibility useful:

1: http://pic.dhe.ibm.com/infocenter/zos/v ... tm#intfjes

2: http://www.lbdsoftware.com/Submitting_J ... ng_FTP.pdf

3: https://media.blackhat.com/us-13/US-13- ... You-WP.pdf

Of course I haven't got my JCL file figured out yet, but I know that the current process uses JCL to zip the file, so I'm pretty confident this will work. Thanks for all the input. Will post when I succeed (or fail).
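
Edit: Here is a rough, untested sketch of what I have in mind, driven from the UNIX side. The host name, login, and dataset names are all placeholders, and the PKZIP step is illustrative only; the actual program name, STEPLIB, and SYSIN syntax would need to be lifted from the JCL the current process already runs.

#!/bin/sh
# Sketch only: every name below is a placeholder for our real values.

# 1. Build the JCL locally. The PKZIP step is a guess at the shape of
#    the real one; copy the actual step from the existing extract JCL.
cat <<'EOF' > zipjob.jcl
//ZIPJOB   JOB (ACCT),'ZIP EXTRACT',CLASS=A,MSGCLASS=X
//* Illustrative PKZIP step -- verify the program name and SYSIN
//* syntax against the site's working JCL before trusting it.
//ZIPSTEP  EXEC PGM=PKZIP
//STEPLIB  DD DISP=SHR,DSN=PKWARE.LOADLIB
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
-ARCHIVE(YOURHLQ.EXTRACT.ZIP)
YOURHLQ.EXTRACT.FLAT
/*
EOF

# 2. Submit it through the z/OS FTP server's JES interface.
#    SITE FILETYPE=JES switches the session from file transfer to job
#    submission, so the PUT goes to the internal reader instead of a
#    dataset and the server replies with the assigned job ID.
ftp -n mainframe.example.com <<'EOF'
user TSOUSER TSOPASS
ascii
quote site filetype=jes
put zipjob.jcl
quit
EOF

The server's reply to the PUT includes the JES job ID (something like "It is known to JES as JOB12345"); while FILETYPE=JES is in effect, a GET against that job ID retrieves the job's spool output, which is how you could check that the zip step ran clean.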

Posted: Tue Apr 29, 2014 9:00 am
by FranklinE
I don't know if you need to find and purchase it or will find it online, but your go-to reference for JCL is "MVS JCL" by Doug Lowe, published by Mike Murach & Associates, Inc. Its syntax coverage and examples, through JES3, are comprehensive and amazingly readable. For novices, Chapter 2 is an excellent intro to mainframe infrastructure, including DASD.

Even now that I'm using the mainframe platform as a source or destination rather than doing ongoing development, it remains my reference and memory jogger.