Search found 40 matches
- Tue Mar 29, 2016 1:42 pm
- Forum: General
- Topic: Passing User variable to Exception Handler
- Replies: 11
- Views: 5909
Hi Chulett It's not parameters that I want to select. It's the user-variables, and it seems they are not in the External Parameter Helper list. This is what I want to do: I created a User-Variable, say CURRTIME, and populated it with the current date/time. It is then passed on to other jobs/scripts to b...
- Tue Mar 29, 2016 10:29 am
- Forum: General
- Topic: Passing User variable to Exception Handler
- Replies: 11
- Views: 5909
Passing User variable to Exception Handler
Hi We have this standard Exception Handler process like this: ExceptionHandler_Stage --> Email_Stage --> Terminator_Stage In the Email_Stage we can reference parameters in the subject field, like this: "MyJob Contained the following exception on #$DS_ENV#: #Exception_Handler.$ErrMessage#"...
- Tue Mar 29, 2016 10:10 am
- Forum: General
- Topic: ExeCmd always execute SERIALLY within a single Sequence Job
- Replies: 9
- Views: 4404
- Mon Mar 28, 2016 10:52 pm
- Forum: General
- Topic: ExeCmd always execute SERIALLY within a single Sequence Job
- Replies: 9
- Views: 4404
- Mon Mar 28, 2016 2:22 pm
- Forum: General
- Topic: ExeCmd always execute SERIALLY within a single Sequence Job
- Replies: 9
- Views: 4404
ExeCmd always execute SERIALLY within a single Sequence Job
Hi I have a single Sequence job. Within it are five independent Execute Command stages, each calling a shell script, which then calls a Hive (Hadoop) command. At the beginning of this Shell Script I immediately log the system-time; at the end I log the time also. This way I would know when the shell...
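A minimal sketch of the timing wrapper described above (log path, stand-in command, and timestamp format are assumptions, not from the original post; the real Hive call is shown only as a comment):

```shell
#!/bin/sh
# Log the system time immediately on entry, run the long-running
# command, then log the time again on exit. Comparing the START/END
# stamps across the five scripts shows whether they actually overlapped.
LOG=/tmp/hive_timing.log
: > "$LOG"
echo "START $(date '+%Y-%m-%d %H:%M:%S')" >> "$LOG"
# hive -e "..."          # the real Hive (Hadoop) command would go here
sleep 1                  # stand-in so the sketch is runnable as-is
echo "END   $(date '+%Y-%m-%d %H:%M:%S')" >> "$LOG"
```

If the five Execute Command stages truly ran in parallel, the START stamps in the five logs would cluster together instead of each START following the previous script's END.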
- Fri Apr 24, 2015 12:53 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: BDFS Row Column Number
- Replies: 3
- Views: 1912
BDFS Row Column Number
I would like to understand whether the "Row Column Number" option in the BDFS file input step is impacted by partitioning or other parallelism options, or if it will always produce row numbers matching the source file.
- Wed Aug 27, 2014 1:15 pm
- Forum: General
- Topic: JAQL part files
- Replies: 0
- Views: 1001
JAQL part files
When a job is optimized to use JAQL, the output files are broken up into a bunch of pieces labeled like part00001. Is there a way to have JAQL output a single file?
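One common way to collapse such part files after the fact is to concatenate them into a single file. The sketch below simulates the part files locally (all paths and contents invented for illustration); the commented line shows the equivalent merge on HDFS, assuming a standard Hadoop CLI:

```shell
# Simulate JAQL-style part files locally, then merge them into one file.
mkdir -p /tmp/jaql_out
printf 'row1\n' > /tmp/jaql_out/part00001
printf 'row2\n' > /tmp/jaql_out/part00002
cat /tmp/jaql_out/part* > /tmp/jaql_single.txt
# On HDFS the equivalent merge would be:
# hadoop fs -getmerge /path/to/jaql/output /local/single_file.txt
```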
- Mon Aug 25, 2014 9:59 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: JAQL add filename as column
- Replies: 0
- Views: 1494
JAQL add filename as column
I am using the option 'File Name Column = ?' in the BDFS stage, and I would like to optimize the job as JAQL, but this option does not seem to come across in the optimized code. Is there a way, in JAQL, to manage this scenario: Many files in a folder on Hadoop Read in all the rows from all the files...
- Fri Aug 01, 2014 12:55 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Read zip file
- Replies: 6
- Views: 5162
It looks to me like the Expand stage can only unpack a file that has been created in DataStage's own compressed format, is that correct? It cannot unzip an ordinary gzip file? I've tried using Sequential File stage and DataSet stage to read the zipped file and send to Expand, and it doesn't seem to ...
- Thu Jul 31, 2014 1:33 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Read zip file
- Replies: 6
- Views: 5162
Had not tried the External Source stage. That seems to be working, with command: hadoop dfs -cat /DL/INCOMING/GOOGLE_DFA/temp/aaa.gz | gunzip -c Was wondering why the Expand stage does not seem to work, though. Zipped files are simply put onto Hadoop for storage; we don't want to unzip them with the a...
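A runnable local sketch of the same stream-and-decompress pattern the External Source command uses (file name and contents invented for illustration):

```shell
# Create a gzip file, then stream-decompress it exactly as
# `hadoop dfs -cat ... | gunzip -c` does against HDFS: the compressed
# bytes are piped through gunzip without ever landing unzipped on disk.
printf 'hello from hdfs\n' | gzip -c > /tmp/aaa.gz
cat /tmp/aaa.gz | gunzip -c > /tmp/aaa_plain.txt
cat /tmp/aaa_plain.txt
```

The original file on storage stays compressed; only the stream fed to the downstream stage is plain text.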
- Wed Jul 30, 2014 3:54 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Read zip file
- Replies: 6
- Views: 5162
Read zip file
I would like to be able to read a compressed (probably gzip, maybe other) file directly into DataStage from Hadoop. What is the correct way to do this? I have tried the Expand stage, but have not been able to make it work. (getting error "The decode operator is being used on a non-encoded data set") W...
- Thu Jun 05, 2014 11:49 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: CSV file with embedded quotes and commas
- Replies: 4
- Views: 3079
CSV file with embedded quotes and commas
I have looked around on Google and this forum, as well as the documentation, and experimented for myself, and not found a solution. If there is no DataStage-native solution, I would like confirmation that that is the case. I do not want to see a command-line scripting solution to this problem. It s...
- Thu Jun 05, 2014 2:28 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Non-Padded Timestamp Format
- Replies: 13
- Views: 4884
- Wed Jun 04, 2014 9:35 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Variable Column Set in File
- Replies: 2
- Views: 1430
Variable Column Set in File
I have a file coming in which may have different column sets on different days. It has a header row, so it is possible to determine which columns exist on a given day, but I have not been able to find a way of getting the incoming file stage (HDFS stage) to understand the header and map the input co...
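A pre-processing sketch of the header inspection described above (sample file and column names are hypothetical): read only the header row and report where a needed column sits today, so a downstream job could be parameterised with that position.

```shell
# Build a sample daily file whose column set may vary day to day.
printf 'id,name,amount\n1,abc,10\n' > /tmp/daily_file.csv
# Read only the header, split it into one column name per line, and
# report the 1-based position of the 'amount' column.
head -1 /tmp/daily_file.csv | tr ',' '\n' | grep -n '^amount$' | cut -d: -f1 > /tmp/amount_pos.txt
cat /tmp/amount_pos.txt   # prints 3 for this sample header
```

If the column is absent that day, grep finds nothing and the position file is empty, which a wrapper script could treat as "column missing".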
- Mon Jun 02, 2014 5:02 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Non-Padded Timestamp Format
- Replies: 13
- Views: 4884