Search found 14 matches

by elsont
Fri May 16, 2014 8:50 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Decimals in RCP
Replies: 12
Views: 6382

The function below will give the log summary: DSGetLogSummary(JobHandle, DSJ.LOGINFO, StartDate, EndDate, 0). Loop through the log and get the event ID (EventId) of the line that starts with "main_program: Schemas:". Then use the function below to get all the schemas used in the job: LogSchemas = DSGetLogEntry(JobHandle,...
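A rough BASIC sketch of that approach, assuming the job is attached by name and that the event id is the first field of each summary entry (the exact field delimiters can vary by release), could look like this:

* Attach the job whose log we want to read (JobName is a hypothetical argument).
JobHandle = DSAttachJob(JobName, DSJ.ERRFATAL)
* Pull the informational log entries for the run window.
LogSummary = DSGetLogSummary(JobHandle, DSJ.LOGINFO, StartDate, EndDate, 0)
* Assumed layout: one entry per field mark, event id first within the entry.
For i = 1 To DCount(LogSummary, @FM)
   Entry = Field(LogSummary, @FM, i)
   If Index(Entry, "main_program: Schemas:", 1) > 0 Then
      EventId = Field(Entry, "\", 1)
      LogSchemas = DSGetLogEntry(JobHandle, EventId)
   End
Next i
Dummy = DSDetachJob(JobHandle)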
by elsont
Thu May 15, 2014 3:01 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Decimals in RCP
Replies: 12
Views: 6382

OK, I think IBM should provide an option in the Sequential File stage to write decimals without these extra zeros. I have put in a workaround: a routine to pull the metadata from the log and a job with a loop in a Transformer to remove the unwanted zeros.
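The zero-stripping part of that Transformer loop could be sketched roughly as below; column names are invented, and it assumes the decimal has already been read as a string such as "000000123.4500":

* Split the string value around the decimal point.
IntPart = Field(InVal, ".", 1)
DecPart = Field(InVal, ".", 2)
* Drop leading zeros from the integer part and trailing zeros from the fraction.
OutVal = Trim(IntPart, "0", "L")
DecPart = Trim(DecPart, "0", "T")
If DecPart <> "" Then OutVal = OutVal : "." : DecPart
If OutVal = "" Then OutVal = "0"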
by elsont
Thu May 15, 2014 2:38 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Decimals in RCP
Replies: 12
Views: 6382

Hi,
See the two queries below. PAY_AMT is a decimal and PAY_AMT_STRING is a string.
1) SEL PAY_AMT FROM TABLE
2) SEL PAY_AMT(FORMAT 'Z9') (VARCHAR(9)) AS PAY_AMT_STRING FROM TABLE
by elsont
Wed May 14, 2014 11:23 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Decimals in RCP
Replies: 12
Views: 6382

Thanks for the reply. I don't want to do any transformations; I just need to write the data from Teradata view(s) into the sequential file. We have many fields with data type decimal(18, 0) and decimal(11, 2). The file becomes big because of these extra zeros. Also, no one likes to see a lot of preceding zeros ...
by elsont
Mon May 12, 2014 9:54 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Decimals in RCP
Replies: 12
Views: 6382

Decimals in RCP

Hi, I have a generic RCP-enabled job (Teradata Connector -> Copy stage -> Sequential File stage) to extract data from Teradata into a text file. It accepts a SELECT query at run time and creates the file. The issue I am facing is with decimal columns, where DataStage adds zeros before and after. I would...
by elsont
Tue Dec 11, 2012 1:19 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Problem with seperator in timestamp field
Replies: 4
Views: 3047

Re: Problem with seperator in timestamp field

DataStage won't validate delimiters in time/date/timestamp fields automatically. If you want to validate them, read them as strings, write code to validate them in a Transformer, and then reject or convert to timestamp accordingly.
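One hedged way to sketch that Transformer logic (link and column names are invented, and the format string is only an example) is:

* Stage variable: flag whether the incoming string parses as a timestamp.
svIsValidTS = IsValid("timestamp", lnkIn.TS_STRING)
* Output link derivation, used with the constraint svIsValidTS = 1:
lnkOut.TS_COL = StringToTimestamp(lnkIn.TS_STRING, "%yyyy-%mm-%dd %hh:%nn:%ss")
* Reject link constraint: svIsValidTS = 0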
by elsont
Tue Dec 11, 2012 1:13 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: left join with between date condition
Replies: 5
Views: 4240

Then duplicate the input stream into two using a Copy stage. Do the first lookup on the first stream and the second lookup on the second stream. Now funnel them back into one stream and remove duplicates.
by elsont
Tue Dec 11, 2012 12:19 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Address Shuffle
Replies: 8
Views: 4560

Re: Address Shuffle

I will try to explain using one example. Suppose your record is like the one below: "Name Address State". Now you want to shuffle Name and Address within the state. Ans: Split the record into two streams, 1: Name + State and 2: Address + State. Now add a new column "Order" to both streams and us...
by elsont
Tue Dec 11, 2012 11:54 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to read special character in Datastage
Replies: 1
Views: 5363

Re: How to read special character in Datastage

If you are seeing these special characters as '?' in your target table/file (don't use the DataStage data viewer to validate), then try changing the NLS settings (try AMERICAN_AMERICA.AL32UTF8, AMERICAN_AMERICA.WE8PC850, or the NLS of your database).
by elsont
Tue Dec 11, 2012 11:49 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Job hanging at CDC stage
Replies: 2
Views: 2497

Hi jweir, I saw your solution but it doesn't look good to me. It will work correctly only if all of your stages are running sequentially (and then it is not a good design, as you are not using the parallel capability).
by elsont
Tue May 15, 2012 8:16 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Parse xml which has hierarchical data
Replies: 8
Views: 9993

Re: Parse xml which has hierarchical data

Try setting the element at the 5th level as the KEY (repetition element).
by elsont
Sun Sep 19, 2010 1:08 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Commit after some records in ODBC stage
Replies: 6
Views: 3678

Re: Commit after some records in ODBC stage

Hi, I am using the ODBC stage and I want to commit after every 1000 records. What environment variable should be used, and what is the use of Array size? And if I commit every 1000 records and the job gets aborted, then when I start it once again, will it start from the beginning or from where it is aborte...
by elsont
Sun Sep 19, 2010 12:56 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Archiving file sets
Replies: 2
Views: 1941

Re: Archiving file sets

I have to archive a large File Set and keep the last 7 days' data. As I understand it, a .fs file is just a link to where the actual data is stored, and this file can be moved around at will, as the location reference is absolute and not relative. So I can just rename the .fs file and move it to an...
by elsont
Sun Sep 19, 2010 10:34 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Looking for design help
Replies: 3
Views: 1847

Re: Looking for design help

Hi, You can use a multi-node config file if the table has row-level locking. In this case use hash partitioning on the key. My suggestion is to avoid hitting the database multiple times for the same record again and again. Try to create a single record (unique key) instead of hitting the table again and ag...
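One common way to get that single-record-per-key behaviour is a change-detection pattern in a parallel Transformer; a sketch with made-up names, assuming the data is hash-partitioned and sorted on the key, is:

* Stage variables, evaluated top to bottom on each row:
svIsNewKey = If lnkIn.KEY_COL <> svPrevKey Then 1 Else 0
svPrevKey = lnkIn.KEY_COL
* Output link constraint: pass only the first row seen for each key,
* so the database is hit once per key instead of once per row:
svIsNewKey = 1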