Search found 206 matches

by rwierdsm
Wed Jun 28, 2006 8:37 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Unable to open csv file format
Replies: 22
Views: 12866

Vijayindukuri,

That will not work. You have to open your xls in Excel and 'Save As' CSV (MS DOS). Renaming the file leaves it in Excel format, which, as you have seen, is unreadable by a text editor.

Rob
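A quick way to confirm the point above: a renamed .xls is still a binary Excel file, not text. This is a minimal Python sketch (not part of the original post) that checks the first bytes of a file against the standard OLE2 (.xls) and ZIP (.xlsx) magic numbers; a genuine CSV starts with plain text instead.

```python
# Minimal sketch: distinguish a binary Excel file from a real text CSV
# by its leading bytes. Signatures are the standard OLE2 and ZIP magics.
def looks_like_excel(first_bytes):
    return (first_bytes.startswith(b"\xd0\xcf\x11\xe0")  # legacy .xls (OLE2 container)
            or first_bytes.startswith(b"PK"))            # .xlsx (ZIP container)

print(looks_like_excel(b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"))  # True
print(looks_like_excel(b"NUMBER,CATEGORY\n"))                 # False
```

Reading the first eight bytes of the suspect file and passing them to this check would show whether 'Save As' was actually done or the file was merely renamed.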
by rwierdsm
Tue Jun 27, 2006 6:38 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Unable to open csv file format
Replies: 22
Views: 12866

Re: Unable to open csv file format

Vijayindukuri,

Ensure that you have the Line Termination radio button set to DOS Style (CR LF).

If you have it set to UNIX Style, the file itself will have a line terminator of CR LF (0x0D0A) but DS will be expecting only LF (0x0A). DS then treats the CR (0x0D) as an extra column.

Rob
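The terminator mismatch above can be demonstrated outside DataStage. This Python sketch (an illustration, not DataStage code) splits a DOS-format line the way a UNIX-style reader would, showing the stray CR polluting the last field:

```python
# A DOS-format line ends in CR LF (0x0D 0x0A). A reader that splits only
# on LF keeps the stray CR attached, which then looks like extra data.
dos_line = "a,b,c\r\n"

unix_split = dos_line.split("\n")[0].split(",")    # CR left attached
dos_split  = dos_line.split("\r\n")[0].split(",")  # CR LF consumed together

print(unix_split)  # ['a', 'b', 'c\r'] -- trailing CR pollutes the last field
print(dos_split)   # ['a', 'b', 'c']
```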
by rwierdsm
Fri Jun 23, 2006 9:16 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Schema File
Replies: 5
Views: 4373

Ray, Emma, Thanks and Merci Beaucoup for the input. "When you open the table definition in the Repository and click on the Layout tab, then choose the Parallel option, is the schema displayed according to your expectations there? You might see, for example, record_delim='\n' in this listing." Nope, it's n...
by rwierdsm
Wed Jun 21, 2006 6:34 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Schema File
Replies: 5
Views: 4373

Schema File

All, By searching the forum on the string 'schema file' I determined how to generate some examples of schema files; however, I'm not getting what I expected. I have a file defined in a Column Import stage that has a record delimiter string defined for the Record Level and a Delimiter string defi...
by rwierdsm
Mon Jun 19, 2006 6:39 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Problems reading a file with large columns
Replies: 7
Views: 2533

I believe only single-character delimiters are handled. In the end, it was OK with the 6-character record delimiter string ||^^||, so long as I didn't also define the column delimiter as the same characters (at this point of the flow, I'm treating the whole input record as a single column). Later o...
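To illustrate the approach described above, this Python sketch (an illustration only; the sample data is made up) splits a stream on the 6-character record delimiter ||^^|| while leaving each record as a single column, so embedded commas survive intact:

```python
# Split records on the multi-character delimiter '||^^||' without
# also splitting columns -- each record stays a single string.
raw = "first record||^^||second, with a comma||^^||third record"
records = raw.split("||^^||")

print(len(records))  # 3
print(records[1])    # second, with a comma
```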
by rwierdsm
Thu Jun 15, 2006 8:44 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Have to remove 0's from varchar field
Replies: 6
Views: 3770

Assuming the data is as shown in your post, you will have to separate the data based on the ',' delimiter before implementing the solutions suggested by Arnd and Guru. Rebuild the string, putting the ',' back in after the manipulation.

Rob
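The split-manipulate-rebuild pattern above can be sketched in Python (an illustration, not DataStage BASIC; the sample value and the assumption that the manipulation strips leading zeros are mine, based on the topic title):

```python
# Separate on ',', transform each piece, then rebuild the string
# with the ',' put back in -- here assuming the manipulation is
# stripping leading zeros from each field.
src = "010,002,300"
parts = [p.lstrip("0") or "0" for p in src.split(",")]  # keep "0" if all zeros
result = ",".join(parts)

print(result)  # 10,2,300
```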
by rwierdsm
Thu Jun 15, 2006 7:56 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: ChangeTimestamptoyear
Replies: 7
Views: 1736

Re: HI SDguru2B

Kumar,

Assuming format YYYY-MM-DD, Craig's suggestion breaks down to

Year = in.timestamp[1,4]
Month = in.timestamp[6,2]

Are you looking for more than that?

Rob
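For readers less familiar with the BASIC substring syntax: in.timestamp[1,4] takes 4 characters starting at position 1 (1-based). The equivalent Python slices, assuming a YYYY-MM-DD timestamp (the sample value is made up):

```python
# DataStage BASIC substrings are [start, length], 1-based;
# Python slices are [start:stop], 0-based.
ts = "2006-06-15 07:56:00"
year  = ts[0:4]  # equivalent of in.timestamp[1,4]
month = ts[5:7]  # equivalent of in.timestamp[6,2]

print(year, month)  # 2006 06
```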
by rwierdsm
Thu Jun 15, 2006 6:56 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Atomic Upserts in Teradata
Replies: 0
Views: 986

Atomic Upserts in Teradata

All, Has anyone had any experience using the Upsert capabilities of the Teradata MultiLoad Stage, using either MultiLoad or TPump? The Teradata documentation suggests that the transaction is quite efficient, having implemented a feature called Atomic Upserts. http://www.teradataforum.com/teradata_pd...
by rwierdsm
Thu Jun 15, 2006 6:28 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Problems reading a file with large columns
Replies: 7
Views: 2533

Ray, A brief history.... This is the file I have posted about earlier in the Server forum (when I was trying to solve our design issue in Server!). In short, we have a file with huge columns, 32k, and embedded weirdness, including terminators of all flavours and potentially any character you can thi...
by rwierdsm
Wed Jun 14, 2006 2:35 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Problems reading a file with large columns
Replies: 7
Views: 2533

Hi, Guru, Here it is.

Copy_of_Input_File,0: Error reading on import:
Copy_of_Input_File,0: Consumed more than 100000 bytes looking for record delimiter; aborting
Copy_of_Input_File,0: Import error at record 1.
Copy_of_Input_File,0: Operator's runLocally() failed.
APT_CombinedOperatorController,0: Op...
by rwierdsm
Wed Jun 14, 2006 1:06 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Problems reading a file with large columns
Replies: 7
Views: 2533

Re: Problems reading a file with large columns

It appears that there were two things going on. In the initial read, I was reading the whole input as a single column. I had defined a character string as the row delimiter and then the same string as the column delimiter; this is what DS was unhappy about. When the job aborted, the data was too big for...
by rwierdsm
Wed Jun 14, 2006 8:11 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: error in oracle plugin
Replies: 8
Views: 2990

sudhakar_viswa wrote: Hi rwierdsm,

It is nullable column.

Thanks,
sudhakar
Check to see if the row has a null date.
by rwierdsm
Wed Jun 14, 2006 6:48 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: error in oracle plugin
Replies: 8
Views: 2990

Re: error in oracle plugin

"can any one help me asap" Tsk, tsk, tsk, (like my sister used to say!) Sudhakar, you've been around long enough to know that will earn you a tongue lashing! Is your target field defined as not null? The message seems to indicate that you are loading null values into a not null column. Rob W
by rwierdsm
Wed Jun 14, 2006 6:42 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: First Character of source to First char of target
Replies: 5
Views: 1268

Re: First Character of source to First char of target

Saik, I think this is really more of a display issue in whatever tool is used to access the data in Oracle. However, if you wish to have the data left justified instead of right justified in the Oracle column, just concatenate a blank to your input column, i.e. TrgCol1 = SrcCol1 : ' ' Hope this help...
by rwierdsm
Wed Jun 14, 2006 6:03 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Problems reading a file with large columns
Replies: 7
Views: 2533

Problems reading a file with large columns

All, I'm having all kinds of fun with my first EE job. I'm trying to load an input file with the following structure:

NUMBER char(10)
CATEGORY varchar(25)
MISC_ARRAY1 varchar(5000)
RB_APPROVALS varchar(100)
RB_APPROVALS_BY varchar(500)
RBC_LOCATIONS varchar(200)
RBC_LOCATIONS_TASK varchar(500)
RBC_E...