Search found 27 matches

by Sandeep.pendem
Wed Jul 23, 2008 9:20 am
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

[quote="chulett"]Have you [b]confirmed[/b] a timeout issue or is it just a belief? I for one haven't heard of a "3 minute timeout" - a 3 minute egg, sure, but timeouts are usually more in the 30 to 60 minute range. IMHO, 3 would just be crazy and would affect more than this one j...
by Sandeep.pendem
Wed Jul 23, 2008 8:43 am
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

[quote="chulett"]:? Why do you believe you need to "dump 400+ million" records into a hashed file? Best Practice is to constrain your hashed builds to the incremental keys so you only have "just what you need" hashed ...[/quote] Hi, let me rephrase my question: our quer...
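The "constrain your hashed builds to the incremental keys" advice quoted above can be sketched outside DataStage as a plain dictionary build: filter the reference data down to only the keys present in today's incremental feed before hashing. This is a minimal illustration with made-up column values, not DataStage code.

```python
# Sketch of "constrain your hashed builds to the incremental keys":
# instead of hashing all 400M reference rows, hash only the rows whose
# key appears in today's incremental input. Data values are made up.

def build_incremental_lookup(reference_rows, incremental_keys):
    """Keep only reference rows whose key is in the incremental feed."""
    wanted = set(incremental_keys)
    return {key: value for key, value in reference_rows if key in wanted}

# Full reference data (stand-in for the 400M-row table).
reference = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]

# Keys arriving in today's incremental load.
incremental = [2, 4]

lookup = build_incremental_lookup(reference, incremental)
print(lookup)  # only the two rows the incremental load actually needs
```

The point of the pattern is that the lookup structure scales with the size of the incremental feed, not with the size of the full reference table.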
by Sandeep.pendem
Wed Jul 23, 2008 7:35 am
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

[quote="ray.wurlod"]Yes it is. You may need to create/convert your hashed files with 64-bit addressing, but that's OK. Such hashed files can theoretically support 19 million TB of data (but most operating systems or fi ...[/quote] Don't we have any option other than dumping the 400+ m...
by Sandeep.pendem
Tue Jul 22, 2008 2:06 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

[quote="satya99"]Job1: Dump data to hashed file (4 different hashed files)
Job2: Now perform your lookup operation[/quote]


We have 400 million records, so dumping to a hashed file is not an option for us.

Thanks,
by Sandeep.pendem
Tue Jul 22, 2008 1:43 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

[quote="chulett"][i][b]If[/b][/i] that is what is happening, you'd need to work with someone else - a [b]SysAdmin[/b] - to see what your options are. ...[/quote] Hi, thanks for the responses. I have worked with our sysadmin and he tried all the options as far as SQL Server 2005 is conc...
by Sandeep.pendem
Mon Jul 21, 2008 1:46 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

[quote="chulett"]Run the query [i]outside[/i] of DataStage so you know how long it will take. I suspect you have a network/firewall "inactivity timeout" issue here, but you need to know how long it will appear to be i ...[/quote] OK, so how do we overcome the network/firewall inacti...
by Sandeep.pendem
Mon Jul 21, 2008 1:26 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

[quote="chulett"]The [b]distinct[/b] clause adds the sorting/grouping I was asking about. When it "works fine on sql server" how long does it take to start returning rows with a "large" volume? ...[/quote] It doesn't return any rows as such... I have started the job more ...
by Sandeep.pendem
Mon Jul 21, 2008 1:16 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

[quote="chulett"]What kind of query - one that does sorting/grouping that would cause it to hold on to all the records until ready? You may have a network/firewall timeout issue if that's the case. ...[/quote] It's a simple query --> this query works fine on SQL Server: SELECT DISTINCT C.MMO...
by Sandeep.pendem
Mon Jul 21, 2008 1:06 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Unable to read high volume of records with ODBC (SQL Server)
Replies: 19
Views: 7267

Unable to read high volume of records with ODBC (SQL Server)

Hello, I have a simple mapping with the input as an ODBC stage (SQL Server as the database) and the output as a flat file (one-to-one mapping), with an intermediate Transformer stage: i/p (ODBC, SQL Server) ------ Xfm ------ o/p flat file. This job works fine when the incoming data is comparatively small (around ...
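The failure mode described in this thread, a large result set that never starts writing to the flat file, is commonly worked around by streaming rows in batches as they arrive rather than materialising the whole set. Below is a generic sketch of that pattern against a DB-API-style cursor (`fetchmany`); the `FakeCursor` is a stand-in for a real SQL Server/ODBC cursor, since this illustration has no live database behind it.

```python
# Sketch: stream a large result set to a flat file in fixed-size batches
# using the DB-API fetchmany() pattern, so rows are written as they
# arrive instead of being held in memory. FakeCursor stands in for a
# real ODBC cursor; batch size and separator are illustrative choices.
import io

def stream_to_flat_file(cursor, out, batch_size=10000, sep="|"):
    """Write rows from an open cursor to `out`, batch_size at a time."""
    total = 0
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:        # empty batch signals end of result set
            break
        for row in rows:
            out.write(sep.join(str(col) for col in row) + "\n")
        total += len(rows)
    return total

class FakeCursor:
    """Minimal stand-in for a DB-API cursor over a large result set."""
    def __init__(self, rows):
        self._rows = iter(rows)

    def fetchmany(self, size):
        batch = []
        for _ in range(size):
            try:
                batch.append(next(self._rows))
            except StopIteration:
                break
        return batch

buf = io.StringIO()
n = stream_to_flat_file(FakeCursor([(i, "x") for i in range(25)]), buf,
                        batch_size=10)
print(n)  # 25 rows written, in batches of 10, 10, and 5
```

With a real connection the same loop applies unchanged; only the cursor construction differs. It does not by itself fix a firewall inactivity timeout, but it does keep data flowing on the connection once the server starts returning rows.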
by Sandeep.pendem
Mon Jun 02, 2008 8:46 am
Forum: General
Topic: Remove duplicates in DataStage MVS Edition
Replies: 8
Views: 4344

Hi, I have already put a flat file after a Sort stage, followed by an Aggregator stage, and I still get the same sort error message. Below is the job design. Do I need two separate jobs, one with a Sort stage and the other job with an Aggregator stage, or anything specific? Flat file(i/p...
by Sandeep.pendem
Sun Jun 01, 2008 11:46 am
Forum: General
Topic: Remove duplicates in DataStage MVS Edition
Replies: 8
Views: 4344

Hi Ray, thanks for the support. I have tried using a Sort stage before an Aggregator stage, but it seems we can't have a Sort stage before an Aggregator stage: when I link a Sort stage followed by an Aggregator stage, it gives me a compilation error saying the input should be a file or relational table for the aggregat...
by Sandeep.pendem
Sat May 31, 2008 2:36 pm
Forum: General
Topic: Remove duplicates in DataStage MVS Edition
Replies: 8
Views: 4344

Remove duplicates in DataStage MVS Edition

Hi, I have a fixed-width file in a DataStage mainframe job. When I use an Aggregator stage, my job fails with a SORT error; despite using a Sort stage prior to the Aggregator stage, along with an intermediate fixed-width file, the job fails with the same error message. Can anyone tell me how to remove duplicate...
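The sort-then-deduplicate logic this thread is trying to build in the mainframe job can be sketched in plain Python: sort the fixed-width records on their key columns, then keep one record per key group. The 5-character key prefix is a made-up record layout for illustration only, not the poster's actual file format.

```python
# Sketch of sort-then-remove-duplicates on fixed-width records:
# sort on the key columns, then keep the first record of each key group.
# The leading 5-character key is an assumed layout for illustration.
from itertools import groupby

def dedupe_fixed_width(records, key_len=5):
    """Sort records on their leading key_len chars; keep one per key."""
    records = sorted(records, key=lambda r: r[:key_len])
    return [next(group)                       # first record of each group
            for _, group in groupby(records, key=lambda r: r[:key_len])]

data = [
    "00002SMITH   ",
    "00001JONES   ",
    "00002SMITH   ",   # duplicate key, dropped
    "00003BROWN   ",
]
print(dedupe_fixed_width(data))
```

The essential constraint, and the likely source of the SORT error being discussed, is that grouping only removes all duplicates when the input is fully sorted on the key first; `groupby` (like an Aggregator stage) only merges *adjacent* equal keys.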