
Posted: Tue Aug 29, 2006 10:50 am
by chulett
What exactly is "not recommended" that you're working to get rid of? Hashed files over 2GB are perfectly fine as long as they are 64BIT hashed files.

2000 or 2050 sounds a bit high to me... not sure if that would also cause an issue. We are supporting something like 15 projects with our T30FILE parameter setting of 500. Other opinions welcome!
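If you want a rough feel for how many dynamic (Type 30) hashed files your projects actually hold, counting the .Type30 markers on disk is one way to sanity-check a T30FILE setting. A minimal sketch, not from any product documentation, assuming Python is available on the server and using a made-up project path - bear in mind T30FILE governs how many dynamic files can be open concurrently, so an on-disk count is only a loose upper bound:

#!/usr/bin/env python
# Rough sketch: count dynamic (Type 30) hashed files under one or more
# DataStage project directories by looking for the ".Type30" marker file.
# T30FILE limits how many dynamic files can be open at once, so this
# on-disk count is only a loose upper bound for sizing it.
# The default project path below is a hypothetical example.
import os
import sys

def count_type30(root):
    """Return the number of directories under root that contain a .Type30 marker."""
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        if ".Type30" in filenames:
            count += 1
            dirnames[:] = []  # no need to descend into the hashed file itself
    return count

if __name__ == "__main__":
    projects = sys.argv[1:] or ["/opt/Ascential/DataStage/Projects/MyProject"]
    total = 0
    for project in projects:
        n = count_type30(project)
        print("%-50s %d" % (project, n))
        total += n
    print("Total dynamic hashed files found: %d" % total)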

BTW, what is your operating system?

Posted: Tue Aug 29, 2006 11:02 pm
by ray.wurlod
Hashed files can, if 64-bit addressing is used, be much bigger than 2GB if required.

Check whether "they" have any processes that are removing the .Type30 files, perhaps because the files are zero-length. If they do, oblige them to change that behaviour so that files called .Type30 are not deleted - it is that which is causing the corruption. Without the marker you don't have a hashed file any more; all you have is a directory, which will be substantially slower!
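To spot any casualties quickly, something along these lines would do - a rough sketch only, assuming Python on the server and the usual dynamic file layout of DATA.30 and OVER.30 alongside the hidden .Type30 marker; the project path is just an example:

#!/usr/bin/env python
# Sketch: flag directories that look like dynamic hashed files (they hold
# DATA.30 and OVER.30) but have lost their hidden .Type30 marker, which is
# the symptom described above. The default path is a hypothetical example.
import os
import sys

def find_damaged(root):
    """Yield directories containing DATA.30 and OVER.30 but no .Type30 marker."""
    for dirpath, dirnames, filenames in os.walk(root):
        names = set(filenames)
        if "DATA.30" in names and "OVER.30" in names and ".Type30" not in names:
            yield dirpath

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/opt/Ascential/DataStage/Projects/MyProject"
    for path in find_damaged(root):
        print("Missing .Type30 marker: %s" % path)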

Posted: Thu Aug 31, 2006 2:01 am
by TBartfai
So I found out the solution... :P :idea:

Ray was right: they had installed scripts that delete files during the night.
The script does not delete directories, though, so the hashed file's directory remained.

Very big thanks for all your ideas and help!

Posted: Thu Aug 31, 2006 3:18 am
by ray.wurlod
So did you take a big stick to "them" and urge them to desist? After all, "they" were all too ready to blame the ETL tool!

Posted: Wed Oct 18, 2006 2:04 pm
by chulett
Coming back to this as we just ran into this message on a new Server install. Compounding the troubleshooting was the fact that this new HP-UX based server was installed with DataStage 7.5.2 where all others are still on 7.5.1A. Testing and all that rot.

One particular hashed file would not create, and any attempt to do so resulted in the job aborting with the same 'floating point exception' noted early on. I'm still trying to track down exactly what caused the issue, but there were two things I noticed that were a little out of the ordinary:

1) Initial creation parameters were 'non-standard' - Large Record and Record Size had been changed and were equal.

2) A Varchar field was declared with a Length of 10,000.

Putting the hashed file parameters back to their defaults and changing the Varchar to a LongVarchar seems to have solved the problem. Hopefully either Support will figure out the why or I'll get time to go back and see which bit actually fixed it. In the meantime, it needed to be fixed NOW. :lol:

Posted: Wed Oct 18, 2006 3:12 pm
by ray.wurlod
Setting RECORD.SIZE triggers a calculation that sets GROUP.SIZE and LARGE.RECORD, and the problem is probably there. It's not a required tuning parameter and should therefore be left blank.