Rare Error Message

Post questions here relating to DataStage Server Edition in such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

You know what's not recommended and you are working to get rid of it? And hashed files over 2GB are perfectly fine as long as they are 64BIT hashed files.

A T30FILE setting of 2000 or 2050 sounds a bit high to me... not sure if that would also cause an issue. We are supporting something like 15 projects with our T30FILE parameter set to 500. Other opinions welcome!
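
For what it's worth, here's roughly what I mean, from the engine's TCL prompt - the file name is invented and you should check the exact syntax on your release before running anything:

   ANALYZE.FILE MyBigHash
   RESIZE MyBigHash * * * 64BIT

The first reports the file's structure, the second converts an existing hashed file to 64-bit addressing in place. T30FILE itself lives in the uvconfig file under $DSHOME; after changing it you regenerate the configuration (uvregen in $DSHOME/bin, with DataStage down) before the new value takes effect.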

BTW, what is your operating system?
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Hashed files can, if 64-bit addressing is used, be much bigger than 2GB if required.

Check whether "they" have any processes that are removing the .Type30 files, perhaps because those files are zero-length. If they do, oblige them to change that behaviour so that files called .Type30 are not deleted, because that is what is causing the corruption - you no longer have a hashed file; all you have is a directory, which will be substantially slower!
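
As a purely illustrative example of the sort of thing to hunt for, a nightly cleanup along the lines of

   find /data/staging -type f -size 0 -mtime +1 -exec rm {} \;

will quite happily remove the zero-length .Type30 marker from inside every hashed file directory it walks through. Adding an exclusion, for example

   find /data/staging -type f -size 0 -mtime +1 ! -name '.Type30' -exec rm {} \;

leaves the hashed files intact. The path and ages are made up; the point is the ! -name '.Type30' test.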
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
TBartfai
Premium Member
Posts: 15
Joined: Wed Jul 28, 2004 5:26 am

Post by TBartfai »

So I found the solution... :P :idea:

Ray was right: they had installed scripts that delete files during the night.
But the script does not delete directories, so the hashed file's directory remained.

Very big thanks for all your ideas and help!
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

So did you take a big stick to "them" and urge them to desist? After all, "they" were all too ready to blame the ETL tool!
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Coming back to this as we just ran into this message on a new Server install. Compounding the troubleshooting was the fact that this new HP-UX based server was installed with DataStage 7.5.2 where all the others are still on 7.5.1A. Testing and all that rot.

One particular hashed file would not create, and any attempt to do so would result in the job aborting with the same 'floating point exception' noted earlier. Still trying to track down exactly what caused the issue, but there were two things I noticed that were a little out of the ordinary:

1) The initial creation parameters were 'non-standard' - Large Record and Record Size had been changed and were set to the same value.

2) A Varchar field was declared with a Length of 10,000.

Putting the hashed file parameters back to their defaults and changing the Varchar to a LongVarchar seems to have solved the problem. Hopefully either Support will figure out the why or I'll get time to go back and see which bit actually fixed it. In the meantime, it needed to be fixed NOW. :lol:
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Specifying RECORD.SIZE triggers a calculation that sets GROUP.SIZE and LARGE.RECORD, so the problem is probably there. It's not a required tuning parameter and should therefore be left blank.
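
To make that concrete: under the covers a hashed file is created with a CREATE.FILE statement at the engine command line. A minimal sketch, with an invented file name and purely illustrative numbers, might be

   CREATE.FILE MyHash DYNAMIC GENERAL MINIMUM.MODULUS 1 GROUP.SIZE 2

versus

   CREATE.FILE MyHash DYNAMIC RECORD.SIZE 2000

where the second form asks the engine to derive GROUP.SIZE and LARGE.RECORD from the RECORD.SIZE you supply. Supplying RECORD.SIZE alongside your own LARGE.RECORD is where the conflict can arise, which is another reason to leave it blank.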
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.