
Posted: Thu Aug 06, 2009 1:27 pm
by dsuser_cai
"Is this hashed file the source for your surrogate key? If so this could be part of your issue. "

Yes, you are right. This file stores only the table name, the column name, and the surrogate key.

Posted: Thu Aug 06, 2009 2:04 pm
by chulett
Then I would ensure you do not cache the lookup to this hashed file you are updating and see if that resolves your problem.

Posted: Thu Aug 06, 2009 2:15 pm
by dsuser_cai
This is what we have right now:

When writing to the file:

Allow stage write cache -- not checked
Create file -- not checked

Under Update Options:

Both are unchecked.

When reading:

Pre-load file to memory:
Enabled --> is chosen. So do you want me to disable this and try?

Posted: Thu Aug 06, 2009 2:23 pm
by dsuser_cai
I changed the setting for "Pre-Load file to memory" to disabled and now the performance is very slow (100 rows/sec). :?

Posted: Thu Aug 06, 2009 2:27 pm
by chulett
Then try "Enabled, locked for update" and see if that gets any faster. You must do one or the other for this to work correctly. You *are* doing the update link in the same transformer as the reference lookup link, yes?

Posted: Thu Aug 06, 2009 6:39 pm
by dsuser_cai
Hi

I changed the "Pre-Load file to memory" to Disabled and the job didnt have any problem. It generated sequence in correct fashion.

I suspect the file might have been locked from getting updated, and that's why it was reading the most recent value and generating the error. Or the issue might have arisen due to insufficient memory / cache. Now I recall that my co-worker was also running a job that had lots of lookup stages and joins, and he consumed most of the resources. That could be the reason why my job was running slow.

Please share your comments.

Craig, I would like to thank you personally for your support. You helped me fix this issue. Thank you so much.

Posted: Thu Aug 06, 2009 9:23 pm
by chulett
No problem, that's why I do this. :wink:

I too would think this is memory / cache related. I would wager that your in-memory footprint for this hashed file grew until it would no longer fit into memory, which generated the informational message you saw. After that, perhaps your updates were only being stored on disk, and what you were pulling from memory was stale because it was no longer being updated properly. [/guess]

Anyway, all better now.

Posted: Thu Aug 06, 2009 10:17 pm
by ray.wurlod
Not quite.

Moderator: please move to server forum

There are no hashed files in parallel jobs.

Posted: Fri Aug 07, 2009 5:51 am
by chulett
:lol: You know the forum has very little relevance to what actually gets posted in it any more.

So... "not quite". Was that strictly a forum related response or does the Master have the real explanation of the issue waiting up his sleeve for a post-forum-move reveal?