Space Occupied by datastage logs

A forum for discussing DataStage® basics. If you're not sure where your question goes, start here.

Moderators: chulett, rschirm, roy

samyamkrishna
Premium Member
Posts: 258
Joined: Tue Jul 04, 2006 10:35 pm
Location: Toronto

Space Occupied by datastage logs

Post by samyamkrishna »

Hi,

I have created a job to check how good the source file is.

For every import error there will be a warning.

We read the logs and identify the columns which have junk data in them.

Now, if the source file has 6 million records and, in the worst case, all of them contain junk data, the Director log is going to be massive.

I wanted to do an impact analysis to see how this would affect the DataStage server.

Regards,
Samyam
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

pardon me... yuk.


Don't inject warning messages into your job execution log to validate the data quality of your input. I would rather create a separate file that contains the dirty records and one that contains the valid records.
You could have a COUNT value echoed to your job execution log if you wish; see the sketch below.
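
To illustrate the idea outside of DataStage, here is a rough sketch in Python; the file names and the numeric-field check are just placeholders for whatever your job actually validates:

Code:

valid_count = 0
reject_count = 0

# Route clean and dirty rows to separate files instead of the job log.
with open("source.txt") as src, \
        open("valid_records.txt", "w") as good, \
        open("reject_records.txt", "w") as bad:
    for line in src:
        fields = line.rstrip("\n").split(",")
        # Stand-in for the real import check (e.g. second field must be numeric).
        if len(fields) >= 2 and fields[1].strip().isdigit():
            good.write(line)
            valid_count += 1
        else:
            bad.write(line)
            reject_count += 1

# Only the two counts go to the log, not one message per bad row.
print(f"valid={valid_count} rejects={reject_count}")

In a job, the equivalent is a Transformer constraint (or a stage's reject link) routing rows to two output files.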

Always plan for the worst case, which here would be suddenly injecting 6 million rows of log information into your UV database. You do not want to do that.

What do you plan on doing with the warning messages?
Did your site turn off the default abort-after-50-warnings setting?
qt_ky
Premium Member
Posts: 2895
Joined: Wed Aug 03, 2011 6:16 am
Location: USA

Post by qt_ky »

You could also use Information Analyzer. If you don't have it, maybe ask for a trial.
Choose a job you love, and you will never have to work a day in your life. - Confucius
samyamkrishna
Premium Member
Posts: 258
Joined: Tue Jul 04, 2006 10:35 pm
Location: Toronto

Post by samyamkrishna »

Thanks for the info, guys.

We already have Information Analyzer; I will do it using that.

But I would like to know how much space the logs take up on UNIX or in XMETA.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Where are you logging to: the 'legacy repository' or XMETA? The former has a ~2GB maximum size per 'log', and if that is exceeded the log will corrupt.
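
To see what the legacy logs are costing you on disk: each job's log lives in an RT_LOGnnn hashed file under the project directory, so you can sum them up with something like the sketch below (the install path is an assumption; adjust it for your site):

Code:

import glob
import os

# Default engine install path is an assumption; change it for your site.
PROJECT_DIR = "/opt/IBM/InformationServer/Server/Projects/MyProject"

# Sum the file sizes inside each RT_LOGnnn hashed-file directory.
for log_dir in sorted(glob.glob(os.path.join(PROJECT_DIR, "RT_LOG*"))):
    total = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, names in os.walk(log_dir)
        for name in names
    )
    print(f"{os.path.basename(log_dir)}: {total / 2**20:.1f} MB")

The ~2GB ceiling comes from the 32-bit hashed file size limit, which is why a runaway log corrupts once it blows past it.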
-craig

"You can never have too many knives" -- Logan Nine Fingers
samyamkrishna
Premium Member
Posts: 258
Joined: Tue Jul 04, 2006 10:35 pm
Location: Toronto

Post by samyamkrishna »

DataStage is logging to XMETA.

If the job creates a log with 6 million warnings, is there a possibility of overloading the database?
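
For a back-of-the-envelope feel for the volume, assuming something like 1 KB stored per logged event (that per-event figure is a guess, not a measured value):

Code:

# 6 million warnings at an assumed average of 1 KB stored per event.
events = 6_000_000
bytes_per_event = 1_024  # assumption, not a measured value
total_gb = events * bytes_per_event / 2**30
print(f"~{total_gb:.1f} GB of log data for one run")  # ~5.7 GB

Even if the real per-event size is half that, it is still a few gigabytes of log rows for a single run.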
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Of course, and that would be true of any table. The obvious solution is to correct your job so it doesn't log extraneous messages, especially millions of them.
-craig

"You can never have too many knives" -- Logan Nine Fingers