How can I make a job "always on"?

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
ramsubbiah
Participant
Posts: 40
Joined: Tue Nov 11, 2008 5:49 am

How can I make a job "always on"?

Post by ramsubbiah »

Hi Everyone,

I have a requirement to develop a job that fetches files from the source queue whenever a file arrives, so I created the following job design for testing:

MQ----->TRANSFORMER------->Dataset

In the MQ stage I have set the following parameter values:

Wait time = -1
Message quantity = 1
End of data message type = 9999989
Record count = 0
End of wave = After
End of data = Yes


The job successfully fetches up to four files, after which it finishes. I later noticed that my configuration file has four nodes; if I change it to two nodes, it fetches only up to two files before finishing.

Note: after starting the job, I am placing the files on the queue one by one.

Please guide me on whether any other setting is needed to keep the job in always-running mode, so that it fetches each file whenever it arrives!


Thanks in advance.
Ram
Knowledge is Fair,execution is matter!
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Not sure if there is an issue? .....sounds like it is working great... ...but of course, for a real time job like this, you should have a single node configuration. Then, if you browse, you will get only one copy of the message, not multiples. There are only very rare cases when a real time job needs multiple nodes. The single node is also vastly simpler to debug and work with during your initial testing.
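For reference, a minimal single-node parallel configuration file (pointed to by APT_CONFIG_FILE) looks like the sketch below. The hostname and resource paths are placeholders and would need to match your own environment:

```
{
    node "node1"
    {
        fastname "your_server_hostname"
        pools ""
        resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
        resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
    }
}
```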

Also, during testing, I like to use a wait of 10 seconds, and a message count of 10..... in case I make any mistakes. Once I have the logic tested and working, move to -1 and a message type stop mechanism like you have now.

Ernie
Ernie Ostic

blogit!
Open IGC is Here! (https://dsrealtime.wordpress.com/2015/0 ... here/)
JRodriguez
Premium Member
Posts: 425
Joined: Sat Nov 19, 2005 9:26 am
Location: New York City
Contact:

Post by JRodriguez »

Hello Ramsubbiah,

Follow Ernie's suggestions, and please try increasing the number of messages you are expecting to unlimited:

Wait time = -1
Message quantity = -1
End of data message type = 9999989
Record count = 0
End of wave = After
End of data = Yes

Regards
Julio Rodriguez
ETL Developer by choice

"Sure we have lots of reasons for being rude - But no excuses
ramsubbiah
Participant
Posts: 40
Joined: Tue Nov 11, 2008 5:49 am

Post by ramsubbiah »

Hi Ernie/Julio,

Thanks for the information. I will retest the job based on your suggestions and get back to you shortly.

Thanks,
Ram
Knowledge is Fair,execution is matter!
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Is this an Information Analyzer question?
:?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ramsubbiah
Participant
Posts: 40
Joined: Tue Nov 11, 2008 5:49 am

Post by ramsubbiah »

eostic wrote:Not sure if there is an issue? .....sounds like it is working great... ...but of course, for a real time job like this, you should have a single node configuration. Then, if you browse, you will get only one copy of the message, not multiples. There are only very rare cases when a real time job needs multiple nodes. The single node is also vastly simpler to debug and work with during your initial testing.

Also, during testing, I like to use a wait of 10 seconds, and a message count of 10..... in case I make any mistakes. Once I have the logic tested and working, move to -1 and a message type stop mechanism like you have now.

Ernie
Hi Ernie,

No luck. If I run the job with a single-node configuration, it fetches the first file that arrives on the queue, after which the job finishes.

Later I set a wait time of 120 seconds and a message count of 2; in that case the job waits for two files to arrive, fetches both, and then goes to finished status.

But my requirement is that whenever a file arrives, the job should fetch it, load it into the target, and then wait for the next file. The job should never go to finished status; it should always be running.

Additional information: I am using DataStage version 8.0.1.

Please correct me if I am going about this the wrong way.
Knowledge is Fair,execution is matter!
ramsubbiah
Participant
Posts: 40
Joined: Tue Nov 11, 2008 5:49 am

Post by ramsubbiah »

ray.wurlod wrote:Is this an Information Analyzer question?
:? ...
Hi Ray,

Sorry, I created this topic under the wrong forum by mistake. Please tell me how to move it to the DataStage Parallel forum.

Thanks,
Ram
Knowledge is Fair,execution is matter!
ramsubbiah
Participant
Posts: 40
Joined: Tue Nov 11, 2008 5:49 am

Post by ramsubbiah »

JRodriguez wrote:Hello Ramsubbiah,

Follow Ernie's suggestions, and please try increasing the number of messages you are expecting to unlimited:

Wait time = -1
Message quantity = -1
End of data message type = 9999989
Record count = 0
End of wave = After
End of data = Yes

Regards
Hi Julio,

If I change the message quantity to -1, the job does not fetch any files even though I place a file in the queue. As soon as I post a file to the queue it disappears, yet the job does not fetch it. I am wondering where the file has gone!
Thanks,
Ram
Knowledge is Fair,execution is matter!
suman27
Participant
Posts: 33
Joined: Wed Jul 15, 2009 6:52 am
Location: London

Post by suman27 »

Hi ramsubbiah,

We had a similar requirement. If you run the MQ stage with a multi-node configuration file, it duplicates each message by the number of nodes. So we developed a separate job that reads from MQ and runs with a single-node configuration file; a second job then processes the data with a multi-node configuration file.

You can design a routine/script that triggers the reader job in a loop, continually checking the status of the job.
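A minimal sketch of such a controller script, assuming the `dsjob` command-line client is on the PATH; the project and job names are placeholders, not from the original post:

```shell
#!/bin/sh
# Hypothetical controller loop: re-runs a single-node MQ reader job each time
# it finishes cleanly, so the reader is effectively "always on".
PROJECT="${PROJECT:-MyProject}"
JOBNAME="${JOBNAME:-MQReaderJob}"

run_forever() {
    while : ; do
        # With -jobstatus, dsjob waits for the job to finish and its exit
        # code reflects the final job status (typically 1 = finished OK,
        # 2 = finished with warnings; codes can vary by version).
        dsjob -run -jobstatus "$PROJECT" "$JOBNAME"
        rc=$?
        case "$rc" in
            1|2) ;;  # clean finish: loop around and run the job again
            *)
                # Aborted (or dsjob itself failed): stop the loop so someone
                # can investigate; an aborted job usually needs a reset first.
                echo "Job $JOBNAME stopped with status $rc" >&2
                return "$rc"
                ;;
        esac
    done
}

# Start the loop only when the DataStage client tools are actually available.
if command -v dsjob >/dev/null 2>&1; then
    run_forever
fi
```

In practice you would run this under a scheduler or as a background daemon, and add logging and a reset step (`dsjob -run -mode RESET`) before restarting after an abort.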

Regards,
Suman.
Post Reply