22101 Segmentation Fault - core dumped & Parallel job re
Moderators: chulett, rschirm, roy
Hi All,
We are facing a performance problem in a job that executes a MERGE query against an Oracle database. I tried to modify the query to split the volume and run it in parallel. Below is the original query:
MERGE INTO #$DM_SCHEMA_PREFIX#PART_FACT a
USING #$DM_TMP_SCHEMA_PREFIX#TMP_PART_FACT b
ON ( a.dealer_key = b.dealer_key
AND a.part_key = b.part_key
AND a.time_dim_key = b.time_dim_key
AND a.purchase_order_number = b.purchase_order_number
AND a.sales_ord_inv_flag = b.sales_ord_inv_flag
AND a.invoice_line_number = '-1001'
AND a.purchase_order_number = '-1001'
AND a.order_line = '-1001')
WHEN MATCHED THEN
UPDATE
SET a.sales_quantity = b.sales_quantity,
a.lost_sales_quantity = b.lost_sales_quantity,
a.source_system_id = b.source_system_id,
a.update_user_id = b.update_user_id,
a.update_date = b.update_date, a.tstamp = b.tstamp
WHEN NOT MATCHED THEN
INSERT (a.dealer_key, a.part_key, a.time_dim_key, a.sales_quantity,
a.purchase_order_number, a.lost_sales_quantity, a.create_date,
a.source_system_id, a.create_user_id, a.tstamp, a.sales_ord_inv_flag)
VALUES (b.dealer_key, b.part_key, b.time_dim_key, b.sales_quantity,
b.purchase_order_number, b.lost_sales_quantity, b.create_date,
b.source_system_id, b.create_user_id, b.tstamp, b.sales_ord_inv_flag)
Modified query:
*************
MERGE INTO #$DM_SCHEMA_PREFIX#PART_FACT a
USING ( SELECT * FROM #$DM_TMP_SCHEMA_PREFIX#TMP_PART_FACT WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)-2,4)=0 ) b
ON ( a.dealer_key = b.dealer_key
AND a.part_key = b.part_key
AND a.time_dim_key = b.time_dim_key
AND a.purchase_order_number = b.purchase_order_number
AND a.sales_ord_inv_flag = b.sales_ord_inv_flag
AND a.invoice_line_number = '-1001'
AND a.purchase_order_number = '-1001'
AND a.order_line = '-1001')
WHEN MATCHED THEN
UPDATE
SET a.sales_quantity = b.sales_quantity,
a.lost_sales_quantity = b.lost_sales_quantity,
a.source_system_id = b.source_system_id,
a.update_user_id = b.update_user_id,
a.update_date = b.update_date, a.tstamp = b.tstamp
WHEN NOT MATCHED THEN
INSERT (a.dealer_key, a.part_key, a.time_dim_key, a.sales_quantity,
a.purchase_order_number, a.lost_sales_quantity, a.create_date,
a.source_system_id, a.create_user_id, a.tstamp, a.sales_ord_inv_flag)
VALUES (b.dealer_key, b.part_key, b.time_dim_key, b.sales_quantity,
b.purchase_order_number, b.lost_sales_quantity, b.create_date,
b.source_system_id, b.create_user_id, b.tstamp, b.sales_ord_inv_flag)
The difference between the two queries is the USING subquery we added; after including it, the job aborts with "Parallel job reports failure (code 139)":
SELECT * FROM #$DM_TMP_SCHEMA_PREFIX#TMP_PART_FACT WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)-2,4)=0
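For context, the MOD predicate carves the staging table into one of four slices by ROWID block number. Presumably three sibling jobs use remainders 1, 2, and 3 (those remainder values are an assumption; only the remainder-0 query appears in the post). A sketch of the full set:

```sql
-- Sketch (assumed): four source slices, one per parallel MERGE job.
-- Together, remainders 0..3 should cover every block of the table.
SELECT * FROM #$DM_TMP_SCHEMA_PREFIX#TMP_PART_FACT
WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) - 2, 4) = 0;  -- job 1
SELECT * FROM #$DM_TMP_SCHEMA_PREFIX#TMP_PART_FACT
WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) - 2, 4) = 1;  -- job 2
SELECT * FROM #$DM_TMP_SCHEMA_PREFIX#TMP_PART_FACT
WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) - 2, 4) = 2;  -- job 3
SELECT * FROM #$DM_TMP_SCHEMA_PREFIX#TMP_PART_FACT
WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) - 2, 4) = 3;  -- job 4
```

One caveat worth checking: Oracle's MOD keeps the sign of its first argument, so for block numbers below 2 the expression yields -2 or -1, and those rows would match none of the four predicates. Summing COUNT(*) across the four slices and comparing it to the table's total row count is a cheap sanity check.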
Is there any environment variable I need to set?
Thanks in advance.
Is there any more to the error message? The segmentation fault and abort message don't narrow down the issue at all. Also, does the new SELECT run correctly from your preferred tool?
[quote="ArndW"]Is there any more to the error message, the Segmentation fault and abort message don't narrow down the issue at all. Also, does the new SELECT function correctly from your preferred tool? ...[/quote]
Thanks for the response, ArndW. The same query works fine in Toad, but when I try to run it through a parallel job, the job aborts with code 139.
The 139 doesn't help much. Somewhere in the error message there will be a SQL code or other more specific message.
OP - You are on v7.x, right? I've not seen an error like this before, that's all, but I know a new error-code reference of some sort was produced for v8.
Furthermore, when you say
[quote]the job is aborting saying Parallel job reports failure (code 139)[/quote]
What is "saying" this? Is that a line out of the job log, or some calling script, or otherwise?
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
[quote="miwinter"]OP - You are on v[u]7[/u]x, right? I've not seen an error like this before that's all but I know there was a newly produced error code reference of some ilk for v8.
Furthermore, when you say
[quote]the job is aborting saying Parallel job reports failure (code 139)[/quote]
[u]What[/u] is "saying" this? Is that a line out of the job log or some calling script or otherwise?[/quote]
Yes, you are right, I am using v7.5.1, and the error code is from the job log. I tried using the ODBC stage but the job still aborts. I think there may be a problem with the DataStage 7.x series: when the statement goes from DataStage to the database it aborts, yet if I run the same query in SQL*Plus it runs fine.
Re: 22101 Segmentation Fault - core dumped & Parallel jo
Hi,
I am facing the same issue when placing an update call to Oracle via Oracle Enterprise stage
Contents of phantom output file =>
509 Segmentation Fault - core dumped
The query works fine in Toad.
If you know a workaround to this please let me know.
Thanks,
Hema
Re: 22101 Segmentation Fault - core dumped & Parallel jo
[quote="hema177"]I am facing the same issue ...[/quote]
No you're not. Please start a new thread.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Re: 22101 Segmentation Fault - core dumped & Parallel jo
How would you know I am not?
Anyway, the resolution seems to be related to the length of the SQL being passed via the Oracle Enterprise stage. I shortened the SQL and the job now works.
Re: 22101 Segmentation Fault - core dumped & Parallel jo
[quote="hema177"]How would you know I am not?[/quote]
Because you are getting a different error code.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.