Showing less data in the database
I am running a job that is supposed to load 1.7 million rows into a mainframe table. It runs successfully, but the actual mainframe table ends up with only 40,000 rows. We are doing some grouping, but I don't think the grouping should reduce the data to that extent.
Data is loading from SQL Server to the mainframe.
thanks
shivan
Hi Shivan,
1) Check how many rows you are extracting from the source (SQL Server).
2) At each stage where you have a constraint, store the rejected records in a reject file.
3) Then finally, reconcile the counts:
Code:
No. of rows loaded to target = No. of rows extracted from source - Total no. of rows eliminated because of constraints
That being said, if you have records in the source that have the same keys but differ in the other columns, and assuming you are using an insert-and-update strategy, then multiple source rows might get collapsed into one target row, depending on the strategy you are implementing. This might also cause a difference in the numbers.
I hope this gets you started in your analysis.
Thanks,
Naveen
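For step 1, a simple count gives the baseline figure. A minimal sketch, assuming a placeholder SOURCE_TABLE (in practice, count with the same query, joins and filters the job uses to extract):
Code:
-- Baseline for the reconciliation above. SOURCE_TABLE is a
-- placeholder; substitute the job's actual extraction query so
-- the counts are directly comparable.
SELECT COUNT(*) AS source_row_count
FROM SOURCE_TABLE;
Comparing this figure with the link counts in the job log and the sizes of the reject files shows which stage is dropping rows.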
But I am just using grouping. How can it process only 40,000 rows out of 1.7 million?
thanks
shivan
I am loading the data from SQL Server to the mainframe.
shivan
What I am doing is this: I wrote a SQL query to extract data from SQL Server, and then I am loading it into DB2. The SQL query extracts 1.7 million rows, but only 40,000 rows end up in DB2. There is a one-to-one mapping.
thanks
shivan
pnchowdary wrote: Hi Shivan,
What exactly do you mean by "just using grouping"? Could you please elaborate on exactly what you are doing with the grouping?
It might happen this way if the DB2 table is missing the designation of one or more fields as part of a compound key. In essence, the load would be updating the same row over and over.
John
Yeah, it would be the other way around, wouldn't it? Too many keys, not too few.
John
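One way to test the compound-key theory is to look for source rows that share a key under the target's definition. A minimal sketch, assuming placeholder names (SOURCE_TABLE for the source, KEY_COL1/KEY_COL2 for whatever columns the DB2 table declares as its primary key):
Code:
-- Groups source rows by the columns DB2 treats as the key. Any
-- group with a count above 1 collapses to one target row (or is
-- rejected) during the load.
SELECT KEY_COL1, KEY_COL2, COUNT(*) AS rows_sharing_key
FROM SOURCE_TABLE
GROUP BY KEY_COL1, KEY_COL2
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC;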
The problem is fixed. Here is what was happening: SQL Server has a different set of primary key columns than DB2, so when the data was loading from SQL Server into DB2, DB2 was dropping the rows that shared a key value, since the primary key must be unique. For example,
SQL Server data (columns 1 and 2 are the primary key):
Code:
col: 1(pk)  2(pk)  3  4  5
     1      2      3  4  7
     1      2      3  4  8
and the structure in DB2 (columns 1, 2 and 3 are the primary key):
Code:
col: 1(pk)  2(pk)  3(pk)  4  5
     1      2      3      4  7
     1      2      3      4  8
Under the DB2 key, both rows share the same key value, so only one of them survives the load.
I hope this helps you understand what the problem was.
thanks
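As a cross-check, the number of distinct DB2-key combinations in the source should line up with the rows that actually loaded. A minimal sketch, again with placeholder names (SOURCE_TABLE, KEY_COL1 through KEY_COL3 standing in for the real table and DB2 key columns):
Code:
-- Counts distinct combinations of the DB2 key columns in the
-- source. Only one row per combination can survive the load, so
-- this should come out near the 40,000 rows seen in DB2.
SELECT COUNT(*) AS rows_db2_can_hold
FROM (
    SELECT DISTINCT KEY_COL1, KEY_COL2, KEY_COL3
    FROM SOURCE_TABLE
) AS distinct_keys;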
Of course it's possible. If you grouped by sex ('M' or 'F') you would only get two output rows. Think about it. How many distinct values are there in the column by which you are grouping?
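To put a number on that, the distinct-value count of the grouping column is the ceiling on the rows a grouping can emit. A minimal sketch with placeholder names (GROUPING_COL, SOURCE_TABLE):
Code:
-- A GROUP BY can never produce more rows than there are distinct
-- values in its grouping column(s).
SELECT COUNT(DISTINCT GROUPING_COL) AS max_output_rows
FROM SOURCE_TABLE;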
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.