Sybase NNTP forums - End Of Life (EOL)

The NNTP forums from Sybase - forums.sybase.com - are now closed.

All new questions should be directed to the appropriate forum at the SAP Community Network (SCN).

Individual products have links to the respective forums on SCN, or you can go to SCN and search for your product in the search box (upper right corner) to find your specific developer center.

I am confused on the following problem, help needed

5 posts in Performance and Tuning Last posting was on 2007-11-28 18:31:30.0Z
tartampion Posted on 2007-11-28 16:16:53.0Z
Sender: 77d7.47472382.1804289383@sybase.com
From: tartampion
Newsgroups: sybase.public.ase.performance+tuning
Subject: I am confused on the following problem, help needed
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <474d9475.16b8.1681692777@sybase.com>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 28 Nov 2007 08:16:53 -0800
X-Trace: forums-1-dub 1196266613 10.22.241.41 (28 Nov 2007 08:16:53 -0800)
X-Original-Trace: 28 Nov 2007 08:16:53 -0800, 10.22.241.41
Lines: 65
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10510
Article PK: 89137

We have developed a query on our dev server which works
well, uses indexes properly, and gets the results from the
database in a reasonable time. We have moved the same
query to the production server; although the quantities of
data in the tables are comparable (less in dev), the query
takes ages to complete.
I have run showplan on the query on both servers. They use
the same query plan, and indexes are used properly on both
in exactly the same way, but there are two enormous
differences between the two query plans:
1: The dev server uses 2K pages for index leaf fetches and
an LRU strategy, while production uses 16K pages and an MRU
strategy. My guess is that prod reads 16K of index pages
into the cache and discards them under the MRU strategy,
which seems to be a very expensive strategy (fetch 16K of
index and throw it away rather quickly); my guess is that
the 16K prefetch for index leaf pages is the reason the
query takes ages to complete, and the number of I/Os in
prod is much bigger than in dev.
My understanding was that the optimizer uses 2K pages and
an LRU strategy for indexes by default, but in the case I
present this is not so. I am trying to understand the
reason, and I am also looking for ways to overcome the
problem. I know that we can force the plan, the prefetch,
etc., but I would like to avoid that and let the optimizer
do its job properly. Help me understand, and reduce the
time I am going to spend in the documentation, please.
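
To be concrete, the kind of forcing I would like to avoid looks like this (table, index, and column names below are placeholders, and I have not double-checked the exact hint syntax against the docs):

```sql
-- Force 2K I/O and an LRU strategy on the index scan
-- (Table1, Table1_idx and col1 are placeholder names):
select col1, col2
from Table1 (index Table1_idx prefetch 2 lru)
where col1 = 42

-- Or turn off large-I/O prefetch for the whole session:
set prefetch off
```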

The version of our Sybase is 12.5.0.3 (we are moving to
15.0.2, but it is going to take time to finalize all the
tests).
Here is the diff between the two showplans
< indicates dev
> indicates production:
< Table1
---
> Anotherdb..Table1
31c31
< Using I/O Size 2 Kbytes for index leaf pages.
---
> Using I/O Size 16 Kbytes for index leaf pages.
46c46
< Using I/O Size 2 Kbytes for index leaf pages.
---
> Using I/O Size 16 Kbytes for index leaf pages.
64c64
< Table1
---
> Anotherdb..Table1
70,71c70,71
< Using I/O Size 2 Kbytes for index leaf pages.
< With LRU Buffer Replacement Strategy for index leaf pages.
---
> Using I/O Size 16 Kbytes for index leaf pages.
> With MRU Buffer Replacement Strategy for index leaf pages.
95c95
< Using I/O Size 2 Kbytes for index leaf pages.
---
> Using I/O Size 16 Kbytes for index leaf pages.

All your suggestions would be very helpful.
Thanks in advance.
Tartampion


"Mark A. Parsons" <iron_horse Posted on 2007-11-28 16:40:40.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.10 (Windows/20070221)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: I am confused on the following problem, help needed
References: <474d9475.16b8.1681692777@sybase.com>
In-Reply-To: <474d9475.16b8.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: ool-4357fce9.dyn.optonline.net
X-Original-NNTP-Posting-Host: ool-4357fce9.dyn.optonline.net
Message-ID: <474d9a08$1@forums-1-dub>
Date: 28 Nov 2007 08:40:40 -0800
X-Trace: forums-1-dub 1196268040 67.87.252.233 (28 Nov 2007 08:40:40 -0800)
X-Original-Trace: 28 Nov 2007 08:40:40 -0800, ool-4357fce9.dyn.optonline.net
Lines: 99
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10512
Article PK: 89135

Without more details I'll make some guesses ....

- the query plan diffs have been cut-n-pasted in such a way that some important info (eg, other differences) has been
left out of the post

- the production table is quite a bit larger than the development table (ie, are the tables really comparable in size?)

- by 'number of I/o in prod is much bigger' I'm assuming that you ran 'set statistics io on' and found 'larger' counts
for logical io's; but how much 'larger'? is the difference in logical io count comparable to the difference in table sizes?
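
(i.e., something like this on both servers, then compare the per-table logical read counts:)

```sql
-- Run on both dev and prod, then diff the per-table counts:
set statistics io on
set statistics time on
go
/* ... run the query here ... */
```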

- the dev server is running with a larger page size (eg, 8K or 16K) while the production server is running with a smaller
page size (eg, 2K); so fewer io's are required in dev to process a table of comparable size

- the 'select' list includes at least one blob/text column, and in production these columns are much longer than in
development thus requiring more io's to access the blob/text chains

- the query is forcing an index, where either the index or data pages are highly fragmented in production, thus leading
to an excessive number of io's

- production table is DOL with lots of fragmentation in data and/or index trees, (eg, really large numbers of empty leaf
pages in a DOL index means extra time/io's to process the index; this could show up as the prod index having a mucho
larger reserved space number and/or excessively large numbers in the empty leaf pages counter - see optdiag or
systabstats.emptypgcnt for the index in question)
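
For example, a quick query along these lines (column names from systabstats as I recall them, so double-check against your version):

```sql
-- Empty leaf page counts for each index on the table in question
-- ('Table1' is a placeholder name):
select indid, leafcnt, emptypgcnt, pagecnt
from systabstats
where id = object_id('Table1')
```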

- dev query runs with most data in cache, while prod query requires mostly physical io's; could be due to heavy activity
on prod server, or perhaps a smaller sized cache/pool in prod
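
(e.g., compare the output of the following on both servers:)

```sql
-- Cache and pool layout; look for differences in 16K pool sizes
-- and wash settings between dev and prod:
sp_cacheconfig
go
sp_poolconfig 'default data cache'
go
```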

- heavy descriptor re-use in the prod server is causing thrashing in the meta-data cache (and possibly data cache)
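
(e.g., a non-trivial re-use count in the output of the following would point that way; exact counter names vary a bit by version:)

```sql
-- Descriptor re-use counters; sustained non-zero re-use
-- suggests meta-data cache thrashing:
sp_monitorconfig 'open objects'
go
sp_monitorconfig 'open indexes'
go
```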

- and on and on and on ....

tartampion wrote:
> <original post quoted in full - snipped>


tartampion Posted on 2007-11-28 17:20:50.0Z
Sender: 16d4.474d95ce.1804289383@sybase.com
From: Tartampion
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: I am confused on the following problem, help needed
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <474da372.1956.1681692777@sybase.com>
References: <474d9a08$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 28 Nov 2007 09:20:50 -0800
X-Trace: forums-1-dub 1196270450 10.22.241.41 (28 Nov 2007 09:20:50 -0800)
X-Original-Trace: 28 Nov 2007 09:20:50 -0800, 10.22.241.41
Lines: 142
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10513
Article PK: 89141


> Without more details I'll make some guesses ....
>
> - the query plan diff's have been cut-n-pasted in such a
> way that some important info (eg, other differences) have
> been left out of the post

1: No, the difference that I posted is the output of the
Unix diff command run on the two files
>
> - the production table is quite a bit larger than the
> development table (ie, are the tables really comparable in
> size?)
>
2: No, the sizes are comparable
> - by 'number of I/o in prod is much bigger' I'm assuming
> that you ran 'set statistics io on' and found 'larger'
> counts for logical io's; but how much 'larger'? is the
> difference in logical io count comparable to the
> difference in table sizes?
>
3: Yes
> - dev server is running with a larger page size (eg, 8K or
> 16K) while the production server is running with a smaller
> page size (eg, 2K); so fewer io's are required in dev to
> process a comparably sized table in prod
4: No, the two servers have the same page size, 2K
>
> - the 'select' list includes at least one blob/text column
> , and in production these columns are much longer than in
> development thus requiring more io's to access the
> blob/text chains
>
5: No text columns are involved; the structure of the two
tables is identical
> - the query is forcing an index, where either the index or
> data pages are highly fragmented in production, thus
> leading to an excessive number of io's
6: No, the query is not forcing the indexes
>
> - production table is DOL with lots of fragmentation in
> data and/or index trees, (eg, really large numbers of
> empty leaf pages in a DOL index means extra time/io's to
> process the index; this could show up as the prod index
> having a mucho larger reserved space number and/or
> excessively large numbers in the empty leaf pages counter
> - see optdiag or systabstats.emptypgcnt for the index in
> question)

7: The two tables are APL tables and they are not
fragmented. The tables are in a data warehouse: only select
statements run against them, rows are added to them each
evening, and statistics are updated regularly, once a week
>
> - dev query runs with most data in cache, while prod query
> requires mostly physical io's; could be due to heavy
> activity on prod server, or perhaps a smaller sized
> cache/pool in prod
>
8: When we run the query once a day, the data is in the
cache neither in dev nor in prod; if we rerun the query
just after the first time, either in dev or in prod, the
response time is much lower.

> - heavy descriptor re-use in the prod server is causing
> thrashing in the meta-data cache (and possibly data cache)
9: How can I find this out?
My problem, which I tried to explain clearly, was the use
of a 16K buffer for indexes in prod with an MRU strategy,
where dev uses 2K and LRU; I think this is the origin of
the difference in response time.

>
> - and on and on and on ....
>
>
>
> tartampion wrote:
> > <original post quoted in full - snipped>