
Performance Problem on ASE 15.0.1

28 posts in Performance and Tuning. Last posting was on 2008-06-11 03:12:29.0Z.
John Everitt Posted on 2008-05-29 20:17:28.0Z
Sender: 3f8e.483f08be.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <483f0f58.40ab.1681692777@sybase.com>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 29 May 2008 13:17:28 -0700
X-Trace: forums-1-dub 1212092248 10.22.241.41 (29 May 2008 13:17:28 -0700)
X-Original-Trace: 29 May 2008 13:17:28 -0700, 10.22.241.41
Lines: 162
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10879
Article PK: 89490

Recently we upgraded from 12.5.3 to 15.0.1 ESD#2 on HP-UX.
It seems that since the upgrade we are suffering from
degraded performance. A typical example is a simple stored
proc ...

create proc dbo.read_claims_sav @m_member int,
@c_dt_till int
as
begin
select m_member ,
c_dt_sett ,
c_dt_from ,
c_status ,
c_arb ,
c_curr_amt ,
c_dt_ent
from claims
where m_member = @m_member
and c_status <> 'L'
and c_status <> 'R'
and c_status <> 'D'
and c_status <> 'F'
for read only
end

The claims table is a big table (approx 60 million rows) and
has a clustered index on m_member.

When I run ...

set showplan on
go
set statistics io on
go
set statistics time on
go
set noexec off
go
read_claims_sav 75219, 20070101
go

I get the query plan ...

|SCAN Operator
|
| FROM TABLE
| claims
| Using Clustered Index.
| Index : M_MEMBER
| Forward Scan.
| Positioning by key.
| Keys are:
| M_MEMBER ASC
| Using I/O Size 4 Kbytes for index leaf pages.
| With LRU Buffer Replacement Strategy for index leaf pages.
| Using I/O Size 4 Kbytes for data pages.
| With LRU Buffer Replacement Strategy for data pages.


The first time I run the proc, it takes nearly 5 seconds ...

Table: CLAIMS scan count 1, logical reads: (regular=501 apf=2 total=503), physical reads: (regular=94 apf=256 total=350), apf IOs used=255
Total actual I/O cost for this command: 9756.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 4833 ms.
(return status = 0)
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 4833 ms.

The second time, with the data cached, of course everything is quicker ...

Table: CLAIMS scan count 1, logical reads: (regular=501 apf=0 total=501), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Total actual I/O cost for this command: 1002.
Total writes for this command: 0

Execution Time 1.
SQL Server cpu time: 100 ms.  SQL Server elapsed time: 6 ms.
(return status = 0)
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms.  SQL Server elapsed time: 6 ms.


The disk storage is supplied by an HP SAN and, after
monitoring its performance and OS metrics, it seems that
I/Os take on average somewhere between 8 and 10ms. Given
that the stored proc should just be a read from the table
requiring 94 regular I/Os, shouldn't this query take at most
1 second? This was pretty much what I was getting on 12.5.

Any ideas where the time is being lost here, or at least what
to do next to investigate what's going on? I've checked out
disk performance and it looks OK, and there also seems to be
little problem with APF performance. This is typical output
from a sysmon ...

Asynchronous Prefetch Activity
  APFs Issued                       20.6           2.4        1234       5.0 %
  APFs Denied Due To
    APF I/O Overloads                0.0           0.0           0       0.0 %
    APF Limit Overloads              0.0           0.0           0       0.0 %
    APF Reused Overloads             0.1           0.0           3       0.0 %
  APF Buffers Found in Cache
    With Spinlock Held               0.0           0.0           1       0.0 %
    W/o Spinlock Held              387.8          45.4       23265      94.9 %
  -------------------------   ------------  ------------  ----------
  Total APFs Requested             408.4          47.8       24503

Other Asynchronous Prefetch Statistics
  APFs Used                         17.9           2.1        1072         n/a
  APF Waits for I/O                  3.1           0.4         184         n/a
  APF Discards                       0.0           0.0           0         n/a

I'm now at a loss what to check. Any suggestions would be
greatly appreciated. Let me know if any further information
is required.

Thanks in advance.

John


"Mark A. Parsons" <iron_horse Posted on 2008-05-29 23:01:55.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <483f0f58.40ab.1681692777@sybase.com>
In-Reply-To: <483f0f58.40ab.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <483f35e3@forums-1-dub>
Date: 29 May 2008 16:01:55 -0700
X-Trace: forums-1-dub 1212102115 10.22.241.152 (29 May 2008 16:01:55 -0700)
X-Original-Trace: 29 May 2008 16:01:55 -0700, vip152.sybase.com
Lines: 209
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10880
Article PK: 89491

You mention that the select should require 94 regular i/o's. I'm assuming you misread the 'stats io' output and really
meant 501 (logical) IOs?

The first time the proc runs you're obviously incurring the overhead of (350 total) disk IOs. At your measured 8-10ms
per I/O, 350 physical reads comes to roughly 3 seconds if they don't overlap well, which is already in the ballpark of
the 4.8 second elapsed time.

The second time there are no physical IOs, so it runs quickly. ("Duh, Mark!" ?)

What kind of results do/did you get in ASE 12.5.4 when the data did not reside in cache? How many physical IOs did you
see in ASE 12.5.4 to pull the data from disk? Did you see 501 logical IOs regardless of whether or not the data was in
cache?

Is the 'fast' query (with no physical IOs) considered 'degraded' compared to ASE 12.5.4? 6ms for a completely-cached
query that requires 501 logical IOs seems somewhat OK to me ... how fast is/was it on ASE 12.5.4 and how many logical
IOs did it require?

For your 'upgrade', did any of the hardware change? Different SAN/disks? Did you use the same SAN (hardware and device
configurations) with ASE 12.5.4?

Did you do an upgrade in place or a dump-n-load? Did the dataserver page size change?

Did you change the locking scheme of the CLAIMS table from APL to DOL once you got into ASE 15.0.1? Did you change any
of the index designs between 12.5.4 and 15.0.1?

Where was the SAN/disk performance measured ... at the storage array or on the HP machine? (trying to figure out if
interconnect/network overhead is included in the numbers you provided)

--------------------------------

Other issues I'd want to look at:

- was anything else running in the dataserver at the time that required physical IOs?

- does sp_sysmon show that you've max'd out the number of outstanding disk IOs?

- what are your configuration settings for 'disk i/o structures' and 'async' device/server settings? (see the sketch after this list)

- is the dataserver using async IO for its devices? (see ASE errorlog at the point during bootup when devices are activated)

- does 501 logical IOs sound 'reasonable' for this particular query and the number of records being processed?

- is the index and/or table relatively fragmented?
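
A minimal sketch of how to pull those settings and counters, using stock procedures (the one-minute sample interval is
illustrative):

sp_configure 'disk i/o structures'
go
sp_configure 'max async i/os per server'
go
sp_configure 'max async i/os per engine'
go
-- check the 'Disk I/O Management' section of the report for
-- outstanding and delayed I/Os:
sp_sysmon '00:01:00'
go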

NOTE: I'm assuming you ran the SELECT as a separate stand-alone query since the code snippet you provided looks like
part of an incomplete cursor definition ... ?



John Everitt Posted on 2008-05-30 07:47:15.0Z
Sender: 3d64.483ef73a.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <483fb103.5221.1681692777@sybase.com>
References: <483f35e3@forums-1-dub>
MIME-Version: 1.0
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: quoted-printable
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 30 May 2008 00:47:15 -0700
X-Trace: forums-1-dub 1212133635 10.22.241.41 (30 May 2008 00:47:15 -0700)
X-Original-Trace: 30 May 2008 00:47:15 -0700, 10.22.241.41
Lines: 129
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10882
Article PK: 89492

Mark,

Sorry, I've made the assumption that the performance issue
here revolves around getting the data physically from disk,
and this is why I am focusing on the physical reads. Correct
me if I'm wrong, but with this being a simple select query
using a clustered index, Sybase is only required to traverse
the index and then read the required data pages to create
the result set, so I assume the entire time spent here is
related to I/O, of which the bulk will be physical. This is
why the first course of action for me was to commission a
complete check of the SAN infrastructure, as I was initially
convinced this problem was due to a poorly performing disk
subsystem. We did find some problems, which have now been
fixed, and when I monitor disk performance (at the OS using
sar, on the EVA management console using an HP diagnostic
tool, and through Sybase using the monDeviceIO monitoring
table) I see that we consistently get between 8ms and 10ms
for a single I/O.
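
For reference, the sort of monDeviceIO check I'm running is
along these lines (a sketch; the column names are quoted from
memory):

select LogicalName, Reads, APFReads, Writes, IOTime
from master..monDeviceIO
go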

We’ve now been on Sybase 15 for around 6 months and I’m
afraid I don’t have figures for the execution of the query
in question on 12.5. I have performance figures for various
sections of our primary application for the last 9 months
and this shows a degradation in performance at the time of
the upgrade. With help from local Sybase support here we
suspected disk performance. We have now ruled out disk
performance and still have the problem and the local Sybase
guys are scratching their heads. So, this is where I am now
…

When upgrading, we did a dump and load on to a Sybase server
on the same hardware and SAN infrastructure. We didn’t
change the page size. The reason for the 4k i/o in the query
plan is that I have a 4k i/o pool defined on the default
data cache. The server uses a 2k page size. The 4k i/o pool
was the start of a test to see what effects varying sizes of
i/o pool would have on performance, once again with the hope
of reducing physical i/o and therefore improving elapsed
time of the query.

We have always used the DOL locking scheme; it’s a restriction
of the 4GL we use (Powerhouse). The indexes have not been
changed between 12.5 and 15.
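
(For completeness, a quick way to confirm the scheme on the
table itself; sp_help is a stock procedure and reports the
lock scheme in its output:)

sp_help claims
go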

The Sybase server I am currently running my tests on resides
on an 8-way Itanium HP-UX box with 32GB of RAM. It is
configured with 3 engines and shares the machine with
another Sybase server that is a near clone of this one;
it’s just used by another client.

How do I know whether I’ve max’d out the number of
outstanding disk I/Os? I see no I/Os delayed by lack of
disk I/O structures, or by server, engine or OS config
limits, if that’s what you mean, although I do see that the
total number of requested disk I/Os is always larger than
the total number of completed I/Os, which the Sybase support
guy tutted over. This prompted a change to the “i/o polling
process count” (10 to 50), but this seemed to have little
measurable effect on performance.

I have disk i/o structures set at 1024 and async i/o is
enabled.

501 logical I/Os does seem a reasonable figure for this
query. There are approximately 850 rows returned, with
probably around 3 or 4 rows per page and an index depth of
3. In fact, 501 could possibly be a bit high, but I’m not
proficient enough on the internals to be able to justify
that statement.

I run a reorg on the indexes in the table on a regular
basis; however, I guess there is still some fragmentation,
though I am unsure to what degree. Optdiag gives …

Statistics for table: "CLAIMS"

     Data page count:                14547777
     Empty data page count:          0
     Data row count:                 58087058.0000000000000000
     Forwarded row count:            0.0000000000000000
     Deleted row count:              103785.0000000000000000
     Data page CR count:             1877207.0000000000000000
     OAM + allocation page count:    146932
     First extent data pages:        6440
     Data row size:                  498.0000000000000000
     Pages in largest partition:     14547777

Derived statistics:
     Data page cluster ratio:        0.9953859221211152
     Space utilization:              0.9932258525674552
     Large I/O efficiency:           1.0000000000000000

Is there any better way of analyzing table fragmentation?
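
For reference, the regular reorg mentioned above is along
these lines (a sketch; M_MEMBER is the index named in the
optdiag output later in this thread):

reorg rebuild claims M_MEMBER
go
-- space from deleted rows on this DOL table can also be
-- reclaimed online:
reorg reclaim_space claims
go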

I assume that the query looks like part of a cursor
definition because of the “for read only” clause. This is a
legacy thing that affected locking in older versions of
Sybase and has just not been removed yet. So, to confirm:
the code snippet is the complete stored proc. There is
nothing else to it …

One thing that may also be relevant is that I “sniffed”
the communication at the client level when I ran the same
stored proc for a different member number. The whole
communication took 1.5 seconds, but I received the first row
after 270ms, so the remainder of the 500 rows took a further
1.2 seconds to send, with the delay being on the server
side, i.e. the client “ACK’d” quickly and then there was a
significant delay before the next result-set packet.
Rerunning the command immediately after produced the results
in just a few milliseconds. For me, this ruled out any
network issues or client response problems, and it was
interesting that most of the time was spent between sending
the first row and sending the last. Is there any
significance to this?

Hope this sheds a little more light on the problem. Thanks
for giving it some thought!


Regards,

John Everitt
Database Manager
IT Infrastructure
Metropolitan Health Group


"Mark A. Parsons" <iron_horse Posted on 2008-05-30 12:06:15.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <483f35e3@forums-1-dub> <483fb103.5221.1681692777@sybase.com>
In-Reply-To: <483fb103.5221.1681692777@sybase.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <483fedb7$1@forums-1-dub>
Date: 30 May 2008 05:06:15 -0700
X-Trace: forums-1-dub 1212149175 10.22.241.152 (30 May 2008 05:06:15 -0700)
X-Original-Trace: 30 May 2008 05:06:15 -0700, vip152.sybase.com
Lines: 178
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10883
Article PK: 89494

See comments below.

John Everitt wrote:
> Mark,
>
> Sorry, I've made the assumption that the performance issue
> here revolves around getting the data physically from disk,
> and this is why I am focusing on the physical reads. Correct
> me if I'm wrong, but with this being a simple select query
> using a clustered index, Sybase is only required to traverse
> the index and then read the required data pages to create
> the result set, so I assume the entire

Since the table is DOL the clustered index actually performs the same way as an APL non-clustered index, so there's the
index cost and then the cost to 'jump over' to the data page. ("Duh, Mark!" ?)

> time spent here is related to I/O, of which the bulk will
> be physical. This is why the first course of action for me
> was to commission a complete check of the SAN infrastructure,
> as I was initially convinced this problem was due to a poorly
> performing disk subsystem. We did find some problems, which
> have now been fixed, and when I monitor disk performance (at
> the OS using sar, on the EVA management console using an HP
> diagnostic tool, and through Sybase using the monDeviceIO
> monitoring table) I see that we consistently get between
> 8ms and 10ms for a single I/O.
>
> We’ve now been on Sybase 15 for around 6 months and I’m
> afraid I don’t have figures for the execution of the query
> in question on 12.5. I have performance figures for various
> sections of our primary application for the last 9 months
> and this shows a degradation in performance at the time of

I'm assuming there are other queries/jobs/batches that are showing performance issues besides just this one query?

I'm working with one client where we've spent most of the past 6 months addressing similar issues ... lots of poor
performance after switching to ASE 15. While there have been quite a few issues of 'poorly written SQL', there have
also been quite a few issues with optimizer bugs. (We're running ASE 15.0.2 ESD #4; waiting on ESD #5 for more
optimizer fixes.)

> the upgrade. With help from local Sybase support here we
> suspected disk performance. We have now ruled out disk
> performance and still have the problem and the local Sybase
> guys are scratching their heads. So, this is where I am now
> …
>
> When upgrading, we did a dump and load on to a Sybase server
> on the same hardware and SAN infrastructure. We didn’t
> change the page size. The reason for the 4k i/o in the query
> plan is that I have a 4k i/o pool defined on the default
> data cache. The server uses a 2k page size. The 4k i/o pool
> was the start of a test to see what effects varying sizes of
> i/o pool would have on performance, once again with the hope
> of reducing physical i/o and therefore improving elapsed
> time of the query.

How often do you run into these queries that require physical IOs to complete?

I'm wondering if you may have your 4K IO pool defined too small, thus leading to some thrashing of the cache, e.g.,
records being pushed out too often/quickly, thus requiring an excessive number of pages to be re-read from disk?

This would require taking a gander at the cache/pool sizes (sp_cacheconfig) and some sp_sysmon results during periods of
'degraded' performance.
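
A minimal sketch of the commands involved, assuming stock procedures (the pool size shown is purely illustrative):

sp_cacheconfig 'default data cache'
go
-- resize the 4K pool within the cache:
sp_poolconfig 'default data cache', '1000M', '4K'
go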

> We have always used the DOL locking scheme; it’s a
> restriction of the 4GL we use (Powerhouse). The indexes have
> not been changed between 12.5 and 15.
>
> The Sybase server I am currently running my tests on resides
> on an 8-way Itanium HP-UX box with 32GB of RAM. It is
> configured with 3 engines and shares the machine with
> another Sybase server that is a near clone of this one;
> it’s just used by another client.
>
> How do I know whether I’ve max’d out the number of
> outstanding disk I/Os? I see no I/Os delayed by lack of

At the top of the Disk IO section in sp_sysmon there's a listing of the number of outstanding IOs for the server and
each engine. The far right column will have a number that is less than or equal to your 'disk i/o structures' setting.
If any of these numbers are at 1024 then you're artificially limiting the dataserver's ability to pump out more disk
IO requests. I personally try to keep my 'disk i/o structures' and server/engine async limits set higher than the
numbers I see in sp_sysmon; the objective being to push any disk IO bottlenecks out to the OS and disk subsystem.

At one client they had all of their servers set up with 'disk i/o structures' at the default of 256. On most of the
machines we were able to push the number to 5000 and 6000 (10000 on one machine), with the net effect being that we saw
a big improvement in the speed with which disk-intensive operations were conducted.
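
A sketch of the kind of change involved, using the illustrative values from above (not a recommendation for your box):

sp_configure 'disk i/o structures', 5000
go
sp_configure 'max async i/os per server', 5000
go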

> disk I/O structures, or by server, engine or OS config
> limits, if that’s what you mean, although I do see that the
> total number of requested disk I/Os is always larger than
> the total number of completed I/Os, which the Sybase support
> guy tutted over. This prompted a change to the “i/o polling
> process count” (10 to 50), but this seemed to have little
> measurable effect on performance.
>
> I have disk i/o structures set at 1024 and async i/o is
> enabled.

async i/o is enabled - good; but does the ASE errorlog show that you're actually using async IO?

>
> 501 logical I/Os does seem a reasonable figure for this
> query. There are approximately 850 rows returned, with
> probably around 3 or 4 rows per page and an index depth of
> 3. In fact, 501 could possibly be a bit high, but I’m not
> proficient enough on the internals to be able to justify
> that statement.

501 for a DOL/clustered index (which behaves as a non-clustered index) may be valid, especially if there are more than
850 records with the given m_member value. (I'm not talking about the final result size but rather the number of
records that have the same m_member value ... without the other 'where' clause conditions thrown in.)


> I run a reorg on the indexes in the table on a regular
> basis; however, I guess there is still some fragmentation,
> though I am unsure to what degree. Optdiag gives …
>
> Statistics for table: "CLAIMS"
>
>      Data page count:                14547777
>      Empty data page count:          0
>      Data row count:                 58087058.0000000000000000
>      Forwarded row count:            0.0000000000000000
>      Deleted row count:              103785.0000000000000000
>      Data page CR count:             1877207.0000000000000000
>      OAM + allocation page count:    146932
>      First extent data pages:        6440
>      Data row size:                  498.0000000000000000
>      Pages in largest partition:     14547777
>
> Derived statistics:
>      Data page cluster ratio:        0.9953859221211152
>      Space utilization:              0.9932258525674552
>      Large I/O efficiency:           1.0000000000000000

Cluster ratio and large IO efficiency look good. I assume the deleted row count (103785) looks 'normal' and is regularly
cleaned up with reorg?

What do the stats for the index(es) look like?

> Is there any better way of analyzing table fragmentation?
>
> I assume that the query looks like part of a cursor
> definition because of the “for read only” clause. This is a
> legacy thing that affected locking in older versions of
> Sybase and has just not been removed yet. So, to confirm:
> the code snippet is the complete stored proc. There is
> nothing else to it …
>
> One thing that may also be relevant is that I “sniffed”
> the communication at the client level when I ran the same
> stored proc for a different member number. The whole
> communication took 1.5 seconds, but I received the first row
> after 270ms, so the remainder of the 500 rows took a further
> 1.2 seconds to send, with the delay being on the server
> side, i.e. the client “ACK’d” quickly and then there was a
> significant delay before the next result-set packet.
> Rerunning the command immediately after produced the results
> in just a few milliseconds. For me, this ruled out any
> network issues or client response problems, and it was
> interesting that most of the time was spent between sending
> the first row and sending the last. Is there any
> significance to this?
>
> Hope this sheds a little more light on the problem. Thanks
> for giving it some thought!

The 'only' issue I've seen so far with the query in question is the relatively high number of physical IOs when the
query is first run. But afterwards the query performs quite nicely.

Does the same query exhibit the same high physical IO traits throughout the day or just after the dataserver has been
started?

If this were the only query that's exhibiting performance problems ... and assuming the problem crops up throughout the
day ... then I'd want to look at the size of the caches/pools (in conjunction with sp_sysmon results) to see if perhaps
the 4KB IO pool is sized too small (i.e., cache thrashing).
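
A sketch of grabbing just the cache section during a busy window ('dcache' is the stock sp_sysmon section filter for the
data cache report):

sp_sysmon '00:01:00', dcache
go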


John Everitt Posted on 2008-05-30 15:59:03.0Z
Sender: 3d64.483ef73a.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <48402447.6184.1681692777@sybase.com>
References: <483fedb7$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 30 May 2008 08:59:03 -0700
X-Trace: forums-1-dub 1212163143 10.22.241.41 (30 May 2008 08:59:03 -0700)
X-Original-Trace: 30 May 2008 08:59:03 -0700, 10.22.241.41
Lines: 2071
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10885
Article PK: 89496

Mark,

Yeah, the reason I chose the stored proc I did is that it
and a few of its friends are the root of all evil within our
system. Our most sensitive app is a “claims processing
engine” which evaluates a medical health claim for a member
of a health scheme and then, in near real time, notifies the
pharmacy of how much of the bill the scheme will pay for the
medicine being purchased. Part of the claims processing is
to look over previous claims for this member and evaluate
how much benefit the member has used previously, as this may
affect the amount the scheme is willing to pay for the
current claim. In other words, pretty much anything we do
requires us to retrieve the claims for the given member for
something like the previous 12 months. As we have 200,000 or
300,000 members in a scheme, and each member could have, as
in this case, possibly a thousand or so claims for the
previous year (if he/she is a bit of a sicknote, of course),
there’s no way we would be able to keep the entire claims
table in cache. It is therefore commonplace for a query such
as the one I’ve highlighted to have to read most if not all
of its data from disk. I obviously don’t mind this, as it is
expected and it is what happened with 12.5, but what I am
not happy about is that it just seems to take longer on
version 15.

As you know, we are currently using 15.0.1 ESD #3 and I am
aware that 15.0.2 ESD #4 does offer some optimizer
improvements, and I will be upgrading soon; however, I am
dubious as to whether it will make any difference in this
case, as the query is so simple …

In order to ensure that cache sizing was not a problem, I
extended the default data cache from 600MB to 2GB when
adding the 4k pool and made both the 2k and 4k pools 1GB in
size.
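
(Reconstructed from memory, the setup was along these lines;
the resulting configuration is what sp_cacheconfig reports
below:)

sp_cacheconfig 'default data cache', '2000M'
go
sp_poolconfig 'default data cache', '1000M', '4K'
go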

Cache Name                Status    Type      Config Value  Run Value
default data cache        Active    Default   2000.00 Mb    2000.00 Mb
                                              ------------  ------------
                                    Total     2000.00 Mb    2000.00 Mb
==========================================================================
Cache: default data cache,   Status: Active,   Type: Default
      Config Size: 2000.00 Mb,   Run Size: 2000.00 Mb
      Config Replacement: strict LRU,   Run Replacement: strict LRU
      Config Partition:            4,   Run Partition:            4

IO Size  Wash Size  Config Size   Run Size     APF Percent
   2 Kb  204800 Kb      0.00 Mb   1000.00 Mb            20
   4 Kb  204800 Kb   1000.00 Mb   1000.00 Mb            20
(return status = 0)

I checked the log and I am using asynchronous i/o …

Statistics on the m_member index …

Statistics for index: "M_MEMBER" (clustered)
Index column list: "M_MEMBER", "C_DT_TILL"

     Leaf count:                     308311
     Empty data page count:          27
     Data page CR count:             14427615.0000000000000000
     Index page CR count:            140967.0000000000000000
     Data row CR count:              25893038.0000000000000000
     First extent leaf pages:        1031
     Leaf row size:                  8.0032241673989741
     Index height:                   3

Derived statistics:
     Data page cluster ratio:        0.4774558376931584
     Index page cluster ratio:       0.6203095171902512
     Data row cluster ratio:         0.7587628338684391
     Space utilization:              0.0000000000000000
     Large I/O efficiency:           0.7248002450256041

Is there any way I can get a better idea of where the time
goes when the query executes?
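
Something along the lines of the wait-event MDA tables is
what I have in mind, e.g. (a sketch only; it assumes the
monitoring options are enabled, and the column names are
quoted from memory):

select w.WaitEventID, i.Description, w.Waits, w.WaitTime
from master..monProcessWaits w, master..monWaitEventInfo i
where w.WaitEventID = i.WaitEventID
and w.SPID = 123   -- spid of the session running the proc (illustrative)
order by w.WaitTime desc
go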

A sysmon during a busy period …

===============================================================================
Sybase Adaptive Server Enterprise System Performance
Report
===============================================================================

Server Version: Adaptive Server Enterprise/15.0.1/EBF
14380 ESD#3/P/ia64
Server Name: PROD04
Run Date: May 30, 2008
Sampling Started at: May 30, 2008 15:02:02
Sampling Ended at: May 30, 2008 15:03:02
Sample Interval: 00:01:00
Sample Mode: No Clear
Counters Last Cleared: Apr 12, 2008 23:00:46

===============================================================================

Kernel Utilization
------------------

Your Runnable Process Search Count is set to 2000
and I/O Polling Process Count is set to 50

Engine Busy Utilization        CPU Busy     I/O Busy         Idle
------------------------       --------     --------     --------
Engine 0                         48.1 %       14.9 %       36.9 %
Engine 1                         27.0 %       38.7 %       34.4 %
Engine 2                         22.7 %       38.0 %       39.3 %
------------------------       --------     --------     --------
Summary       Total              97.8 %       91.6 %      110.6 %
              Average            32.6 %       30.5 %       36.9 %

CPU Yields by Engine             per sec      per xact       count  % of total
-------------------------   ------------  ------------  ----------  ----------
Engine 0                             0.0           0.0           0       0.0 %
Engine 1                             3.0           0.1         177      40.2 %
Engine 2                             4.4           0.1         263      59.8 %
-------------------------   ------------  ------------  ----------
Total CPU Yields                     7.3           0.2         440

Network Checks
  Non-Blocking                   79200.0        2234.1     4752002     100.0 %
  Blocking                           7.3           0.2         440       0.0 %
-------------------------   ------------  ------------  ----------
Total Network I/O Checks         79207.4        2234.3     4752442
  Avg Net I/Os per Check             n/a           n/a     0.03517         n/a

Disk I/O Checks
  Total Disk I/O Checks          82047.2        2314.4     4922830         n/a
  Checks Returning I/O           43779.5        1235.0     2626768      53.4 %
  Avg Disk I/Os Returned             n/a           n/a     0.00617         n/a


===============================================================================

Worker Process Management
-------------------------
per sec per xact
count % of total
------------ ------------
---------- ----------
Worker Process Requests
Total Requests 0.0 0.0
0 n/a

Worker Process Usage
Total Used 0.0 0.0
0 n/a
Max Ever Used During Sample 0.0 0.0
0 n/a

Memory Requests for Worker Processes
Total Requests 0.0 0.0
0 n/a


===============================================================================

Parallel Query Management
-------------------------

Parallel Query Usage per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Total Parallel Queries 0.0 0.0
0 n/a

Merge Lock Requests per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Total # of Requests 0.0 0.0
0 n/a

Sort Buffer Waits per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Total # of Waits 0.0 0.0
0 n/a

===============================================================================

Task Management per sec per xact
count % of total
--------------------------- ------------ ------------
---------- ----------

Connections Opened 1.6 0.0
96 n/a

Task Context Switches by Engine
Engine 0 1685.3 47.5
101119 45.7 %
Engine 1 1184.1 33.4
71045 32.1 %
Engine 2 820.6 23.1
49234 22.2 %
------------------------- ------------ ------------
----------
Total Task Switches: 3690.0 104.1
221398

Task Context Switches Due To:
Voluntary Yields 24.0 0.7
1441 0.7 %
Cache Search Misses 119.4 3.4
7164 3.2 %
System Disk Writes 0.5 0.0
27 0.0 %
I/O Pacing 8.3 0.2
500 0.2 %
Logical Lock Contention 0.0 0.0
0 0.0 %
Address Lock Contention 0.0 0.0
0 0.0 %
Latch Contention 0.0 0.0
0 0.0 %
Log Semaphore Contention 0.0 0.0
1 0.0 %
PLC Lock Contention 0.0 0.0
0 0.0 %
Group Commit Sleeps 0.7 0.0
44 0.0 %
Last Log Page Writes 19.3 0.5
1160 0.5 %
Modify Conflicts 0.4 0.0
24 0.0 %
I/O Device Contention 0.0 0.0
0 0.0 %
Network Packet Received 1261.1 35.6
75667 34.2 %
Network Packet Sent 760.2 21.4
45610 20.6 %
Other Causes 1496.0 42.2
89760 40.5 %


===============================================================================

Application Management
----------------------

Application Statistics Summary (All Applications)
-------------------------------------------------
Priority Changes per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
To High Priority 0.1 0.0
6 3.9 %
To Medium Priority 2.0 0.1
120 78.9 %
To Low Priority 0.4 0.0
26 17.1 %
------------------------- ------------ ------------
----------
Total Priority Changes 2.5 0.1
152

Allotted Slices Exhausted per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
High Priority 0.0 0.0
0 0.0 %
Medium Priority 0.2 0.0
12 100.0 %
Low Priority 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Slices Exhausted 0.2 0.0
12

Skipped Tasks By Engine per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Total Engine Skips 0.0 0.0
0 n/a

Engine Scope Changes 0.0 0.0
0 n/a

===============================================================================

ESP Management per sec per xact
count % of total
--------------------------- ------------ ------------
---------- ----------
ESP Requests 0.0 0.0
0 n/a
===============================================================================

Housekeeper Task Activity
-------------------------
per sec per xact
count % of total
------------ ------------
----------
Buffer Cache Washes
Clean 154.6 4.4
9273 100.0 %
Dirty 0.1 0.0
3 0.0 %
------------ ------------
----------
Total Washes 154.6 4.4
9276

Garbage Collections 1.3 0.0
75 n/a
Pages Processed in GC 0.0 0.0
0 n/a

Statistics Updates 0.1 0.0
6 n/a

===============================================================================

Monitor Access to Executing SQL
-------------------------------
per sec per xact
count % of total
------------ ------------
---------- ----------
Waits on Execution Plans 0.0 0.0
0 n/a
Number of SQL Text Overflows 0.0 0.0
1 n/a
Maximum SQL Text Requested n/a n/a
13537 n/a
(since beginning of sample)


Tuning Recommendations for Monitor Access to Executing SQL
----------------------------------------------------------
- Consider increasing the 'max SQL text monitored'
parameter
to at least 8816 (i.e., half way from its current value
to Maximum SQL Text Requested).

===============================================================================

Transaction Profile
-------------------

Transaction Summary per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Committed Xacts 35.5 n/a
2127 n/a

Transaction Detail per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Inserts
APL Heap Table 408.3 11.5
24497 54.2 %
APL Clustered Table 0.0 0.0
0 0.0 %
Data Only Lock Table 344.8 9.7
20687 45.8 %
------------------------- ------------ ------------
---------- ----------
Total Rows Inserted 753.1 21.2
45184 95.9 %

Updates
APL Deferred 0.2 0.0
10 2.0 %
APL Direct In-place 0.1 0.0
6 1.2 %
APL Direct Cheap 0.0 0.0
0 0.0 %
APL Direct Expensive 0.0 0.0
0 0.0 %
DOL Deferred 0.0 0.0
0 0.0 %
DOL Direct 8.2 0.2
491 96.8 %
------------------------- ------------ ------------
---------- ----------
Total Rows Updated 8.5 0.2
507 1.1 %

Data Only Locked Updates
DOL Replace 8.2 0.2
491 100.0 %
DOL Shrink 0.0 0.0
0 0.0 %
DOL Cheap Expand 0.0 0.0
0 0.0 %
DOL Expensive Expand 0.0 0.0
0 0.0 %
DOL Expand & Forward 0.0 0.0
0 0.0 %
DOL Fwd Row Returned 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total DOL Rows Updated 8.2 0.2
491 1.0 %

Deletes
APL Deferred 0.2 0.0
10 0.7 %
APL Direct 0.0 0.0
0 0.0 %
DOL 23.9 0.7
1432 99.3 %
------------------------- ------------ ------------
---------- ----------
Total Rows Deleted 24.0 0.7
1442 3.1 %
========================= ============ ============
==========
Total Rows Affected 785.6 22.2
47133

===============================================================================

Transaction Management
----------------------

ULC Flushes to Xact Log per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
by Full ULC 0.1 0.0
5 0.1 %
by End Transaction 26.0 0.7
1558 33.8 %
by Change of Database 0.5 0.0
32 0.7 %
by Single Log Record 38.6 1.1
2318 50.3 %
by Unpin 0.2 0.0
14 0.3 %
by Other 11.4 0.3
683 14.8 %
------------------------- ------------ ------------
----------
Total ULC Flushes 76.8 2.2
4610

ULC Log Records 368.5 10.4
22110 n/a
Max ULC Size During Sample n/a n/a
0 n/a

ULC Semaphore Requests
Granted 858.5 24.2
51509 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total ULC Semaphore Req 858.5 24.2
51509

Log Semaphore Requests
Granted 67.5 1.9
4049 100.0 %
Waited 0.0 0.0
1 0.0 %
------------------------- ------------ ------------
----------
Total Log Semaphore Req 67.5 1.9
4050

Transaction Log Writes 32.6 0.9
1956 n/a
Transaction Log Alloc 22.3 0.6
1336 n/a
Avg # Writes per Log Page n/a n/a
1.46407 n/a

Tuning Recommendations for Transaction Management
-------------------------------------------------
- Consider decreasing the 'user log cache size'
configuration parameter if it is greater than the
logical database page size.

===============================================================================

Index Management
----------------

Nonclustered Maintenance per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Ins/Upd Requiring Maint 0.0 0.0
0 n/a
# of NC Ndx Maint 0.0 0.0
0 n/a

Deletes Requiring Maint 0.0 0.0
0 n/a
# of NC Ndx Maint 0.0 0.0
0 n/a

RID Upd from Clust Split 0.0 0.0
0 n/a
# of NC Ndx Maint 0.0 0.0
0 n/a

Upd/Del DOL Req Maint 32.1 0.9
1923 n/a
# of DOL Ndx Maint 53.7 1.5
3223 n/a
Avg DOL Ndx Maint / Op n/a n/a
1.67603 n/a

Page Splits 2.0 0.1
117 n/a
Retries 0.0 0.0
0 0.0 %
Deadlocks 0.0 0.0
0 0.0 %
Add Index Level 0.0 0.0
0 0.0 %

Page Shrinks 0.0 0.0
0 n/a

Index Scans per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Ascending Scans 75.0 2.1
4497 2.7 %
DOL Ascending Scans 2684.2 75.7
161049 95.2 %
Descending Scans 2.8 0.1
168 0.1 %
DOL Descending Scans 56.2 1.6
3369 2.0 %
------------ ------------
----------
Total Scans 2818.1 79.5
169083

===============================================================================

Metadata Cache Management
-------------------------

Metadata Cache Summary per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------

Open Object Usage
Active n/a n/a
7529 n/a
Max Ever Used Since Boot n/a n/a
14209 n/a
Free n/a n/a
92471 n/a
Reuse Requests
Succeeded n/a n/a
0 n/a
Failed n/a n/a
0 n/a

Open Index Usage
Active n/a n/a
1650 n/a
Max Ever Used Since Boot n/a n/a
2677 n/a
Free n/a n/a
8350 n/a
Reuse Requests
Succeeded n/a n/a
0 n/a
Failed n/a n/a
0 n/a

Open Partition Usage
Active n/a n/a
1652 n/a
Max Ever Used Since Boot n/a n/a
2681 n/a
Free n/a n/a
8348 n/a
Reuse Requests
Succeeded n/a n/a
0 n/a
Failed n/a n/a
0 n/a

Open Database Usage
Active n/a n/a
9 n/a
Max Ever Used Since Boot n/a n/a
11 n/a
Free n/a n/a
3 n/a
Reuse Requests
Succeeded n/a n/a
0 n/a
Failed n/a n/a
0 n/a

Object Manager Spinlock Contention n/a n/a
n/a 0.0 %

Object Spinlock Contention n/a n/a
n/a 0.0 %

Index Spinlock Contention n/a n/a
n/a 0.0 %

Index Hash Spinlock Contention n/a n/a
n/a 0.0 %

Partition Spinlock Contention n/a n/a
n/a 0.0 %

Partition Hash Spinlock Contention n/a n/a
n/a 0.0 %

===============================================================================

Lock Management
---------------

Lock Summary per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Total Lock Requests 8651.9 244.1
519115 n/a
Avg Lock Contention 0.0 0.0
0 0.0 %
Deadlock Percentage 0.0 0.0
0 0.0 %

Lock Detail per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------

Table Lock Hashtable
Lookups 493.4 13.9
29601 n/a
Avg Chain Length n/a n/a
0.49802 n/a
Spinlock Contention n/a n/a
n/a 0.0 %

Exclusive Table
Granted 28.3 0.8
1700 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total EX-Table Requests 28.3 0.8
1700 0.3 %

Shared Table
Granted 2.0 0.1
119 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total SH-Table Requests 2.0 0.1
119 0.0 %

Exclusive Intent
Granted 31.0 0.9
1858 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total EX-Intent Requests 31.0 0.9
1858 0.4 %

Shared Intent
Granted 405.2 11.4
24310 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total SH-Intent Requests 405.2 11.4
24310 4.7 %

Page & Row Lock HashTable
Lookups 8093.2 228.3
485592 n/a
Avg Chain Length n/a n/a
0.00126 n/a
Spinlock Contention n/a n/a
n/a 0.0 %

Exclusive Page
Granted 0.3 0.0
20 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total EX-Page Requests 0.3 0.0
20 0.0 %

Update Page
Granted 0.0 0.0
1 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total UP-Page Requests 0.0 0.0
1 0.0 %

Shared Page
Granted 49.0 1.4
2938 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total SH-Page Requests 49.0 1.4
2938 0.6 %


Exclusive Row
Granted 50.9 1.4
3054 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total EX-Row Requests 50.9 1.4
3054 0.6 %

Update Row
Granted 24.7 0.7
1482 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total UP-Row Requests 24.7 0.7
1482 0.3 %

Shared Row
Granted 7948.9 224.2
476936 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total SH-Row Requests 7948.9 224.2
476936 91.9 %


Next-Key
Total Next-Key Requests 0.0 0.0
0 n/a

Address Lock Hashtable
Lookups 111.6 3.1
6697 n/a
Avg Chain Length n/a n/a
0.00000 n/a
Spinlock Contention n/a n/a
n/a 0.0 %

Exclusive Address
Granted 5.2 0.1
310 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total EX-Address Requests 5.2 0.1
310 0.1 %

Shared Address
Granted 106.5 3.0
6387 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total SH-Address Requests 106.5 3.0
6387 1.2 %


Last Page Locks on Heaps
Granted 408.3 11.5
24497 100.0 %
Waited 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
---------- ----------
Total Last Pg Locks 408.3 11.5
24497 100.0 %


Deadlocks by Lock Type per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Total Deadlocks 0.0 0.0
0 n/a


Deadlock Detection
Deadlock Searches 0.0 0.0
0 n/a


Lock Promotions
Total Lock Promotions 0.0 0.0
0 n/a


Lock Timeouts by Lock Type per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Total Timeouts 0.0 0.0
0 n/a


===============================================================================

Data Cache Management
---------------------

Cache Statistics Summary (All Caches)
-------------------------------------
per sec per xact
count % of total
------------ ------------
---------- ----------

Cache Search Summary
Total Cache Hits 19028.5 536.8
1141709 95.2 %
Total Cache Misses 966.0 27.2
57958 4.8 %
------------------------- ------------ ------------
----------
Total Cache Searches 19994.5 564.0
1199667

Cache Turnover
Buffers Grabbed 1334.1 37.6
80047 n/a
Buffers Grabbed Dirty 0.0 0.0
0 0.0 %

Cache Strategy Summary
Cached (LRU) Buffers 19660.6 554.6
1179638 100.0 %
Discarded (MRU) Buffers 3.3 0.1
198 0.0 %

Large I/O Usage
Large I/Os Performed 142.5 4.0
8549 85.3 %

Large I/Os Denied due to
Pool < Prefetch Size 24.6 0.7
1478 14.7 %
Pages Requested
Reside in Another
Buffer Pool 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Large I/O Requests 167.1 4.7
10027

Large I/O Effectiveness
Pages by Lrg I/O Cached 291.0 8.2
17462 n/a
Pages by Lrg I/O Used 176.8 5.0
10606 60.7 %

Asynchronous Prefetch Activity
  APFs Issued                       84.2           2.4        5054      23.7 %
  APFs Denied Due To
    APF I/O Overloads                0.0           0.0           0       0.0 %
    APF Limit Overloads              0.0           0.0           0       0.0 %
    APF Reused Overloads             4.7           0.1         283       1.3 %
  APF Buffers Found in Cache
    With Spinlock Held               0.1           0.0           3       0.0 %
    W/o Spinlock Held              265.9           7.5       15951      74.9 %
-------------------------   ------------  ------------  ----------
Total APFs Requested               354.9          10.0       21291

Other Asynchronous Prefetch Statistics
  APFs Used                         74.7           2.1        4481         n/a
  APF Waits for I/O                 11.5           0.3         690         n/a
  APF Discards                       0.0           0.0           0         n/a

Dirty Read Behavior
Page Requests 0.0 0.0
0 n/a

-------------------------------------------------------------------------------
Cache: dbcc_cache
per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Spinlock Contention n/a n/a
n/a 0.0 %

Utilization n/a n/a
n/a 0.0 %

Cache Searches
Cache Hits 0.1 0.0
3 100.0 %
Found in Wash 0.0 0.0
0 0.0 %
Cache Misses 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Cache Searches 0.1 0.0
3

Pool Turnover 0.0 0.0
0 n/a

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash
Section Yet

Cache Strategy
Cached (LRU) Buffers 0.1 0.0
3 100.0 %
Discarded (MRU) Buffers 0.0 0.0
0 0.0 %

Large I/O Usage
Total Large I/O Requests 0.0 0.0
0 n/a

Large I/O Detail
16 Kb Pool
Pages Cached 0.0 0.0
0 n/a
Pages Used 0.0 0.0
0 n/a

Dirty Read Behavior
Page Requests 0.0 0.0 0
n/a

Tuning Recommendations for Data cache : dbcc_cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy'
for this cache.

- Consider removing the 16k pool for this cache.

-------------------------------------------------------------------------------
Cache: default data cache
                                 per sec      per xact       count  % of total
-------------------------   ------------  ------------  ----------  ----------
Spinlock Contention                  n/a           n/a         n/a       0.0 %

Utilization                          n/a           n/a         n/a      74.0 %

Cache Searches
  Cache Hits                     14698.6         414.6      881913      99.3 %
    Found in Wash                    2.3           0.1         135       0.0 %
  Cache Misses                     101.3           2.9        6076       0.7 %
-------------------------   ------------  ------------  ----------
Total Cache Searches             14799.8         417.5      887989

Pool Turnover
  2 Kb Pool
    LRU Buffer Grab                 48.4           1.4        2903      25.7 %
      Grabbed Dirty                  0.0           0.0           0       0.0 %
  4 Kb Pool
    LRU Buffer Grab                139.6           3.9        8373      74.3 %
      Grabbed Dirty                  0.0           0.0           0       0.0 %
-------------------------   ------------  ------------  ----------
Total Cache Turnover               187.9           5.3       11276

Buffer Wash Behavior
  Buffers Passed Clean              82.0           2.3        4917      99.9 %
  Buffers Already in I/O             0.0           0.0           0       0.0 %
  Buffers Washed Dirty               0.0           0.0           0       0.0 %

Cache Strategy
  Cached (LRU) Buffers           14521.0         409.6      871258     100.0 %
  Discarded (MRU) Buffers            0.0           0.0           0       0.0 %

Large I/O Usage
  Large I/Os Performed             139.5           3.9        8370      91.1 %
  Large I/Os Denied due to
    Pool < Prefetch Size            -1.2           0.0         -71      -0.8 %
    Pages Requested Reside
    in Another Buffer Pool          14.8           0.4         890       9.7 %
-------------------------   ------------  ------------  ----------
Total Large I/O Requests           153.2           4.3        9189

Large I/O Detail
  4 Kb Pool
    Pages Cached                   279.1           7.9       16746         n/a
    Pages Used                     176.8           5.0       10606      63.3 %

Dirty Read Behavior
  Page Requests                      0.0           0.0           0         n/a

Tuning Recommendations for Data cache : default data cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy' for this cache.

-------------------------------------------------------------------------------
Cache: log_cache
per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Spinlock Contention n/a n/a
n/a 0.0 %

Utilization n/a n/a
n/a 0.2 %

Cache Searches
Cache Hits 37.7 1.1
2261 100.0 %
Found in Wash 0.0 0.0
0 0.0 %
Cache Misses 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Cache Searches 37.7 1.1
2261

Pool Turnover
2 Kb Pool
LRU Buffer Grab 0.2 0.0
12 6.3 %
Grabbed Dirty 0.0 0.0
0 0.0 %
8 Kb Pool
LRU Buffer Grab 3.0 0.1
179 93.7 %
Grabbed Dirty 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Cache Turnover 3.2 0.1
191

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash
Section Yet

Cache Strategy
Cached (LRU) Buffers 7.9 0.2
471 100.0 %
Discarded (MRU) Buffers 0.0 0.0
0 0.0 %

Large I/O Usage
Large I/Os Performed 3.0 0.1
179 93.7 %

Large I/Os Denied due to
Pool < Prefetch Size 0.0 0.0
0 0.0 %
Pages Requested
Reside in Another
Buffer Pool 0.2 0.0
12 6.3 %
------------------------- ------------ ------------
----------
Total Large I/O Requests 3.2 0.1
191

Large I/O Detail
8 Kb Pool
Pages Cached 11.9 0.3
716 n/a
Pages Used 0.0 0.0
0 0.0 %

Dirty Read Behavior
Page Requests 0.0 0.0 0
n/a

Tuning Recommendations for Data cache : log_cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy'
for this cache.

-------------------------------------------------------------------------------
Cache: member_cache
per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Spinlock Contention n/a n/a
n/a 0.0 %

Utilization n/a n/a
n/a 0.1 %

Cache Searches
Cache Hits 24.2 0.7
1454 100.0 %
Found in Wash 6.5 0.2
392 27.0 %
Cache Misses 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Cache Searches 24.2 0.7
1454

Pool Turnover
2 Kb Pool
LRU Buffer Grab 0.0 0.0
2 100.0 %
Grabbed Dirty 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Cache Turnover 0.0 0.0
2

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash
Section Yet

Cache Strategy
Cached (LRU) Buffers 18.0 0.5
1079 100.0 %
Discarded (MRU) Buffers 0.0 0.0
0 0.0 %

Large I/O Usage
Total Large I/O Requests 0.0 0.0
0 n/a

Large I/O Detail
No Large Pool(s) In This Cache

Dirty Read Behavior
Page Requests 0.0 0.0 0
n/a

Tuning Recommendations for Data cache : member_cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy'
for this cache.

- Consider adding a large I/O pool for this cache.

-------------------------------------------------------------------------------
Cache: rule_cache
per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Spinlock Contention n/a n/a
n/a 0.0 %

Utilization n/a n/a
n/a 1.2 %

Cache Searches
Cache Hits 235.3 6.6
14118 100.0 %
Found in Wash 27.8 0.8
1669 11.8 %
Cache Misses 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Cache Searches 235.3 6.6
14118

Pool Turnover 0.0 0.0
0 n/a

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash
Section Yet

Cache Strategy
Cached (LRU) Buffers 225.0 6.3
13499 100.0 %
Discarded (MRU) Buffers 0.0 0.0
0 0.0 %

Large I/O Usage
Total Large I/O Requests 0.0 0.0
0 n/a

Large I/O Detail
No Large Pool(s) In This Cache

Dirty Read Behavior
Page Requests 0.0 0.0 0
n/a

Tuning Recommendations for Data cache : rule_cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy'
for this cache.

-------------------------------------------------------------------------------
Cache: tempdb_cache
per sec per xact
count % of total
------------------------- ------------ ------------
---------- ----------
Spinlock Contention n/a n/a
n/a 0.0 %

Utilization n/a n/a
n/a 24.5 %

Cache Searches
Cache Hits 4032.7 113.8
241960 82.3 %
Found in Wash 0.0 0.0
0 0.0 %
Cache Misses 864.7 24.4
51882 17.7 %
------------------------- ------------ ------------
----------
Total Cache Searches 4897.4 138.1
293842

Pool Turnover
2 Kb Pool
LRU Buffer Grab 1143.0 32.2
68578 100.0 %
Grabbed Dirty 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Cache Turnover 1143.0 32.2
68578

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash
Section Yet

Cache Strategy
Cached (LRU) Buffers 4888.8 137.9
293328 99.9 %
Discarded (MRU) Buffers 3.3 0.1
198 0.1 %

Large I/O Usage
Large I/Os Performed 0.0 0.0
0 0.0 %

Large I/Os Denied due to
Pool < Prefetch Size 10.8 0.3
647 100.0 %
Pages Requested
Reside in Another
Buffer Pool 0.0 0.0
0 0.0 %
------------------------- ------------ ------------
----------
Total Large I/O Requests 10.8 0.3
647

Large I/O Detail
No Large Pool(s) In This Cache

Dirty Read Behavior
Page Requests 0.0 0.0 0
n/a

Tuning Recommendations for Data cache : tempdb_cache
-------------------------------------
- Consider using 'strict LRU replacement policy' for this cache.

- Consider adding a large I/O pool for this cache.

===============================================================================

Procedure Cache Management       per sec      per xact       count  % of total
---------------------------  ------------  ------------  ----------  ----------
  Procedure Requests             185.1           5.2       11108         n/a

  Procedure Reads from Disk        0.1           0.0           3       0.0 %
  Procedure Writes to Disk         0.0           0.0           0       0.0 %
  Procedure Removals              20.5           0.6        1227         n/a
  Procedure Recompilations         0.0           0.0           2         n/a

Recompilations Requests:
  Execution Phase                  0.0           0.0           0       0.0 %
  Compilation Phase                0.0           0.0           0       0.0 %
  Execute Cursor Execution         0.0           0.0           0       0.0 %
  Redefinition Phase               0.0           0.0           2     100.0 %

Recompilation Reasons:
  Table Missing                    0.0           0.0           2         n/a
  Temporary Table Missing          0.0           0.0           0         n/a
  Schema Change                    0.0           0.0           0         n/a
  Index Change                     0.0           0.0           0         n/a
  Isolation Level Change           0.0           0.0           0         n/a
  Permissions Change               0.0           0.0           0         n/a
  Cursor Permissions Change        0.0           0.0           0         n/a

SQL Statement Cache:
  Statements Cached                0.0           0.0           0         n/a
  Statements Found in Cache        0.0           0.0           0         n/a
  Statements Not Found             0.0           0.0           0         n/a
  Statements Dropped               0.0           0.0           0         n/a
  Statements Restored              0.0           0.0           0         n/a
  Statements Not Cached           23.3           0.7        1400         n/a


===============================================================================

Memory Management                per sec      per xact       count  % of total
---------------------------  ------------  ------------  ----------  ----------
  Pages Allocated                609.2          17.2       36554         n/a
  Pages Released                 609.6          17.2       36573         n/a

===============================================================================

Recovery Management
-------------------

Checkpoints                      per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  # of Normal Checkpoints          0.1           0.0           3     100.0 %
  # of Free Checkpoints            0.0           0.0           0       0.0 %
-------------------------  ------------  ------------  ----------
  Total Checkpoints                0.1           0.0           3

Avg Time per Normal Chkpt 0.00000 seconds

===============================================================================

Disk I/O Management
-------------------

Max Outstanding I/Os             per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Server                           n/a           n/a           0         n/a
  Engine 0                         n/a           n/a         677         n/a
  Engine 1                         n/a           n/a         616         n/a
  Engine 2                         n/a           n/a         631         n/a


I/Os Delayed by
  Disk I/O Structures              n/a           n/a           0         n/a
  Server Config Limit              n/a           n/a           0         n/a
  Engine Config Limit              n/a           n/a           0         n/a
  Operating System Limit           n/a           n/a           0         n/a


Total Requested Disk I/Os        272.4           7.7       16344

Completed Disk I/O's
  Engine 0                        37.9           1.1        2274      14.0 %
  Engine 1                       131.0           3.7        7861      48.5 %
  Engine 2                       101.1           2.9        6068      37.4 %
-------------------------  ------------  ------------  ----------
  Total Completed I/Os           270.1           7.6       16203


Device Activity Detail
----------------------

Device:
/dev/VG_DB15PROD04_DATA04/rLV_DB15PROD04_DATA04_01
  data01                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            8.1           0.2         486      33.3 %
    Non-APF                       16.2           0.5         970      66.4 %
  Writes                           0.1           0.0           4       0.3 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      24.3           0.7        1460       8.9 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_DATA04/rLV_DB15PROD04_DATA04_2
  data02                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                           14.4           0.4         866      36.2 %
    Non-APF                       25.4           0.7        1522      63.7 %
  Writes                           0.0           0.0           1       0.0 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      39.8           1.1        2389      14.6 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_DATA04/rLV_DB15PROD04_DATA04_3
  data03                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                           18.9           0.5        1134      52.5 %
    Non-APF                       17.1           0.5        1025      47.5 %
  Writes                           0.0           0.0           0       0.0 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      36.0           1.0        2159      13.2 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_DATA04/rLV_DB15PROD04_DATA04_4
  data04                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                           16.9           0.5        1012      54.4 %
    Non-APF                       14.1           0.4         848      45.6 %
  Writes                           0.0           0.0           0       0.0 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      31.0           0.9        1860      11.4 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_DATA04/rLV_DB15PROD04_DATA04_5
  data05                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                           13.8           0.4         830      48.5 %
    Non-APF                       14.7           0.4         883      51.5 %
  Writes                           0.0           0.0           0       0.0 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      28.6           0.8        1713      10.5 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_DATA04/rLV_DB15PROD04_DATA04_6
  data06                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                           12.1           0.3         723      50.4 %
    Non-APF                       11.9           0.3         711      49.6 %
  Writes                           0.0           0.0           0       0.0 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      23.9           0.7        1434       8.8 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_DATA04/rLV_DB15PROD04_DATA04_7
  data07                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            0.0           0.0           0       0.0 %
    Non-APF                        0.0           0.0           0       0.0 %
  Writes                           0.0           0.0           1     100.0 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       0.0           0.0           1       0.0 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_DATA04/rLV_DB15PROD04_DATA04_8
  data08                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            0.0           0.0           0       0.0 %
    Non-APF                        1.9           0.1         114      85.1 %
  Writes                           0.3           0.0          20      14.9 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       2.2           0.1         134       0.8 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_LOG04/rLV_DB15PROD04_LOG04_01
  log01                          per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            0.0           0.0           0       0.0 %
    Non-APF                        0.1           0.0           3       0.2 %
  Writes                          21.7           0.6        1300      99.8 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      21.7           0.6        1303       8.0 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_LOG04/rLV_DB15PROD04_LOG04_02
  log02                          per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            0.0           0.0           0       0.0 %
    Non-APF                        0.0           0.0           0       0.0 %
  Writes                           0.3           0.0          18     100.0 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       0.3           0.0          18       0.1 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_TEMPDBDATA/rLV_DB15PROD04_TEMPDBDATA1
  tempdat1                       per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            0.0           0.0           0       0.0 %
    Non-APF                       11.5           0.3         688      36.5 %
  Writes                          20.0           0.6        1197      63.5 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      31.4           0.9        1885      11.5 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_TEMPDBDATA/rLV_DB15PROD04_TEMPDBDATA2
  tempdat2                       per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            0.0           0.0           0       0.0 %
    Non-APF                        6.0           0.2         361      19.5 %
  Writes                          24.8           0.7        1486      80.5 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                      30.8           0.9        1847      11.3 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD04_TEMPDBLOG/rLV_DB15PROD04_TEMPDBLOG1
  templog2                       per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       0.0           0.0           0         n/a
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       0.0           0.0           0       0.0 %


-----------------------------------------------------------------------------

Device:
/opt/sybase/data/P4master.dat
  master                         per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            0.0           0.0           0       0.0 %
    Non-APF                        0.7           0.0          39      28.9 %
  Writes                           1.6           0.0          96      71.1 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       2.3           0.1         135       0.8 %


-----------------------------------------------------------------------------

Device:
/opt/sybase/data/P4sysprocs.dat
  sysprocsdev                    per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Reads
    APF                            0.0           0.0           0       0.0 %
    Non-APF                        0.0           0.0           0       0.0 %
  Writes                           0.0           0.0           1     100.0 %
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       0.0           0.0           1       0.0 %


-----------------------------------------------------------------------------

Device:
/opt/sybase/data/P4systemdb.dat
  systemdbdev                    per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       0.0           0.0           0         n/a
-------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       0.0           0.0           0       0.0 %



-----------------------------------------------------------------------------



===============================================================================

Network I/O Management
----------------------

Total Network I/O Requests      2021.3          57.0      121277         n/a
Network I/Os Delayed               0.0           0.0           0       0.0 %


Total TDS Packets Received       per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Engine 0                      1031.4          29.1       61886      81.8 %
  Engine 1                       209.7           5.9       12580      16.6 %
  Engine 2                        20.0           0.6        1198       1.6 %
-------------------------  ------------  ------------  ----------
  Total TDS Packets Rec'd       1261.1          35.6       75664


Total Bytes Received             per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Engine 0                     68749.6        1939.3     4124976      82.8 %
  Engine 1                     13658.4         385.3      819505      16.4 %
  Engine 2                       647.8          18.3       38869       0.8 %
-------------------------  ------------  ------------  ----------
  Total Bytes Rec'd            83055.8        2342.9     4983350


Avg Bytes Rec'd per Packet         n/a           n/a          65         n/a


-----------------------------------------------------------------------------

Total TDS Packets Sent           per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Engine 0                      1238.8          34.9       74329      81.2 %
  Engine 1                       264.7           7.5       15880      17.4 %
  Engine 2                        21.4           0.6        1286       1.4 %
-------------------------  ------------  ------------  ----------
  Total TDS Packets Sent        1524.9          43.0       91495


Total Bytes Sent                 per sec      per xact       count  % of total
-------------------------  ------------  ------------  ----------  ----------
  Engine 0                    780656.3       22021.3    46839377      77.3 %
  Engine 1                    223197.6        6296.1    13391853      22.1 %
  Engine 2                      6682.0         188.5      400921       0.7 %
-------------------------  ------------  ------------  ----------
  Total Bytes Sent           1010535.9       28505.9    60632151


Avg Bytes Sent per Packet          n/a           n/a         662         n/a

===============================================================================

Replication Agent
-----------------

Replication Agent: POLMED
Replication Server: CPTREP01_RS

                                 per sec      per xact       count  % of total
                            ------------  ------------  ----------  ----------
Log Scan Summary
  Log Records Scanned              n/a           n/a       11453         n/a
  Log Records Processed            n/a           n/a        2962         n/a

Log Scan Activity
  Updates                          n/a           n/a         438         n/a
  Inserts                          n/a           n/a         263         n/a
  Deletes                          n/a           n/a          35         n/a
  Store Procedures                 n/a           n/a           0         n/a
  DDL Log Records                  n/a           n/a           0         n/a
  Writetext Log Records            n/a           n/a           0         n/a
  Text/Image Log Records           n/a           n/a           0         n/a
  CLRs                             n/a           n/a           0         n/a
  Checkpoints Processed            n/a           n/a           0         n/a
  Transactions purged              n/a           n/a           0         n/a

Transaction Activity
  Opened                           n/a           n/a        1113         n/a
  Commited                         n/a           n/a        1110         n/a
  Aborted                          n/a           n/a           3         n/a
  Prepared                         n/a           n/a           3         n/a
  Delayed Commit                   n/a           n/a           0         n/a
  Maintenance User                 n/a           n/a           0         n/a

Log Extension Wait
  Count                            n/a           n/a        1029         n/a
  Amount of time (ms)              n/a           n/a       58919         n/a
  Longest Wait (ms)                n/a           n/a           0         n/a
  Average Time (ms)                n/a           n/a        57.3         n/a

Schema Cache Lookups
  Forward Schema
    Count                          n/a           n/a           0         n/a
    Total Wait (ms)                n/a           n/a           0         n/a
    Longest Wait (ms)              n/a           n/a           0         n/a
    Average Time (ms)              n/a           n/a         0.0         n/a
  Backward Schema
    Count                          n/a           n/a           0         n/a
    Total Wait (ms)                n/a           n/a           0         n/a
    Longest Wait (ms)              n/a           n/a           0         n/a
    Average Time (ms)              n/a           n/a         0.0         n/a

Truncation Point Movement
  Moved                            n/a           n/a           3         n/a
  Gotten from RS                   n/a           n/a           3         n/a

Connections to Replication Server
  Success                          n/a           n/a           0         n/a
  Failed                           n/a           n/a           0         n/a

Network Packet Information
  Packets Sent                     n/a           n/a        1486         n/a
  Full Packets Sent                n/a           n/a         416         n/a
  Largest Packet                   n/a           n/a           0         n/a
  Amount of Bytes Sent             n/a           n/a     1796260         n/a
  Average Packet                   n/a           n/a      1208.8         n/a

I/O Wait from RS
  Count                            n/a           n/a        1489         n/a
  Amount of Time (ms)              n/a           n/a        1189         n/a
  Longest Wait (ms)                n/a           n/a           0         n/a
  Average Wait (ms)                n/a           n/a         0.8         n/a

--------------------------------------------------------------------------------
=============================== End of Report =================================
(1 row affected)
(return status = 0)

Regards,

John


"Mark A. Parsons" <iron_horse Posted on 2008-05-31 15:36:07.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <483fedb7$1@forums-1-dub> <48402447.6184.1681692777@sybase.com>
In-Reply-To: <48402447.6184.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <48417067@forums-1-dub>
Date: 31 May 2008 08:36:07 -0700
X-Trace: forums-1-dub 1212248167 10.22.241.152 (31 May 2008 08:36:07 -0700)
X-Original-Trace: 31 May 2008 08:36:07 -0700, vip152.sybase.com
Lines: 107
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10888
Article PK: 89499

I'm afraid I've just got a lot more questions ...

- did ASE 12.5.3 utilize the same devices on the same SAN/DG? if different DG's are being used for ASE 15.0.1, do the
DG's have the same number of spindles and amount of cache? [Granted, it's probably too late at this point to know for
sure but I'm wondering if you had the same physical IO times with ASE 12.5.3, or were the times 'faster' due to the use
of a more efficient DG configuration - whether it be more spindles, more cache, or the use of cylinders on the outer
edge of the disks?]

- can you quantify the performance degradations you've noticed over the last 6-9 months? is there any chance the volume
of data and/or queries has ramped up in that time which could also account for some degraded performance, eg, more
contention for cpu and/or disk resources? [Again, perhaps too late to know for sure but I'm wondering if we're
comparing apples to apples in terms of application/database workload? While an increased workload is not necessarily a
reason for degraded performance I'd want to rule out the possibility.]

- you've provided an idea of monSysWaits numbers from a 5-minute period of monitoring, but it's not clear at this point
how those numbers relate (if at all) to the stored proc in question; could you try the following and report back with
all of the output:

set statistics time on
set statistics io on
go
select convert(varchar,getdate(),109)
select * from master..monProcessWaits where SPID = @@spid
<run_slow_disk_bound_query>
select * from master..monProcessWaits where SPID = @@spid
select convert(varchar,getdate(),109)
go
select convert(varchar,getdate(),109)
select * from master..monProcessWaits where SPID = @@spid
<run_fast_in_cache_query>
select * from master..monProcessWaits where SPID = @@spid
select convert(varchar,getdate(),109)
go

- you've mentioned that the stored proc in question (as well as a couple others) are the root of all evil within your
system; could you humor me with a brief explanation of why you think this particular proc is one cause for the degraded
performance? [Being on the outside looking in I'm just wondering how this lil fella popped to the top of the most
wanted list, and whether there could be other issues that are being overlooked? This (obviously ?) also falls under the
quantification issue.]

------------------------

Setting aside the stored proc for a moment ...

Does your system by any chance submit a lot of dynamic SQL to the dataserver, in particular a lot of same-format dynamic
SQL?

NOTE: By 'same-format' dynamic SQL I'm referring to batches of SQL that differ only in terms of the SARG's provided to
queries.
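
[For illustration only, here is a hypothetical pair of batches against John's claims table; they are 'same-format' because they differ only in the literal supplied:

select m_member, c_status from claims where m_member = 49430
go
select m_member, c_status from claims where m_member = 75219
go

With 'literal autoparam' enabled the dataserver can treat both as one parameterized statement and reuse a single cached plan.]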

With ASE 15.0.x the optimizer typically spends a lot more time optimizing queries and I'm wondering if you could be
seeing a performance degradation due to the overhead of query compilation.

At one of my current clients we've seen a huge degradation in overall performance due to the excessive overhead of query
compilation times for high volumes of dynamic SQL commands.

In the case of a replicate dataserver we found that with ASE 12.5.4 a 'normal' nightly batch process in the PDB would
replicate through to the RDB in a matter of a few hours; with ASE 15.0.x the same 'normal' nightly batch process in the
PDB took more than 24 hours to replicate through to the RDB, during which time the next night's batch process started
queuing up; net result is that replication into the RDB fell behind and never caught up.

In another case a batch process running against a database was submitting on the order of 20K batches of same-format
dynamic SQL to the dataserver. With ASE 12.5.4 this batch would run in 5-10 minutes, with ASE 15.0.x the batch was
taking 45-60 minutes to complete. [Yeah, the batch process could/should be rewritten as a set-based process but getting
the code changed is not high on the list of to-do's for the developers and/or management - even though a proof of
concept shows a set-based process could run in under 30 seconds.]

In both cases we found that we could 'get back to' ASE 12.5.4 performance numbers by allocating some memory to
'statement cache size' and enabling 'literal autoparam'.
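
[A minimal sketch of the change; the size here is only a placeholder, and from memory 'statement cache size' is specified in 2K pages:

sp_configure 'statement cache size', 10000
go
sp_configure 'enable literal autoparam', 1
go

I believe both options are dynamic, so no restart should be needed.]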

During a period of 'heavy' dataserver activity what kind of numbers do you get from sp_sysmon for the 'Statement Cache'
section? [In particular I'm interested in the numbers in 'Statements Not Cached'.]
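
[If it helps, a longer sample is just something like:

sp_sysmon '00:10:00'
go

which samples for 10 minutes and prints the full report.]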

From your 1-minute sp_sysmon we see 23.3 statements/second, or 1400 statements/minute; is this low/high/about-right for
periods of 'heavy' dataserver activity during which you see 'degraded' performance? [In the case of the one client's
replication issue we saw this number jump from dozens per second to thousands per second once we enabled statement
cache and literal autoparam!]

I also notice from your 1-minute sp_sysmon that you've got ~33% cpu utilization numbers, but the cache hit rates seem
relatively low (though this really depends on the speed of your cpu's). The (relatively) high cpu utilization numbers,
in conjunction with (relatively) low cache hit rates, can sometimes be indicative of a dataserver that's spending a good
amount of time compiling queries.

At some point you may want to consider allocating some memory to 'statement cache size' and enabling 'literal
autoparam'. You'll need to experiment with the value you assign to 'statement cache size', with the idea being to make
a sizeable dent in the numbers in sp_sysmon's 'Statements Not Cached' column. [You'll probably want to look at longer
sp_sysmon samplings, or a group of consecutive 'smaller' sp_sysmon samplings, to get a good idea of overall
effectiveness. ("Duh, Mark!" ?)]

NOTE: There are a few cases of 'literal autoparam' causing either a) data corruption (CR#'s 491032, 432956) or b)
follow-on performance related issues (akin to a stored proc which can generate different query plans based on differing
input parameters).

------------------------

From your 1-minute sp_sysmon I also see that you've got about a 10% removal rate in the procedure cache. While this
may be a one-time blip, and you'll want to take into consideration longer sp_sysmon sampling periods, this could be
indicative of some performance degradation.

What kind of numbers are you getting from 'sp_monitorconfig' for procedure cache usage (in particular 'current %' and
'max used' during a period of heavy activity)?
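
[For example, assuming the usual syntax:

sp_monitorconfig 'procedure cache size'
go

The Pct_act and Max_Used columns in the output are the 'current %' and 'max used' numbers I'm after.]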

If your (degraded) process makes use of a lot of stored procs you may see some boost in performance by adding some more
memory to the procedure cache.


"Mark A. Parsons" <iron_horse Posted on 2008-05-31 15:45:04.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <483fedb7$1@forums-1-dub> <48402447.6184.1681692777@sybase.com> <48417067@forums-1-dub>
In-Reply-To: <48417067@forums-1-dub>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <48417280$1@forums-1-dub>
Date: 31 May 2008 08:45:04 -0700
X-Trace: forums-1-dub 1212248704 10.22.241.152 (31 May 2008 08:45:04 -0700)
X-Original-Trace: 31 May 2008 08:45:04 -0700, vip152.sybase.com
Lines: 147
X-Authenticated-User: TeamSybase
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10889
Article PK: 89500

Clarification:

"[In the case of the one client's replication issue we saw this number jump from dozen's per second to thousand's per
second once we enabled statement cache and literal autoparam!]"

Should have been:

"[In the case of the one client's replication issue we saw this number jump from dozen's per second 'statements not
cached' to thousand's per second 'statements cached' once we enabled statement cache and literal autoparam!]"

Mark A. Parsons wrote:
> I'm afraid I've just got a lot more questions ...
>
> - did ASE 12.5.3 utilize the same devices on the same SAN/DG? if
> different DG's are being used for ASE 15.0.1, do the DG's have the same
> number of spindles and amount of cache? [Granted, it's probably too
> late at this point to know for sure but I'm wondering if you had the
> same physical IO times with ASE 12.5.3, or were the times 'faster' due
> to the use of a more efficient DG configuration - whether it be more
> spindles, more cache, or the use of cylinders on the outer edge of the
> disks?]
>
> - can you quantify the performance degradations you've noticed over the
> last 6-9 months? is there any chance the volume of data and/or queries
> has ramped up in that time which could also account for some degraded
> performance, eg, more contention for cpu and/or disk resources? [Again,
> perhaps too late to know for sure but I'm wondering if we're comparing
> apples to apples in terms of application/database workload? While an
> increased workload is not necessarily a reason for degraded performance
> I'd want to rule out the possibility.]
>
> - you've provided an idea of monSysWaits numbers from a 5-minute period
> of monitoring, but it's not clear at this point how those numbers relate
> (if at all) to the stored proc in question; could you try the following
> and report back with all of the output:
>
> set statistics time on
> set statistics io on
> go
> select convert(varchar,getdate(),109)
> select * from master..monProcessWaits where SPID = @@spid
> <run_slow_disk_bound_query>
> select * from master..monProcessWaits where SPID = @@spid
> select convert(varchar,getdate(),109)
> go
> select convert(varchar,getdate(),109)
> select * from master..monProcessWaits where SPID = @@spid
> <run_fast_in_cache_query>
> select * from master..monProcessWaits where SPID = @@spid
> select convert(varchar,getdate(),109)
> go
>
> - you've mentioned that the stored proc in question (as well as a couple
> others) are the root of all evil within your system; could you humor me
> with a brief explanation of why you think this particular proc is one
> cause for the degraded performance? [Being on the outside looking in
> I'm just wondering how this lil fella popped to the top of the most
> wanted list, and whether there could be other issues that are being
> overlooked? This (obviously ?) also falls under the quantification issue.]
>
> ------------------------
>
> Setting aside the stored proc for a moment ...
>
> Does your system by any chance submit a lot of dynamic SQL to the
> dataserver, in particular a lot of same-format dynamic SQL?
>
> NOTE: By 'same-format' dynamic SQL I'm referring to batches of SQL that
> differ only in terms of the SARG's provided to queries.
>
> With ASE 15.0.x the optimizer typically spends a lot more time
> optimizing queries and I'm wondering if you could be seeing a
> performance degradation due to the overhead of query compilation.
>
> At one of my current clients we've seen a huge degradation in overall
> performance due to the excessive overhead of query compilation times for
> high volumes of dynamic SQL commands.
>
> In the case of a replicate dataserver we found that with ASE 12.5.4 a
> 'normal' nightly batch process in the PDB would replicate through to the
> RDB in a matter of a few hours; with ASE 15.0.x the same 'normal'
> nightly batch process in the PDB took more than 24 hours to replicate
> through to the RDB, during which time the next night's batch process
> started queuing up; net result is that replication into the RDB fell
> behind and never caught up.
>
> In another case a batch process running against a database was
> submitting on the order of 20K batches of same-format dynamic SQL to the
> dataserver. With ASE 12.5.4 this batch would run in 5-10 minutes, with
> ASE 15.0.x the batch was taking 45-60 minutes to complete. [Yeah, the
> batch process could/should be rewritten as a set-based process but
> getting the code changed is not high on the list of to-do's for the
> developers and/or management - even though a proof of concept shows a
> set-based process could run in under 30 seconds.]
>
> In both cases we found that we could 'get back to' ASE 12.5.4
> performance numbers by allocating some memory to 'statement cache size'
> and enabling 'literal autoparam'.
>
> During a period of 'heavy' dataserver activity what kind of numbers do
> you get from sp_sysmon for the 'Statement Cache' section? [In
> particular I'm interested in the numbers in 'Statements Not Cached'.]
>
> From your 1-minute sp_sysmon we see 23.3 statements/second, or 1400
> statements/minute; is this low/high/about-right for periods of 'heavy'
> dataserver activity during which you see 'degraded' performance? [In
> the case of the one client's replication issue we saw this number jump
> from dozens per second to thousands per second once we enabled
> statement cache and literal autoparam!]
>
> I also notice from your 1-minute sp_sysmon that you've got ~33% cpu
> utilization numbers, but the cache hit rates seem relatively low (though
> this really depends on the speed of your cpu's). The (relatively) high
> cpu utilization numbers, in conjunction with (relatively) low cache hit
> rates, can sometimes be indicative of a dataserver that's spending a
> good amount of time compiling queries.
>
> At some point you may want to consider allocating some memory to
> 'statement cache size' and enabling 'literal autoparam'. You'll need to
> experiment with the value you assign to 'statement cache size', with the
> idea being to make a sizeable dent in the numbers in sp_sysmon's
> 'Statements Not Cached' column. [You'll probably want to look at longer
> sp_sysmon samplings, or a group of consecutive 'smaller' sp_sysmon
> samplings, to get a good idea of overall effectiveness. ("Duh, Mark!" ?)]
>
> NOTE: There are a few cases of 'literal autoparam' causing either a)
> data corruption (CR#'s 491032, 432956) or b) follow-on performance
> related issues (akin to a stored proc which can generate different query
> plans based on differing input parameters).
>
> ------------------------
>
> From your 1-minute sp_sysmon I also see that you've got about a 10%
> removal rate in the procedure cache. While this may be a one-time blip,
> and you'll want to take into consideration longer sp_sysmon sampling
> periods, this could be indicative of some performance degradation.
>
> What kind of numbers are you getting from 'sp_monitorconfig' for
> procedure cache usage (in particular 'current %' and 'max used' during a
> period of heavy activity)?
>
> If your (degraded) process makes use of a lot of stored procs you may
> see some boost in performance by adding some more memory to the
> procedure cache.
>
>
>


John Everitt Posted on 2008-05-31 19:54:18.0Z
Sender: 136d.4841ac84.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <4841acea.1379.1681692777@sybase.com>
References: <48417280$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 31 May 2008 12:54:18 -0700
X-Trace: forums-1-dub 1212263658 10.22.241.41 (31 May 2008 12:54:18 -0700)
X-Original-Trace: 31 May 2008 12:54:18 -0700, 10.22.241.41
Lines: 130
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10890
Article PK: 89501

Mark,

Crikey, you're a hard task master ... Some of what you ask
will need to wait until Monday when the system will be
loaded once again. However, I did want to reply mostly to
rant excitedly about the statement cache and literal
autoparam. I'll try and contain myself for the time being
and go through your mail in order ...

First, the devices for version 15 were allocated from the
same DGs. We have a "high speed" DG that contains smaller
15K disks that we assign log devices from at RAID 1 and a
lower speed DG using larger 10K disks that we assign data
disks from at RAID 5. All these devices are raw.
Incidentally our tempdb is also on raw devices allocated
from these DGs which did appear to give us a performance
improvement over a tempdb based on a VxFS file system.

I have kept figures for the average duration of the "claims
processing loop" for the last 9 months and on a typical
account we saw approximately a 10% degradation in
performance. I say approximately as the thing that was
strange was that while 12.5 gave very consistent
performance, 15 gave very erratic performance, one day being
comparable to 12.5, the next considerably worse. There was
nothing I could find that would distinguish a bad day from a
good. However, the business can change focus on a day by day
basis and it is likely that one day part of the business
would be doing predominantly one function which would change
the next day. You are also right to suggest that the data
load changes over time. On the bigger accounts for example,
our claims table can grow by 40 to 50 thousand records a
day. Every year we strip off the oldest 12 months of data from
the table. We tend to do this around June / July time i.e.
we're coming to that time now. It'll be interesting to see
what happens when we do purge a proportion of the records.
We therefore do expect some degradation in performance over
the year but this doesn't explain the extent of the
degradation we have seen since 15 was installed.

I'll get you the monProcessWaits details in due course ...

A couple of reasons why I suspect read_claims_sav. First, we
have embedded a set of timing functions within the code and
have isolated the area where the application sums claims
values as an area where time is spent. We also use ProActive
DBA from Whitesands and are able to isolate "long running"
queries relatively easily. Yup, read_claims_sav and its
brothers rate fairly highly here too.

Now, time to start getting excited. Yes, our system does
a lot, and I mean a lot, of similar-format embedded SQL
queries. The reason is that our 4GL, PowerHouse, appears
to break all its requests, coded in its own proprietary
language, into many small but similar requests. The slight
twist being that it generally creates a stored proc of the
query then calls the stored proc. This allows it to call the
same proc repeatedly if necessary, which would obviously
reduce the recompile burden a bit, but the stored proc is
only persistent for the single PowerHouse request. I guess
this would circumvent the statement cache though ... damn
it. However, there are still a lot of other requests which
are straight embedded SQL and which would benefit from the
statement cache. I think this is definitely a very promising area to
be investigated. One thing, we currently use 200 histograms
for our stats. The system is very OLTP orientated i.e. all
the queries pretty much are of the "select fields from table
where field = something" variety with very little joining or
anything dss-like going on. Would it make sense to try
reducing the number of histograms so as to reduce the amount
of stats the optimiser needs to wade through ?
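
[If we try it, my understanding is it's just a re-run of update statistics with a smaller step count, something like (the 50 is only a placeholder):

update index statistics claims using 50 values
go
]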

>I also notice from your 1-minute sp_sysmon that you've got
~33% cpu utilization
>numbers, but the cache hit rates seem
>relatively low (though this really depends on the speed of
your cpu's). The
>(relatively) high cpu utilization numbers,
>in conjunction with (relatively) low cache hit rates, can
sometimes be indicative of
>a dataserver that's spending a good
>amount of time compiling queries.

Hmmm... I took a look at the statement above and thought
"what's he going on about low cache hit rates ?" believing
that our cache hit rates are normally over 99% and sure
enough in the sysmon I provided the cache hit rate is only
95%. The reason appears to be that the cache hit rate on
tempdb is at 82.3% but the cache utilisation is only 24.5%.
How is this possible ? Given that pretty much all our
queries are very short lived, if the cache is not fully
utilised, how is it possible for the cache hit rate not to
be 100% ? I'll try and monitor this next week but any ideas
how this is possible ?

Monday evenings is the time we make config changes. We need
a week to be able to check on performance changes so I'll
change the statement cache settings then. Once changed, we
will also start watching the statement cache stats closely
...

As for the procedure cache, we have suffered before with a
very busy and full procedure cache. We believe this is due
to the way PowerHouse works i.e. it's use of multiple stored
procs when processing it's own scripts. We are therefore
wary of heavy procedure cache usage but will pay particular
attention to this next week. One thing though, if we do get
a lot of action in the procedure cache, is it possible for
ASE to potentially thrash looking for contiguous space in
the cache ? We did run into an issue with an older version
of 12.5 where the server would suddenly become very
unresponsive and after a few months of investigation by
Sybase the conclusion was that due to the way we use the
procedure cache and the fact that at the time we were using
an 8k page size, it was possible for the procedure cache to
become very fragmented causing the process to start to work
very hard in finding usable space. Could we potentially be
running in to a bit of this and if so are there any tell
tale signs that this is what is going on internally ?
Thinking about this, I will probably play with adding more
procedure cache to the system anyway. I'll queue this
change up after the statement cache changes ...

Anyway, enormous thanks for the assistance. I am almost
looking forward to Monday morning to start looking at some
of these suggestions. Well, not really ...

Kind Regards,

John


"Mark A. Parsons" <iron_horse Posted on 2008-05-31 20:46:34.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <48417280$1@forums-1-dub> <4841acea.1379.1681692777@sybase.com>
In-Reply-To: <4841acea.1379.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4841b92a$1@forums-1-dub>
Date: 31 May 2008 13:46:34 -0700
X-Trace: forums-1-dub 1212266794 10.22.241.152 (31 May 2008 13:46:34 -0700)
X-Original-Trace: 31 May 2008 13:46:34 -0700, vip152.sybase.com
Lines: 81
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10891
Article PK: 89502


>> I also notice from your 1-minute sp_sysmon that you've got
> ~33% cpu utilization
>> numbers, but the cache hit rates seem
>> relatively low (though this really depends on the speed of
> your cpu's). The
>> (relatively) high cpu utilization numbers,
>> in conjunction with (relatively) low cache hit rates, can
> sometimes be indicative of
>> a dataserver that's spending a good
>> amount of time compiling queries.
>
> Hmmm... I took a look at the statement above and thought
> "what's he going on about low cache hit rates ?" believing
> that our cache hit rates are normally over 99% and sure
> enough in the sysmon I provided the cache hit rate is only
> 95%. The reason appears to be that the cache hit rate on
> tempdb is at 82.3% but the cache utilisation is only 24.5%.
> How is this possible ? Given that pretty much all our
> queries are very short lived, if the cache is not fully
> utilised, how is it possible for the cache hit rate not to
> be 100% ? I'll try and monitor this next week but any ideas
> how this is possible ?

Sorry, I wasn't addressing the cache hit percentages but rather the cache hit numbers; let me rephrase my comment ...

19K total cache hits per second seems quite low with 3 cpu's hitting an average of 32.6% utilization and no spinlock
contention.

Either you've got some really slow cpu's or the dataserver engines are busy doing something other than hitting data
caches. Granted, there are quite a few things the optimizer could be doing besides hitting data caches ... it's just
that I've seen similarly high-cpu-low-cache-hit numbers when I knew there were a lot of inbound dynamic SQL queries
while statement cache/literal autoparam was disabled.

On a couple of my client's systems I've seen the average cpu utilization numbers drop from 40% to 15% as soon as I
enabled statement cache (and literal autoparam). Yeah, these particular systems see a *lot* of repeated dynamic SQL. YMMV.

> As for the procedure cache, we have suffered before with a
> very busy and full procedure cache. We believe this is due
> to the way PowerHouse works i.e. it's use of multiple stored
> procs when processing it's own scripts. We are therefore
> wary of heavy procedure cache usage but will pay particular
> attention to this next week. One thing though, if we do get
> a lot of action in the procedure cache, is it possible for
> ASE to potentially thrash looking for contiguous space in
> the cache ? We did run into an issue with an older version
> of 12.5 where the server would suddenly become very
> unresponsive and after a few months of investigation by
> Sybase the conclusion was that due to the way we use the
> procedure cache and the fact that at the time we were using
> an 8k page size, it was possible for the procedure cache to
> become very fragmented causing the process to start to work
> very hard in finding usable space. Could we potentially be
> running in to a bit of this and if so are there any tell
> tale signs that this is what is going on internally ?
> Thinking about this, I will probably play with adding more
> procedure cache to the system anyway. I'll queue this
> change up after the statement cache changes ...

If PH is generating a lot of new procs on-the-fly that could be an explanation for the numbers you're seeing. And
obviously (?) for most systems it may not be possible to hit 0% removals in procedure cache.

The 10% may not be a major issue ... twas just making a passing comment.

As for the question about procedure cache thrashing ... *shrug*. Up until a couple weeks ago I wouldn't have had any
comment on the topic, but recently one of my clients has seen intermittent problems with procedure cache ... messages
about being out of procedure cache yet sp_monitorconfig shows nowhere near max proc cache usage ... syslocks-related
queries going walkabout and generating 10's of millions of cache hits while filling up tempdb ... high cpu utilization
... slowdowns in dataserver response times. After providing a few shared mem dumps and a host of other schtuff the best
we've gotten out of Sybase TechSupport so far is that there could be some sort of fragmentation in proc cache.

NOTE: There was supposedly a (related?) problem with the need for contiguous chunks of memory in procedure cache that
could cause the out-of-proc-cache errors; but that's supposed to have been fixed as of ASE 15.0.2 ESD #4 (which is what
my client is running).

> Anyway, enormous thanks for the assistance. I am almost
> looking forward to Monday morning to start looking at some
> of these suggestions. Well, not really ...

You're going to wait *all* weekend before doing anything about this major performance problem?!?! Slackard! ;-)


John Everitt Posted on 2008-06-02 17:55:59.0Z
Sender: 6e93.4844333b.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <4844342f.6eae.1681692777@sybase.com>
References: <4841b92a$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 2 Jun 2008 10:55:59 -0700
X-Trace: forums-1-dub 1212429359 10.22.241.41 (2 Jun 2008 10:55:59 -0700)
X-Original-Trace: 2 Jun 2008 10:55:59 -0700, 10.22.241.41
Lines: 8
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10892
Article PK: 89503

OK. I've enabled statement cache and literal autoparam and
also upped the disk io structures to 5000. Should have an
idea about any performance gain in the next 24 hours.
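
[For the record, the disk io structures change was simply:

sp_configure 'disk i/o structures', 5000
go
]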

Do you think there would be any reduction in query compile
time if I reduce the number of histograms ?

John


"Mark A. Parsons" <iron_horse Posted on 2008-06-02 22:47:09.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <4841b92a$1@forums-1-dub> <4844342f.6eae.1681692777@sybase.com>
In-Reply-To: <4844342f.6eae.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4844786d$1@forums-1-dub>
Date: 2 Jun 2008 15:47:09 -0700
X-Trace: forums-1-dub 1212446829 10.22.241.152 (2 Jun 2008 15:47:09 -0700)
X-Original-Trace: 2 Jun 2008 15:47:09 -0700, vip152.sybase.com
Lines: 20
X-Authenticated-User: TeamSybase
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10895
Article PK: 89505

I think if you were running with thousands of histograms per column you could see some degradation (in terms of proc
cache usage, in terms of parse-n-compile), but 200 (as you've mentioned in another post) doesn't sound like that big of
a deal.

Obviously you're welcome to try knocking the histogram step count up and down to see if you notice any changes in
performance.

You know that the 'histogram tuning factor' setting can actually cause your number of histogram steps to balloon out way
beyond 'histogram steps', right?

John Everitt wrote:
> OK. I've enabled statement cache and literal autoparam and
> also upped the disk io structures to 5000. Should have an
> idea about any performance gain in the next 24 hours.
>
> Do you think there would be any reduction in query compile
> time if I reduce the number of histograms ?
>
> John


John Everitt Posted on 2008-06-03 06:23:32.0Z
Sender: 6e93.4844333b.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <4844e364.28e6.1681692777@sybase.com>
References: <4844786d$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 2 Jun 2008 23:23:32 -0700
X-Trace: forums-1-dub 1212474212 10.22.241.41 (2 Jun 2008 23:23:32 -0700)
X-Original-Trace: 2 Jun 2008 23:23:32 -0700, 10.22.241.41
Lines: 8
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10897
Article PK: 89508

Yup. I've had a run-in with 'histogram tuning factor'
already when I was investigating semantic partitioning. When
upgrading to 15.0.1 I believe there was a default value > 1
(can't remember exactly what it was) but in conjunction with
all the local indexes that were created for a 32 partition
table, the statistics produced were huge and caused a
significant delay (approximately 1 second on every query on
the table) for the optimizer to do its stuff.

John


Sherlock, Kevin Posted on 2008-06-03 15:16:37.0Z
From: "Sherlock, Kevin" <kevin.sherlock@teamsybase.com>
Newsgroups: sybase.public.ase.performance+tuning
References: <4841b92a$1@forums-1-dub> <4844342f.6eae.1681692777@sybase.com> <4844786d$1@forums-1-dub>
Subject: Re: Performance Problem on ASE 15.0.1
Lines: 37
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.3138
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3198
X-RFC2646: Format=Flowed; Response
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <48456055$1@forums-1-dub>
Date: 3 Jun 2008 08:16:37 -0700
X-Trace: forums-1-dub 1212506197 10.22.241.152 (3 Jun 2008 08:16:37 -0700)
X-Original-Trace: 3 Jun 2008 08:16:37 -0700, vip152.sybase.com
X-Authenticated-User: teamsybase
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10898
Article PK: 89509


"Mark A. Parsons" <iron_horse@no_spamola.compuserve.com> wrote in message
news:4844786d$1@forums-1-dub...
>I think if you were running with thousands of histograms per column you
>could see some degradation (in terms of proc cache usage, in terms of
>parse-n-compile), but 200 (as you've mentioned in another post) doesn't
>sound like that big of a deal.
>
> Obviously you're welcome to try knocking the histogram step count up and
> down to see if you notice any changes in performance.
>
> You know that the 'histogram tuning factor' setting can actually cause
> your number of histogram steps to balloon out way beyond 'histogram
> steps', right?

Just FYI, as of 15.0.2 ESD2 there is an adjustment to how "histogram tuning
factor" works. I haven't tested this out, but here is what the docs state
about this change:
http://infocenter.sybase.com/help/topic/com.sybase.dc00775_1502/html/esd2_1502/CJAICHFB.htm

=======================================
Adaptive Server 15.0.2 ESD #2 introduces a change to the 'histogram tuning
factor' parameter in sp_configure. This parameter controls the number of
steps Adaptive Server analyzes per histogram for update statistics, update
index statistics, update all statistics, and create index.

If you set the 'histogram tuning factor' configuration parameter to its
default value of 20 and a large number of steps are requested for the
histogram, then the actual step count used for the histogram is limited to:

min (max (400, requested_steps), histogram_tuning_factor * requested_steps)

This is designed to reduce the procedure cache usage.
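
So, worked through with the 200 steps mentioned earlier in this thread and the default factor of 20, that comes out as min(max(400, 200), 20 * 200) = min(400, 4000) = 400 steps, i.e. the histogram is capped at 400 steps rather than ballooning to 4000.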


John Everitt Posted on 2008-06-04 19:07:28.0Z
Sender: 60a1.4846dbb2.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <4846e7f0.62ae.1681692777@sybase.com>
References: <4844786d$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 4 Jun 2008 12:07:28 -0700
X-Trace: forums-1-dub 1212606448 10.22.241.41 (4 Jun 2008 12:07:28 -0700)
X-Original-Trace: 4 Jun 2008 12:07:28 -0700, 10.22.241.41
Lines: 292
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10905
Article PK: 89516

Mark,

I've given the statement cache and literal autoparam a
couple of days, not a great deal of time admittedly, with
negligible improvement but I need to try and move forward
with this. So, I've been playing with the monProcessWaits
table and came across what I believe to be a good example of
the problem I'm suffering from. I started by running the
following set of commands ...

set statistics time on
go
set statistics io on
go
select convert(varchar,getdate(),109)
go
select * from master..monProcessWaits where SPID = @@spid
go
read_claims_sav 49430, 0
go
select * from master..monProcessWaits where SPID = @@spid
go
select convert(varchar,getdate(),109)
go

I kept running these commands using slightly different
parameters, i.e. 49430 is a member number and I cycled
through member numbers to get different results. I
generally saw that when I used a member number that returned
results, the elapsed time of the query roughly matched the
number of physical non-APF ios * the average time required
to complete an io on our system (between 8 and 10ms). This
seems to make a lot of sense. However, for one iteration, I
got the following information …

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.


Jun 4 2008 8:22:43:790AM
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:1 er:300
cpu: 0
/
CacheScan
(4)
(VA = 0)
r:1 er:300

============================================================
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Parse and Compile Time 0.
SQL Server cpu time: 0 ms.

SPID KPID WaitEventID Waits WaitTime
1571 1330578616 29 295 2600
1571 1330578616 124 47 100
1571 1330578616 214 3 0
1571 1330578616 250 793 1119500
1571 1330578616 251 256 100
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:5 er:10
cpu: 0


/
TableScan
master..monProcessWaits
(VA = 0)
r:5 er:10
l:0 el:10
p:0 ep:10

============================================================
Table: monProcessWaits scan count 0, logical reads:
(regular=0 apf=0 total=0), physical reads: (regular=0 apf=0
total=0), apf IOs used=0
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
Parse and Compile Time 0.
SQL Server cpu time: 0 ms.

m_member c_dt_sett c_dt_from
c_status c_arb c_curr_amt c_dt_ent
<RESULTS – removed for space considerations>
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:532 er:444
cpu: 0


/
IndexScan
M_MEMBER
(VA = 0)
r:532 er:444
l:344 el:287
p:282 ep:287

============================================================
Table: CLAIMS scan count 1, logical reads: (regular=344
apf=0 total=344), physical reads: (regular=80 apf=202
total=282), apf IOs used=207
Total actual I/O cost for this command: 7738.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 2060
ms.
(return status = 0)
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 2060
ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Parse and Compile Time 0.
SQL Server cpu time: 0 ms.

SPID KPID WaitEventID Waits WaitTime
1571 1330578616 29 375 4500
1571 1330578616 124 66 300
1571 1330578616 214 3 0
1571 1330578616 250 795 1119500
1571 1330578616 251 256 100
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:5 er:10
cpu: 0


/
TableScan
master..monProcessWaits
(VA = 0)
r:5 er:10
l:0 el:10
p:0 ep:10

============================================================
Table: monProcessWaits scan count 0, logical reads:
(regular=0 apf=0 total=0), physical reads: (regular=0 apf=0
total=0), apf IOs used=0
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 3 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.


Jun 4 2008 8:22:45:980AM
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:1 er:300
cpu: 0
/
CacheScan
(4)
(VA = 0)
r:1 er:300

============================================================
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.

As you can see, the procedure call took over 2000ms to
complete and yet only required 80 physical non-APF reads.
Even if the ios took 10ms, this should still require less
than 1000ms to execute. Also, notice that my task was in
wait id 29 for the entire duration of the query. Id 29 is
"waiting for regular buffer read to complete".
My suspicion is therefore that in certain circumstances the
APF reads become serialized. So, I did a sysmon pretty much
straight after this test and …

Asynchronous Prefetch Activity
  APFs Issued                    330.1          12.5       99033      33.2 %
  APFs Denied Due To
    APF I/O Overloads              0.0           0.0           0       0.0 %
    APF Limit Overloads            0.0           0.0           0       0.0 %
    APF Reused Overloads          41.5           1.6       12452       4.2 %
  APF Buffers Found in Cache
    With Spinlock Held             0.3           0.0          88       0.0 %
    W/o Spinlock Held            622.0          23.5      186592      62.6 %
-------------------------  ------------  ------------  ----------
  Total APFs Requested           993.9          37.5      298165

Other Asynchronous Prefetch Statistics
  APFs Used                      297.1          11.2       89131         n/a
  APF Waits for I/O               47.9           1.8       14376         n/a
  APF Discards                     0.0           0.0           0         n/a

Now, I have little experience of this sort of stuff, but
would over 10% of APF reads waiting for i/o be considered
high ? It would match what I am seeing at an application
performance level if the server would periodically have
issues reading ahead and therefore queries that would be
expected to take 200 or 300ms suddenly take 1000ms or
greater. If this is indeed an APF I/O Wait issue, then what
can I do to reduce this ? I am pondering putting large i/o
pools on the default data cache to reduce the number of
i/os. Is there anything else I can do ? If this isn't
an APF issue (and I am slightly concerned that the server
wasn't waiting on id 32, "waiting for an APF
buffer read to complete"), then what could the problem
potentially be ? It is very unlikely that the SAN storage
goes through short periods where i/o latency is enormously
degraded. We're talking io times > 25ms for the query
shown above !
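
[If I do try a large pool, my understanding is it's one sp_poolconfig call per cache, along the lines of the following (sizes are placeholders, and the pool I/O size has to be a legal multiple of our page size):

sp_poolconfig 'default data cache', '100M', '16K'
go
]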

What do you think ?

John


Sherlock, Kevin Posted on 2008-06-04 20:09:50.0Z
From: "Sherlock, Kevin" <kevin.sherlock@teamsybase.com>
Newsgroups: sybase.public.ase.performance+tuning
References: <4844786d$1@forums-1-dub> <4846e7f0.62ae.1681692777@sybase.com>
Subject: Re: Performance Problem on ASE 15.0.1
Lines: 303
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.3138
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3198
X-RFC2646: Format=Flowed; Original
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4846f68e$1@forums-1-dub>
Date: 4 Jun 2008 13:09:50 -0700
X-Trace: forums-1-dub 1212610190 10.22.241.152 (4 Jun 2008 13:09:50 -0700)
X-Original-Trace: 4 Jun 2008 13:09:50 -0700, vip152.sybase.com
X-Authenticated-User: teamsybase
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10906
Article PK: 89517

I'd be interested in seeing the optdiag output for the "claims" table (and
indexes). Sometimes fragmentation (extent fragmentation and interleaving)
can contribute to APF io waits. I think you are right to suspect that
APF io is your culprit.

Any chance you could try to "reorg rebuild" your "claims" table and see if
that helps?
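
For reference, something along these lines (database name, server name and credentials are placeholders):

optdiag statistics yourdb..claims -Usa -SYOURSERVER -o claims.opt

and then, from isql:

reorg rebuild claims
go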

<John Everitt> wrote in message news:4846e7f0.62ae.1681692777@sybase.com...
> Mark,
>
> I've given the statement cache and literal autoparam a
> couple of days, not a great deal of time admittedly, with
> negligible improvement but I need to try and move forward
> with this. So, I've been playing with the monProcessWaits
> table and came across what I believe to be a good example of
> the problem I'm suffering from. I started by running the
> following set of commands ...
>
> set statistics time on
> go
> set statistics io on
> go
> select convert(varchar,getdate(),109)
> go
> select * from master..monProcessWaits where SPID = @@spid
> go
> read_claims_sav 49430, 0
> go
> select * from master..monProcessWaits where SPID = @@spid
> go
> select convert(varchar,getdate(),109)
> go
>
> I kept running these commands using slightly different
> parameters, i.e. 49430 is a member number and I cycled
> through member number to get different results, and
> generally saw that when I used a member number that returned
> results, that the elapsed time of the query generally
> matched the number of physical non-APF ios * the average
> time required to complete an io on our system (between 8 and
> 10ms). This seems to make a lot of sense. However, for one
> iteration, I got the following information ...
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
>
>
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
>
>
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
>
>
> Jun 4 2008 8:22:43:790AM
> ==================== Lava Operator Tree
> ====================
>
>
> Emit
> (VA = 1)
> r:1 er:300
> cpu: 0
> /
> CacheScan
> (4)
> (VA = 0)
> r:1 er:300
>
> ============================================================
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
>
>
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
>
> SPID KPID WaitEventID Waits WaitTime
> 1571 1330578616 29 295 2600
> 1571 1330578616 124 47 100
> 1571 1330578616 214 3 0
> 1571 1330578616 250 793 1119500
> 1571 1330578616 251 256 100
> ==================== Lava Operator Tree
> ====================
>
>
> Emit
> (VA = 1)
> r:5 er:10
> cpu: 0
>
>
> /
> TableScan
> master..monProcessWaits
> (VA = 0)
> r:5 er:10
> l:0 el:10
> p:0 ep:10
>
> ============================================================
> Table: monProcessWaits scan count 0, logical reads:
> (regular=0 apf=0 total=0), physical reads: (regular=0 apf=0
> total=0), apf IOs used=0
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
>
>
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
>
> m_member c_dt_sett c_dt_from
> c_status c_arb c_curr_amt c_dt_ent
> <RESULTS - removed for space considerations>
> ==================== Lava Operator Tree
> ====================
>
>
> Emit
> (VA = 1)
> r:532 er:444
> cpu: 0
>
>
> /
> IndexScan
> M_MEMBER
> (VA = 0)
> r:532 er:444
> l:344 el:287
> p:282 ep:287
>
> ============================================================
> Table: CLAIMS scan count 1, logical reads: (regular=344
> apf=0 total=344), physical reads: (regular=80 apf=202
> total=282), apf IOs used=207
> Total actual I/O cost for this command: 7738.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 2060
> ms.
> (return status = 0)
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 2060
> ms.
>
>
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
>
> SPID KPID WaitEventID Waits WaitTime
> 1571 1330578616 29 375 4500
> 1571 1330578616 124 66 300
> 1571 1330578616 214 3 0
> 1571 1330578616 250 795 1119500
> 1571 1330578616 251 256 100
> ==================== Lava Operator Tree
> ====================
>
>
> Emit
> (VA = 1)
> r:5 er:10
> cpu: 0
>
>
> /
> TableScan
> master..monProcessWaits
> (VA = 0)
> r:5 er:10
> l:0 el:10
> p:0 ep:10
>
> ============================================================
> Table: monProcessWaits scan count 0, logical reads:
> (regular=0 apf=0 total=0), physical reads: (regular=0 apf=0
> total=0), apf IOs used=0
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 3 ms.
>
>
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
>
>
> Jun 4 2008 8:22:45:980AM
> ==================== Lava Operator Tree
> ====================
>
>
> Emit
> (VA = 1)
> r:1 er:300
> cpu: 0
> /
> CacheScan
> (4)
> (VA = 0)
> r:1 er:300
>
> ============================================================
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
>
> As you can see, the procedure call took over 2000ms to
> complete and yet only required 80 physical non-APF reads.
> Even if the i/os took 10ms, this should still require
> less than 1000ms to execute. Also, notice that my task
> was in wait id 29 for the entire duration of the query.
> Id 29 is "waiting for regular buffer read to complete".
> My suspicion is therefore that in certain circumstances the
> APF reads become serialized. So, I did a sysmon pretty much
> straight after this test and ...
>
> Asynchronous Prefetch Activity
>   APFs Issued                     330.1         12.5      99033     33.2 %
>   APFs Denied Due To
>     APF I/O Overloads               0.0          0.0          0      0.0 %
>     APF Limit Overloads             0.0          0.0          0      0.0 %
>     APF Reused Overloads           41.5          1.6      12452      4.2 %
>   APF Buffers Found in Cache
>     With Spinlock Held              0.3          0.0         88      0.0 %
>     W/o Spinlock Held             622.0         23.5     186592     62.6 %
>   -------------------------  ------------ ------------ ----------
>   Total APFs Requested            993.9         37.5     298165
>
> Other Asynchronous Prefetch Statistics
>   APFs Used                       297.1         11.2      89131        n/a
>   APF Waits for I/O                47.9          1.8      14376        n/a
>   APF Discards                      0.0          0.0          0        n/a
>
> Now, I have little experience of this sort of stuff, but
> would over 10% of APF reads waiting for i/o be considered
> high? It would match what I am seeing at an application
> performance level: the server periodically has trouble
> reading ahead, so queries that would be expected to take
> 200 or 300ms suddenly take 1000ms or more. If this is
> indeed an APF I/O wait issue, then what can I do to reduce
> it? I am pondering putting large i/o pools on the default
> data cache to reduce the number of i/os. Is there anything
> else I can do? If this isn't an APF issue (and I am
> slightly concerned that the server wasn't waiting on id 32,
> "waiting for an APF buffer read to complete"), then what
> could the problem potentially be? It seems very unlikely
> that the SAN storage goes through short periods where i/o
> latency is enormously degraded, and yet we're talking i/o
> times > 25ms for the query shown above!
>
> What do you think?
>
> John


"Mark A. Parsons" <iron_horse Posted on 2008-06-04 23:54:32.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <4844786d$1@forums-1-dub> <4846e7f0.62ae.1681692777@sybase.com>
In-Reply-To: <4846e7f0.62ae.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <48472b38$1@forums-1-dub>
Date: 4 Jun 2008 16:54:32 -0700
X-Trace: forums-1-dub 1212623672 10.22.241.152 (4 Jun 2008 16:54:32 -0700)
X-Original-Trace: 4 Jun 2008 16:54:32 -0700, vip152.sybase.com
Lines: 182
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10907
Article PK: 89518


John Everitt wrote:

> ============================================================
> Table: CLAIMS scan count 1, logical reads: (regular=344
> apf=0 total=344), physical reads: (regular=80 apf=202
> total=282), apf IOs used=207
> Total actual I/O cost for this command: 7738.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 2060 ms.
> (return status = 0)
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 2060
> ms.
>
>
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
>
> SPID KPID WaitEventID Waits WaitTime
> 1571 1330578616 29 375 4500
> 1571 1330578616 124 66 300
> 1571 1330578616 214 3 0
> 1571 1330578616 250 795 1119500
> 1571 1330578616 251 256 100
> ==================== Lava Operator Tree
> ====================
>
>
> Emit
> (VA = 1)
> r:5 er:10
> cpu: 0
>
>
> /
> TableScan
> master..monProcessWaits
> (VA = 0)
> r:5 er:10
> l:0 el:10
> p:0 ep:10
>
> ============================================================
> Table: monProcessWaits scan count 0, logical reads:
> (regular=0 apf=0 total=0), physical reads: (regular=0 apf=0
> total=0), apf IOs used=0
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 3 ms.
>
>
> Parse and Compile Time 0.
> SQL Server cpu time: 0 ms.
>
>
> Jun 4 2008 8:22:45:980AM
> ==================== Lava Operator Tree
> ====================
>
>
> Emit
> (VA = 1)
> r:1 er:300
> cpu: 0
> /
> CacheScan
> (4)
> (VA = 0)
> r:1 er:300
>
> ============================================================
> Total actual I/O cost for this command: 0.
> Total writes for this command: 0
>
> Execution Time 0.
> SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
>
> As you can see, the procedure call took over 2000ms to
> complete and yet only required 80 physical non-APF reads.
> Even if the i/os took 10ms, this should still require
> less than 1000ms to execute. Also, notice that my task
> was in wait id 29 for the entire duration of the query.
> Id 29 is "waiting for regular buffer read to complete".
> My suspicion is therefore that in certain circumstances the
> APF reads become serialized. So, I did a sysmon pretty much
> straight after this test and ...
>
> Asynchronous Prefetch Activity
>   APFs Issued                     330.1         12.5      99033     33.2 %
>   APFs Denied Due To
>     APF I/O Overloads               0.0          0.0          0      0.0 %
>     APF Limit Overloads             0.0          0.0          0      0.0 %
>     APF Reused Overloads           41.5          1.6      12452      4.2 %
>   APF Buffers Found in Cache
>     With Spinlock Held              0.3          0.0         88      0.0 %
>     W/o Spinlock Held             622.0         23.5     186592     62.6 %
>   -------------------------  ------------ ------------ ----------
>   Total APFs Requested            993.9         37.5     298165
>
> Other Asynchronous Prefetch Statistics
>   APFs Used                       297.1         11.2      89131        n/a
>   APF Waits for I/O                47.9          1.8      14376        n/a
>   APF Discards                      0.0          0.0          0        n/a
>
> Now, I have little experience of this sort of stuff, but
> would over 10% of APF reads waiting for i/o be considered
> high? It would match what I am seeing at an application
> performance level: the server periodically has trouble
> reading ahead, so queries that would be expected to take
> 200 or 300ms suddenly take 1000ms or more. If this is
> indeed an APF I/O wait issue, then what can I do to reduce
> it? I am pondering putting large i/o pools on the default
> data cache to reduce the number of i/os. Is there anything
> else I can do? If this isn't

The optdiag for the index (in a previous post) shows some low data-related cluster ratios for the index. You may want to
see if defragging that index helps with the overall IO rates and times (i.e., cutting down on fragmentation can often
lead to fewer disk IO requests). (Basically the same thing I think Kevin's getting at?)

'course, even if the defragging helps you'll need to consider what options to deploy down the road as the table
becomes fragmented again ... periodic rebuild of the clustered index?

----------------------

The original query has 4 where clauses of the form "c_status <> '{L|R|D|F}'".

How many different c_status values are there?

What percentage of records (for a given m_member) would you expect to have c_status not in ('L','R','D','F')?

If there are relatively few other c_status values *and* they represent a relatively small number of the records for a
given m_member, then an index on (m_member, c_status) *may* help reduce the number of IOs by ensuring you only read the
smaller set of desired records from disk. ("Duh, Mark!" ?)

Follow-on options would be the use of a non-clustered index and/or rewriting the query to ensure the new index is used
to filter out the undesirable records before performing the disk IOs to pull back the desired records.
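
A rough sketch of what I mean; the index name and the surviving status codes ('A','P','S') are made up for
illustration:

create index claims_member_status on claims (m_member, c_status)
go

select m_member, c_dt_sett, c_dt_from, c_status, c_arb, c_curr_amt, c_dt_ent
from claims
where m_member = @m_member
and c_status in ('A', 'P', 'S')    -- hypothetical list of "keeper" codes
for read only

The idea is that a positive IN list against a composite index gives the optimizer a chance to position on the desired
(m_member, c_status) pairs rather than reading every row for the member and discarding the L/R/D/F ones afterwards.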

----------------------

And yeah, I'd also look to set up a 16KB IO pool so that you get the benefit of more pages read per disk IO request. If
you've got some of your databases' log io sizes configured at 4KB then leave some space allocated to the 4KB pool;
otherwise move all of the 4KB pool into a 16KB pool. (Obviously (?) you're welcome to run tests with 8KB vs 16KB to see
if there are any performance differences.)
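
Something like the following, perhaps (pool sizes made up; run sp_logiosize in each database first to see where the
4KB pool is still needed):

sp_logiosize
go
sp_poolconfig "default data cache", "100M", "16K", "4K"
go

... where the four-argument form of sp_poolconfig takes the memory for the new 16K pool out of the existing 4K pool.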

> an APF issue and I am slightly concerned that the server
> wasn&#8217;t waiting on id 32, &#8220;waiting for an APF
> buffer read to complete&#8221;, then what could the problem
> potentially be ?

re: wait events 29 vs 32 ... *shrug* ... this area of the MDA tables is still poorly documented for general public use.
Perhaps Chris Brown (or Jeff Tallman or Rob Verschoor) will drop by to provide some insight on the wait times you're
seeing for 29, and why not 32.
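
In the meantime the server's own one-line descriptions for those events can at least be pulled back directly:

select WaitEventID, Description
from master..monWaitEventInfo
where WaitEventID in (29, 32)
go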

I hope to check out the new MDA-based P&T course over the next week; if I come across anything useful for this thread
I'll post back here.

> We're talking i/o times > 25ms for the query shown above!

I guess it depends on how you slice-n-dice the numbers ... 2060 ms spread across 282 physical IOs works out to an
average of a tad bit more than 7ms per physical IO ;-)


John Everitt Posted on 2008-06-06 09:00:26.0Z
Sender: 7532.48478621.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <4848fcaa.2a47.1681692777@sybase.com>
References: <48472b38$1@forums-1-dub>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="-=_forums-1-dub4848fcaa"
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 6 Jun 2008 02:00:26 -0700
X-Trace: forums-1-dub 1212742826 10.22.241.41 (6 Jun 2008 02:00:26 -0700)
X-Original-Trace: 6 Jun 2008 02:00:26 -0700, 10.22.241.41
Lines: 1996
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10908
Article PK: 89519

I was concerned that I was getting differing performance
between stored procs and embedded SQL so I constructed a
perl script to test this (attached). The SQL in the command
is the same as the SQL used by the stored proc. I ran the
program from my PC using one of our production databases
(the one with statement cache, literal autoparam, 1.5GB
default data cache and the 1.0GB 8k i/o pool) and the
results were ...

SQL1 : Records 17318 max exec 759.45 max read 15.958 total
exec 60535.9219999999 total read 936.086
SQL2 : Records 18118 max exec 827.146 max read 8.116 total
exec 62454.551 total read 953.981
Program finished ...

It shows that there's a small advantage to stored procs
over embedded SQL, so my suspicions have been put to rest.
However, the max exec figures show the problem I am
suffering from: it just seems that periodically an
execution of the code takes a lot longer than it should.
The average execution time is around 60ms (1000
iterations), although quite a few of these iterations are
likely to use cached data. Even so, I wouldn't expect this
stored proc to take any longer than 200ms. In around 2000
iterations I get probably 20 that take 300ms or longer; in
previous executions of this test I have seen execution
times > 2000ms !!! Whilst running the tests I ran a sysmon
(attached) and used sar to monitor disk activity on the
database server. I/O ops on all relevant disks did not
exceed 9ms and most of the time seemed to hover around 7ms.

I'm not sure if I sent you this in previous posts. I ran
the following commands ...

set statistics time on
go
set statistics io on
go
select convert(varchar,getdate(),109)
go
select * from master..monProcessWaits where SPID = @@spid
go
read_claims_sav 49430, 0
go
select * from master..monProcessWaits where SPID = @@spid
go
select convert(varchar,getdate(),109)
go

... and after a few retries (using slightly different
parameters to avoid cached data), I got the following
results ...

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.


Jun 4 2008 8:22:43:790AM
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:1 er:300
cpu: 0
/
CacheScan
(4)
(VA = 0)
r:1 er:300

============================================================
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Parse and Compile Time 0.
SQL Server cpu time: 0 ms.

SPID KPID WaitEventID Waits WaitTime
1571 1330578616 29 295 2600
1571 1330578616 124 47 100
1571 1330578616 214 3 0
1571 1330578616 250 793 1119500
1571 1330578616 251 256 100
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:5 er:10
cpu: 0


/
TableScan
master..monProcessWaits
(VA = 0)
r:5 er:10
l:0 el:10
p:0 ep:10

============================================================
Table: monProcessWaits scan count 0, logical reads:
(regular=0 apf=0 total=0), physical reads: (regular=0 apf=0
total=0), apf IOs used=0
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.
Parse and Compile Time 0.
SQL Server cpu time: 0 ms.

m_member c_dt_sett c_dt_from
c_status c_arb c_curr_amt c_dt_ent
<RESULTS - removed for space considerations>
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:532 er:444
cpu: 0


/
IndexScan
M_MEMBER
(VA = 0)
r:532 er:444
l:344 el:287
p:282 ep:287

============================================================
Table: CLAIMS scan count 1, logical reads: (regular=344
apf=0 total=344), physical reads: (regular=80 apf=202
total=282), apf IOs used=207
Total actual I/O cost for this command: 7738.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 2060
ms.
(return status = 0)
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 2060
ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.
Parse and Compile Time 0.
SQL Server cpu time: 0 ms.

SPID KPID WaitEventID Waits WaitTime
1571 1330578616 29 375 4500
1571 1330578616 124 66 300
1571 1330578616 214 3 0
1571 1330578616 250 795 1119500
1571 1330578616 251 256 100
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:5 er:10
cpu: 0


/
TableScan
master..monProcessWaits
(VA = 0)
r:5 er:10
l:0 el:10
p:0 ep:10

============================================================
Table: monProcessWaits scan count 0, logical reads:
(regular=0 apf=0 total=0), physical reads: (regular=0 apf=0
total=0), apf IOs used=0
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 3 ms.


Parse and Compile Time 0.
SQL Server cpu time: 0 ms.


Jun 4 2008 8:22:45:980AM
==================== Lava Operator Tree
====================


Emit
(VA = 1)
r:1 er:300
cpu: 0
/
CacheScan
(4)
(VA = 0)
r:1 er:300

============================================================
Total actual I/O cost for this command: 0.
Total writes for this command: 0

Execution Time 0.
SQL Server cpu time: 0 ms. SQL Server elapsed time: 0 ms.

It's not the same SQL as the test used, but I think it
shows an incident that matches our issue. In summary, the
SQL required 80 physical reads and 202 APF reads and took
over 2000ms to run. Assuming each of the 80 physical reads
takes 10ms, the query shouldn't take any longer than
800ms. The dump of the monProcessWaits table shows the
process spent all the time in wait id 29, which is "wait
for regular buffer read to complete".

Now from this I can only think of three possible reasons
for this to be happening that explain all the symptoms:
1) Periodically, a number of i/os take a much longer time
to complete. They could be held up at the o/s level or the
SAN. We've monitored the SAN extensively and haven't seen
any such problems, but I guess it is a possibility.
2) Periodically, Sybase waits for APF reads to complete as
well as the normal physical reads. I've added i/o pools to
the cache with little improvement, and the test above
suggests the wait time is for "regular buffer reads" and
not APF reads.
3) Periodically, Sybase takes much longer to process
completed i/os. I don't know if this is even a reasonable
suggestion. One thing that might be related, though, is
that we maintain 2000 or 3000 active connections at any
one time. Could this have any effect on Sybase's ability
to process i/o? (A quick config check is sketched below.)
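
The quick config check mentioned in point 3 would be
something like this (parameter names from memory):

sp_configure "disk i/o structures"
go
sp_configure "max async i/os per engine"
go
sp_configure "max async i/os per server"
go

... although the "I/Os Delayed by" section of the sysmon
below shows zeros across the board, which I guess argues
against this theory.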

The other symptom that also suggests some kind of i/o
issue is that our servers appear to show a very high I/O
Busy percentage in sysmons. I have always been concerned
about this ...

The only thing I can think of that we could potentially
try is to move a live database onto local storage in order
to isolate it from the SAN. Other than that I'm nearly out
of ideas. Is there any way we can look at internal Sybase
metrics to get a clearer idea of what is going on
internally regarding i/o? Any other ideas?
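
One thing I plan to try is sampling the per-device i/o
counters in the MDA tables immediately before and after a
slow run, something like (column lists from memory):

select LogicalName, Reads, APFReads, Writes, IOTime
from master..monDeviceIO
go
select LogicalName, IOType, IOs, IOTime
from master..monIOQueue
go

If the IOTime deltas on the data devices jump during a bad
interval, that would at least point the finger back at the
storage path.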

Mark, I promise to reply to your questions as soon as I can.
I'm not good at multi-tasking so had to finish my perl stuff
before contemplating your questions. Some time over the
weekend ... probably.

Thanks.

John

===============================================================================
Sybase Adaptive Server Enterprise System Performance Report
===============================================================================

Server Version: Adaptive Server Enterprise/15.0.1/EBF 14380 ESD#3/P/ia64
Server Name: PROD01
Run Date: Jun 06, 2008
Sampling Started at: Jun 06, 2008 09:40:53
Sampling Ended at: Jun 06, 2008 09:41:53
Sample Interval: 00:01:00
Sample Mode: No Clear
Counters Last Cleared: Jun 05, 2008 17:02:23

===============================================================================

Kernel Utilization
------------------

Your Runnable Process Search Count is set to 2000
and I/O Polling Process Count is set to 50

Engine Busy Utilization CPU Busy I/O Busy Idle
------------------------ -------- -------- --------
Engine 0 37.6 % 34.1 % 28.3 %
Engine 1 21.5 % 61.1 % 17.4 %
------------------------ -------- -------- --------
Summary Total 59.1 % 95.2 % 45.7 %
Average 29.6 % 47.6 % 22.8 %

CPU Yields by Engine per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Engine 0 0.0 0.0 0 0.0 %
Engine 1 0.1 0.0 5 100.0 %
------------------------- ------------ ------------ ----------
Total CPU Yields 0.1 0.0 5

Network Checks
Non-Blocking 10455.5 409.5 627329 100.0 %
Blocking 0.1 0.0 5 0.0 %
------------------------- ------------ ------------ ----------
Total Network I/O Checks 10455.6 409.5 627334
Avg Net I/Os per Check n/a n/a 0.22802 n/a

Disk I/O Checks
Total Disk I/O Checks 11986.4 469.4 719186 n/a
Checks Returning I/O 8107.3 317.5 486437 67.6 %
Avg Disk I/Os Returned n/a n/a 0.03004 n/a


===============================================================================

Worker Process Management
-------------------------
per sec per xact count % of total
------------ ------------ ---------- ----------
Worker Process Requests
Total Requests 0.0 0.0 0 n/a

Worker Process Usage
Total Used 0.0 0.0 0 n/a
Max Ever Used During Sample 0.0 0.0 0 n/a

Memory Requests for Worker Processes
Total Requests 0.0 0.0 0 n/a

Tuning Recommendations for Worker Processes
-------------------------------------------
- Consider decreasing the 'number of worker processes'
configuration parameter.


===============================================================================

Parallel Query Management
-------------------------

Parallel Query Usage per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total Parallel Queries 0.0 0.0 0 n/a

Merge Lock Requests per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total # of Requests 0.0 0.0 0 n/a

Sort Buffer Waits per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total # of Waits 0.0 0.0 0 n/a

===============================================================================

Task Management per sec per xact count % of total
--------------------------- ------------ ------------ ---------- ----------

Connections Opened 2.8 0.1 168 n/a

Task Context Switches by Engine
Engine 0 1682.4 65.9 100942 65.1 %
Engine 1 903.6 35.4 54214 34.9 %
------------------------- ------------ ------------ ----------
Total Task Switches: 2585.9 101.3 155156

Task Context Switches Due To:
Voluntary Yields 18.6 0.7 1115 0.7 %
Cache Search Misses 157.8 6.2 9468 6.1 %
System Disk Writes 0.5 0.0 27 0.0 %
I/O Pacing 8.4 0.3 501 0.3 %
Logical Lock Contention 0.0 0.0 0 0.0 %
Address Lock Contention 0.0 0.0 0 0.0 %
Latch Contention 0.0 0.0 0 0.0 %
Log Semaphore Contention 0.0 0.0 0 0.0 %
PLC Lock Contention 0.0 0.0 0 0.0 %
Group Commit Sleeps 0.1 0.0 5 0.0 %
Last Log Page Writes 12.4 0.5 743 0.5 %
Modify Conflicts 0.0 0.0 2 0.0 %
I/O Device Contention 0.0 0.0 0 0.0 %
Network Packet Received 1057.6 41.4 63455 40.9 %
Network Packet Sent 155.0 6.1 9301 6.0 %
Other Causes 1175.7 46.0 70539 45.5 %


===============================================================================

Application Management
----------------------

Application Statistics Summary (All Applications)
-------------------------------------------------
Priority Changes per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
To High Priority 0.1 0.0 6 2.4 %
To Medium Priority 3.1 0.1 187 76.0 %
To Low Priority 0.9 0.0 53 21.5 %
------------------------- ------------ ------------ ----------
Total Priority Changes 4.1 0.2 246

Allotted Slices Exhausted per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
High Priority 0.0 0.0 0 0.0 %
Medium Priority 0.0 0.0 0 0.0 %
Low Priority 0.0 0.0 2 100.0 %
------------------------- ------------ ------------ ----------
Total Slices Exhausted 0.0 0.0 2

Skipped Tasks By Engine per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total Engine Skips 0.0 0.0 0 n/a

Engine Scope Changes 2.8 0.1 168 n/a

===============================================================================

ESP Management per sec per xact count % of total
--------------------------- ------------ ------------ ---------- ----------
ESP Requests 0.0 0.0 0 n/a
===============================================================================

Housekeeper Task Activity
-------------------------
per sec per xact count % of total
------------ ------------ ----------
Buffer Cache Washes
Clean 172.5 6.8 10351 99.9 %
Dirty 0.1 0.0 6 0.1 %
------------ ------------ ----------
Total Washes 172.6 6.8 10357

Garbage Collections 0.5 0.0 28 n/a
Pages Processed in GC 0.0 0.0 0 n/a

Statistics Updates 0.3 0.0 16 n/a

===============================================================================

Monitor Access to Executing SQL
-------------------------------
per sec per xact count % of total
------------ ------------ ---------- ----------
Waits on Execution Plans 0.0 0.0 0 n/a
Number of SQL Text Overflows 1.6 0.1 95 n/a
Maximum SQL Text Requested n/a n/a 5172 n/a
(since beginning of sample)


Tuning Recommendations for Monitor Access to Executing SQL
----------------------------------------------------------
- Consider increasing the 'max SQL text monitored' parameter
to at least 4634 (i.e., half way from its current value
to Maximum SQL Text Requested).

===============================================================================

Transaction Profile
-------------------

Transaction Summary per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Committed Xacts 25.5 n/a 1532 n/a

Transaction Detail per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Inserts
APL Heap Table 1257.5 49.2 75448 77.3 %
APL Clustered Table 0.0 0.0 0 0.0 %
Data Only Lock Table 369.0 14.5 22139 22.7 %
------------------------- ------------ ------------ ---------- ----------
Total Rows Inserted 1626.5 63.7 97587 97.6 %

Updates
APL Deferred 1.6 0.1 94 18.7 %
APL Direct In-place 0.9 0.0 56 11.1 %
APL Direct Cheap 0.0 0.0 0 0.0 %
APL Direct Expensive 0.0 0.0 0 0.0 %
DOL Deferred 1.4 0.1 81 16.1 %
DOL Direct 4.5 0.2 272 54.1 %
------------------------- ------------ ------------ ---------- ----------
Total Rows Updated 8.4 0.3 503 0.5 %

Data Only Locked Updates
DOL Replace 5.1 0.2 304 86.1 %
DOL Shrink 0.8 0.0 49 13.9 %
DOL Cheap Expand 0.0 0.0 0 0.0 %
DOL Expensive Expand 0.0 0.0 0 0.0 %
DOL Expand & Forward 0.0 0.0 0 0.0 %
DOL Fwd Row Returned 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total DOL Rows Updated 5.9 0.2 353 0.4 %

Deletes
APL Deferred 1.6 0.1 94 5.0 %
APL Direct 0.0 0.0 0 0.0 %
DOL 29.6 1.2 1773 95.0 %
------------------------- ------------ ------------ ---------- ----------
Total Rows Deleted 31.1 1.2 1867 1.9 %
========================= ============ ============ ==========
Total Rows Affected 1666.0 65.2 99957

===============================================================================

Transaction Management
----------------------

ULC Flushes to Xact Log per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
by Full ULC 0.1 0.0 6 0.2 %
by End Transaction 19.4 0.8 1164 40.1 %
by Change of Database 0.3 0.0 20 0.7 %
by Single Log Record 20.9 0.8 1255 43.3 %
by Unpin 0.1 0.0 5 0.2 %
by Other 7.5 0.3 451 15.5 %
------------------------- ------------ ------------ ----------
Total ULC Flushes 48.4 1.9 2901

ULC Log Records 282.0 11.0 16917 n/a
Max ULC Size During Sample n/a n/a 0 n/a

ULC Semaphore Requests
Granted 659.7 25.8 39582 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total ULC Semaphore Req 659.7 25.8 39582

Log Semaphore Requests
Granted 42.8 1.7 2565 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Log Semaphore Req 42.8 1.7 2565

Transaction Log Writes 26.0 1.0 1557 n/a
Transaction Log Alloc 18.5 0.7 1109 n/a
Avg # Writes per Log Page n/a n/a 1.40397 n/a

Tuning Recommendations for Transaction Management
-------------------------------------------------
- Consider decreasing the 'user log cache size'
configuration parameter if it is greater than the
logical database page size.

===============================================================================

Index Management
----------------

Nonclustered Maintenance per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Ins/Upd Requiring Maint 0.0 0.0 0 n/a
# of NC Ndx Maint 0.0 0.0 0 n/a

Deletes Requiring Maint 0.0 0.0 0 n/a
# of NC Ndx Maint 0.0 0.0 0 n/a

RID Upd from Clust Split 0.0 0.0 0 n/a
# of NC Ndx Maint 0.0 0.0 0 n/a

Upd/Del DOL Req Maint 35.4 1.4 2126 n/a
# of DOL Ndx Maint 56.0 2.2 3357 n/a
Avg DOL Ndx Maint / Op n/a n/a 1.57902 n/a

Page Splits 0.3 0.0 17 n/a
Retries 0.0 0.0 0 0.0 %
Deadlocks 0.0 0.0 0 0.0 %
Add Index Level 0.0 0.0 0 0.0 %

Page Shrinks 0.2 0.0 12 n/a
    Deadlocks                       0.0          0.0          0      0.0 %
Deadlock Retries Exceeded 0.0 0.0 0 0.0 %

Index Scans per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Ascending Scans 76.7 3.0 4604 3.3 %
DOL Ascending Scans 2176.4 85.2 130585 93.1 %
Descending Scans 2.9 0.1 176 0.1 %
DOL Descending Scans 82.7 3.2 4960 3.5 %
------------ ------------ ----------
Total Scans 2338.8 91.6 140325

===============================================================================

Metadata Cache Management
-------------------------

Metadata Cache Summary per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------

Open Object Usage
Active n/a n/a 7645 n/a
Max Ever Used Since Boot n/a n/a 7703 n/a
Free n/a n/a 92355 n/a
Reuse Requests
Succeeded n/a n/a 0 n/a
Failed n/a n/a 0 n/a

Open Index Usage
Active n/a n/a 5777 n/a
Max Ever Used Since Boot n/a n/a 5777 n/a
Free n/a n/a 4223 n/a
Reuse Requests
Succeeded n/a n/a 0 n/a
Failed n/a n/a 0 n/a

Open Partition Usage
Active n/a n/a 5787 n/a
Max Ever Used Since Boot n/a n/a 5787 n/a
Free n/a n/a 4213 n/a
Reuse Requests
Succeeded n/a n/a 0 n/a
Failed n/a n/a 0 n/a

Open Database Usage
Active n/a n/a 16 n/a
Max Ever Used Since Boot n/a n/a 16 n/a
Free n/a n/a 2 n/a
Reuse Requests
Succeeded n/a n/a 0 n/a
Failed n/a n/a 0 n/a

Object Manager Spinlock Contention n/a n/a n/a 0.0 %

Object Spinlock Contention n/a n/a n/a 0.0 %

Index Spinlock Contention n/a n/a n/a 0.0 %

Index Hash Spinlock Contention n/a n/a n/a 0.0 %

Partition Spinlock Contention n/a n/a n/a 0.0 %

Partition Hash Spinlock Contention n/a n/a n/a 0.0 %

===============================================================================

Lock Management
---------------

Lock Summary per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total Lock Requests 4906.1 192.1 294368 n/a
Avg Lock Contention 0.0 0.0 0 0.0 %
Deadlock Percentage 0.0 0.0 0 0.0 %

Lock Detail per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------

Table Lock Hashtable
Lookups 422.6 16.6 25357 n/a
Avg Chain Length n/a n/a 2.67185 n/a
Spinlock Contention n/a n/a n/a 0.0 %

Exclusive Table
Granted 24.9 1.0 1492 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total EX-Table Requests 24.9 1.0 1492 0.5 %

Shared Table
Granted 1.9 0.1 116 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total SH-Table Requests 1.9 0.1 116 0.0 %

Exclusive Intent
Granted 21.3 0.8 1279 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total EX-Intent Requests 21.3 0.8 1279 0.4 %

Shared Intent
Granted 359.6 14.1 21574 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total SH-Intent Requests 359.6 14.1 21574 7.3 %

Page & Row Lock HashTable
Lookups 4394.5 172.1 263668 n/a
Avg Chain Length n/a n/a 0.00154 n/a
Spinlock Contention n/a n/a n/a 0.0 %

Exclusive Page
Granted 0.3 0.0 18 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total EX-Page Requests 0.3 0.0 18 0.0 %

Update Page
Granted 0.0 0.0 2 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total UP-Page Requests 0.0 0.0 2 0.0 %

Shared Page
Granted 49.4 1.9 2965 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total SH-Page Requests 49.4 1.9 2965 1.0 %


Exclusive Row
Granted 40.6 1.6 2434 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total EX-Row Requests 40.6 1.6 2434 0.8 %

Update Row
Granted 22.1 0.9 1328 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total UP-Row Requests 22.1 0.9 1328 0.5 %

Shared Row
Granted 4266.1 167.1 255968 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total SH-Row Requests 4266.1 167.1 255968 87.0 %


Next-Key
Total Next-Key Requests 0.0 0.0 0 n/a

Address Lock Hashtable
Lookups 119.9 4.7 7192 n/a
Avg Chain Length n/a n/a 0.00000 n/a
Spinlock Contention n/a n/a n/a 0.0 %

Exclusive Address
Granted 2.3 0.1 138 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total EX-Address Requests 2.3 0.1 138 0.0 %

Shared Address
Granted 117.6 4.6 7054 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total SH-Address Requests 117.6 4.6 7054 2.4 %


Last Page Locks on Heaps
Granted 1257.5 49.2 75448 100.0 %
Waited 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total Last Pg Locks 1257.5 49.2 75448 100.0 %


Deadlocks by Lock Type per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total Deadlocks 0.0 0.0 0 n/a


Deadlock Detection
Deadlock Searches 0.0 0.0 0 n/a


Lock Promotions
Total Lock Promotions 0.0 0.0 0 n/a


Lock Timeouts by Lock Type per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total Timeouts 0.0 0.0 0 n/a


===============================================================================

Data Cache Management
---------------------

Cache Statistics Summary (All Caches)
-------------------------------------
per sec per xact count % of total
------------ ------------ ---------- ----------

Cache Search Summary
Total Cache Hits 12725.2 498.4 763512 98.2 %
Total Cache Misses 236.2 9.3 14173 1.8 %
------------------------- ------------ ------------ ----------
Total Cache Searches 12961.4 507.6 777685

Cache Turnover
Buffers Grabbed 342.7 13.4 20562 n/a
Buffers Grabbed Dirty 0.0 0.0 0 0.0 %

Cache Strategy Summary
Cached (LRU) Buffers 12903.8 505.4 774225 100.0 %
Discarded (MRU) Buffers 0.0 0.0 0 0.0 %

Large I/O Usage
Large I/Os Performed 58.3 2.3 3495 73.4 %

Large I/Os Denied due to
Pool < Prefetch Size 21.1 0.8 1265 26.6 %
Pages Requested
Reside in Another
Buffer Pool 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Large I/O Requests 79.3 3.1 4760

Large I/O Effectiveness
Pages by Lrg I/O Cached 233.0 9.1 13980 n/a
Pages by Lrg I/O Used 80.6 3.2 4834 34.6 %

Asynchronous Prefetch Activity
APFs Issued 33.2 1.3 1992 7.4 %
APFs Denied Due To
APF I/O Overloads 0.0 0.0 0 0.0 %
APF Limit Overloads 0.0 0.0 0 0.0 %
APF Reused Overloads 8.3 0.3 500 1.8 %
APF Buffers Found in Cache
With Spinlock Held 0.0 0.0 0 0.0 %
W/o Spinlock Held 410.0 16.1 24599 90.8 %
------------------------- ------------ ------------ ----------
Total APFs Requested 451.5 17.7 27091

Other Asynchronous Prefetch Statistics
APFs Used 33.3 1.3 2000 n/a
APF Waits for I/O 3.7 0.1 221 n/a
APF Discards 0.0 0.0 0 n/a

Dirty Read Behavior
Page Requests 0.0 0.0 0 n/a

-------------------------------------------------------------------------------
Cache: default data cache
per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Spinlock Contention n/a n/a n/a 0.0 %

Utilization n/a n/a n/a 60.8 %

Cache Searches
Cache Hits 7722.7 302.5 463362 98.1 %
Found in Wash 199.7 7.8 11979 2.6 %
Cache Misses 153.5 6.0 9208 1.9 %
------------------------- ------------ ------------ ----------
Total Cache Searches 7876.2 308.5 472570

Pool Turnover
2 Kb Pool
LRU Buffer Grab 133.2 5.2 7991 70.3 %
Grabbed Dirty 0.0 0.0 0 0.0 %
8 Kb Pool
LRU Buffer Grab 56.2 2.2 3372 29.7 %
Grabbed Dirty 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Cache Turnover 189.4 7.4 11363

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash Section Yet

Cache Strategy
Cached (LRU) Buffers 7796.7 305.4 467800 100.0 %
Discarded (MRU) Buffers 0.0 0.0 0 0.0 %

Large I/O Usage
Large I/Os Performed 56.2 2.2 3372 84.0 %

Large I/Os Denied due to
Pool < Prefetch Size -5.3 -0.2 -317 -7.9 %
Pages Requested
Reside in Another
Buffer Pool 16.0 0.6 957 23.9 %
------------------------- ------------ ------------ ----------
Total Large I/O Requests 66.9 2.6 4012

Large I/O Detail
8 Kb Pool
Pages Cached 224.8 8.8 13488 n/a
Pages Used 80.6 3.2 4834 35.8 %

Dirty Read Behavior
    Page Requests                   0.0          0.0          0        n/a

Tuning Recommendations for Data cache : default data cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy'
for this cache.

-------------------------------------------------------------------------------
Cache: log_cache
per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Spinlock Contention n/a n/a n/a 0.0 %

Utilization n/a n/a n/a 0.2 %

Cache Searches
Cache Hits 23.8 0.9 1428 100.0 %
Found in Wash 0.0 0.0 0 0.0 %
Cache Misses 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Cache Searches 23.8 0.9 1428

Pool Turnover
2 Kb Pool
LRU Buffer Grab 0.1 0.0 3 2.4 %
Grabbed Dirty 0.0 0.0 0 0.0 %
8 Kb Pool
LRU Buffer Grab 2.1 0.1 123 97.6 %
Grabbed Dirty 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Cache Turnover 2.1 0.1 126

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash Section Yet

Cache Strategy
Cached (LRU) Buffers 5.2 0.2 309 100.0 %
Discarded (MRU) Buffers 0.0 0.0 0 0.0 %

Large I/O Usage
Large I/Os Performed 2.1 0.1 123 97.6 %

Large I/Os Denied due to
Pool < Prefetch Size 0.0 0.0 0 0.0 %
Pages Requested
Reside in Another
Buffer Pool 0.1 0.0 3 2.4 %
------------------------- ------------ ------------ ----------
Total Large I/O Requests 2.1 0.1 126

Large I/O Detail
8 Kb Pool
Pages Cached 8.2 0.3 492 n/a
Pages Used 0.0 0.0 0 0.0 %

Dirty Read Behavior
    Page Requests                   0.0          0.0          0        n/a

Tuning Recommendations for Data cache : log_cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy'
for this cache.

-------------------------------------------------------------------------------
Cache: member_cache
per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Spinlock Contention n/a n/a n/a 0.0 %

Utilization n/a n/a n/a 0.9 %

Cache Searches
Cache Hits 121.3 4.7 7276 100.0 %
Found in Wash 2.6 0.1 158 2.2 %
Cache Misses 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Cache Searches 121.3 4.7 7276

Pool Turnover 0.0 0.0 0 n/a

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash Section Yet

Cache Strategy
Cached (LRU) Buffers 115.3 4.5 6915 100.0 %
Discarded (MRU) Buffers 0.0 0.0 0 0.0 %

Large I/O Usage
Total Large I/O Requests 0.0 0.0 0 n/a

Large I/O Detail
No Large Pool(s) In This Cache

Dirty Read Behavior
    Page Requests                   0.0          0.0          0        n/a

Tuning Recommendations for Data cache : member_cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy'
for this cache.

- Consider adding a large I/O pool for this cache.

-------------------------------------------------------------------------------
Cache: rule_cache
per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Spinlock Contention n/a n/a n/a 0.0 %

Utilization n/a n/a n/a 2.0 %

Cache Searches
Cache Hits 259.4 10.2 15561 100.0 %
Found in Wash 37.5 1.5 2250 14.5 %
Cache Misses 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Cache Searches 259.4 10.2 15561

Pool Turnover 0.0 0.0 0 n/a

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash Section Yet

Cache Strategy
Cached (LRU) Buffers 254.7 10.0 15279 100.0 %
Discarded (MRU) Buffers 0.0 0.0 0 0.0 %

Large I/O Usage
Total Large I/O Requests 0.0 0.0 0 n/a

Large I/O Detail
No Large Pool(s) In This Cache

Dirty Read Behavior
    Page Requests                   0.0          0.0          0        n/a

Tuning Recommendations for Data cache : rule_cache
-------------------------------------
- Consider using 'relaxed LRU replacement policy'
for this cache.

- Consider adding a large I/O pool for this cache.

-------------------------------------------------------------------------------
Cache: tempdb_cache
per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Spinlock Contention n/a n/a n/a 0.0 %

Utilization n/a n/a n/a 36.1 %

Cache Searches
Cache Hits 4598.1 180.1 275885 98.2 %
Found in Wash 0.0 0.0 0 0.0 %
Cache Misses 82.8 3.2 4965 1.8 %
------------------------- ------------ ------------ ----------
Total Cache Searches 4680.8 183.3 280850

Pool Turnover
2 Kb Pool
LRU Buffer Grab 151.2 5.9 9073 100.0 %
Grabbed Dirty 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Cache Turnover 151.2 5.9 9073

Buffer Wash Behavior
Statistics Not Available - No Buffers Entered Wash Section Yet

Cache Strategy
Cached (LRU) Buffers 4732.0 185.3 283922 100.0 %
Discarded (MRU) Buffers 0.0 0.0 0 0.0 %

Large I/O Usage
Large I/Os Performed 0.0 0.0 0 0.0 %

Large I/Os Denied due to
Pool < Prefetch Size 10.4 0.4 622 100.0 %
Pages Requested
Reside in Another
Buffer Pool 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Large I/O Requests 10.4 0.4 622

Large I/O Detail
No Large Pool(s) In This Cache

Dirty Read Behavior
    Page Requests                   0.0          0.0          0        n/a

Tuning Recommendations for Data cache : tempdb_cache
-------------------------------------
- Consider adding a large I/O pool for this cache.

===============================================================================

Procedure Cache Management per sec per xact count % of total
--------------------------- ------------ ------------ ---------- ----------
Procedure Requests 224.2 8.8 13450 n/a
Procedure Reads from Disk 0.2 0.0 9 0.1 %
Procedure Writes to Disk 0.0 0.0 0 0.0 %
Procedure Removals 17.4 0.7 1043 n/a
Procedure Recompilations 0.1 0.0 7 n/a

Recompilations Requests:
Execution Phase 0.1 0.0 7 100.0 %
Compilation Phase 0.0 0.0 0 0.0 %
Execute Cursor Execution 0.0 0.0 0 0.0 %
Redefinition Phase 0.0 0.0 0 0.0 %

Recompilation Reasons:
Table Missing 0.0 0.0 0 n/a
Temporary Table Missing 0.0 0.0 0 n/a
Schema Change 0.0 0.0 0 n/a
Index Change 0.0 0.0 0 n/a
Isolation Level Change 0.1 0.0 7 n/a
Permissions Change 0.0 0.0 0 n/a
Cursor Permissions Change 0.0 0.0 0 n/a

SQL Statement Cache:
Statements Cached 0.1 0.0 8 n/a
Statements Found in Cache 37.0 1.4 2217 n/a
Statements Not Found 0.2 0.0 9 n/a
Statements Dropped 0.4 0.0 26 n/a
Statements Restored 0.0 0.0 1 n/a
Statements Not Cached 0.0 0.0 0 n/a


===============================================================================

Memory Management per sec per xact count % of total
--------------------------- ------------ ------------ ---------- ----------
Pages Allocated 648.4 25.4 38905 n/a
Pages Released 648.3 25.4 38895 n/a

===============================================================================

Recovery Management
-------------------

Checkpoints per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
# of Normal Checkpoints 0.1 0.0 3 100.0 %
# of Free Checkpoints 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ----------
Total Checkpoints 0.1 0.0 3

Avg Time per Normal Chkpt 0.00000 seconds

===============================================================================

Disk I/O Management
-------------------

Max Outstanding I/Os per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Server n/a n/a 0 n/a
Engine 0 n/a n/a 272 n/a
Engine 1 n/a n/a 270 n/a


I/Os Delayed by
Disk I/O Structures n/a n/a 0 n/a
Server Config Limit n/a n/a 0 n/a
Engine Config Limit n/a n/a 0 n/a
Operating System Limit n/a n/a 0 n/a


Total Requested Disk I/Os 244.9 9.6 14696

Completed Disk I/O's
Engine 0 114.5 4.5 6868 47.0 %
Engine 1 129.1 5.1 7744 53.0 %
------------------------- ------------ ------------ ----------
Total Completed I/Os 243.5 9.5 14612


Device Activity Detail
----------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-1
data9 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 n/a
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 0.0 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-10
data18 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 1.3 0.1 79 10.0 %
Non-APF 11.8 0.5 709 90.0 %
Writes 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 13.1 0.5 788 5.4 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-2
data10 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 14.3 0.6 857 45.7 %
Non-APF 16.9 0.7 1016 54.2 %
Writes 0.0 0.0 1 0.1 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 31.2 1.2 1874 12.8 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-3
data11 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 5.6 0.2 338 45.1 %
Non-APF 6.9 0.3 411 54.9 %
Writes 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 12.5 0.5 749 5.1 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-4
data12 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 n/a
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 0.0 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-5
data13 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 1.5 0.1 91 12.6 %
Non-APF 10.3 0.4 616 85.6 %
Writes 0.2 0.0 13 1.8 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 12.0 0.5 720 4.9 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-6
data14 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 0 0.0 %
Non-APF 0.5 0.0 31 96.9 %
Writes 0.0 0.0 1 3.1 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.5 0.0 32 0.2 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-7
data15 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 0 0.0 %
Non-APF 0.0 0.0 2 66.7 %
Writes 0.0 0.0 1 33.3 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.1 0.0 3 0.0 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-8
data16 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 4.6 0.2 278 7.2 %
Non-APF 59.9 2.3 3596 92.8 %
Writes 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 64.6 2.5 3874 26.4 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_DATA01/rLV_DB15PROD01_DATA01-9
data17 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 5.8 0.2 349 11.0 %
Non-APF 47.0 1.8 2821 89.0 %
Writes 0.0 0.0 0 0.0 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 52.8 2.1 3170 21.6 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_TEMPDBDATA/rLV_DB15PROD01_TEMPDBDATA1
tempdat1 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 0 0.0 %
Non-APF 2.2 0.1 130 6.6 %
Writes 30.7 1.2 1843 93.4 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 32.9 1.3 1973 13.4 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_TEMPDBDATA/rLV_DB15PROD01_TEMPDBDATA2
tempdat2 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 0 0.0 %
Non-APF 2.1 0.1 127 20.8 %
Writes 8.1 0.3 483 79.2 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 10.2 0.4 610 4.2 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01_TEMPDBLOG/rLV_DB15PROD01_TEMPDBLOG1
templog1 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 n/a
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 0.0 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01LOG01/rLV_DB15PROD01LOG1
log3 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 0 0.0 %
Non-APF 0.0 0.0 1 0.2 %
Writes 9.1 0.4 544 99.8 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 9.1 0.4 545 3.7 %


-----------------------------------------------------------------------------

Device:
/dev/VG_DB15PROD01LOG01/rLV_DB15PROD01LOG2
log4 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 0 0.0 %
Non-APF 0.0 0.0 0 0.0 %
Writes 4.5 0.2 270 100.0 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 4.5 0.2 270 1.8 %


-----------------------------------------------------------------------------

Device:
/opt/sybase/data/P1master.dat
master per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 0 0.0 %
Non-APF 0.1 0.0 8 9.8 %
Writes 1.2 0.0 74 90.2 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 1.4 0.1 82 0.6 %


-----------------------------------------------------------------------------

Device:
/opt/sybase/data/P1sybsysdb.dat
systemdbdev per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 n/a
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 0.0 %


-----------------------------------------------------------------------------

Device:
/opt/sybase/data/P1sysprocs.dat
sysprocsdev per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 0 0.0 %
Non-APF 0.0 0.0 0 0.0 %
Writes 0.0 0.0 1 100.0 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 1 0.0 %


-----------------------------------------------------------------------------



===============================================================================

Network I/O Management
----------------------

Total Network I/O Requests 1212.6 47.5 72756 n/a
Network I/Os Delayed 0.0 0.0 0 0.0 %


Total TDS Packets Received per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Engine 0 782.4 30.6 46944 74.0 %
Engine 1 275.1 10.8 16505 26.0 %
------------------------- ------------ ------------ ----------
Total TDS Packets Rec'd 1057.5 41.4 63449


Total Bytes Received per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Engine 0 56708.5 2221.0 3402509 79.0 %
Engine 1 15082.5 590.7 904950 21.0 %
------------------------- ------------ ------------ ----------
Total Bytes Rec'd 71791.0 2811.7 4307459


Avg Bytes Rec'd per Packet n/a n/a 67 n/a

-----------------------------------------------------------------------------

Total TDS Packets Sent per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Engine 0 968.9 37.9 58133 73.0 %
Engine 1 357.7 14.0 21460 27.0 %
------------------------- ------------ ------------ ----------
Total TDS Packets Sent 1326.6 52.0 79593


Total Bytes Sent per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Engine 0 674140.5 26402.4 40448428 73.5 %
Engine 1 242787.0 9508.6 14567218 26.5 %
------------------------- ------------ ------------ ----------
Total Bytes Sent 916927.4 35911.0 55015646


Avg Bytes Sent per Packet n/a n/a 691 n/a

===============================================================================

Replication Agent
-----------------

Replication Agent: SIGMA
Replication Server: CPTREP01_RS

per sec per xact count % of total
------------ ------------ ---------- ----------
Log Scan Summary
Log Records Scanned n/a n/a 967 n/a
Log Records Processed n/a n/a 281 n/a

Log Scan Activity
Updates n/a n/a 12 n/a
Inserts n/a n/a 17 n/a
Deletes n/a n/a 6 n/a
Store Procedures n/a n/a 0 n/a
DDL Log Records n/a n/a 0 n/a
Writetext Log Records n/a n/a 0 n/a
Text/Image Log Records n/a n/a 0 n/a
CLRs n/a n/a 0 n/a
Checkpoints Processed n/a n/a 0 n/a
Transactions purged n/a n/a 0 n/a

Transaction Activity
Opened n/a n/a 123 n/a
Commited n/a n/a 123 n/a
Aborted n/a n/a 0 n/a
Prepared n/a n/a 0 n/a
Delayed Commit n/a n/a 0 n/a
Maintenance User n/a n/a 0 n/a

Log Extension Wait
Count n/a n/a 132 n/a
Amount of time (ms) n/a n/a 58782 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 445.3 n/a

Schema Cache Lookups
Forward Schema
Count n/a n/a 0 n/a
Total Wait (ms) n/a n/a 0 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 0.0 n/a
Backward Schema
Count n/a n/a 0 n/a
Total Wait (ms) n/a n/a 0 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 0.0 n/a

Truncation Point Movement
Moved n/a n/a 0 n/a
Gotten from RS n/a n/a 0 n/a

Connections to Replication Server
Success n/a n/a 0 n/a
Failed n/a n/a 0 n/a

Network Packet Information
Packets Sent n/a n/a 133 n/a
Full Packets Sent n/a n/a 10 n/a
Largest Packet n/a n/a 0 n/a
Amount of Bytes Sent n/a n/a 95465 n/a
Average Packet n/a n/a 717.8 n/a

I/O Wait from RS
Count n/a n/a 133 n/a
Amount of Time (ms) n/a n/a 85 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Wait (ms) n/a n/a 0.6 n/a

--------------------------------------------------------------------------------
Replication Agent: NIMAS
Replication Server: CPTREP01_RS

per sec per xact count % of total
------------ ------------ ---------- ----------
Log Scan Summary
Log Records Scanned n/a n/a 5514 n/a
Log Records Processed n/a n/a 1265 n/a

Log Scan Activity
Updates n/a n/a 210 n/a
Inserts n/a n/a 104 n/a
Deletes n/a n/a 97 n/a
Store Procedures n/a n/a 0 n/a
DDL Log Records n/a n/a 0 n/a
Writetext Log Records n/a n/a 0 n/a
Text/Image Log Records n/a n/a 0 n/a
CLRs n/a n/a 0 n/a
Checkpoints Processed n/a n/a 0 n/a
Transactions purged n/a n/a 0 n/a

Transaction Activity
Opened n/a n/a 427 n/a
Commited n/a n/a 427 n/a
Aborted n/a n/a 0 n/a
Prepared n/a n/a 0 n/a
Delayed Commit n/a n/a 0 n/a
Maintenance User n/a n/a 0 n/a

Log Extension Wait
Count n/a n/a 429 n/a
Amount of time (ms) n/a n/a 54625 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 127.3 n/a

Schema Cache Lookups
Forward Schema
Count n/a n/a 0 n/a
Total Wait (ms) n/a n/a 0 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 0.0 n/a
Backward Schema
Count n/a n/a 0 n/a
Total Wait (ms) n/a n/a 0 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 0.0 n/a

Truncation Point Movement
Moved n/a n/a 1 n/a
Gotten from RS n/a n/a 1 n/a

Connections to Replication Server
Success n/a n/a 0 n/a
Failed n/a n/a 0 n/a

Network Packet Information
Packets Sent n/a n/a 830 n/a
Full Packets Sent n/a n/a 403 n/a
Largest Packet n/a n/a 0 n/a
Amount of Bytes Sent n/a n/a 1212549 n/a
Average Packet n/a n/a 1460.9 n/a

I/O Wait from RS
Count n/a n/a 831 n/a
Amount of Time (ms) n/a n/a 708 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Wait (ms) n/a n/a 0.9 n/a

--------------------------------------------------------------------------------
Replication Agent: SAMWU
Replication Server: CPTREP01_RS

per sec per xact count % of total
------------ ------------ ---------- ----------
Log Scan Summary
Log Records Scanned n/a n/a 756 n/a
Log Records Processed n/a n/a 208 n/a

Log Scan Activity
Updates n/a n/a 22 n/a
Inserts n/a n/a 13 n/a
Deletes n/a n/a 1 n/a
Store Procedures n/a n/a 0 n/a
DDL Log Records n/a n/a 0 n/a
Writetext Log Records n/a n/a 0 n/a
Text/Image Log Records n/a n/a 0 n/a
CLRs n/a n/a 0 n/a
Checkpoints Processed n/a n/a 0 n/a
Transactions purged n/a n/a 0 n/a

Transaction Activity
Opened n/a n/a 86 n/a
Commited n/a n/a 86 n/a
Aborted n/a n/a 0 n/a
Prepared n/a n/a 0 n/a
Delayed Commit n/a n/a 0 n/a
Maintenance User n/a n/a 0 n/a

Log Extension Wait
Count n/a n/a 95 n/a
Amount of time (ms) n/a n/a 60469 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 636.5 n/a

Schema Cache Lookups
Forward Schema
Count n/a n/a 0 n/a
Total Wait (ms) n/a n/a 0 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 0.0 n/a
Backward Schema
Count n/a n/a 0 n/a
Total Wait (ms) n/a n/a 0 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Time (ms) n/a n/a 0.0 n/a

Truncation Point Movement
Moved n/a n/a 0 n/a
Gotten from RS n/a n/a 0 n/a

Connections to Replication Server
Success n/a n/a 0 n/a
Failed n/a n/a 0 n/a

Network Packet Information
Packets Sent n/a n/a 105 n/a
Full Packets Sent n/a n/a 19 n/a
Largest Packet n/a n/a 0 n/a
Amount of Bytes Sent n/a n/a 94157 n/a
Average Packet n/a n/a 896.7 n/a

I/O Wait from RS
Count n/a n/a 105 n/a
Amount of Time (ms) n/a n/a 54 n/a
Longest Wait (ms) n/a n/a 0 n/a
Average Wait (ms) n/a n/a 0.5 n/a

--------------------------------------------------------------------------------
=============================== End of Report =================================
(return status = 0)


John Everitt Posted on 2008-06-09 17:26:39.0Z
Sender: d12.484d662b.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <484d67cf.d6d.1681692777@sybase.com>
References: <48472b38$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 9 Jun 2008 10:26:39 -0700
X-Trace: forums-1-dub 1213032399 10.22.241.41 (9 Jun 2008 10:26:39 -0700)
X-Original-Trace: 9 Jun 2008 10:26:39 -0700, 10.22.241.41
Lines: 13
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10912
Article PK: 89522

Hi Mark,

It does seem that this problem is a database fragmentation
issue. I have reorg'ed a couple of our live databases and
initial results seem to be positive. I need to monitor it
for a bit longer to confirm. One thing though: we do have a
number of clustered indexes on big tables that we use a
lot. They are so big, in fact, that we will not be able to
drop and recreate them within our maintenance window. Are
there any alternative ways to reduce a clustered index's
fragmentation?

John


Sherlock, Kevin Posted on 2008-06-09 18:44:54.0Z
From: "Sherlock, Kevin" <kevin.sherlock@teamsybase.com>
Newsgroups: sybase.public.ase.performance+tuning
References: <48472b38$1@forums-1-dub> <484d67cf.d6d.1681692777@sybase.com>
Subject: Re: Performance Problem on ASE 15.0.1
Lines: 27
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.3138
X-RFC2646: Format=Flowed; Original
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3198
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <484d7a26$1@forums-1-dub>
Date: 9 Jun 2008 11:44:54 -0700
X-Trace: forums-1-dub 1213037094 10.22.241.152 (9 Jun 2008 11:44:54 -0700)
X-Original-Trace: 9 Jun 2008 11:44:54 -0700, vip152.sybase.com
X-Authenticated-User: teamsybase
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10914
Article PK: 89524

I suggest running the reorg on the indexes that were being used to get to the
data. In your case, this was indeed a clustered index, which apparently isn't
a unique clustered index. Are the other clustered indexes mentioned below
unique, or non-unique?

If you don't have the time or space to periodically reorganize these clustered
indexes, you are stuck unless you can find time to get them recreated. If
you can find that time, my only suggestion is to find another column (or
columns) in the table (or add an identity column) to add to the index
definition to make the clustered index unique.
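
For example, something along these lines (a rough sketch only; the column and
index names here are invented, and a rebuild of this size still needs to be
timed on a copy first):

    -- add a hypothetical "uniquifier" column; ASE populates identity values
    alter table claims add row_id numeric(10,0) identity

    -- recreate the clustered index as unique on (old key + uniquifier)
    drop index claims.claims_cidx
    create unique clustered index claims_cidx
        on claims (m_member, row_id)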

<John Everitt> wrote in message news:484d67cf.d6d.1681692777@sybase.com...
> Hi Mark,
>
> It does seem that this problem is a database fragmentation
> issue. I have reorg'ed a couple of our live databases and
> initial results seem to be positive. I need to monitor it
> for a bit longer to confirm. One thing though: we do have a
> number of clustered indexes on big tables that we use a
> lot. They are so big, in fact, that we will not be able to
> drop and recreate them within our maintenance window. Are
> there any alternative ways to reduce a clustered index's
> fragmentation?
>
> John


"Mark A. Parsons" <iron_horse Posted on 2008-06-09 23:39:34.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <48472b38$1@forums-1-dub> <484d67cf.d6d.1681692777@sybase.com>
In-Reply-To: <484d67cf.d6d.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <484dbf36$1@forums-1-dub>
Date: 9 Jun 2008 16:39:34 -0700
X-Trace: forums-1-dub 1213054774 10.22.241.152 (9 Jun 2008 16:39:34 -0700)
X-Original-Trace: 9 Jun 2008 16:39:34 -0700, vip152.sybase.com
Lines: 28
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10916
Article PK: 89527

Have you considered using semantic partitioning for your larger tables? The idea being to partition your data in one
of two ways:

1 - most recent/modified data in one partition, older/non-changing/historical data in other partitions; then defragging
the one partition should take much less time

or

2 - split data across partitions by some other means whereby all partitions may be hit by your queries; then defragging
consists of finding time to defrag a 'smaller' table/partition at a time

Obviously (?) you'll want to look at using local indexes so that your defrag operations work on just a single partition
at a time.
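
As a rough sketch of option 1 (partition names, boundary values and the
trimmed-down column list are all made up; check the local index restrictions
for your exact 15.x version):

    create table claims_part (
        m_member int not null,
        c_status char(1) not null,
        c_dt_ent int not null
        -- remaining claims columns omitted for brevity
    )
    partition by range (c_dt_ent)
    ( p_hist values <= (20071231),
      p_curr values <= (MAX) )

    -- a local index is partition-aligned, so it can be maintained
    -- one partition at a time
    create index claims_member_idx
        on claims_part (m_member) local index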

John Everitt wrote:
> Hi Mark,
>
> It does seem that this problem is a database fragmentation
> issue. I have reorg'ed a couple of our live databases and
> initial results seem to be positive. I need to monitor it
> for a bit longer to confirm. One thing though: we do have a
> number of clustered indexes on big tables that we use a
> lot. They are so big, in fact, that we will not be able to
> drop and recreate them within our maintenance window. Are
> there any alternative ways to reduce a clustered index's
> fragmentation?
>
> John


"Mark A. Parsons" <iron_horse Posted on 2008-06-10 00:24:26.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <48472b38$1@forums-1-dub> <484d67cf.d6d.1681692777@sybase.com> <484dbf36$1@forums-1-dub>
In-Reply-To: <484dbf36$1@forums-1-dub>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <484dc9ba$1@forums-1-dub>
Date: 9 Jun 2008 17:24:26 -0700
X-Trace: forums-1-dub 1213057466 10.22.241.152 (9 Jun 2008 17:24:26 -0700)
X-Original-Trace: 9 Jun 2008 17:24:26 -0700, vip152.sybase.com
Lines: 47
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10917
Article PK: 89528

Got ahead of myself ...

You can't currently drop an index from a partition, you can only drop it from the table.

Which would leave you with either:

a) dropping/re-adding a partition in order to defrag the partition (can get a little tricky if the partition you're
defragging isn't the last/highest partition when using range-based partitioning)

or

b) truncating the partition (truncate table <table_name> partition <partition_name>) and then reloading the data
(preferably in clustered index key order)

Obviously (?) both methods would require making a copy of the data before zapping it.
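
A rough sketch of option (b), assuming made-up partition/table names and that
the range partitioning is on c_dt_ent:

    -- 1) save the partition's rows somewhere safe first
    select * into tempdb..claims_p2007_sav
      from claims
     where c_dt_ent between 20070101 and 20071231

    -- 2) a clustered index on the copy lets the reload scan in key order
    create clustered index sav_cidx
        on tempdb..claims_p2007_sav (m_member)

    -- 3) empty just that partition, then reload
    truncate table claims partition p_2007
    insert into claims select * from tempdb..claims_p2007_sav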

Mark A. Parsons wrote:
> Have you considered using semantic partitioning for your larger
> tables? The idea being to partition your data in one of two ways:
>
> 1 - most recent/modified data in one partition,
> older/non-changing/historical data in other partitions; then defragging
> the one partition should take much less time
>
> or
>
> 2 - split data across partitions by some other means whereby all
> partitions may be hit by your queries; then defragging consists of
> finding time to defrag a 'smaller' table/partition at a time
>
> Obviously (?) you'll want to look at using local indexes so that your
> defrag operations work on just a single partition at a time.
>
> John Everitt wrote:
>> Hi Mark,
>>
>> It does seem that this problem is a database fragmentation
>> issue. I have reorg'ed a couple of our live databases and
>> initial results seem to be positive. I need to monitor it
>> for a bit longer to confirm. One thing though: we do have a
>> number of clustered indexes on big tables that we use a
>> lot. They are so big, in fact, that we will not be able to
>> drop and recreate them within our maintenance window. Are
>> there any alternative ways to reduce a clustered index's
>> fragmentation?
>>
>> John


Sherlock, Kevin Posted on 2008-06-09 18:37:44.0Z
From: "Sherlock, Kevin" <kevin.sherlock@teamsybase.com>
Newsgroups: sybase.public.ase.performance+tuning
References: <4844786d$1@forums-1-dub> <4846e7f0.62ae.1681692777@sybase.com> <48472b38$1@forums-1-dub>
Subject: Re: Performance Problem on ASE 15.0.1
Lines: 34
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.3138
X-RFC2646: Format=Flowed; Response
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3198
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <484d7878$1@forums-1-dub>
Date: 9 Jun 2008 11:37:44 -0700
X-Trace: forums-1-dub 1213036664 10.22.241.152 (9 Jun 2008 11:37:44 -0700)
X-Original-Trace: 9 Jun 2008 11:37:44 -0700, vip152.sybase.com
X-Authenticated-User: teamsybase
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10913
Article PK: 89523


"Mark A. Parsons" <iron_horse@no_spamola.compuserve.com> wrote in message
news:48472b38$1@forums-1-dub...
>
...
> ----------------------
>
> The original query has 4 where clauses like "c_status <> '{L|R|D|F}'".
>
> How many different c_status values are there?
>
> What percentage of records (for a given m_member) would you expect to have
> c_status not in ('L','R','D','F')?
>
> If there are relatively few other c_status values *and* they represent a
> relatively small number of the records for a given m_member, then an index
> on (m_member, c_status) *may* help reduce the number of IOs by ensuring
> you only read the smaller set of desired records from disk. ("Duh, Mark!"
> ?)
>
> Follow-on options would be the use of a non-clustered index and/or
> rewriting the query to ensure the new index is used to filter out the
> undesirable records before performing the disk IOs to pull back the
> desired records.

I think this may be version specific, but the application of such a
predicate clause ( c_status not in ('L','R','D','F') ) wouldn't be applied
until the QP engine reaches the data page. So an index such as the one
suggested above would be a waste of space, and not particularly helpful, if
not harmful, to the performance of this kind of query. Easy enough (and
probably necessary) to test out though. I know for sure that I have observed
this behavior before.


"Mark A. Parsons" <iron_horse Posted on 2008-06-09 23:28:37.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.14 (Windows/20071210)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
References: <4844786d$1@forums-1-dub> <4846e7f0.62ae.1681692777@sybase.com> <48472b38$1@forums-1-dub> <484d7878$1@forums-1-dub>
In-Reply-To: <484d7878$1@forums-1-dub>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <484dbca5$1@forums-1-dub>
Date: 9 Jun 2008 16:28:37 -0700
X-Trace: forums-1-dub 1213054117 10.22.241.152 (9 Jun 2008 16:28:37 -0700)
X-Original-Trace: 9 Jun 2008 16:28:37 -0700, vip152.sybase.com
Lines: 51
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10915
Article PK: 89526

That's not where I was going with my questioning.

For example:

If c_status has six possible values (L, R, D, F, Y, N), and the 'Y' and 'N' values represent a relatively small number
of total records for a given m_member value, then an index that includes c_status *would* help as long as the query is
also rewritten with:

c_status in ('Y','N') (or possibly broken out into 2 queries - one for each c_status - then union'd together; would
need to do some testing)

And yeah, I've seen this kind of indexing/coding reduce logical IO counts by a couple of orders of magnitude.

Now, would it apply in this particular situation? *shrug* That's why I was asking the questions.
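
In rough T-SQL, the idea would be something like this (the 'Y'/'N' values and
the index name are purely illustrative):

    create index claims_mstat_idx on claims (m_member, c_status)
    go

    declare @m_member int
    select @m_member = 75219

    select m_member, c_dt_sett, c_dt_from, c_status,
           c_arb, c_curr_amt, c_dt_ent
      from claims
     where m_member = @m_member
       and c_status in ('Y','N')
    go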

Sherlock, Kevin wrote:
> "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com> wrote in message
> news:48472b38$1@forums-1-dub...
> ...
>> ----------------------
>>
>> The original query has 4 where clauses like "c_status <> '{L|R|D|F}'".
>>
>> How many different c_status values are there?
>>
>> What percentage of records (for a given m_member) would you expect to have
>> c_status not in ('L','R','D','F')?
>>
>> If there are relatively few other c_status values *and* they represent a
>> relatively small number of the records for a given m_member, then an index
>> on (m_member, c_status) *may* help reduce the number of IOs by ensuring
>> you only read the smaller set of desired records from disk. ("Duh, Mark!"
>> ?)
>>
>> Follow-on options would be the use of a non-clustered index and/or
>> rewriting the query to ensure the new index is used to filter out the
>> undesirable records before performing the disk IOs to pull back the
>> desired records.
>
> I think this may be version specific, but the application of such a
> predicate clause ( c_status not in ('L','R','D','F') ) wouldn't be applied
> until the QP engine reaches the data page. So an index such as the one
> suggested above would be a waste of space, and not particularly helpful, if
> not harmful, to the performance of this kind of query. Easy enough (and
> probably necessary) to test out though. I know for sure that I have
> observed this behavior before.
>
>


Sherlock, Kevin Posted on 2008-06-10 03:20:31.0Z
From: "Sherlock, Kevin" <ksherlock@tconl.com>
Newsgroups: sybase.public.ase.performance+tuning
References: <4844786d$1@forums-1-dub> <4846e7f0.62ae.1681692777@sybase.com> <48472b38$1@forums-1-dub> <484d7878$1@forums-1-dub> <484dbca5$1@forums-1-dub>
Subject: Re: Performance Problem on ASE 15.0.1
Lines: 59
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.3138
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3198
X-RFC2646: Format=Flowed; Response
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <484df2ff$1@forums-1-dub>
Date: 9 Jun 2008 20:20:31 -0700
X-Trace: forums-1-dub 1213068031 10.22.241.152 (9 Jun 2008 20:20:31 -0700)
X-Original-Trace: 9 Jun 2008 20:20:31 -0700, vip152.sybase.com
X-Authenticated-User: teamsybase
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10919
Article PK: 89530

Got it. I didn't see (or read into) the additional "in" clause suggestion
in the previous post.

"Mark A. Parsons" <iron_horse@no_spamola.compuserve.com> wrote in message
news:484dbca5$1@forums-1-dub...
> That's not where I was going with my questioning.
>
> For example:
>
> If c_status has six possible values (L, R, D, F, Y, N), and the 'Y' and
> 'N' values represent a relatively small number of total records for a
> given m_member value, then an index that includes c_status *would* help as
> long as the query is also rewritten with:
>
> c_status in ('Y','N') (or possibly broken out into 2 queries - one for
> each c_status - then union'd together; would need to do some testing)
>
> And yeah, I've seen this kind of indexing/coding reduce logical IO counts
> by a couple of orders of magnitude.
>
> Now, would it apply in this particular situation? *shrug* That's why I
> was asking the questions.
>
>
>
>
> Sherlock, Kevin wrote:
>> "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com> wrote in message
>> news:48472b38$1@forums-1-dub...
>> ...
>>> ----------------------
>>>
>>> The original query has 4 where clauses like "c_status <> '{L|R|D|F}'".
>>>
>>> How many different c_status values are there?
>>>
>>> What percentage of records (for a given m_member) would you expect to
>>> have c_status not in ('L','R','D','F')?
>>>
>>> If there are relatively few other c_status values *and* they represent a
>>> relatively small number of the records for a given m_member, then an
>>> index on (m_member, c_status) *may* help reduce the number of IOs by
>>> ensuring you only read the smaller set of desired records from disk.
>>> ("Duh, Mark!" ?)
>>>
>>> Follow-on options would be the use of a non-clustered index and/or
>>> rewriting the query to ensure the new index is used to filter out the
>>> undesirable records before performing the disk IOs to pull back the
>>> desired records.
>>
>> I think this may be version specific, but the application of such a
>> predicate clause ( c_status not in ('L','R','D','F') ) wouldn't be
>> applied until the QP engine reaches the data page. So an index such as
>> the one suggested above would be a waste of space, and not particularly
>> helpful, if not harmful, to the performance of this kind of query. Easy
>> enough (and probably necessary) to test out though. I know for sure
>> that I have observed this behavior before.


John Everitt Posted on 2008-06-10 20:53:16.0Z
Sender: d12.484d662b.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <484ee9bc.4aa8.1681692777@sybase.com>
References: <484df2ff$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 10 Jun 2008 13:53:16 -0700
X-Trace: forums-1-dub 1213131196 10.22.241.41 (10 Jun 2008 13:53:16 -0700)
X-Original-Trace: 10 Jun 2008 13:53:16 -0700, 10.22.241.41
Lines: 32
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10922
Article PK: 89533

Mark, Kevin,

Thanks for the feedback. I can confirm that the problem is
related to fragmentation. I reorg'd much of the database
over the weekend (not all clustered indexes, for reasons
detailed earlier) and have seen a 40% improvement in
performance in our claims processing loop. So, thank you
both for your assistance.

For reference, it seems that fragmented databases are
characterised by a low percentage in the "pages used" line
of the large i/o detail section of the cache metrics in a
sysmon. Before defragmenting I was seeing figures around
35%. I believe it's not unreasonable to expect 80% to 90%
here ... BTW, is there any way of retrieving this figure
from the MDA tables? I've tried looking at monCachePool but
this doesn't appear to give a figure comparable to the
sysmon.

I am looking at Semantic Partitioning currently and will be
range partitioning a claims table on one of our smaller
accounts in a couple of weeks' time. Partitioning has a
number of advantages, of which easier index maintenance is
one; notably, shorter update statistics runs are another.

I will look into the comments concerning the "in" clause
when I get a chance. However, I am currently nursing my IQ
installation through an unpredictable phase.

Thanks once again.

John


Cory Sane Posted on 2008-06-11 03:12:29.0Z
Reply-To: "Cory Sane" <cory!=sane>
From: "Cory Sane" <cory!=sane>
Newsgroups: sybase.public.ase.performance+tuning
References: <484df2ff$1@forums-1-dub> <484ee9bc.4aa8.1681692777@sybase.com>
Subject: Re: Performance Problem on ASE 15.0.1
Lines: 43
Organization: [TeamSybase Intern]
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.3138
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3198
X-RFC2646: Format=Flowed; Original
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <484f429d$1@forums-1-dub>
Date: 10 Jun 2008 20:12:29 -0700
X-Trace: forums-1-dub 1213153949 10.22.241.152 (10 Jun 2008 20:12:29 -0700)
X-Original-Trace: 10 Jun 2008 20:12:29 -0700, vip152.sybase.com
X-Authenticated-User: TeamSybase
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10924
Article PK: 89535

Use optdiag and look for a data page cluster ratio < 0.8 on the clustered
indexes.
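
From the command line, something like this (server, database and login are
placeholders):

    optdiag statistics claimsdb..claims -Usa -SMYSERVER -o claims_stats.out

Then search claims_stats.out for "Data page cluster ratio"; values well below
0.8 on the clustered index suggest the reorg/rebuild is due.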

--
Cory Sane
[Member of TeamSybase]
Certified Sybase Associate DBA for ASE 15.0
not a Sybase Inc. employee

<John Everitt> wrote in message news:484ee9bc.4aa8.1681692777@sybase.com...
> Mark, Kevin,
>
> Thanks for the feedback. I can confirm that the problem is
> related to fragmentation. I reorg'd much of the database
> over the weekend (not all clustered indexes, for reasons
> detailed earlier) and have seen a 40% improvement in
> performance in our claims processing loop. So, thank you
> both for your assistance.
>
> For reference, it seems that fragmented databases are
> characterised by a low percentage in the "pages used" line
> of the large i/o detail section of the cache metrics in a
> sysmon. Before defragmenting I was seeing figures around
> 35%. I believe it's not unreasonable to expect 80% to 90%
> here ... BTW, is there any way of retrieving this figure
> from the MDA tables? I've tried looking at monCachePool but
> this doesn't appear to give a figure comparable to the
> sysmon.
>
> I am looking at Semantic Partitioning currently and will be
> range partitioning a claims table on one of our smaller
> accounts in a couple of weeks' time. Partitioning has a
> number of advantages, of which easier index maintenance is
> one; notably, shorter update statistics runs are another.
>
> I will look into the comments concerning the "in" clause
> when I get a chance. However, I am currently nursing my IQ
> installation through an unpredictable phase.
>
> Thanks once again.
>
> John


Chris N. Brown Posted on 2008-05-30 16:15:21.0Z
From: "Chris N. Brown" <chris.dot.brown@sybase.dot.com>
Subject: Re: Performance Problem on ASE 15.0.1
Newsgroups: sybase.public.ase.performance+tuning
References: <483f35e3@forums-1-dub> <483fb103.5221.1681692777@sybase.com>
User-Agent: Pan/0.132 (Waxed in Black)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <48402819$1@forums-1-dub>
Date: 30 May 2008 09:15:21 -0700
X-Trace: forums-1-dub 1212164121 10.22.241.152 (30 May 2008 09:15:21 -0700)
X-Original-Trace: 30 May 2008 09:15:21 -0700, vip152.sybase.com
Lines: 127
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10886
Article PK: 89497

A few thoughts, and bear in mind I'm still on my first cup of coffee:

> I see that we get consistently between 8ms and 10ms for a single i/o.

For a SINGLE I/O, no matter whether this is a single-stream insert or a
loaded-down ASE server? I personally think that's a bit on the high side
if you are on a SAN, but that's just my personal opinion. Did you get
this information from monDeviceIO and monIOQueue? As you alluded a bit
to in your email, it seems that you looked at them a bit, but without
knowing what they are telling me (by running some before / after
queries), it's hard to tell for sure if this is consistent for a single
process (say, BCP) or the server as a whole. But still, 10ms or higher
seems a bit on the high side to me.

> We've now been on Sybase 15 for around 6 months and I'm afraid I don't
> have figures for the execution of the query in question on 12.5. I have
> performance figures for various sections of our primary application for
> the last 9 months and this shows a degradation in performance at the
> time of the upgrade. With help from local Sybase support here we
> suspected disk performance. We have now ruled out disk performance and
> still have the problem and the local Sybase guys are scratching their
> heads. So, this is where I am now ...



How did they determine that there were disk problems? And what was
"changed", or what did they do to address that? Not that what they
suggested was wrong, but just curious. Have you looked at the output of
monSysWaits joined with monWaitEventInfo at all to get an idea of what
the server is "waiting" on? These two tables would tell you exactly where
the server is spending most of its time (waiting for I/O, run queue,
etc). Invaluable if performance has gone down (and sysmon will really
never give you this level of detail).
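
For example (column names from memory, so verify against your 15.0.1 MDA
tables):

    select w.WaitEventID, i.Description, w.Waits, w.WaitTime
      from master..monSysWaits w, master..monWaitEventInfo i
     where w.WaitEventID = i.WaitEventID
     order by w.WaitTime desc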


>
> When upgrading, we did a dump and load on to a Sybase server on the same
> hardware and SAN infrastructure. We didn't change the page size. The
> reason for the 4k i/o in the query plan is that I have a 4k i/o pool
> defined on the default data cache. The server uses a 2k page size. The
> 4k i/o pool was the start of a test to see what effects varying sizes of
> i/o pool would have on performance, once again with the hope of reducing
> physical i/o and therefore improving elapsed time of the query.
>


monCachePool will give some good info on your pool use here (whether they
are too small, too large, etc). You might take a look and see if your
caches are mis-sized (is that even a word? Come on, Starbucks, get me
out of this fog!).
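
A quick look might be something like (again, column names from memory):

    select CacheName, IOBufferSize, AllocatedKB,
           PagesRead, PhysicalReads, Stalls
      from master..monCachePool
     order by CacheName, IOBufferSize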


> We have always used the DOL locking scheme. It's a restriction of the 4GL
> we use (Powerhouse). The indexes have not been changed between 12.5 and 15.
>
> The Sybase server I am currently running my tests on resides on an 8 x
> Itanium HP-UX box with 32GB of RAM. It is configured with 3 engines and
> shares the machine with another Sybase server that is a near clone of
> this one. It's just used by another client.
>


Try looking at monOpenObjectActivity and seeing how your indexes are
being used. That table will tell you, at an object level, how many table
scans, "partial" table scans, object scans, index scans, etc. the
server is performing.
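
For example (the object_name() overload taking a database ID should resolve
the names, but verify on your version):

    select ObjectID, IndexID, LogicalReads, PhysicalReads, UsedCount
      from master..monOpenObjectActivity
     where object_name(ObjectID, DBID) = 'claims'
     order by IndexID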



> How do I know whether I've maxed out the number of outstanding disk
> i/os? I see no i/os delayed by lack of disk i/o structures, server,
> engine or os config limit, if that's what you mean, although I do see
> that the total number of requested disk i/os is always larger than the
> number of total completed i/os, which the Sybase support guy tutted
> over. This prompted a change to the "i/o polling process count" (10 to
> 50) but this seemed to have little measurable effect on performance.
>


If ASE is requesting more I/Os than the underlying SAN infrastructure can
handle, then it's not an ASE problem, it's a SAN issue, IMHO (don't take
that as gospel, but I have seen this more than once). Again, monDeviceIO
and monIOQueue are your friends here.
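
The counters are cumulative since start-up, so sample them twice and diff the
values; roughly:

    select LogicalName, Reads, Writes, IOTime
      from master..monDeviceIO
     where Reads + Writes > 0
    -- avg ms per i/o over an interval ~= delta(IOTime) / delta(Reads + Writes)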



> One thing that also may be relevant is that I "sniffed" the
> communication at the client level when I ran the same stored proc for a
> different member number. The whole communication took 1.5 seconds but I
> received the first row after 270ms so the remainder of the 500 rows took
> a further 1.2 seconds to send with the delay being on the server side,
> i.e. the client "ACK'd" quickly and then there was a significant delay
> for the next result set packet. Rerunning the command immediately after
> produced the results in just a few milliseconds. For me, this ruled out
> any network issues or client response problems and it was interesting
> that most of the time was spent between sending the first row and
> sending the last. Is there any significance to this?


Hmm ... again I'd look at monSysWaits as a rule of thumb to try and
figure out what the server might be waiting on here, then based on what
it's telling you, go from there. It will tell you why (from an ASE
perspective) it's doing what it's doing.

As you can see, I've pretty much abandoned sp_sysmon for troubleshooting
and, starting with 15.x, now only use MDA ... with few exceptions
(repagent, spinlock contention, some stored proc info) it will tell you
MUCH more than sp_sysmon and with greater detail. I've found in Cluster
Edition it's even more valuable than in classic ASE because I can see at
the instance level *and* the cluster level what's going on at the same
time (not possible with sysmon right now as far as I know). I think I
speak for all 3 of us in the data management focus area (JT, RV, me) in
evangelism when I say we don't use sp_sysmon anymore; we pretty much only
use MDA and highly encourage its use.

Good luck,

--Chris


John Everitt Posted on 2008-05-30 17:27:24.0Z
Sender: 3d64.483ef73a.1804289383@sybase.com
From: John Everitt
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: Performance Problem on ASE 15.0.1
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <484038fc.658d.1681692777@sybase.com>
References: <48402819$1@forums-1-dub>
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 30 May 2008 10:27:24 -0700
X-Trace: forums-1-dub 1212168444 10.22.241.41 (30 May 2008 10:27:24 -0700)
X-Original-Trace: 30 May 2008 10:27:24 -0700, 10.22.241.41
Lines: 69
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:10887
Article PK: 89498

Chris,

Hope the coffee is taking effect. It's Friday night
here, so apologies if the beer starts clouding my thoughts
...

I'm a big fan of MDA tables and am using monDeviceIO
to calculate device i/o times. I have a little perl script
that captures the MDA data every 5 minutes and then a cgi
that prints pretty graphs for me. One of the most useful is
the monDeviceIO graph and it is interesting that the SAN
performance is very consistent except for when the backups
are running. Then you do see a drop in performance. I also
check device performance using sar -d on the OS
(HP-UX) and we also run a script on the EVA management box
that generates metrics for performance. All these figures
match up and suggest I am getting somewhere between 8 and
10ms consistently across all devices. As far as I
understand, and I'm far from a SAN expert, it is
reasonable to expect consistent performance until you hit
the max i/o threshold for a given DG (Disk Group). This
threshold is dependent on the spindle count in the DG. 8ms
to 10ms also appears to be a not unreasonable performance
figure for SAN-based storage, but I'd be interested to
hear from anyone who disagrees.

I also monitor the monSysWaits table and, after filtering
IDs 250, 19, 179, 61, 57, 222, 178 and 104, as I believe
these are relatively uninteresting, I find ID 29 runs at
about 300ms for a 5 minute period while 251 and then 215
come in second and third. Sorry, but I didn't get
terribly excited about this information ...

I don't watch the monCachePool table yet, but then
I've oversized the data cache to ensure that this is
not relevant to the diagnosis (1GB 2k, 1GB 4k ...)

I have also looked at the monOpenObjectActivity table. This
is probably the most powerful table of the lot.
Unfortunately in this case it hasn't shed much light
on what's going on here. From what I can gather we may
have some unused indexes, but we very rarely table scan,
especially the large tables. It does show that the Claims
table is heavily used and requires a lot of Physical Reads,
but then it's very big and we tend to touch all parts
of it due to the nature of the data, so we will always
require plenty of physical reads. My personal feeling is
that on version 15 the physical reading is just, well,
slower ...

As I've alluded to before, we have thoroughly checked
out the SAN infrastructure, and a significant part of this
checking is to ensure that you do not go over the maximum
iops for a given DG. I now know that we don't get
close to the theoretical iops maximum on either of the 2
used DGs. I can also confirm that the SAN switches are not
over-utilized and the two 4GB HBAs in the HP machine
still have plenty of bandwidth remaining ...

Is there anything I can "turn on" in the Sybase
server that potentially could show me more about
what's going on internally while the query is
executing? Much of the monitoring available is around the
optimizer, but as this is a very simple query, it
doesn't seem to be an optimizer-related issue.
Anything anyone can think of that may shed a little more
light? Much appreciated ...

John