
Achieving concurrent run of a process !!

2 posts in Commercial ISV's. Last posting was on 2009-10-19 14:40:28.0Z
krishna Posted on 2009-10-14 15:52:41.0Z
Sender: 4364.4ad5c88b.1804289383@sybase.com
From: Krishna
Newsgroups: sybase.public.commercial-isv.general
Subject: Achieving concurrent run of a process !!
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <4ad5f3c9.49f5.1681692777@sybase.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: quoted-printable
NNTP-Posting-Host: 10.22.241.41
X-Original-NNTP-Posting-Host: 10.22.241.41
Date: 14 Oct 2009 08:52:41 -0700
X-Trace: forums-1-dub 1255535561 10.22.241.41 (14 Oct 2009 08:52:41 -0700)
X-Original-Trace: 14 Oct 2009 08:52:41 -0700, 10.22.241.41
Lines: 95
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.commercial-isv.general:322
Article PK: 104033

Please provide your suggestions for achieving a concurrent run
of a process (each run for one group).


It works fine for a restricted number of records.


1. For 3-4 lakh (300,000-400,000) records the job fails due to
a deadlock scenario with the datapages locking scheme.

2. We run out of locks with the datarows locking scheme.

3. With CLCL_ID and GRGR_ID as the clustered index on the
tables and the datapages locking scheme, we were able to run
the process in concurrent mode, but it didn't work every
time.

None of these approaches was successful. The last solution we
have now is to run each query in loops, which involves a total
restructuring of the code and more effort.


________________________________________________________________

If there are no deadlocks, sp_sysmon does not display any
detailed information; it only prints the "Total Deadlocks"
row with zero values.

To pinpoint where deadlocks occur, use one or both of the
following methods:

1. Use sp_object_stats. See "Identifying tables where
concurrency is a problem" on page 88 of Performance and
Tuning: Locking for more information.

2. Enable printing of detailed deadlock information to the
error log. See "Printing deadlock information to the error
log" on page 85 of Performance and Tuning: Monitoring and
Analyzing for Performance.
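
A minimal sketch of both methods (the 20-minute sampling
interval is only an illustration; check the exact syntax against
your ASE version's reference manual):

    -- Method 1: collect contention statistics for all objects for
    -- 20 minutes, then report the tables with the most lock waits
    -- and deadlocks.
    sp_object_stats "00:20:00"
    go

    -- Method 2: write detailed deadlock chains to the ASE error
    -- log whenever a deadlock occurs.
    sp_configure "print deadlock information", 1
    go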



A deadlock can occur in Sybase Adaptive Server Enterprise
(ASE) when two user processes each hold a lock on a separate
page or table and each wants to acquire a lock on the page or
table held by the other process. The transaction with the least
accumulated CPU time is chosen as the victim, killed, and all of
its work is rolled back.
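
For illustration, the classic case is two sessions that update
the same two tables in opposite order (tabA, tabB, col1 and id
are made-up names):

    -- Session 1:
    begin tran
    update tabA set col1 = 1 where id = 1  -- holds a lock on tabA
    -- Session 2:
    begin tran
    update tabB set col1 = 1 where id = 1  -- holds a lock on tabB
    update tabA set col1 = 2 where id = 1  -- blocks, waiting on session 1
    -- Session 1 (continued):
    update tabB set col1 = 2 where id = 1  -- blocks on session 2: deadlock
    -- ASE detects the cycle, kills the victim with error 1205 and rolls
    -- back its work; the surviving transaction then proceeds and commits.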

There are ways to avoid deadlocks. These are general good
practices:

1. Write transactions so that they acquire locks in the same
order across the system (see the sketch after this list). This
helps reduce both lock contention and deadlocks.

2. Put only those statements that must be committed or rolled
back as a group inside a transaction. This also helps reduce
lock contention and deadlocks.

3. Avoid hot spots. Hot spots are points in the data or an
index that many queries access. If too many queries access the
same pages for selects and DML operations, lock contention
and/or frequent deadlocks can result. For example, use a
clustered index on the table so that it is not a heap.

4. Avoid heap tables (i.e. tables with no clustered index).
They can cause many issues, including deadlocks, because all
inserts go to a hot spot on the last page of the table, leading
to lock contention and frequent deadlocks.

5. Applications that are not affected by dirty reads (isolation
level 0) may get better concurrency and fewer deadlocks when
accessing the same data by setting transaction isolation level
0 at the beginning of each session. An example is an
application that finds the momentary average balance for all
savings accounts stored in a table. Since it needs only a
snapshot of the current average balance, which probably changes
frequently in an active table, the application should query the
table using isolation level 0.

6. Tables commonly involved in deadlocks are candidates for the
data-only locking schemes (datarows or datapages).

7. Use stored procedures, as opposed to dynamic SQL, where
feasible. Among their many advantages, they minimize blocking
and deadlocks: it is much easier to handle a deadlock if the
entire transaction is performed in one database request, and
locks are held for a shorter time, improving concurrency and
potentially reducing the number of deadlocks. It is also easier
to ensure that all tables are accessed in a consistent order if
the code is stored centrally rather than dispersed among a
number of applications.

8. On OLTP systems, avoid creating a clustered index on a
sequential key on a transaction-busy table. This creates a hot
spot on the last page of the table and, even with row-level
locking, can still cause frequent deadlocks and blocking.

9. Avoid select into / alter table on #temp tables, and avoid
creating #temp tables with constraints and defaults.
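
As a rough sketch of points 1, 5, 6 and 7 applied to the
scenario above (clcl_table, grgr_table, row_status and
last_run_dt are made-up names standing in for the real tables
keyed by CLCL_ID/GRGR_ID; the lock count of 100000 is only an
illustration and must be sized to your memory):

    -- Point 6: move the contended tables to the datarows locking scheme,
    -- and raise the server-wide lock count so datarows locking does not
    -- run out of locks (the "lack of locks" symptom described above).
    alter table clcl_table lock datarows
    go
    alter table grgr_table lock datarows
    go
    sp_configure "number of locks", 100000
    go

    -- Point 5: a report-only session that can tolerate dirty reads.
    set transaction isolation level 0
    go

    -- Points 1 and 7: every concurrent run (one per group) goes through
    -- the same procedure, so all runs touch the tables in the same order
    -- and hold their locks only for the length of one short transaction.
    create procedure proc_process_group
        @grgr_id varchar(10)
    as
    begin
        begin tran
            update clcl_table
               set row_status = 'P'          -- hypothetical status column
             where grgr_id = @grgr_id
            update grgr_table
               set last_run_dt = getdate()   -- hypothetical audit column
             where grgr_id = @grgr_id
        commit tran
    end
    go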


Roland Smith [TeamSybase] Posted on 2009-10-19 14:40:28.0Z
From: "Roland Smith [TeamSybase]" <rsmith_at_trusthss_dot_com>
Newsgroups: sybase.public.commercial-isv.general
References: <4ad5f3c9.49f5.1681692777@sybase.com>
Subject: Re: Achieving concurrent run of a process !!
Lines: 100
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.5512
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5512
X-RFC2646: Format=Flowed; Original
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4adc7a5c@forums-1-dub>
Date: 19 Oct 2009 07:40:28 -0700
X-Trace: forums-1-dub 1255963228 10.22.241.152 (19 Oct 2009 07:40:28 -0700)
X-Original-Trace: 19 Oct 2009 07:40:28 -0700, vip152.sybase.com
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.commercial-isv.general:324
Article PK: 104034

Try running your queries with Autocommit set to True.
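
In ASE terms, autocommit roughly corresponds to unchained
transaction mode: each statement commits (and releases its
locks) on its own unless it is wrapped in an explicit begin
tran. A minimal sketch (clcl_table and row_status are made-up
names; client libraries such as jConnect expose the same
setting as setAutoCommit(true)):

    -- Unchained mode is ASE's autocommit behaviour.
    set chained off
    go

    -- This statement now commits, and releases its locks, as soon as it
    -- finishes, so locks are held for the shortest possible time.
    update clcl_table set row_status = 'P' where grgr_id = 'GRP001'
    go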

<Krishna> wrote in message news:4ad5f3c9.49f5.1681692777@sybase.com...