Sybase NNTP forums - End Of Life (EOL)

The NNTP forums from Sybase - forums.sybase.com - are now closed.

All new questions should be directed to the appropriate forum at the SAP Community Network (SCN).

Individual products have links to the respective forums on SCN, or you can go to SCN and search for your product in the search box (upper right corner) to find your specific developer center.

log space for inserts?

4 posts in Performance and Tuning. Last posting was on 2012-06-20 23:31:24.0Z
rick_806 Posted on 2012-06-19 17:53:32.0Z
Sender: 3916.4fe0bb85.1804289383@sybase.com
From: rick_806
Newsgroups: sybase.public.ase.performance+tuning
Subject: log space for inserts?
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <4fe0bc9c.3949.1681692777@sybase.com>
NNTP-Posting-Host: 172.20.134.41
X-Original-NNTP-Posting-Host: 172.20.134.41
Date: 19 Jun 2012 10:53:32 -0700
X-Trace: forums-1-dub 1340128412 172.20.134.41 (19 Jun 2012 10:53:32 -0700)
X-Original-Trace: 19 Jun 2012 10:53:32 -0700, 172.20.134.41
Lines: 14
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:13159
Article PK: 91632

ASE-CE 15.5 ESD4

I had a temp table with 400,000 records in it. According to
sp_spaceused this table is 67MB. The records in this table
match the table I am going to insert them into. The
destination table has 14 indices. When I start inserting
these records I use up 1.5GB of log space? The destination
table is allpages, not sure what else is relevant, but how
can 67MB of data take up 1.5GB in log for straight inserts?
And based on this, how would I estimate how much space a
batch of inserts is going to take?

thx,
rick_806


"Mark A. Parsons" Posted on 2012-06-19 22:16:57.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.13) Gecko/20101207 Lightning/1.0b2 Thunderbird/3.1.7
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: log space for inserts?
References: <4fe0bc9c.3949.1681692777@sybase.com>
In-Reply-To: <4fe0bc9c.3949.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-Antivirus: avast! (VPS 120603-1, 06/03/2012), Outbound message
X-Antivirus-Status: Clean
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4fe0fa59$1@forums-1-dub>
Date: 19 Jun 2012 15:16:57 -0700
X-Trace: forums-1-dub 1340144217 10.22.241.152 (19 Jun 2012 15:16:57 -0700)
X-Original-Trace: 19 Jun 2012 15:16:57 -0700, vip152.sybase.com
Lines: 41
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:13160
Article PK: 91634

What do you mean by 'The records in this table match the table I am going to insert them into'? Are you adding a full
set of duplicates?

Is the target table empty before you start the insert, or does it already have rows in it?

Does the table have an insert trigger?

Is this a straight insert/select or are you joining with other tables?

Are you doing an insert/select-where-not-already-exists-in-target?

Does the query plan show a deferred operation?

If the target table has a clustered index ... are you inserting the rows in the same order as the clustered index?

If the target table has a clustered index ... is it unique or non-unique? And if non-unique, do you have some lengthy
overflow page chains?

Assuming you're running your insert/select in a quiet dataserver, have you tried wrapping the insert with an sp_sysmon
start/end so that you can capture details on transactional activity (eg, volume of page splits)?
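A minimal sketch of that wrapping, assuming a quiet server (`begin_sample`/`end_sample` are sp_sysmon's standard parameters; the table names here are placeholders, not the poster's actual tables):

```sql
-- start collecting monitor counters
sp_sysmon begin_sample
go

-- the insert being investigated (placeholder names)
insert into target_table
select * from #staging_table
go

-- stop collecting and print the report; the Transaction Management
-- and Index Management sections show log activity and page splits
sp_sysmon end_sample
go
```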

Have you tried running your insert with some/most of your indexes off of the table ... then add the indexes after you've
populated your data?

On 06/19/2012 11:53, rick_806 wrote:
> ASE-CE 15.5 ESD4
>
> I had a temp table with 400,000 records in it. According to
> sp_spaceused this table is 67MB. The records in this table
> match the table I am going to insert them into. The
> destination table has 14 indices. When I start inserting
> these records I use up 1.5GB of log space? The destination
> table is allpages, not sure what else is relevant, but how
> can 67MB of data take up 1.5GB in log for straight inserts?
> And based on this, how would I estimate how much space a
> batch of inserts is going to take?
>
> thx,
> rick_806


rick_806 Posted on 2012-06-20 13:27:01.0Z
Sender: 67d5.4fe1ce7f.1804289383@sybase.com
From: rick_806
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: log space for inserts?
X-Mailer: WebNews to Mail Gateway v1.1t
Message-ID: <4fe1cfa5.680b.1681692777@sybase.com>
References: <4fe0fa59$1@forums-1-dub>
NNTP-Posting-Host: 172.20.134.41
X-Original-NNTP-Posting-Host: 172.20.134.41
Date: 20 Jun 2012 06:27:01 -0700
X-Trace: forums-1-dub 1340198821 172.20.134.41 (20 Jun 2012 06:27:01 -0700)
X-Original-Trace: 20 Jun 2012 06:27:01 -0700, 172.20.134.41
Lines: 57
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:13161
Article PK: 91635


> What do you mean by 'The records in this table match the
> table I am going to insert them into'? Are you adding a
> full set of duplicates?

They have the same datatypes.

> Is the target table empty before you start the insert, or
> does it already have rows in it?

No, it is not empty.

> Does the table have an insert trigger?

No triggers.

> Is this a straight insert/select or are you joining with
> other tables?

Straight insert/select.

> Are you doing an
> insert/select-where-not-already-exists-in-target?

No, just insert/select.

> Does the query plan show a deferred operation?

No.

> If the target table has a clustered index ... are you
> inserting the rows in the same order as the clustered
> index?

It has a unique constraint on the primary key and I am
not inserting in key order.

> If the target table has a clustered index ... is it unique
> or non-unique? And if non-unique, do you have some
> lengthy overflow page chains?

Unique; I don't know how to check for "overflow page chains".

> Assuming you're running your insert/select in a quiet
> dataserver, have you tried wrapping the insert with an
> sp_sysmon start/end so that you can capture details on
> transactional activity (eg, volume of page splits)?

No, I haven't tried that.

> Have you tried running your insert with some/most of your
> indexes off of the table ... then add the indexes after
> you've populated your data?

No, the table has many millions of rows and I can't drop
indices except in the maintenance window on the weekend.
>
>
>
> On 06/19/2012 11:53, rick_806 wrote:
> > ASE-CE 15.5 ESD4
> >
> > I had a temp table with 400,000 records in it. According
> > to sp_spaceused this table is 67MB. The records in this
> > table match the table I am going to insert them into.
> > The destination table has 14 indices. When I start
> > inserting these records I use up 1.5GB of log space? The
> > destination table is allpages, not sure what else is
> > relevant, but how can 67MB of data take up 1.5GB in log
> > for straight inserts? And based on this, how would I
> > estimate how much space a batch of inserts is going to
> > take?
> >
> > thx,
> > rick_806


Bret Halford Posted on 2012-06-20 23:31:24.0Z
From: Bret Halford <bret@sybase.com>
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
Newsgroups: sybase.public.ase.performance+tuning
Subject: Re: log space for inserts?
References: <4fe0fa59$1@forums-1-dub> <4fe1cfa5.680b.1681692777@sybase.com>
In-Reply-To: <4fe1cfa5.680b.1681692777@sybase.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4fe25d4c$1@forums-1-dub>
Date: 20 Jun 2012 16:31:24 -0700
X-Trace: forums-1-dub 1340235084 10.22.241.152 (20 Jun 2012 16:31:24 -0700)
X-Original-Trace: 20 Jun 2012 16:31:24 -0700, vip152.sybase.com
Lines: 173
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.performance+tuning:13163
Article PK: 91640


On 6/19/2012 11:53 AM, rick_806 wrote:

> ASE-CE 15.5 ESD4
>
> I had a temp table with 400,000 records in it. According to
> sp_spaceused this table is 67MB. The records in this table
> match the table I am going to insert them into. The
> destination table has 14 indices. When I start inserting
> these records I use up 1.5GB of log space? The destination
> table is allpages, not sure what else is relevant, but how
> can 67MB of data take up 1.5GB in log for straight inserts?
> And based on this, how would I estimate how much space a
> batch of inserts is going to take?
>
> thx,
> rick_806


Hi Rick,

The width of the table and each of the indexes
would be particularly relevant.

Here is what I recommend you do to understand the log
space usage and be able to predict the space
used for various batch sizes.

First create a new database so nobody else will be generating
log records in it, minimum size should be fine.
Create a copy of the table and create the indexes
on the table. Truncate the log, then insert a
single typical row of data. Use DBCC LOG to see the log
records generated by a single insert.

Example:

create database mydb on default
go
use mydb
go
create table t1 (c1 int)
go
create index i1 on t1 (c1)
go
dump tran mydb with truncate_only
go
-- using a named transaction helps you find
-- specific transactions in dbcc log output
begin tran my_insert
go
insert t1 values (255)
go
commit tran
go

dbcc traceon(3604)
go
dbcc log(mydb)
go
1> 2> LOG SCAN DEFINITION:
Database id : 10
Forward scan: starting at beginning of log

LOG RECORDS:
BEGINXACT (1286,0) sessionid=1286,0
attcnt=1 rno=0 op=0 padlen=2 sessionid=1286,0 len=64
odc_stat=0x0000 (0x0000)
loh_status: 0x0 (0x00000000)
masterxsid=(invalid sessionid)
xstat=XBEG_ENDXACT,
spid=28 suid=1 uid=1 masterdbid=0 dtmcord=0
name=$ins time=Jun 20 2012 4:32:52:226PM
[...snipped some older rows here...]

BEGINXACT (1286,17) sessionid=1286,17
attcnt=1 rno=17 op=0 padlen=5 sessionid=1286,17 len=72
odc_stat=0x0000 (0x0000)
loh_status: 0x0 (0x00000000)
masterxsid=(invalid sessionid)
xstat=XBEG_ENDXACT,
spid=28 suid=1 uid=1 masterdbid=0 dtmcord=0
name=my_insert time=Jun 20 2012 4:33:52:526PM

INSERT (1286,18) sessionid=1286,17
attcnt=1 rno=18 op=4 padlen=6 sessionid=1286,17 len=64
odc_stat=0x0000 (0x0000)
loh_status: 0x0 (0x00000000)
objid=720002565 ptnid=720002565 pageno=258 offset=38
status=0x800 (0x0800 (XSTAT_EXPAGE))
cid=0 indid=0
old ts=0x0000 0x00003a96 new ts=0x0000 0x00003aa8
xrow: length=6
000b0dd534 ( 0): 00010000 00ff ......

IINSERT (1286,19) sessionid=1286,17
attcnt=1 rno=19 op=7 padlen=1 sessionid=1286,17 len=64
odc_stat=0x0000 (0x0000)
loh_status: 0x0 (0x00000000)
objid=720002565 ptnid=720002565 pageno=913 offset=43
status=0x800 (0x0800 (XSTAT_EXPAGE))
cid=0 indid=2
old ts=0x0000 0x00003a97 new ts=0x0000 0x00003aa9
xrow: length=11
000b0dd574 ( 0): 00000000 ff000001 020001 ...........

ENDXACT (1286,20) sessionid=1286,17
attcnt=1 rno=20 op=30 padlen=4 sessionid=1286,17 len=32
odc_stat=0x0000 (0x0000)
loh_status: 0x0 (0x00000000)
endstat=COMMIT time=Jun 20 2012 4:33:52:526PM
xstat=0x0 []

(There would be just one BEGINXACT and one ENDXACT for every
batch of inserts, so you can ignore their lengths for large
batches)

So there is one INSERT for the actual data row, and one IINSERT
for each index.
The actual data row stored in the table on disk is 6 bytes
(2 bytes of overhead + a 4 byte int value [rows
have more overhead when there are nullable or variable length
columns])

The INSERT log record itself includes a copy of the data row
and has a total length of 64 bytes (see the "len=64" field)
- so there are 58 bytes of syslogs overhead for every row.

Likewise, the IINSERT log record also has a total length of 64
bytes in this example (an 11-byte index row plus 53 bytes of
log record overhead).

But to the point, each insert into this very small table will
use 128 bytes (64 byte INSERT, 64 byte IINSERT)
of log space for each row inserted. In your
case, for each row inserted, you would sum the "len" values
of the INSERT and each of the 14 IINSERTS. Your lengths would
be different depending on the width of the table and each
index.
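As an illustration of summing those lengths, here is that arithmetic done in SQL with hypothetical record lengths (the 200- and 80-byte values are placeholders; substitute the len= values from your own dbcc log output):

```sql
-- illustrative lengths only; substitute your own len= values
declare @insert_len int, @iinsert_avg int, @num_indexes int, @rows int
select @insert_len = 200, @iinsert_avg = 80, @num_indexes = 14, @rows = 400000

select
    per_row_bytes = @insert_len + (@num_indexes * @iinsert_avg),  -- 1320
    batch_mb      = (@rows * (@insert_len + (@num_indexes * @iinsert_avg)))
                    / (1024.0 * 1024.0)                           -- roughly 500 MB
```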

There will be occasional other log records types dealing
with page and extent allocations, but the bulk of the
space used will be for the INSERT and IINSERT records.

There is an additional complication in that these records
have to fit within pages in syslogs. Each page has a 32
byte header, so a 2K page size server has ~2016 bytes
available for records. The records will rarely fit
exactly, so there will typically be some wasted space
at the bottom of each page. (As a worst case, consider
a table without indexes with rows that are larger than
~1008 bytes. Only one such INSERT record will fit
per page, so each INSERT record would effectively use
a full 2K page.) Another factor, more likely to be seen
on a 16K page size server with tables with very short rows,
is that there is a limit of 256 rows per syslogs page.
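To make the page-fit effect concrete, here is the arithmetic for a 2K page server (2048-byte pages with a 32-byte header) holding the 64-byte log records from the example above:

```sql
-- usable bytes per syslogs page on a 2K server, and the
-- 64-byte log record length from the dbcc log example
declare @usable int, @rec_len int
select @usable = 2016, @rec_len = 64

select
    records_per_page = @usable / @rec_len,                        -- 31 (integer division)
    wasted_bytes     = @usable - (@usable / @rec_len) * @rec_len  -- 32 bytes lost per page
```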


To get a better estimate of the log space used for
larger batches, truncate the log and insert 1,000
or 10,000 sample rows (you will need a bigger test database.)
See how much log space that used
(it will now include some allocation log records
in the mix). That value can be used to reasonably
estimate the log space needed per 1K or 10K rows
for larger batches.
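One way to measure what a sample batch actually used, assuming the dedicated test database above with nothing else writing to its log, is to compare the log segment's free pages before and after using the lct_admin builtin (the row-by-row loop here is just an illustration standing in for your real insert/select):

```sql
declare @before int, @after int, @i int

select @before = lct_admin("logsegment_freepages", db_id())

-- insert 10,000 sample rows in one transaction, so there is a
-- single BEGINXACT/ENDXACT pair as with a batch insert/select
begin tran sample_batch
select @i = 0
while @i < 10000
begin
    insert t1 values (@i)
    select @i = @i + 1
end
commit tran

select @after = lct_admin("logsegment_freepages", db_id())

-- pages consumed times the server's logical page size
-- (@@maxpagesize) gives log bytes per 10,000 rows
select log_bytes_per_10k = (@before - @after) * @@maxpagesize
```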

Mileage may additionally vary due to variable-length
column values.

-bret