Could someone look at these performance numbers and give me some advice?
Server: NetWare 6.0, P4 2.4 GHz, 1 GB RAM, ADS.NLM version 220.127.116.11
Client: Windows XP, P4 2.4 GHz
Time to insert 1000 records = 765 msec.
When we use a trigger:
Time to insert 1000 records = 2.6 sec
Time to insert 1000 records when the implicit transaction that maintains
data integrity is ON = 6.844 sec.
We use an INSTEAD OF INSERT trigger. The trigger body is defined as:
update __new set clef = -clef_autoinc
where clef is null;
insert into TableName select * from __new;
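For context, the two statements above are the trigger body; the surrounding declaration would look roughly like the following. This is a sketch from memory of Advantage's SQL trigger syntax and may not be exact, so check the ADS documentation for the precise form:

```sql
-- Sketch only: exact Advantage (ADS) CREATE TRIGGER syntax may differ.
CREATE TRIGGER InsteadOfInsertTrig
  ON TableName
  INSTEAD OF INSERT
BEGIN
  -- __new is the virtual table of rows being inserted
  UPDATE __new SET clef = -clef_autoinc WHERE clef IS NULL;
  INSERT INTO TableName SELECT * FROM __new;
END;
```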
Is this normal performance? Is there another way to do this?
For the past 10 years we were using Pervasive products...
The autoinc field in Pervasive behaves differently:
when the autoinc field is null or equal to 0, the database engine
generates a value; if the autoinc value is different from null or 0,
the database engine tries to use that value.
This feature is very useful because when you copy a table, the database
keeps the same values in the autoinc field.
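A quick way to see the two-sided behavior described above is SQLite, whose INTEGER PRIMARY KEY columns work much the same way for NULL (though, unlike Pervasive, SQLite does not treat 0 specially). A minimal sketch for illustration only; the column name `clef` follows the first post, the rest is made up:

```python
import sqlite3

# Illustration: SQLite's INTEGER PRIMARY KEY shows the same two-sided
# behavior the post describes for Pervasive autoinc fields --
# NULL means "generate a value"; an explicit value is stored as given.
# (Unlike Pervasive, SQLite does NOT auto-generate for 0.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (clef INTEGER PRIMARY KEY, payload TEXT)")

conn.execute("INSERT INTO t (clef, payload) VALUES (NULL, 'generated')")
conn.execute("INSERT INTO t (clef, payload) VALUES (42, 'kept as-is')")

rows = conn.execute("SELECT clef, payload FROM t ORDER BY clef").fetchall()
print(rows)  # [(1, 'generated'), (42, 'kept as-is')]
```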
From: "Yves Moreau" <email@example.com>
Date: Fri, 3 Sep 2004 18:40:32 -0400
X-Newsreader: Microsoft Outlook Express 6.00.2800.1158
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1165
X-Trace: 3 Sep 2004 16:41:16 -0700, 18.104.22.168
Xref: solutions.advantagedatabase.com Advantage.Trigger:124
Article PK: 1136190
Subject: Re: Performance
Date: Tue, 7 Sep 2004 15:04:51 -0600
Organization: Extended Systems
Content-Type: text/plain; charset="iso-8859-15"
X-Trace: 7 Sep 2004 15:06:07 -0700, 22.214.171.124
Xref: solutions.advantagedatabase.com Advantage.Trigger:125
Article PK: 1136192
If you send the app and data you used to test I can try it here. You can
send it to firstname.lastname@example.org, attn JD
The transaction time is going to be fairly large, especially if you
don't already have a transaction active when you call INSERT. Most of
the overhead will be from creating a transaction log on disk, then
deleting it. This will happen for every update operation, which will be
slow.
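The usual mitigation for this per-statement cost is to open one explicit transaction around the whole batch, so the transaction setup and teardown is paid once instead of a thousand times. A minimal sketch of the idea, using SQLite only because it is easy to run here; the principle, not the engine, is the point:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (clef INTEGER PRIMARY KEY, payload TEXT)")

# One explicit transaction around the whole batch: per-statement
# transaction setup/teardown happens once instead of 1000 times.
with conn:  # BEGIN ... COMMIT on success
    conn.executemany(
        "INSERT INTO t (payload) VALUES (?)",
        [("row %d" % i,) for i in range(1000)],
    )

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```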
The time to insert 1000 records with a trigger (without transactions)
seems kind of high, but I can run it here and see what is going on. It
might be the general overhead of parsing and running the INSERT
statement twice for each update operation.
In article <email@example.com>, firstname.lastname@example.org
References: <email@example.com> <MPG.firstname.lastname@example.org>
Subject: Re: Performance
Date: Fri, 17 Sep 2004 10:37:53 +0800
X-Newsreader: Microsoft Outlook Express 6.00.2800.1437
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1441
X-Trace: 16 Sep 2004 20:32:16 -0700, 126.96.36.199
Xref: solutions.advantagedatabase.com Advantage.Trigger:129
Article PK: 1136197
Any GLOBAL ACTION on tables with triggers will be SLOW.
You would do much better to have a script do everything, including the
trigger actions, in sequential order.
My guess is that the response would be at least 1000 times faster.
UNFORTUNATELY, to do that you'll have to disable the triggers for the GLOBAL
update, which can currently only be done if you DROP the TRIGGERS beforehand
and reinstate them afterwards.
BUT if any other user has the trigger table open, the DROP TRIGGER will be
ignored until he has closed his connection or you have kicked him off.
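The suggestion above (have the loading script do the trigger's work itself) can be sketched like this. SQLite stands in for the server, the key assignment is deliberately naive (single writer, no concurrent inserts assumed), and the table and column names follow the first post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (clef INTEGER PRIMARY KEY, payload TEXT)")

# Incoming rows, some without a key (None), some with an explicit one --
# the same situation the INSTEAD OF INSERT trigger handles row by row.
incoming = [(None, "a"), (7, "b"), (None, "c")]

# Do the trigger's job once, in the script: fetch the next key value up
# front and fill in the missing keys before one plain bulk insert.
# (Naive: assumes a single writer and no collision with explicit keys.)
next_key = conn.execute(
    "SELECT COALESCE(MAX(clef), 0) + 1 FROM TableName").fetchone()[0]
fixed = []
for clef, payload in incoming:
    if clef is None:
        clef, next_key = next_key, next_key + 1
    fixed.append((clef, payload))

with conn:  # one transaction around the whole bulk insert
    conn.executemany("INSERT INTO TableName VALUES (?, ?)", fixed)

print(sorted(fixed))  # [(1, 'a'), (2, 'c'), (7, 'b')]
```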