I was wondering if anyone has experience with C++ trigger containers and their
performance. I created a C++ DLL exporting a 'MyTrigger' function that simply
returns 0 and takes no further action.
Attaching this trigger to a table with roughly 50,000 records as "AFTER UPDATE"
makes an update query take 35 seconds. Without the trigger it takes just 2 seconds.
Using a local server on the same machine, it takes 5 seconds with the trigger.
Is this normal behavior, or am I doing something wrong?
thanks for any reply.
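For reference, a do-nothing trigger along these lines might look like the sketch below. This is only an illustration of the shape of such a function; the exact signature, parameters, and calling convention Advantage expects from a trigger DLL should be taken from the Advantage documentation, not from this snippet.

```cpp
// trigger.cpp -- minimal sketch of a trigger function that does no work.
// The name 'MyTrigger' matches the post; the empty parameter list and
// return type are assumptions for illustration only.
extern "C"
#ifdef _WIN32
__declspec(dllexport)   // export the symbol from the DLL on Windows
#endif
long MyTrigger(void)
{
    // No work at all: any time measured on top of the plain update is
    // pure per-call overhead (loading/locating the DLL, the call itself,
    // and any implicit transaction handling on the server side).
    return 0;
}
```

Because the function body is empty, the 33-second difference measured above must come entirely from the per-row trigger invocation machinery, not from the trigger logic.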
From: "m.sneijders" <firstname.lastname@example.org>
Subject: C++ trigger container performance
Date: Mon, 11 Feb 2008 16:07:12 +0100
Organization: Florisoft B.V.
Xref: solutions.advantagedatabase.com Advantage.Trigger:364
Article PK: 1136426
Subject: Re: C++ trigger container performance
Date: Fri, 7 Mar 2008 10:47:35 -0700
Xref: solutions.advantagedatabase.com Advantage.Trigger:370
Article PK: 1136430
I did some testing and found similar results. The extra time is the overhead
from using the trigger. If you divide 35 seconds by 50,000, the overhead
for each update is only about 0.7 ms. I found that you can reduce the
overhead by doing the following:
1) Do not use an implicit transaction for the trigger. Using a transaction
causes the transaction log file to be created.
2) Disable DLL caching. Run the following command in ARC after connecting to
the database: "execute procedure sp_modifydatabase( 'DISABLE_DLL_CACHING',