
Database load from a compressed dump runs any faster

8 posts in Backup and Recovery. Last posting was on 2009-06-16 14:12:39.0Z
Audrey Won Posted on 2009-06-11 15:27:41.0Z
From: "Audrey Won" <annieseewhy@gmail.com>
Newsgroups: sybase.public.ase.backup+recovery
Subject: Database load from a compressed dump runs any faster
Lines: 14
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.5512
X-RFC2646: Format=Flowed; Original
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5512
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4a31226d$1@forums-1-dub>
Date: 11 Jun 2009 08:27:41 -0700
X-Trace: forums-1-dub 1244734061 10.22.241.152 (11 Jun 2009 08:27:41 -0700)
X-Original-Trace: 11 Jun 2009 08:27:41 -0700, vip152.sybase.com
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.backup+recovery:3978
Article PK: 48066

We have a production server and a test server that are located at different
facilities. Every night we dump the production database (170G) and load the
dump to the test server. Both servers are ASE 12.5.4 and run on IBM AIX. The
data load to the test server usually takes 8.5 hours. Recently, because of
network saturation, the load can take more than 18 hours on some days. I am
wondering whether doing a compressed dump and loading from the much smaller
dump file would reduce the network traffic volume. I couldn't figure out whether
the data gets decompressed remotely on the production server side or locally
on the test server side. I could set up an environment to test this out,
but I would like to understand the theory before testing it.

Any help is appreciated.


Jason L. Froebe [TeamSybase] Posted on 2009-06-11 17:15:47.0Z
From: "Jason L. Froebe [TeamSybase]" <jason@froebe.net>
Newsgroups: sybase.public.ase.backup+recovery
Subject: Re: Database load from a compressed dump runs any faster
References: <4a31226d$1@forums-1-dub>
Message-ID: <4a313bc3@forums-1-dub>
Date: 11 Jun 2009 10:15:47 -0700



If you're talking about using two backup servers, then communication between
the two backup servers is uncompressed. You would want to dump compressed,
ftp/scp the file to the test server and load it locally.
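
If it helps, here is a rough sketch of that dump / copy / load sequence. This is only a
sketch: the server names, the password variable, and the /dumps paths are placeholders,
and the "compress::" archive syntax is the ASE 12.5.x form.

#!/bin/sh
# 1) Compressed dump on the production host (ASE 12.5.x "compress::" archive syntax)
isql -Usa -P"$SA_PWD" -SPROD_ASE <<'EOF'
dump database proddb to "compress::1::/dumps/proddb.dmp.Z"
go
EOF

# 2) Copy the much smaller file to the test host (scp shown; ftp works the same way)
scp /dumps/proddb.dmp.Z testhost:/dumps/

# 3) Load locally on the test host with its own backupserver, then bring the db online
isql -Usa -P"$SA_PWD" -STEST_ASE <<'EOF'
load database proddb from "compress::/dumps/proddb.dmp.Z"
go
online database proddb
go
EOF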

--
Join other Sybase enthusiasts on the #sybase IRC channel!
(irc.freenode.net)


Audrey Won Posted on 2009-06-11 19:02:22.0Z
From: "Audrey Won" <annieseewhy@gmail.com>
Newsgroups: sybase.public.ase.backup+recovery
References: <4a31226d$1@forums-1-dub> <4a313bc3@forums-1-dub>
Subject: Re: Database load from a compressed dump runs any faster
Lines: 38
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.5512
X-RFC2646: Format=Flowed; Original
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5512
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4a3154be@forums-1-dub>
Date: 11 Jun 2009 12:02:22 -0700
X-Trace: forums-1-dub 1244746942 10.22.241.152 (11 Jun 2009 12:02:22 -0700)
X-Original-Trace: 11 Jun 2009 12:02:22 -0700, vip152.sybase.com
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.backup+recovery:3982
Article PK: 48071

Thanks, Jason.

Yes, each server uses its own backup server. I will try FTP and see
how it works.

"Jason L. Froebe [TeamSybase]" <jason@froebe.net> wrote in message
news:4a313bc3@forums-1-dub...
> Audrey Won wrote:
>
>> We have a production server and a test server that are located at
>> different facilities. Every night we dump the production database (170G)
>> and load the dump to the test server. Both server are ASE12.5.4 and run
>> on
>> IBM AIX. The data load to the test server usually runs 8.5 hours.
>> Recently
>> because of network saturation, the load can take more than 18 hours on
>> some days. I am wondering if we do a compressed dump and load from the
>> much smaller dump file would reduce the network traffic volume. I
>> couldn't
>> figure out whether the data gets decompressed remotely on the production
>> server side or locally on the test server side. I could manage to set up
>> an environment to test this out. But I would like to know the theory
>> before even test it.
>>
>> Any help is appreciated.
>
> If you're talking about using two backup servers, then communication
> between
> the two backup servers is uncompressed. You would want to dump
> compressed,
> ftp/scp the file to the test server and load it locally.
>
> --
> Join other Sybase enthusiasts on the #sybase IRC channel!
> (irc.freenode.net)
>


"Mark A. Parsons" <iron_horse Posted on 2009-06-11 17:16:57.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.10 (Windows/20070221)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.backup+recovery
Subject: Re: Database load from a compressed dump runs any faster
References: <4a31226d$1@forums-1-dub>
In-Reply-To: <4a31226d$1@forums-1-dub>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-Antivirus: avast! (VPS 090606-0, 06/06/2009), Outbound message
X-Antivirus-Status: Clean
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4a313c09$1@forums-1-dub>
Date: 11 Jun 2009 10:16:57 -0700
X-Trace: forums-1-dub 1244740617 10.22.241.152 (11 Jun 2009 10:16:57 -0700)
X-Original-Trace: 11 Jun 2009 10:16:57 -0700, vip152.sybase.com
Lines: 73
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.backup+recovery:3980
Article PK: 48068

It's not clear (to me) from your description where the dump file(s) reside and how you're getting the dump loaded into
the test server (eg, NFS mounted dump file, remote backupserver access, copy file and then load with local
backupserver), so fwiw ...

-------------------

First some background ...

When performing a 'local' database dump the backupserver will spawn 2 sybmultbuf processes for each dump stripe (one to
read from the database device(s), one to write to the dump device).

When performing a 'local' database load the backupserver will spawn 2 sybmultbuf processes for each dump stripe (one to
read from the dump device, one to write to the database device(s)).

When performing a 'remote' database dump (or load) via a remote backupserver, the local backupserver will spawn 2
sybmultbuf processes for each dump stripe (one to read/write the database device(s), one to communicate with the remote
backupserver's sybmultbuf process). The remote backupserver will spawn a single sybmultbuf process for each dump
stripe; said sybmultbuf process then communicates with the local backupserver's sybmultbuf process *and* performs the
reads/writes of the dump device.

During compressed dumps/loads the sybmultbuf process that manages the dump device will perform the actual compression.
For a remote dump (load) this means the full/uncompressed data stream is sent over the network before (after) the remote
sybmultbuf process performs the compression (decompression) step.
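
As a concrete illustration (database name, login, paths, and the backupserver name are
placeholders), the two dump forms look roughly like this; only the first one compresses the
data before anything crosses the network:

# Local backupserver: the sybmultbuf writing the archive does the compression,
# so only compressed data ever leaves the host.
isql -Usa -P"$SA_PWD" -SPROD_ASE <<'EOF'
dump database proddb to "compress::1::/dumps/proddb.dmp.Z"
go
EOF

# Remote backupserver (the "at" clause): the uncompressed stream crosses the
# network first; the remote sybmultbuf compresses it while writing the file.
isql -Usa -P"$SA_PWD" -SPROD_ASE <<'EOF'
dump database proddb to "compress::1::/dumps/proddb.dmp.Z" at TEST_BACKUPSERVER
go
EOF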

NOTE #1: The sybmultbuf process performing the (de)compression tends to be a cpu-intensive process. This can have a
negative effect on other processes running on an over-utilized machine.

NOTE #2: Compressed dumps/loads tend to take longer to perform than uncompressed dumps/loads. The higher the
compression level the longer the dump/load typically takes to complete.

-------------------

Whether or not compressed dumps will help with your situation depends on where the network comes into play in your
dump/load scenarios.

If dumping/loading using only a 'local' backupserver then compression could help reduce network traffic where the dump
device(s) is accessed over a network (eg, NFS, SAN, etc); in this case the sybmultbuf process will perform the
compression *before* the network is accessed.

If dumping/loading using a 'remote' backupserver then compression may not be of any benefit since the uncompressed data
has to cross the network before (after) the remote sybmultbuf process performs its compression (for dumps) or
decompression (for loads).

If you perform a local, uncompressed dump on the production host, copy the dump file(s) to a local directory on the test
machine, and then perform a local load on the test machine, then you may see some benefit. Obviously (?) the copying of
the dump file(s) will incur a network overhead. Obviously (?) the network overhead could be reduced if you use
compression to reduce the size of the dump file(s).

-------------------

Depending on cpu resources (on production and test machines), network capabilities, and your dump/load topology ...
the compressed dumps may or may not help. In a *worst* case scenario you experience the same network overhead you
currently deal with *and* you incur additional cpu overhead for the (de)compression *and* you negatively affect other
processes running on the machine where the (de)compression occurs.

So, will compression help you? yes ... no ... maybe ... *shrug* ...



Audrey Won Posted on 2009-06-11 18:50:16.0Z
From: "Audrey Won" <annieseewhy@gmail.com>
Newsgroups: sybase.public.ase.backup+recovery
References: <4a31226d$1@forums-1-dub> <4a313c09$1@forums-1-dub>
Subject: Re: Database load from a compressed dump runs any faster
Lines: 114
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.5512
X-RFC2646: Format=Flowed; Response
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5512
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4a3151e8$1@forums-1-dub>
Date: 11 Jun 2009 11:50:16 -0700
X-Trace: forums-1-dub 1244746216 10.22.241.152 (11 Jun 2009 11:50:16 -0700)
X-Original-Trace: 11 Jun 2009 11:50:16 -0700, vip152.sybase.com
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.backup+recovery:3981
Article PK: 48070

Mark:
Thank you VERY much for your detailed explanation of all the dumping
(loading) processes.

The dump file is made available to the test server through an NFS mount.
The servers do use a 'local' backupserver to do the dump (load).

According to your explanation, when a 'local' backupserver loads from a dump
device accessed over a network (eg, NFS, SAN, etc), the sybmultbuf process
performs the decompression *after* the network is accessed, so I think a
compressed dump will reduce the network load. The CPU on our test server is
pretty much under-utilized, so I think doing the decompression will not
necessarily slow down the loading process.
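
A minimal sketch of that load, assuming the production dump directory is NFS-mounted on the
test host at /nfs/dumps (the path, login, and server name are placeholders):

isql -Usa -P"$SA_PWD" -STEST_ASE <<'EOF'
-- the local backupserver reads the compressed file across the NFS mount
-- and its sybmultbuf decompresses it on the test host
load database proddb from "compress::/nfs/dumps/proddb.dmp.Z"
go
online database proddb
go
EOF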

I will also try your suggestion of copying the compressed file to a
local directory on the test server and then loading it.

Thanks again.

"Mark A. Parsons" <iron_horse@no_spamola.compuserve.com> wrote in message
news:4a313c09$1@forums-1-dub...
> It's not clear (to me) from your description where the dump file(s) reside
> and how you're getting the dump loaded into the test server (eg, NFS
> mounted dump file, remote backupserver access, copy file and then load
> with local backupserver), so fwiw ...
>
> -------------------
>
> First some background ...
>
> When performing a 'local' database dump the backupserver will spawn 2
> sybmultbuf processes for each dump stripe (one to read from the database
> device(s), one to write to the dump device).
>
> When performing a 'local' database load the backupserver will spawn 2
> sybmultbuf processes for each dump stripe (one to read from the dump
> device, one to write to the database devices(s)).
>
> When performing a 'remote' database dump (or load) via a remote
> backupserver, the local backupserver will spawn 2 sybmultbuf processes for
> each dump stripe (one to read/write the database device(s), one to
> communicate with the remote backupserver's sybmultbuf process). The
> remote backupserver will spawn a single sybmultbuf process for each dump
> stripe; said sybmultbuf process then communicates with the local
> backupserver's sybmultbuf process *and* performs the reads/writes of the
> dump device.
>
> During compressed dumps/loads the sybmultbuf process that manages the dump
> device will perform the actual compression. For a remote dump (load) this
> means the full/uncompressed data stream is sent over the network before
> (after) the remote sybmultbuf process performs the compression
> (decompression) step.
>
> NOTE #1: The sybmultbuf process performing the (de)compression tends to
> be a cpu-intensive process. This can have a negative effect on other
> processes running on an over-utilized machine.
>
> NOTE #2: Compressed dumps/loads tend to take longer to perform than
> uncompressed dumps/loads. The higher the compression level the longer the
> dump/load typically takes to complete.
>
> -------------------
>
> Whether or not compressed dumps will help with your situation depends on
> where the network comes into play in your dump/load scenarios.
>
> If dumping/loading using only a 'local' backupserver then compression
> could help reduce network traffic where the dump device(s) is accessed
> over a network (eg, NFS, SAN, etc); in this case the sybmultbuf process
> will perform the compression *before* the network is accessed.
>
> If dumping/loading using a 'remote' backupserver then compression may not
> be of any benefit since the uncompressed data has to cross the network
> before (after) the remote sybmultbuf process performs its compression (for
> dumps) or decompression (for loads).
>
> If you perform a local, uncompressed dump on the production host, copy the
> dump file(s) to a local directory on the test machine, and then perform a
> local load on the test machine, then you may see some benefit. Obviously
> (?) the copying of the dump file(s) will incur a network overhead.
> Obviously (?) the network overhead could be reduced if you use compression
> to reduce the size of the dump file(s).
>
> -------------------
>
> Depending on cpu resources (on production and test machines), network
> capabilities, and your dump/load topography ... the compressed dumps may
> or may not help. In a *worse* case scenario you experience the same
> network overhead you currently deal with *and* you incur additional cpu
> overhead for the (de)compression *and* you negatively affect other
> processes running on the machine where the (de)compression occurs.
>
> So, will compression help you? yes ... no ... maybe ... *shrug* ...
>
>
> Audrey Won wrote:
>> We have a production server and a test server that are located at
>> different facilities. Every night we dump the production database (170G)
>> and load the dump to the test server. Both server are ASE12.5.4 and run
>> on IBM AIX. The data load to the test server usually runs 8.5 hours.
>> Recently because of network saturation, the load can take more than 18
>> hours on some days. I am wondering if we do a compressed dump and load
>> from the much smaller dump file would reduce the network traffic volume.
>> I couldn't figure out whether the data gets decompressed remotely on the
>> production server side or locally on the test server side. I could manage
>> to set up an environment to test this out. But I would like to know the
>> theory before even test it.
>>
>> Any help is appreciated.


"Mark A. Parsons" <iron_horse Posted on 2009-06-11 19:07:51.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.10 (Windows/20070221)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.backup+recovery
Subject: Re: Database load from a compressed dump runs any faster
References: <4a31226d$1@forums-1-dub> <4a313c09$1@forums-1-dub> <4a3151e8$1@forums-1-dub>
In-Reply-To: <4a3151e8$1@forums-1-dub>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-Antivirus: avast! (VPS 090606-0, 06/06/2009), Outbound message
X-Antivirus-Status: Clean
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4a315607$1@forums-1-dub>
Date: 11 Jun 2009 12:07:51 -0700
X-Trace: forums-1-dub 1244747271 10.22.241.152 (11 Jun 2009 12:07:51 -0700)
X-Original-Trace: 11 Jun 2009 12:07:51 -0700, vip152.sybase.com
Lines: 132
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.backup+recovery:3983
Article PK: 48072

Don't forget to check cpu utilization on the production machine, ie, make sure you've got the extra cpu cycles to
support sybmultbuf's compression requirements while the dump is running.

You'll also need to test which compression level you wish to use.

Most clients I've worked with use a compression level somewhere between 1 and 3:

1 : decent compression (25-45%) and typically adds very little time (2-5%) to the dump/load process

2,3 : slightly better compression (5-15% better than 1), with a bump up (5-15%) in total dump/load times

NOTE: The compression percentage usually comes down to an issue of the types/volumes of data ... character data
compresses quite well, while numeric/binary data doesn't compress as well.

When going above compression=3 you'll typically see a quick drop off in benefits, ie, very little additional reduction
in space usage and ever-increasing times (50-400%) for the dump/load process.
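
One way to run that comparison is a quick timing loop like the sketch below (server name,
login, and paths are placeholders); compare the elapsed dump times and the resulting archive
sizes for each level:

#!/bin/sh
# Compare compression levels 1-3: time each dump and note the archive size.
for lvl in 1 2 3
do
  echo "=== compression level $lvl ==="
  time isql -Usa -P"$SA_PWD" -SPROD_ASE <<EOF
dump database proddb to "compress::${lvl}::/dumps/proddb.l${lvl}.dmp.Z"
go
EOF
  ls -l "/dumps/proddb.l${lvl}.dmp.Z"
done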



Audrey Won Posted on 2009-06-16 13:27:24.0Z
From: "Audrey Won" <annieseewhy@gmail.com>
Newsgroups: sybase.public.ase.backup+recovery
References: <4a31226d$1@forums-1-dub> <4a313c09$1@forums-1-dub> <4a3151e8$1@forums-1-dub> <4a315607$1@forums-1-dub>
Subject: Re: Database load from a compressed dump runs any faster
Lines: 150
X-Priority: 3
X-MSMail-Priority: Normal
X-Newsreader: Microsoft Outlook Express 6.00.2900.5512
X-RFC2646: Format=Flowed; Response
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5512
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4a379dbc$1@forums-1-dub>
Date: 16 Jun 2009 06:27:24 -0700
X-Trace: forums-1-dub 1245158844 10.22.241.152 (16 Jun 2009 06:27:24 -0700)
X-Original-Trace: 16 Jun 2009 06:27:24 -0700, vip152.sybase.com
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.backup+recovery:3986
Article PK: 48074

I tested it and it worked!

We use the default level, which is level 1. This level of compression works very
well for us. The compressed dump file is only 16% of the original size. The
production dump time increased by about 30%, and the test restore time was reduced
by 50%.

Thank you very much for your help, Mark.

"Mark A. Parsons" <iron_horse@no_spamola.compuserve.com> wrote in message
news:4a315607$1@forums-1-dub...
> Don't forget to check for cpu utilization on the production machine, ie,
> make sure you've got the extra cpu cycles to support the sybmultbuf's
> compression requirements on your production machine.
>
> You'll also need to test which compression level you wish to use.
>
> Most clients I've worked with use a compression level somewhere between 1
> and 3:
>
> 1 : decent compression (25-45%) and typically adds very little time (2-5%)
> to the dump/load process
>
> 2,3 : slightly better compression (5-15% better than 1), with a bump up
> (5-15%) in total dump/load times
>
> NOTE: The compression percentage usually comes down to an issue of the
> types/volumes of data ... character data compresses quite well, while
> numeric/binary data doesn't compress as well.
>
> When going above compression=3 you'll typically see a quick drop off in
> benefits, ie, very little additional reduction in space usage and
> ever-increasing times (50-400%) for the dump/load process.
>
> Audrey Won wrote:
>> Mark:
>> Thank you VERY much for your detailed explanation of all the
>> dumping (loading) processes.
>>
>> The dump file is made available to the test server through a
>> mounted NFS. The servers do use a 'local' backupserver to do the dump
>> (load).
>>
>> According to your theory about using a 'local' backupserver to
>> load from a dump
>> device(s) accessed over a network (eg, NFS, SAN, etc), the sybmultbuf
>> process will perform the decompression *after* the network is accessed.
>> So I think it will reduce network load. The CPU on our test server is
>> pretty much under-utilized, so I think doing the decompression will not
>> necessarily slow down the loading process.
>>
>> I will also try your suggestion of copying the compressed file
>> to a local directory on the test server and then load it.
>>
>> Thanks again.
>>
>>
>> "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com> wrote in message
>> news:4a313c09$1@forums-1-dub...
>>> It's not clear (to me) from your description where the dump file(s)
>>> reside and how you're getting the dump loaded into the test server (eg,
>>> NFS mounted dump file, remote backupserver access, copy file and then
>>> load with local backupserver), so fwiw ...
>>>
>>> -------------------
>>>
>>> First some background ...
>>>
>>> When performing a 'local' database dump the backupserver will spawn 2
>>> sybmultbuf processes for each dump stripe (one to read from the database
>>> device(s), one to write to the dump device).
>>>
>>> When performing a 'local' database load the backupserver will spawn 2
>>> sybmultbuf processes for each dump stripe (one to read from the dump
>>> device, one to write to the database devices(s)).
>>>
>>> When performing a 'remote' database dump (or load) via a remote
>>> backupserver, the local backupserver will spawn 2 sybmultbuf processes
>>> for each dump stripe (one to read/write the database device(s), one to
>>> communicate with the remote backupserver's sybmultbuf process). The
>>> remote backupserver will spawn a single sybmultbuf process for each dump
>>> stripe; said sybmultbuf process then communicates with the local
>>> backupserver's sybmultbuf process *and* performs the reads/writes of the
>>> dump device.
>>>
>>> During compressed dumps/loads the sybmultbuf process that manages the
>>> dump device will perform the actual compression. For a remote dump
>>> (load) this means the full/uncompressed data stream is sent over the
>>> network before (after) the remote sybmultbuf process performs the
>>> compression (decompression) step.
>>>
>>> NOTE #1: The sybmultbuf process performing the (de)compression tends to
>>> be a cpu-intensive process. This can have a negative effect on other
>>> processes running on an over-utilized machine.
>>>
>>> NOTE #2: Compressed dumps/loads tend to take longer to perform than
>>> uncompressed dumps/loads. The higher the compression level the longer
>>> the dump/load typically takes to complete.
>>>
>>> -------------------
>>>
>>> Whether or not compressed dumps will help with your situation depends on
>>> where the network comes into play in your dump/load scenarios.
>>>
>>> If dumping/loading using only a 'local' backupserver then compression
>>> could help reduce network traffic where the dump device(s) is accessed
>>> over a network (eg, NFS, SAN, etc); in this case the sybmultbuf process
>>> will perform the compression *before* the network is accessed.
>>>
>>> If dumping/loading using a 'remote' backupserver then compression may
>>> not be of any benefit since the uncompressed data has to cross the
>>> network before (after) the remote sybmultbuf process performs its
>>> compression (for dumps) or decompression (for loads).
>>>
>>> If you perform a local, uncompressed dump on the production host, copy
>>> the dump file(s) to a local directory on the test machine, and then
>>> perform a local load on the test machine, then you may see some benefit.
>>> Obviously (?) the copying of the dump file(s) will incur a network
>>> overhead. Obviously (?) the network overhead could be reduced if you use
>>> compression to reduce the size of the dump file(s).
>>>
>>> -------------------
>>>
>>> Depending on cpu resources (on production and test machines), network
>>> capabilities, and your dump/load topography ... the compressed dumps may
>>> or may not help. In a *worse* case scenario you experience the same
>>> network overhead you currently deal with *and* you incur additional cpu
>>> overhead for the (de)compression *and* you negatively affect other
>>> processes running on the machine where the (de)compression occurs.
>>>
>>> So, will compression help you? yes ... no ... maybe ... *shrug* ...
>>>
>>>
>>> Audrey Won wrote:
>>>> We have a production server and a test server that are located at
>>>> different facilities. Every night we dump the production database
>>>> (170G) and load the dump to the test server. Both server are ASE12.5.4
>>>> and run on IBM AIX. The data load to the test server usually runs 8.5
>>>> hours. Recently because of network saturation, the load can take more
>>>> than 18 hours on some days. I am wondering if we do a compressed dump
>>>> and load from the much smaller dump file would reduce the network
>>>> traffic volume. I couldn't figure out whether the data gets
>>>> decompressed remotely on the production server side or locally on the
>>>> test server side. I could manage to set up an environment to test this
>>>> out. But I would like to know the theory before even test it.
>>>>
>>>> Any help is appreciated.
>>


"Mark A. Parsons" <iron_horse Posted on 2009-06-16 14:12:39.0Z
From: "Mark A. Parsons" <iron_horse@no_spamola.compuserve.com>
User-Agent: Thunderbird 1.5.0.10 (Windows/20070221)
MIME-Version: 1.0
Newsgroups: sybase.public.ase.backup+recovery
Subject: Re: Database load from a compressed dump runs any faster
References: <4a31226d$1@forums-1-dub> <4a313c09$1@forums-1-dub> <4a3151e8$1@forums-1-dub> <4a315607$1@forums-1-dub> <4a379dbc$1@forums-1-dub>
In-Reply-To: <4a379dbc$1@forums-1-dub>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-Antivirus: avast! (VPS 090612-0, 06/12/2009), Outbound message
X-Antivirus-Status: Clean
NNTP-Posting-Host: vip152.sybase.com
X-Original-NNTP-Posting-Host: vip152.sybase.com
Message-ID: <4a37a857$1@forums-1-dub>
Date: 16 Jun 2009 07:12:39 -0700
X-Trace: forums-1-dub 1245161559 10.22.241.152 (16 Jun 2009 07:12:39 -0700)
X-Original-Trace: 16 Jun 2009 07:12:39 -0700, vip152.sybase.com
Lines: 170
Path: forums-1-dub!not-for-mail
Xref: forums-1-dub sybase.public.ase.backup+recovery:3987
Article PK: 48078

An 84% reduction in the dump file size? Very nice!

The extra 30% during the dump sounds a little high ... are you seeing 100% utilization on any of your production cpus
during the database dump?

----------

If you've got extra/free cpu cycles on the production and test machines you may also want to look at striping the
compressed dump across multiple devices.

The main objective is to see whether you can speed up the dump/load process by spreading the (de)compression across
multiple cpus.

Obviously this may not be a good idea if one or both of your machines are already cpu-bound.

Another potential downside to this scenario is that you'll have multiple sybmultbuf processes reading from your
production database devices; if you have a lot of user activity in the production database while performing the
multi-striped dump then you could see some contention on the database devices. (The amount of disk contention will
depend on the capabilities of your disk subsystem.)
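
A sketch of what a three-stripe compressed dump and load might look like (database name,
logins, server names, and paths are placeholders); each stripe gets its own sybmultbuf pair,
which is what spreads the (de)compression across more cpus:

isql -Usa -P"$SA_PWD" -SPROD_ASE <<'EOF'
dump database proddb to "compress::1::/dumps/proddb.s1.Z"
  stripe on "compress::1::/dumps/proddb.s2.Z"
  stripe on "compress::1::/dumps/proddb.s3.Z"
go
EOF

# The load must reference the same set of stripe files.
isql -Usa -P"$SA_PWD" -STEST_ASE <<'EOF'
load database proddb from "compress::/dumps/proddb.s1.Z"
  stripe on "compress::/dumps/proddb.s2.Z"
  stripe on "compress::/dumps/proddb.s3.Z"
go
online database proddb
go
EOF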
