Wednesday, March 7, 2012

ALTER DATABASE (optimization job)

Hi all
I have a warehouse database with a 16 GB data file and the FULL recovery model.
Each week I do a night run for optimization with the options "Reorganize
data and index pages" and "Change free space per page percentage to 10%".
When this night run occurs, the transaction log on my database grows to
17 GB because of the recreation of indexes and so on.
I was wondering about the following, so I could solve the problem of 17 GB
transaction logs. Disks are not cheap in an external subsystem with
mirrors and stripes.
1) Before the optimization job starts, back up the database
2) After the backup, change the database recovery model to SIMPLE (so no log
will be recorded)
3) Back up the database again (now in SIMPLE mode) so the transaction log can
be shrunk
4) Run the optimization job
5) Change the database recovery model back to FULL
6) Back up the database again (now in FULL mode)
That's the solution I have thought of, and all of it would be done by the
night run.
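In T-SQL I suppose the switch itself would be something like this ("Warehouse"
stands in for my database's name):

    -- Step 2: switch to SIMPLE for the maintenance window
    ALTER DATABASE Warehouse SET RECOVERY SIMPLE

    -- ...steps 3 and 4 (log shrink, optimization job) run here...

    -- Step 5: switch back to FULL afterwards
    ALTER DATABASE Warehouse SET RECOVERY FULL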
How dangerous is it to change the recovery model of the database before you
run a job? Is the risk high?
Is there another way to perform the task without having my transaction log
grow so much?
Thanks in advance
Dimitris Dimolas
Web Programmer
Greece

Why do you want to shrink the file each week? SQL Server is designed to work
with pre-allocation of space. By not shrinking, you don't have to do anything
special at all!
http://www.karaszi.com/sqlserver/info_dont_shrink.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Dimitris" <seeyou_gr@.hotmail.com> wrote in message news:uW1IwixPEHA.3216@.TK2MSFTNGP12.phx.g
bl...
> Hi all
> I have a database (warehouse) with a data file 16GB with Recovery model FU
LL
> And each week I do a night run for optimization with the options "Reorgani
ze
> data and index pages" and "Change free space per page percentage to 10%"
> When this night run occurs, the transaction log on my database increases t
o
> 17GB
> cause to recreation of indexes and so on.
> I was wondering about the following, so I could solve the problem of 17GB
> transaction logs. The disks are not cheap in an external sub-system with
> mirrors and stripes.
> 1) Before the optimization job start backup the database
> 2) After the backup change the database recovery model to simple (so no lo
g
> will be recorded
> 3) Backup again database (now on simple mode) so transaction log will be
> shrink
> 4) Run the optimization job
> 5) Change the database recovery model back to FULL
> 6) Backup again database (now in FULL mode)
>
> That's the solution I have thought.and all that will be done by the night
> run
>
> How dangerous is to change the recovery model of the database before you r
un
> a job' Is the risk high'
>
> Is there an other way to perform the task, without having my transaction l
og
> increased so match?
>
> Thanks in advance
>
> Dimitris Dimolas
> Web Programmer
> Greece
>|||In addition to the other responses, see the whitepaper at
http://www.microsoft.com/technet/pr...n/ss2kidbp.mspx
for index maintenance tips.
Hope this helps.
Dan Guzman
SQL Server MVP
"Dimitris" <seeyou_gr@.hotmail.com> wrote in message
news:uW1IwixPEHA.3216@.TK2MSFTNGP12.phx.gbl...
> Hi all
> I have a database (warehouse) with a data file 16GB with Recovery model
FULL
> And each week I do a night run for optimization with the options
"Reorganize
> data and index pages" and "Change free space per page percentage to 10%"
> When this night run occurs, the transaction log on my database increases
to
> 17GB
> cause to recreation of indexes and so on.
> I was wondering about the following, so I could solve the problem of 17GB
> transaction logs. The disks are not cheap in an external sub-system with
> mirrors and stripes.
> 1) Before the optimization job start backup the database
> 2) After the backup change the database recovery model to simple (so no
log
> will be recorded
> 3) Backup again database (now on simple mode) so transaction log will be
> shrink
> 4) Run the optimization job
> 5) Change the database recovery model back to FULL
> 6) Backup again database (now in FULL mode)
>
> That's the solution I have thought.and all that will be done by the night
> run
>
> How dangerous is to change the recovery model of the database before you
run
> a job' Is the risk high'
>
> Is there an other way to perform the task, without having my transaction
log
> increased so match?
>
> Thanks in advance
>
> Dimitris Dimolas
> Web Programmer
> Greece

My transactions in a day are about 4 GB,
so why should I have a log of 17 GB?
And the question isn't why, but how I can do it.
You can understand, of course, that if you have a 4 GB log per day, there is
no reason to have a 17 GB log when you are trying to optimize your database.
There is no need.
Thanks
"Dimitris" <seeyou_gr@.hotmail.com> wrote in message
news:uW1IwixPEHA.3216@.TK2MSFTNGP12.phx.gbl...
> Hi all
> I have a database (warehouse) with a data file 16GB with Recovery model
FULL
> And each week I do a night run for optimization with the options
"Reorganize
> data and index pages" and "Change free space per page percentage to 10%"
> When this night run occurs, the transaction log on my database increases
to
> 17GB
> cause to recreation of indexes and so on.
> I was wondering about the following, so I could solve the problem of 17GB
> transaction logs. The disks are not cheap in an external sub-system with
> mirrors and stripes.
> 1) Before the optimization job start backup the database
> 2) After the backup change the database recovery model to simple (so no
log
> will be recorded
> 3) Backup again database (now on simple mode) so transaction log will be
> shrink
> 4) Run the optimization job
> 5) Change the database recovery model back to FULL
> 6) Backup again database (now in FULL mode)
>
> That's the solution I have thought.and all that will be done by the night
> run
>
> How dangerous is to change the recovery model of the database before you
run
> a job' Is the risk high'
>
> Is there an other way to perform the task, without having my transaction
log
> increased so match?
>
> Thanks in advance
>
> Dimitris Dimolas
> Web Programmer
> Greece
>|||Here are a couple of things to consider:
It sounds like you are using the database maintenance plan to reindex,
is this right? If so, I would instead script out a job which will
only reindex those indexes which are fragmented. The maintenance plan
actually runs on all indexes, which will cause a large
transaction log. BOL has a nice sample under the DBCC SHOWCONTIG
examples. You can work with that sample to decide at what level
of fragmentation you want to run the commands.
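Condensed, the approach in that sample looks something like this (a sketch
for SQL Server 2000; the 30 percent threshold is just an example value to
adjust):

    SET NOCOUNT ON
    DECLARE @tablename varchar(128), @execstr varchar(255)
    DECLARE @objectid int, @indexid int, @frag decimal, @maxfrag decimal
    SELECT @maxfrag = 30.0  -- only defrag indexes above this fragmentation

    -- Collect fragmentation info for all indexes of all user tables
    CREATE TABLE #fraglist (
      ObjectName char(255), ObjectId int, IndexName char(255), IndexId int,
      Lvl int, CountPages int, CountRows int, MinRecSize int, MaxRecSize int,
      AvgRecSize int, ForRecCount int, Extents int, ExtentSwitches int,
      AvgFreeBytes int, AvgPageDensity int, ScanDensity decimal,
      BestCount int, ActualCount int, LogicalFrag decimal, ExtentFrag decimal)

    DECLARE tables CURSOR FOR
      SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
      WHERE TABLE_TYPE = 'BASE TABLE'
    OPEN tables
    FETCH NEXT FROM tables INTO @tablename
    WHILE @@FETCH_STATUS = 0
    BEGIN
      INSERT INTO #fraglist
      EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
             WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
      FETCH NEXT FROM tables INTO @tablename
    END
    CLOSE tables
    DEALLOCATE tables

    -- Defragment only the indexes above the threshold
    DECLARE indexes CURSOR FOR
      SELECT ObjectName, ObjectId, IndexId, LogicalFrag FROM #fraglist
      WHERE LogicalFrag >= @maxfrag
        AND INDEXPROPERTY(ObjectId, IndexName, 'IndexDepth') > 0
    OPEN indexes
    FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
    WHILE @@FETCH_STATUS = 0
    BEGIN
      SELECT @execstr = 'DBCC INDEXDEFRAG (0, '
                      + CONVERT(varchar(15), @objectid) + ', '
                      + CONVERT(varchar(15), @indexid) + ')'
      EXEC (@execstr)
      FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
    END
    CLOSE indexes
    DEALLOCATE indexes
    DROP TABLE #fraglist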
One other thing to consider: if you are using identity columns for
primary keys and clustered indexes, you are currently rebuilding these
with every run of the maintenance plan. Identity columns rarely become
fragmented, unless you have enabled identity insert or you have a
large amount of delete activity. Chances are that you can probably
get by without reindexing these very often. The script mentioned
above will make this determination.
Your plan to change the recovery model won't necessarily work. The
log will still grow large, as the log cannot be cleared until the
entire DBCC DBREINDEX (which the maintenance plan runs) completes. I
would be willing to bet that if you change the job to only address
those indexes which are in need of defragmentation, you probably won't
have a log issue. If this is not the case, and you would like to
back up the log intermittently throughout the process, you can choose
to run a DBCC INDEXDEFRAG instead. Per BOL on INDEXDEFRAG:
"In addition, the defragmentation is always fully logged, regardless
of the database recovery model setting (see ALTER DATABASE). The
defragmentation of a very fragmented index can generate more log than
even a fully logged index creation. The defragmentation, however, is
performed as a series of short transactions and thus does not require
a large log if log backups are taken frequently or if the recovery
model setting is SIMPLE."
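If you take that route, the defrags and log backups can be interleaved, for
example (a sketch; the database, table, index, and path names are
placeholders):

    -- Defragment one index at a time, backing up the log in between, so the
    -- short INDEXDEFRAG transactions can be cleared from the log as you go
    DBCC INDEXDEFRAG (Warehouse, FactSales, PK_FactSales)
    BACKUP LOG Warehouse TO DISK = 'E:\Backup\Warehouse_log1.trn'

    DBCC INDEXDEFRAG (Warehouse, DimCustomer, IX_DimCustomer_Name)
    BACKUP LOG Warehouse TO DISK = 'E:\Backup\Warehouse_log2.trn'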
"Dimitris" <seeyou_gr@.hotmail.com> wrote in message news:<uW1IwixPEHA.3216@.TK2MSFTNGP12.phx.
gbl>...
> Hi all
> I have a database (warehouse) with a data file 16GB with Recovery model FU
LL
> And each week I do a night run for optimization with the options "Reorgani
ze
> data and index pages" and "Change free space per page percentage to 10%"
> When this night run occurs, the transaction log on my database increases t
o
> 17GB
> cause to recreation of indexes and so on.
> I was wondering about the following, so I could solve the problem of 17GB
> transaction logs. The disks are not cheap in an external sub-system with
> mirrors and stripes.
> 1) Before the optimization job start backup the database
> 2) After the backup change the database recovery model to simple (so no lo
g
> will be recorded
> 3) Backup again database (now on simple mode) so transaction log will be
> shrink
> 4) Run the optimization job
> 5) Change the database recovery model back to FULL
> 6) Backup again database (now in FULL mode)
>
> That's the solution I have thought.and all that will be done by the night
> run
>
> How dangerous is to change the recovery model of the database before you r
un
> a job' Is the risk high'
>
> Is there an other way to perform the task, without having my transaction l
og
> increased so match?
>
> Thanks in advance
>
> Dimitris Dimolas
> Web Programmer

> My transactions in a day are about 4 GB
> So why should I have a log of 17 GB?
Because the db is 16 GB and you reorg all indexes in the db. Assuming that you
have a clustered index on all tables, you rebuild 16 GB worth of data, and
everything is logged. I think that the big question is whether you perform
regular log backups or not. If not, just run the db in simple recovery mode,
and the rebuilds are minimally logged. Then, if the working space for a day's
work in the log is 4 GB, you can just keep the log the size it needs. Or
shrink it; the article is just there to explain the side effects of shrinking!
If you do run regular log backups, then you could do something like:
1. Backup log
2. Db to simple recovery
3. Do the index rebuilds
4. (Do the shrink)
5. Db to full recovery
6. Backup db
You now have a window in time where you cannot do point-in-time recovery. This
is (inclusive) from 2 to 6.
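In T-SQL, that sequence would look something like below (the database name,
table name, logical log file name, and backup paths are placeholders):

    -- 1. Back up the log while still in FULL recovery
    BACKUP LOG Warehouse TO DISK = 'E:\Backup\Warehouse_log.trn'

    -- 2. Switch to SIMPLE recovery for the maintenance window
    ALTER DATABASE Warehouse SET RECOVERY SIMPLE

    -- 3. Run the index rebuilds, e.g. per table
    --    ('' = all indexes; fillfactor 90 = 10% free space per page)
    DBCC DBREINDEX ('dbo.FactSales', '', 90)

    -- 4. Optionally shrink the log back to its daily working size (4 GB)
    DBCC SHRINKFILE (Warehouse_Log, 4096)

    -- 5. Switch back to FULL recovery
    ALTER DATABASE Warehouse SET RECOVERY FULL

    -- 6. Full database backup so the log backup chain can be restarted
    BACKUP DATABASE Warehouse TO DISK = 'E:\Backup\Warehouse_full.bak'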
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Dimitris" <seeyou_gr@.hotmail.com> wrote in message news:%23GPRtwzPEHA.3044@.TK2MSFTNGP10.phx
.gbl...
> My transactions in a day are about 4gb
> So why should i have a log of 17gb'
> And the question isn't why.
> But how i can done it
> You can understand ofcource that if you have a log of 4 GB per day there i
s
> no reason to have a log of 17GB when you r trying to optimize your databas
e.
> There is no need.
> Thanks
>
> "Dimitris" <seeyou_gr@.hotmail.com> wrote in message
> news:uW1IwixPEHA.3216@.TK2MSFTNGP12.phx.gbl...
> FULL
> "Reorganize
> to
> log
> run
> log

Another option is to defrag using DBCC INDEXDEFRAG. Depending on the
fragmentation, you might end up with less in the log file. Also, I suggest
that you don't defrag unless you have to (DBCC SHOWCONTIG tells you the
fragmentation level). More info at:
http://www.microsoft.com/technet/pr...n/ss2kidbp.mspx
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message
news:ecf6z8zPEHA.904@.TK2MSFTNGP12.phx.gbl...
> Because the db is 16 GB and you reorg all indexes in the db. Assuming that
you have clustered index on all
> tables, then you rebuild 16GB worth of data, and everything is logged. I t
hink that the big question is
> whether you perform regular log backups or not. If not, just run the db in
simple recovery mode, and the
> rebuilds are minimally logged. Then if the working space for a days work for the l
og is 4GB, then you can
just
> keep the log the size it needs. Or shrink it, the article is just to explain side
effects of shrinking! If
you
> do run regular log backups, then you could do something like:
> 1. Backup log
> 2. Db to simple recovery
> 3. Do the index rebuilds
> 4. (Do the shrink)
> 5. Db to full recovery
> 6. Backup db
> You now have a window in time where you cannot do point in time recovery.
This is (inclusive) from 2 to 6.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Dimitris" <seeyou_gr@.hotmail.com> wrote in message news:%23GPRtwzPEHA.304
4@.TK2MSFTNGP10.phx.gbl...
>
