We are trying to manage our transaction log files a little differently, because in many cases they have grown far too big. The databases are in full recovery mode, and we do take tran log backups nightly, but that isn't sufficient, and since we allow the tran log to grow without limit it gets really big.
So what I am planning to do is set a maximum size for the tran log and then set up an alert to back up the log when it is 70% full.
However, the one thing I am not clear about is how SQL Server handles the log. For example, say I set the max size of the tran log to 200MB, and there is a transaction or a process that rebuilds indexes and would need more than 200MB to complete the entire task. Would SQL Server give me an error, or would it fire the alert, do a log backup, and continue with the process?
In that scenario, will I have to set the max size of the tran log larger than the largest transaction my application may run?
Any help or advice in this regard will be greatly appreciated.
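For reference, here is the setup I have in mind, sketched in T-SQL (assuming SQL Server 2005 or later; the database, log file, and job names are placeholders, and the alert assumes a SQL Agent job already exists that runs BACKUP LOG):

```sql
-- Cap the log file at 200MB (MyDB and MyDB_log are placeholder names)
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, MAXSIZE = 200MB);

-- Agent alert that fires when the log is more than 70% full,
-- wired to a (hypothetical) job that runs BACKUP LOG MyDB TO DISK = ...
EXEC msdb.dbo.sp_add_alert
    @name = N'MyDB log 70 pct full',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|MyDB|>|70',
    @job_name = N'Backup MyDB log';
```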
Thanks
Have a look at my backup script:
http://realsqlguy.com/twiki/bin/vie...realsqlguy.com

Thanks for sharing your code; I will take a look at it and see how I could implement it.
I just wanted to clarify your response: since the backup won't be able to flush the log because of uncommitted transactions, will I get a "transaction log full" error unless I set the max size greater than what the rebuild (or any other process) would need? What is the best way to handle such operations without setting the max size of the tran log file to a large number?
I just ran a few tests, and after rebuilding indexes the transaction log file grows to approximately the size of the data file. Does that mean I would need to set the max size of the tran log to be greater than that?
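For what it's worth, this is roughly how I measured the log usage (the table name is a placeholder, and ALTER INDEX assumes SQL Server 2005 or later):

```sql
-- Log size and percent-used for every database, before the rebuild
DBCC SQLPERF(LOGSPACE);

-- Run the rebuild, then check log usage again
ALTER INDEX ALL ON dbo.MyTable REBUILD;  -- hypothetical table
DBCC SQLPERF(LOGSPACE);
```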
Thanks
Reindexing generates a lot of transactional activity, and those transactions tend to be large. Bottom line: if the transaction log isn't large enough to hold a transaction, you will receive an error (9002, "the transaction log for the database is full") and the process generating the transaction will fail.
That said, you have a couple of options:
1. Don't rebuild every index, only rebuild those that are fragmented.
I have another script on my site that will automate this process for
you. Consider running this script once per week, with a fragmentation
threshold of 20-30%.
2. If you're not doing log shipping, you might consider putting the
database into Simple mode before reindexing. This won't reduce the size
of the log, but it will help flush out the committed transactions. If
you're doing frequent t-log backups (5 minute intervals), Simple mode
won't really gain you anything.
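Roughly, the two options look like this. This is only a sketch assuming SQL Server 2005 or later; database, table, and path names are placeholders, and the script on my site is the complete version:

```sql
-- Option 1: find indexes fragmented beyond ~30%, rebuild only those
SELECT s.object_id, s.index_id, s.avg_fragmentation_in_percent
INTO #frag
FROM sys.dm_db_index_physical_stats(DB_ID('MyDB'), NULL, NULL, NULL, 'LIMITED') AS s
WHERE s.avg_fragmentation_in_percent > 30
  AND s.index_id > 0;  -- skip heaps
-- ...then loop over #frag and run ALTER INDEX ... REBUILD for each row...

-- Option 2: switch to SIMPLE around the reindex (only if you're NOT log shipping)
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
-- ...reindex here...
ALTER DATABASE MyDB SET RECOVERY FULL;
-- A full (or differential) backup is needed afterwards to restart the log backup chain
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB.bak';  -- placeholder path
```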
Tracy McKibben
MCDBA
http://www.realsqlguy.com