Think of a savings jar, say a large Coca-Cola bottle you drop coins into. Every time the jar fills up, you empty it out and take the money to the bank (or spend it) to make room for more savings. Why do that, when you could open a bank account and save continuously without worrying about the jar hitting 100%? The point is that you will keep repeating the same tasks every time the jar fills up and no further savings can go in: counting the money, taking it to the bank, and so on. The cycle never stops until you either abandon the jar or switch to something with greater capacity. The moral of the story: the jar will simply fill up again every time you empty it. So consider the root cause of the recurring issue and use a solution that expands how much you are allowed to save, avoiding the repeated problem and the remediation tasks that come with it. Not only will that save you time, it will make the whole process more efficient.
Let's look at this from a SQL Server perspective.
Many people experience growth issues with their database files, typically the transaction log (.ldf); with larger databases you may also come across secondary data files (.ndf). Once one of those files reaches 100% usage, or grows to its maximum size, disk utilization crosses the alarm threshold, an alert is raised, and the DBA starts to investigate and remediate the problem to minimize the outage.
Firstly, let me explain what 100% log usage means. If the log file of a database in the FULL recovery model fills up completely, you will start getting error 9002, 'The transaction log for database is full', and any subsequent transaction log growth requests will fail with the same 9002 message. Active transactions still require logging, and the growing file can also impact the transaction logs of other databases residing on the same disk volume. With the right recovery model and backup strategy, however, you are far less likely to get flooded with 9002 errors. Why?
Because in the FULL recovery model, inactive segments of the transaction log persist in the log file until they are backed up. In busy environments the transaction log can therefore grow significantly and consume a lot of disk space. If many databases on the SQL instance need to be switched from SIMPLE to FULL recovery, sufficient disk space must be provisioned, or separate disk volume(s) made available. Increasing the transaction log backup frequency, say to every 30 minutes, helps keep log sizes in check, because each log backup truncates the inactive segments. The SIMPLE recovery model, in contrast, automatically reuses inactive segments, so the log only grows to accommodate the active transactions.
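To make the two approaches above concrete, here is a sketch of both options; the database name and backup path are placeholders, not real objects:

```sql
-- Option 1: keep FULL recovery, but back up the log frequently so that
-- inactive segments are truncated and the space inside the .ldf is reused.
BACKUP LOG [SalesDB]                       -- placeholder database name
TO DISK = N'X:\Backups\SalesDB_log.trn';   -- placeholder backup path

-- Option 2: if point-in-time recovery is NOT required for this database,
-- switch to SIMPLE so inactive segments are reused automatically.
ALTER DATABASE [SalesDB] SET RECOVERY SIMPLE;
```

Option 2 trades away point-in-time restore, so it should only be chosen where the backup strategy allows it.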
Well, what would a DBA do then? Straight away, some DBAs will think: it's urgent, let's shrink it.
I would ask that DBA to stop at that moment and review what is actually going on. There can be multiple factors behind the root cause of an issue, and taking the easy way out won't resolve the problem and could make matters worse.
When it comes to shrinking, there are two options:
DBCC SHRINKDATABASE shrinks all files, including the .MDF, .NDF and .LDF. I would stay away from this: why shrink all files if they are not all full? You would only cause more fragmentation. And if all your files really were at 100% usage, or your disk drives were full, I would urge you to review your SQL Server and storage urgently, as they may not have been configured and sized per best practice.
With DBCC SHRINKFILE you can choose the individual file you'd like to shrink. But, as always, these are not your only options and they should not be your first choice.
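If, after reviewing the root cause, a one-off shrink of the log file is genuinely justified, DBCC SHRINKFILE targets a single file. A sketch, where the logical file name and the target size are placeholders you would substitute for your own:

```sql
-- Find the logical name and current size of each file first.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- One-off shrink of the log file to a sensible working size (here 4 GB),
-- not to zero -- otherwise the file will immediately have to grow again.
DBCC SHRINKFILE (N'SalesDB_log', 4096);  -- placeholder logical name, target in MB
```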
Point to note:
A truncate frees space within the file, e.g. when a transaction log backup takes place or when the database uses the SIMPLE recovery model. A shrink, by contrast, removes that "free" space to make the physical log file (.LDF) of your database smaller.
So, what impact does shrinking actually have?
A log file that grows and shrinks regularly causes physical file fragmentation and has an adverse effect on the overall performance of the database. Each growth event also takes time to complete, during which transactions writing to the log can stall, impacting performance further.
Reading the above, a DBA may still ask: what should I do then?
I would advise not to shrink the log file any more, and to review the issue properly, because the log file is likely to just grow again.
First, check the file properties of the database and the actual disk, to understand what configuration is in place and how much storage is available. Then check log_reuse_wait_desc, as sometimes the database simply needs a log backup to make space reusable, and run DBCC SQLPERF(LOGSPACE) to see usage details (commands below).
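A sketch of how that configuration review might look in T-SQL, run in the context of the affected database (the column arithmetic converts 8 KB pages to MB):

```sql
-- File sizes, maximum size and auto-growth settings for the current database.
SELECT name,
       type_desc,
       size * 8 / 1024 AS current_size_mb,
       CASE WHEN max_size = -1 THEN 'Unlimited'
            ELSE CAST(max_size * 8 / 1024 AS varchar(20)) + ' MB' END AS max_size,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB' END AS auto_growth
FROM sys.database_files;

-- Free space on each volume hosting a file of the current database.
SELECT DISTINCT vs.volume_mount_point,
       vs.available_bytes / 1048576 AS free_mb
FROM sys.database_files AS f
CROSS APPLY sys.dm_os_volume_stats(DB_ID(), f.file_id) AS vs;
```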
If it's your data file (.mdf) that is growing, consider archiving data, or else expanding the storage to accommodate future growth.
In the meantime, take the hints below into consideration when reviewing the issue.
- Per best practice, configure both a warning and a critical threshold, so you are alerted before the log file reaches 100%. There is no point leaving the issue until the critical alert fires, as by then it can already be causing problems such as a service outage.
- Review your backup strategy.
- Review your storage.
- Don't set a maximum file size limit if you are expecting your database to grow.
- Set your auto growth settings per best practice.
- DO NOT enable AUTO SHRINK!
- Review the queries and indexes that are potentially causing the growth.
- Configure an Extended Events session or additional monitoring to establish the root offender behind the growth.
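As an illustration of the auto-growth hint above, growth settings can be set explicitly rather than left at the defaults; the database and logical file names here are placeholders:

```sql
-- Example only: use a fixed-size growth increment (avoid percentage growth
-- on large files) and leave max size unlimited if growth is expected.
ALTER DATABASE [SalesDB]                 -- placeholder database name
MODIFY FILE (NAME = N'SalesDB_log',      -- placeholder logical file name
             FILEGROWTH = 512MB,
             MAXSIZE = UNLIMITED);

-- And make sure AUTO_SHRINK stays off.
ALTER DATABASE [SalesDB] SET AUTO_SHRINK OFF;
```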
DBCC SQLPERF(LOGSPACE) -- log size and used percentage of the log, per database.
SELECT log_reuse_wait_desc FROM sys.databases WHERE name = '<DBNAME>';
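And, for the monitoring hint, a sketch of an Extended Events session that records file growth events together with the statement that triggered them; the session name and target filename are arbitrary choices, so adjust to suit your environment:

```sql
-- Capture every database file size change, including the offending SQL text.
CREATE EVENT SESSION [TrackFileGrowth] ON SERVER
ADD EVENT sqlserver.database_file_size_change
    (ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.client_hostname))
ADD TARGET package0.event_file (SET filename = N'TrackFileGrowth.xel')
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [TrackFileGrowth] ON SERVER STATE = START;
```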