Effective Database Management and Maintenance – Data and Log File Management

by Emily on January 6, 2015


Anyone who works in database management knows the importance of maintaining an effective, streamlined production database. This is a particular challenge when implementing a new database solution or performing a major restructuring, because the fine-tuning practices must then change to match the new database's requirements and properties.

Database maintenance and tuning, like every other IT discipline, has no one-size-fits-all solution. However, there are key areas that need addressing across the board:

Statistics

Index fragmentation

Data and log file management

Backups

Corruption detection

A poorly maintained database is likely to develop performance problems rooted in the areas above. These eventually interfere with application performance, resulting in downtime and, in serious cases, even data loss.

Data and log file management

This should be the first stop for anyone taking over database operations, as a remote DBA or otherwise. You will want to find out how the data and log (transaction) files are managed. In particular, confirm the following:

Data and log files are kept separate from each other, and in a location isolated from everything else as well

Auto-growth is properly configured

Instant file initialization is enabled

The auto-shrink option is disabled and does not form part of the maintenance schedule
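On SQL Server (assuming that is the platform in use, as the features above suggest), one way to review these settings for every database at once is to query the sys.master_files and sys.databases catalog views. This is a read-only inspection query; the column arithmetic converts the 8 KB page counts that SQL Server stores internally into megabytes:

```sql
-- Inspect size, location, and growth settings of every data and log file,
-- plus each database's auto-shrink flag (read-only; SQL Server).
SELECT DB_NAME(mf.database_id)   AS database_name,
       mf.name                   AS logical_name,
       mf.physical_name,                        -- check data vs. log volumes
       mf.type_desc,                            -- ROWS (data) or LOG
       mf.size * 8 / 1024        AS size_mb,    -- size is stored in 8 KB pages
       CASE WHEN mf.is_percent_growth = 1
            THEN CAST(mf.growth AS varchar(10)) + ' %'       -- flag these
            ELSE CAST(mf.growth * 8 / 1024 AS varchar(10)) + ' MB'
       END                       AS growth_setting,
       d.is_auto_shrink_on                      -- should be 0
FROM sys.master_files AS mf
JOIN sys.databases    AS d ON d.database_id = mf.database_id
ORDER BY database_name, mf.type_desc;
```

Any file showing a percentage growth setting, or any database with is_auto_shrink_on set to 1, is a candidate for the fixes discussed below.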

Importance of separation

Data and log files should be kept separate because when they share volumes with applications that can create or expand files, the potential for file fragmentation increases. Excessive fragmentation in data files is a known cause of poor query performance.

In log files, the effect on performance is even more pronounced, particularly where the auto-growth setting is configured to increase the file size by a predetermined amount every time more space is required.

Best practices

Create data and log files at an appropriate initial size, taking into account the additional data expected in the short term. This eliminates the need to grow the files several times soon after creation. Log files are more complex, since other factors must be considered: transaction size and log backup frequency.
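As a sketch of the practice above, a new database can be created with generous initial sizes, data and log files on separate volumes, and fixed-size (not percentage) growth increments. The database name, file paths, and sizes here are purely illustrative:

```sql
-- Sketch: explicit initial sizes and fixed growth increments at creation time.
-- Names, paths, and sizes are examples only; size to your own workload.
CREATE DATABASE SalesDB
ON PRIMARY
(   NAME = SalesDB_data,
    FILENAME = 'D:\SQLData\SalesDB.mdf',   -- data volume
    SIZE = 10GB,                           -- room for short-term growth
    FILEGROWTH = 512MB )                   -- fixed increment, not a percentage
LOG ON
(   NAME = SalesDB_log,
    FILENAME = 'E:\SQLLogs\SalesDB.ldf',   -- separate volume from the data file
    SIZE = 2GB,
    FILEGROWTH = 512MB );
```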

After set-up, file sizes should be observed at various times of the day and grown manually during periods of least use. The auto-grow function should be left only as a precaution for abnormal events. The reason for manual growth is that relying on auto-grow increases file fragmentation, and auto-grow may also kick in at inappropriate times, stalling the workload while it runs. The auto-grow increment should be configured as a specific size rather than a percentage.
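Both steps, growing a file manually during a quiet window and setting a fixed auto-grow increment as the safety net, are done with ALTER DATABASE ... MODIFY FILE. The logical file name and sizes below are illustrative; note that each statement changes one file property at a time:

```sql
-- Sketch: manual growth during a quiet period (sets the new total size).
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_data, SIZE = 20GB);

-- Sketch: fixed auto-growth increment as a precaution, never a percentage.
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_data, FILEGROWTH = 512MB);
```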

Lastly, you must ensure that shrink operations are disabled. Shrink reduces the size of data and log files, but it is very resource-hungry and intrusive, causing a high degree of fragmentation and hence poor performance. Under special circumstances you can shrink manually, but you must do this carefully and sparingly.

The auto-shrink option is the worst offender: it starts roughly every half hour, commandeers large amounts of resources, and operates unpredictably. You must ensure that auto-shrink is always disabled and that no shrink command appears in your maintenance plan. The database needs that space in which to run effectively.
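Turning auto-shrink off is a one-line database option change. A quick check-then-fix sketch (the database name is an example):

```sql
-- Verify whether auto-shrink is currently on (1 = on, 0 = off).
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE name = 'SalesDB';

-- Disable it.
ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;
```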

 
