Friday, 13 October 2017

Purging old backup files by using the forfiles Windows tool

It is well known that most backup strategies include a step to purge old backup files so that only the most recent backups are kept on the database server and disk space is used properly. It is of paramount importance to schedule this task inside a SQL Agent job in order to avoid running out of space. Today I am going to share a script to do that, which uses the forfiles Windows tool via xp_cmdshell. The script is wrapped in a stored procedure that takes some input parameters such as the database name, backup type, drive, and retention days.

USE [master] 
GO
CREATE PROCEDURE [dbo].[sp_DBA_Backup_FilesCleanup] (
@DatabaseName VARCHAR(200),
@BackupType VARCHAR(100),
@DriveName VARCHAR(1), 
@RetentionDays VARCHAR(4))
WITH ENCRYPTION
AS 
BEGIN
    SET NOCOUNT ON

    DECLARE @strcmd VARCHAR(4000)
    DECLARE @directory VARCHAR(4000)
    
    -- Build the backup directory path: <Drive>:\SQLBackup\<DatabaseName>\<BackupType>
    SET @directory = @DriveName + ':\SQLBackup\' + @DatabaseName + '\' + @BackupType
    -- Build the forfiles command: delete all files under @directory older than @RetentionDays days
    SET @strcmd = 'forfiles /p "' + @directory + '" /s /d -' + @RetentionDays + ' /c "cmd /c del /q @path"'
    -- PRINT @strcmd -- uncomment to inspect the generated command
    EXEC master.dbo.xp_cmdshell @strcmd
   
   SET NOCOUNT OFF
END
GO

The logic deletes old backup files located on a path with the pattern '<Drive>:\<BackupDirectory>\<DatabaseName>\<BackupType>'. For instance, if we want to delete the Full + Diff + Log backup files of the database 'MyDB' that are older than one week, and supposing those backup files are located on drive 'G', then the full paths would be 'G:\SQLBackup\MyDB\Full' for full backups, 'G:\SQLBackup\MyDB\Diff' for differential backups, and 'G:\SQLBackup\MyDB\Log' for log backups. So, according to the example above, we should execute the stored procedure with the following parameters:

USE [master] 
GO
EXEC dbo.sp_DBA_Backup_FilesCleanup  @DatabaseName='MyDB' , @BackupType='FULL',@DriveName='G', @RetentionDays='7'
GO
EXEC dbo.sp_DBA_Backup_FilesCleanup  @DatabaseName='MyDB' , @BackupType='Diff',@DriveName='G', @RetentionDays='7'
GO
EXEC dbo.sp_DBA_Backup_FilesCleanup  @DatabaseName='MyDB' , @BackupType='Log',@DriveName='G', @RetentionDays='7'
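
For illustration, the first call above would build and execute a command like the following:

forfiles /p "G:\SQLBackup\MyDB\FULL" /s /d -7 /c "cmd /c del /q @path"

Keep in mind that xp_cmdshell is disabled by default, so it must be enabled via sp_configure before this stored procedure can run.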

That is all for now. Let me know any remarks you may have. Stay tuned.

Monday, 2 October 2017

Getting useful information about data & log files for all databases

It is a common DBA task to check the unused space of all database files in order to decide whether to extend the file sizes and provide more disk space before running out of it. Today I have two scripts I would like to share with you to get that information easily. The first one reports some very important information about every data & log file for all databases, such as name, file size in GB, space used in GB, free space in GB and as a percentage, and growth settings, together with other information at the database level such as collation, compatibility level, owner, and more. Here you are:

USE master
GO
IF EXISTS (SELECT name FROM master.sys.tables WHERE name='TmpFileSpace')
    DROP TABLE TmpFileSpace

CREATE TABLE [dbo].[TmpFileSpace](
    [DatabaseName] [nvarchar](128) NULL,
    [FileName] [sysname] NOT NULL,
    [FileSizeGB] [decimal](10, 2) NULL,
    [SpaceUsedGB] [decimal](10, 1) NULL,
    [SpaceFreeGB] [decimal](10, 1) NULL,
    [SpaceFree%] [decimal](10, 1) NULL
) ON [PRIMARY]
 
EXEC sp_MSforeachdb '
USE [?]
INSERT INTO master.dbo.[TmpFileSpace]([DatabaseName], [FileName], [FileSizeGB], [SpaceUsedGB], [SpaceFreeGB], [SpaceFree%])
SELECT DB_NAME() DatabaseName, name FileName, 
CAST(size/128.0/1024.0 AS DECIMAL(10,2)) SizeGB, 
CAST(FILEPROPERTY(name,''SpaceUsed'') /128.0/1024.0  AS DECIMAL(10,1)) SpaceUsedGB, 
CAST((size - FILEPROPERTY(name,''SpaceUsed'')) /128.0/1024.0 AS DECIMAL(10,1)) SpaceFreeGB, 
CAST(((size - FILEPROPERTY(name,''SpaceUsed''))/(size*1.0)) *100 AS DECIMAL(10,1)) [SpaceFree%]
FROM sys.database_files'

SELECT db.name DatabaseName, db.collation_name, db.compatibility_level, SUSER_SNAME(db.owner_sid) OwnerName,
db.page_verify_option_desc, db.is_auto_close_on,
db.is_auto_create_stats_on, db.is_auto_shrink_on, db.is_auto_update_stats_on,
db.is_auto_update_stats_async_on, fs.FileName,
fs.FileSizeGB, fs.SpaceUsedGB, fs.SpaceFreeGB, fs.[SpaceFree%],
mf.physical_name, db.state_desc, mf.max_size, mf.growth, mf.is_percent_growth
FROM sys.master_files mf
INNER JOIN sys.databases db ON mf.database_id = db.database_id
INNER JOIN TmpFileSpace fs ON mf.database_id = DB_ID(fs.DatabaseName) AND mf.name = fs.FileName

DROP TABLE [TmpFileSpace]
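
A quick note on reading the growth settings: the growth column is expressed in 8-KB pages when is_percent_growth = 0, and as a percentage when it is 1. Here is a minimal sketch of how to decode it into something readable:

SELECT DB_NAME(database_id) DatabaseName, name FileName,
CASE WHEN is_percent_growth = 1
     THEN CAST(growth AS VARCHAR(10)) + '%'
     ELSE CAST(growth/128 AS VARCHAR(10)) + ' MB'
END AS GrowthSetting
FROM sys.master_files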

But if you only want to get the size in GB, space used in GB, and free space in GB of all data & log files for a specific database, you can use this:

USE [YourDatabaseName]
SELECT DB_NAME() DatabaseName, name FileName, 
CAST(size/128.0/1024.0 AS DECIMAL(10,2)) SizeGB, 
CAST(FILEPROPERTY(name,'SpaceUsed') /128.0/1024.0  AS DECIMAL(10,1)) SpaceUsedGB, 
CAST((size - FILEPROPERTY(name,'SpaceUsed') ) /128.0/1024.0 AS DECIMAL(10,1)) SpaceFreeGB, 
CAST(((size - FILEPROPERTY(name,'SpaceUsed') )/(size*1.0)) *100 AS DECIMAL(10,1)) [SpaceFree%]
FROM sys.database_files

That is all for now. Let me know any remarks you may have. 

Thursday, 21 September 2017

Quickly find a cached execution plan of an Ad-Hoc query

It is clear that sometimes we just need to find a cached execution plan as quickly as possible so that it can be analysed and used to optimise the code in question. This work can be harder when the workload of a database server is based on ad-hoc queries, because they do not reuse cached plans as well as stored procedures do, especially when the ad-hoc queries work with parameters. In this context we may find many cached plans for a single ad-hoc query, which makes it more difficult to spot the one we are looking for. It is of paramount importance to remember that optimisation and tuning is a never-ending task, and it is worthy of our attention every so often. Today I am sharing the following script to help quickly find the cached execution plan for an ad-hoc query. It filters by a specific piece of text you are looking for in the ad-hoc query.

SELECT TOP 10 st.text AS [SQLText], cp.cacheobjtype CacheObjType, cp.objtype ObjType,
COALESCE(DB_NAME(st.dbid), DB_NAME(CAST(pa.value AS INT))+'*',
'Resource') AS [DatabaseName], cp.usecounts AS [PlanUsage], qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
OUTER APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
WHERE pa.attribute = 'dbid'
AND st.text LIKE '%some text of the object or ad-hoc query%'
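
If you only want ad-hoc plans and prefer to see the most reused ones first, a small tweak is to append the following to the query above:

AND cp.objtype = 'Adhoc'
ORDER BY cp.usecounts DESC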

It is well worth noting that this is one more reason why it is highly advisable to use stored procedures, so that CPU and memory resources can be used more efficiently. That is all for now, let me know any remarks you may have. Stay tuned.

Thursday, 7 September 2017

Getting information about memory used by SQL Server

Naturally, every DBA is asked to report information about the memory used by SQL Server. Common questions are related to the memory reserved and the memory currently in use, and sometimes we may not know how to complete this task. How many times did we take a look at Windows Task Manager to find that info? Many of us may have ended up quite frustrated time and time again because it does not help much. Over time Microsoft decided to provide administrators with more useful tools and released Resource Monitor, whereby useful memory information per process is available; nevertheless, if we want to get that info from inside SQL Server it can be an uphill battle, as it would require writing fairly complex code at the Windows level. Thinking about this situation, I decided to share some helpful scripts that will alleviate the pain.

In the likely event that you need to get the total buffer pool memory used by all databases at the SQL instance level, this script is for it:

-- Each buffer descriptor represents one 8-KB page in the buffer pool: COUNT(*)/128 = MB, /1024 = GB
SELECT CAST(CAST(COUNT(*)/128.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' AS TotalUsageBufferPool
FROM sys.dm_os_buffer_descriptors

And if you want to know the buffer pool memory used by each database, you can use this:

SELECT CASE database_id
         WHEN 32767 THEN 'ResourceDb'
         ELSE DB_NAME(database_id)
       END AS DatabaseName,
       CAST(COUNT(*)/128.0 AS DECIMAL(10,2)) AS [BufferPool(MB)]
FROM sys.dm_os_buffer_descriptors
GROUP BY DB_NAME(database_id), database_id
ORDER BY [BufferPool(MB)] DESC

Finally, and more importantly, here is how to answer the question about the total memory used by the whole SQL instance:
-- SQL2012/2014/2016/2017
SELECT CAST(CAST(physical_memory_kb/1024.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' TotalPhysicalRAM,
CAST(CAST(visible_target_kb/1024.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' MaxRAM, -- max server memory configured at the SQL instance level
CAST(CAST(committed_target_kb/1024.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' ReservedRAM, -- memory reserved
CAST(CAST(committed_kb/1024.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' UsedRAM -- memory currently in use
FROM sys.dm_os_sys_info

-- for SQL2005/2008/2008R2
SELECT CAST(CAST(physical_memory_in_bytes/1024.0/1024.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' TotalPhysicalRAM, -- this column is in bytes, so divide by 1024 three times
CAST(CAST(bpool_visible/128.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' MaxRAM, -- max server memory configured at the SQL instance level
CAST(CAST(bpool_commit_target/128.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' ReservedRAM, -- memory reserved
CAST(CAST(bpool_committed/128.0/1024.0 AS DECIMAL(10,2)) AS VARCHAR(10)) + 'GB' UsedRAM -- memory currently in use
FROM sys.dm_os_sys_info
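
In addition, from SQL Server 2008 onwards you can cross-check what the operating system reports for the SQL Server process itself by querying sys.dm_os_process_memory; a minimal sketch:

SELECT CAST(physical_memory_in_use_kb/1024.0/1024.0 AS DECIMAL(10,2)) AS PhysicalMemInUseGB,
       CAST(locked_page_allocations_kb/1024.0/1024.0 AS DECIMAL(10,2)) AS LockedPagesGB,
       memory_utilization_percentage,
       process_physical_memory_low, process_virtual_memory_low
FROM sys.dm_os_process_memory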

That is all for now. I hope you find these scripts helpful. Let me know any remarks you may have.

Tuesday, 15 August 2017

Altering SQL Jobs without granting SysAdmin privilege

What would you do if you were asked to grant a few users permission to alter SQL jobs? It is a tricky task to carry out, although many of us would think that granting the SQLAgentOperatorRole role might be enough. However, many years ago no sooner had I done it than I realised it did not work as expected, and I think most of us faced this issue at first.
The SQLAgentOperatorRole role allows a user to alter a SQL job only as long as the user owns that job; otherwise, the user needs to be SysAdmin at the SQL instance level. What's more, one of the best security practices says that nobody but the DBA should be SysAdmin, and that we should use Windows Authentication. Nevertheless, when it comes to SQL job ownership, jobs should be owned by 'sa', which does not mean enabling that account; it should remain disabled. In this sense, it is highly advisable to have the disabled 'sa' account as the owner of all SQL jobs and to avoid granting the SysAdmin privilege. It is also of paramount importance not to use a Windows user as the owner of a SQL job, because SQL Server will always validate Windows users against Active Directory, and unforeseen errors are likely during that process.
Today I am going to share with you a stored procedure that enables users to alter SQL jobs without being granted the SysAdmin privilege. This stored procedure allows a specific user to take ownership of a SQL job so that the user is able to alter it; after making the changes, the user can return the ownership to 'sa' (or the original owner). Here is the stored procedure:

USE [msdb]
GO
CREATE PROC [dbo].[usp_change_owner_job] @jobname VARCHAR(MAX), @newowner VARCHAR(MAX)
WITH EXECUTE AS OWNER -- runs with the owner's rights, so the caller does not need SysAdmin
AS
BEGIN
    -- Look up the current owner of the job
    DECLARE @old_owner VARCHAR(MAX)
    SELECT @old_owner = s.name
    FROM msdb.dbo.sysjobs j
    INNER JOIN sys.server_principals s ON j.owner_sid = s.sid
    WHERE j.name = @jobname

    -- Change the job owner to the requested login
    EXEC msdb.dbo.sp_update_job @job_name = @jobname, @owner_login_name = @newowner

    DECLARE @msg VARCHAR(MAX)
    SET @msg = 'The owner of ' + @jobname + ' job was changed from ''' + @old_owner + ''' to ''' + @newowner + ''''
    PRINT @msg
END

So, for instance, if you have a SQL job whose owner is 'sa' and your user is 'User2', you will not be able to alter the job until you temporarily take ownership of it. Here are the steps to follow:

1. Create the stored procedure 'usp_change_owner_job' based on the code above.
2. Grant EXECUTE permission on 'usp_change_owner_job' to 'User2'.
3. Grant the SQLAgentOperatorRole role in msdb to 'User2' (see the sketch just below for steps 2 and 3).
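
For instance, steps 2 and 3 could be done like this (a minimal sketch, assuming the login 'User2' already has a user in msdb):

USE [msdb]
GO
GRANT EXECUTE ON dbo.usp_change_owner_job TO [User2]
GO
EXEC sp_addrolemember 'SQLAgentOperatorRole', 'User2'
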
4. Change the ownership of the SQL Job to 'User2' by using the stored procedure 'usp_change_owner_job'.

EXEC msdb.dbo.[usp_change_owner_job] 'BusinessJob01', 'User2'

The owner of BusinessJob01 job was changed from 'sa' to 'User2'

5. 'User2' now owns the SQL job and is able to alter it.
6. After making the changes to the SQL job, 'User2' must change the ownership back to 'sa':

EXEC msdb.dbo.[usp_change_owner_job] 'BusinessJob01', 'sa'

The owner of BusinessJob01 job was changed from 'User2' to 'sa'

Having successfully completed all these steps, your users will be able to alter SQL jobs without being granted the SysAdmin privilege. That is all for now. Let me know any remarks you may have.

Tuesday, 8 August 2017

Dealing with physical database corruptions

Beyond all doubt, not every single database in the world has a DBA dedicated to monitoring it 24x7. Moreover, many database environments are unwittingly implemented where no consideration was given to meeting even the minimum software and hardware requirements. This situation is compounded by the fact that many companies do not pay much attention to databases from the beginning, maybe because at first the databases are quite small and serve few transactions per second. Unsurprisingly, databases become bigger and bigger with the passing of time, so it is not rocket science to foresee that everything will get worse in terms of performance and physical integrity. In this context, it is much more critical to have everything in place so as to prevent databases from getting damaged. However, many times it is too late when one realises that the database got damaged because of a poor implementation. The overwhelming majority of physical corruption issues are sparked not by SQL Server on its own, but by poor hardware implementation. Thus, if you find yourself working with databases struggling with this problem, I wholeheartedly recommend reinstalling the whole server from scratch, taking the minimum requirements on board.
After having a good implementation of a database server, it is of paramount importance to carry out maintenance tasks at least once a month, including a full DBCC CHECKDB, and if possible DBCC CHECKDB WITH PHYSICAL_ONLY once a week. It is understandable that DBCC CHECKDB makes intensive use of resources, especially CPU and disk; nevertheless, it is possible to use the MAXDOP option with DBCC CHECKDB to limit CPU usage and ease the pain. So, nowadays there is no feasible excuse for not executing it regularly.

Having said that, I would like to share with you some steps to follow in order to try to fix a physical database corruption:

1. To begin with, try to back your database up.
2. Set your database to SINGLE_USER mode, and then execute DBCC CHECKDB WITH PHYSICAL_ONLY to determine whether physical corruption exists or not.

ALTER DATABASE MyEnterpriseDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DBCC CHECKDB(MyEnterpriseDB) WITH PHYSICAL_ONLY 

If there is physical corruption, SQL Server will display some errors like these:

Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown),
page ID (1:219356) contains an incorrect page ID in its page header.
The PageId in the page header = (0:0).
Msg 8928, Level 16, State 1, Line 1
Object ID 501576825, index ID 1, partition ID 72057594040549376,
alloc unit ID 72057594044940288 (type In-row data): Page (1:1408252) could not be processed.

3. Sometimes only non-clustered indexes are damaged and, luckily, in this case you just need to recreate those indexes (as sketched below). If you want to figure out which indexes are damaged so that you can recreate them, you can have a look at this article I wrote many years ago.
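
For example, a damaged non-clustered index can be rebuilt from the base table by disabling it first, which forces the rebuild to read the underlying table data rather than the corrupt index pages (a minimal sketch; the index and table names are hypothetical):

ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable DISABLE;
GO
ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable REBUILD;
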
4. However, if the clustered index or heap structure is damaged, then you may need to set the database to EMERGENCY mode and execute DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option. This may not always be the best option for bringing a database to a physically consistent state, but when the clustered index or heap structure of a table is heavily damaged there is no other option, so do it at the cost of some data loss. It is also worth noting that if you want to find out which objects are damaged, so that you run the repair only on them, you can check out the same article. In this example, we are going to execute it on the whole database (note that repair options require the database to be in SINGLE_USER mode, which was already set in step 2, and cannot be combined with PHYSICAL_ONLY):

ALTER DATABASE MyEnterpriseDB SET EMERGENCY;
GO
DBCC CHECKDB (MyEnterpriseDB, REPAIR_ALLOW_DATA_LOSS)
GO
ALTER DATABASE MyEnterpriseDB SET ONLINE WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE MyEnterpriseDB SET MULTI_USER WITH ROLLBACK IMMEDIATE;

5. Executing DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option does not always ensure that the database will be repaired; however, many times it is enough. After getting your database repaired, try to get to the bottom of the problem and fix it at the hardware level, because it is most likely that the disk storage is not working properly. Bear in mind that the database corruption issue is just the tip of the iceberg, so consider DBCC CHECKDB a quick fix, not a final solution.
6. Finally, you should take a full database backup.

That is all for now, I hope you find this post helpful. Let me know any remarks you may have. Stay tuned.

Friday, 4 August 2017

Troubleshooting timeout expired errors

When .NET applications work with SQL Server, some unforeseen timeout errors can be raised for two possible causes: badly written SQL code (especially ad-hoc queries) or application issues. Under this circumstance, developers will generally blame SQL Server at first glance because they may think it is a misconfiguration related to timeouts in SQL Server. In my experience troubleshooting these issues, I would say that most of them are not related to the SQL Server configuration itself. Additionally, when the application is the root cause, reading the SQL Server error log will reveal no errors about timeouts or failed login events. What does that mean? It simply means the application never even tried to connect to SQL Server because it was still working at the application level when the error was raised. Thus, the cause of the timeout error lies inside the application, at the .NET SqlClient Data Provider level (see the CommandTimeout property configured for the application's commands). The CommandTimeout property specifies the number of seconds the provider should wait for a command to complete before raising a timeout error. It is well known that the default is 30 seconds; therefore, it may not be enough when the application has performance problems.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
In order to get it fixed, firstly developers should check the .NET code within their applications. In some cases, the developers I worked with found infinite loops or slow code paths that spent a long time before sending SQL queries to the database engine; as a result, the timeout threshold was reached at the application level and, obviously, the error was raised. By the way, the following error could also be related to a timeout:
Unable to connect to SQL Server session database.
Having checked the application, you can increase the 'CommandTimeout' property value to 60 seconds (if that is not enough then change it to 120 or 180, and also consider using 0, unlimited, as a workaround while the problem is being traced and fixed). That is all for now, let me know any remarks you may have. Thanks for reading. Stay tuned.
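
While the application side is being reviewed, it can also help to watch, from the SQL Server side, which statements have been running long enough to hit the client timeout; here is a minimal sketch (the 25-second threshold is just an illustrative value below the default CommandTimeout):

SELECT r.session_id, r.status, r.wait_type,
       r.total_elapsed_time/1000 AS elapsed_seconds,
       t.text AS running_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id > 50 -- skip most system sessions
  AND r.total_elapsed_time > 25000 -- milliseconds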

Tuesday, 1 August 2017

Updating so-called 'column statistics' for the whole SQL Server database

Definitely, statistics in SQL Server are vital to ensure stable database performance as a whole. Every DBA must know the importance of keeping statistics up to date; however, not everyone in charge of a database knows much about them, especially those who do not have the DBA role or are DBAs by accident. So, I will just mention that statistics are objects that contain information about data distribution, density, selectivity, cardinality, etc. for columns that were at least once queried or are linked to indexes. This information helps the query optimizer choose the right indexes and resources to execute and process queries as fast as possible. Put differently, statistics allow the query optimizer to create the best possible execution plan to run queries efficiently; when statistics are out of date, bad execution plans may be created that make inefficient use of resources, bringing down the performance of the whole database and, even worse, the whole server. So, it is a basic task to routinely update statistics, at least once a month, as part of your database maintenance.
It is worth noting that statistics linked to indexes (so-called 'index statistics') are fully updated automatically when the indexes are rebuilt, so those statistics do not need an explicit update. Nevertheless, statistics linked to columns (so-called 'column statistics') still need an explicit update, and this is what we need to do after rebuilding all the indexes. With this in mind, today I am going to share a script that updates only the column statistics:

SET NOCOUNT ON     
DECLARE @schema_name varchar(max)      
DECLARE @table_name varchar(max)     
DECLARE @stat_name varchar(max)        
DECLARE @update_stat_cmd varchar(max)        
DECLARE @update_stat_msg_header   varchar(max)      
DECLARE update_stat_cursor CURSOR FOR
          -- Pick only statistics that are not linked to any index (column statistics)
          SELECT schema_name(o.[schema_id]), object_name(s1.[object_id]), s1.name
          FROM (
            SELECT s.[object_id], s.name FROM sys.stats s
            LEFT JOIN sys.indexes i ON s.[object_id] = i.[object_id] AND s.name = i.name
            WHERE i.name IS NULL) s1
          INNER JOIN sys.objects o ON o.[object_id] = s1.[object_id]
          WHERE o.type = 'U' -- user tables only
          ORDER BY schema_name(o.[schema_id]), object_name(s1.[object_id]), s1.name
  
       
OPEN update_stat_cursor        
FETCH NEXT FROM update_stat_cursor INTO  @schema_name, @table_name,  @stat_name         
WHILE (@@fetch_status = 0)        
  BEGIN    
       DECLARE @ini DATETIME, @fin DATETIME
       SET @update_stat_msg_header = '->Updating [' + RTRIM(@schema_name) + '].[' + RTRIM(@table_name) + ']([' + @stat_name + '])'
       PRINT @update_stat_msg_header
       SET @update_stat_cmd = 'UPDATE STATISTICS [' + RTRIM(@schema_name) + '].[' + RTRIM(@table_name) + ']([' + @stat_name + '])'
       SET @ini = GETDATE()
       EXEC (@update_stat_cmd)
       SET @fin = GETDATE()
       PRINT '   completed in ' + CAST(DATEDIFF(ss, @ini, @fin) AS VARCHAR(10)) + ' second(s)'

       FETCH NEXT FROM update_stat_cursor INTO @schema_name, @table_name, @stat_name
  END        
 
PRINT ' '       
PRINT '----------------------------------------------------------------------------- '        
SET @update_stat_msg_header = '*************  THERE ARE NO MORE STATISTICS TO BE UPDATED **************'         
PRINT @update_stat_msg_header        
PRINT ' '        
PRINT 'All statistics not linked to any index were updated!'
  
CLOSE update_stat_cursor        
DEALLOCATE update_stat_cursor    
SET NOCOUNT OFF
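
As a quick alternative, when you do not need this per-statistic control, the built-in sp_updatestats procedure updates every statistic in the database that has had row modifications since the last update (using the default sample), which may be enough for small databases:

USE [YourDatabaseName]
GO
EXEC sp_updatestats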

That is all for now. Thanks for reading. Let me know any remarks you may have. Stay tuned.