Oct 18

Maintenance Plan Cleanup Tasks

My observations suggest that you can only add a maintenance cleanup task (to delete expired backups) from the machine on which SQL Server resides. When I run the maintenance plan wizard from SSMS on my own machine, I don’t get that option.

Once you successfully add a maintenance cleanup task, how do you get it to delete .bak, .dif and .trn files all in the same task? Leaving the extension box blank doesn’t result in anything getting deleted. The secret is to enter * as the extension. (Note that whenever you specify an extension in this box, the leading period must be omitted.)
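Under the hood, the cleanup task appears to generate a call to the undocumented xp_delete_file extended procedure, which may explain the odd extension rules. A sketch of what the generated step looks like (parameter meanings inferred from maintenance-plan output; the folder path is hypothetical):

```sql
-- xp_delete_file is undocumented. Inferred parameters, in order:
-- file type (0 = backup files), folder, extension (no leading period),
-- cutoff date (older files are deleted), include-subfolders flag.
DECLARE @cutoff DATETIME
SET @cutoff = DATEADD(dd, -14, GETDATE())

EXECUTE master.dbo.xp_delete_file 0, N'D:\Backups\', N'bak', @cutoff, 1
```

Since it’s undocumented, treat it as a curiosity rather than something to script directly.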

Oct 17

Most Costly Queries Running Right Now

Out of memory? Use the sys.dm_exec_query_memory_grants DMV to find the heaviest query running right now and kill it.

By memory usage:

SELECT session_id, requested_memory_kb / 1024. AS [Requested Memory (MB)], grant_time, query_cost, t.text
FROM sys.dm_exec_query_memory_grants
CROSS APPLY sys.dm_exec_sql_text(sql_handle) t
ORDER BY requested_memory_kb DESC

By query cost:

SELECT session_id, requested_memory_kb / 1024. AS [Requested Memory (MB)], grant_time, query_cost, t.text
FROM sys.dm_exec_query_memory_grants
CROSS APPLY sys.dm_exec_sql_text(sql_handle) t
ORDER BY query_cost DESC
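Once you’ve found the offender, killing it is the easy part (assuming you have the rights, and assuming you’re sure the rollback won’t hurt more than the query):

```sql
-- Substitute the session_id returned by the queries above
KILL 53
```

KILL triggers a rollback of the session’s open transaction, which can itself take a while on a big modification.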

Oct 15

SQL Server 2008 Interesting Features

Extracted from an article on DatabaseJournal.com.

Theoretically, SQL Server 2008 will be released in Q1 of 2008, along with new versions of Visual Studio and Windows.

Data Compression

According to Microsoft, while using compression will slightly increase CPU usage, overall system performance will be improved by reducing I/O and increasing buffer-hit rates. Data and indexes can both be compressed, and it doesn’t affect other features.
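Per the SQL 2008 syntax, compression is enabled per table or per index with a rebuild. A sketch (table and index names are made up):

```sql
-- PAGE compression includes ROW compression plus prefix/dictionary compression
ALTER TABLE dbo.InventoryMaster
REBUILD WITH (DATA_COMPRESSION = PAGE)

-- Nonclustered indexes are compressed separately
ALTER INDEX IX_InventoryMaster_Descr ON dbo.InventoryMaster
REBUILD WITH (DATA_COMPRESSION = ROW)
```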

Performance Data

More perf counters have been added. The data collected by the counters is stored in a centralized data warehouse. Microsoft states that running the default set of performance-related monitors will consume less than 5% of CPU and memory resources.

Performance Dashboard: tool that can read saved performance data. Historical and baseline comparisons can be made and used to create action triggers. For example, if memory use exceeds a threshold for more than five minutes, a more detailed data collection can be automatically triggered.

Performance Studio: a collection of performance tools. Together they can be used for monitoring, troubleshooting, tuning and reporting. The Data Collector component of the studio is configurable and low overhead. It supports several collecting methods, including queries, traces, and perf counters. Data can also be collected programmatically. Once data is collected, there are drill-down and aggregate reporting options, including suggestions for potential performance tuning.

Installation

The installer will be able to retrieve updates from Microsoft and install SQL Server, service packs, and patches as a single step. Service packs will be uninstallable.

Data Synchronizing Features

SQL 2008, Visual Studio, and ADO.NET bring together new methods of creating disconnected applications which synchronize with a central database. SQL 2005 started things off by supporting change tracking via triggers. SQL 2008 synchronizing is better integrated and optimized.
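The built-in change tracking in SQL 2008 is enabled at the database level and then per table; a sketch based on the announced syntax (database and table names hypothetical):

```sql
-- Track changes for two days, then clean them up automatically
ALTER DATABASE SalesDB
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)

-- Opt individual tables in; optionally record which columns changed
ALTER TABLE dbo.Orders
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON)
```

Unlike the trigger-based approach, this records that a row changed without storing the full before/after data.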

Dates and Times

New data types in SQL 2008.

  • Date: date only, no time.
  • Time: time only, no date.
  • Date Time Offset: “UTC time-zone aware value” (whatever that means)
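Presumably “UTC time-zone aware” means the offset from UTC is stored alongside the value. The new types in action (literal formats per the announced syntax):

```sql
DECLARE @d DATE, @t TIME, @dto DATETIMEOFFSET

SET @d = '2008-10-15'                       -- no time portion
SET @t = '14:30:00'                         -- no date portion
SET @dto = '2008-10-15 14:30:00 -07:00'     -- keeps the UTC offset with the value

SELECT @d AS [date], @t AS [time], @dto AS [datetimeoffset]
```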

Table-Valued Parameters

It’s sort of a hack, but you create a UDT which is a table and pass variables of that type into sprocs.
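A sketch of the announced syntax (type, proc, and values are all made up); note the parameter has to be declared READONLY:

```sql
-- Create the table type once
CREATE TYPE dbo.InventoryList AS TABLE (InventoryID INT, Descr VARCHAR(50))
GO

-- Table-valued parameters must be READONLY inside the proc
CREATE PROC dbo.ShowInventory @items dbo.InventoryList READONLY AS
    SELECT InventoryID, Descr FROM @items
GO

-- Declare a variable of the type, fill it, and pass it in
DECLARE @list dbo.InventoryList
INSERT INTO @list VALUES (1, 'Widget')
EXEC dbo.ShowInventory @list
```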

Merge

A new TSQL command, MERGE, allows a single statement to UPDATE, INSERT, or DELETE a row depending on its condition. The example below demonstrates MERGE being used to combine a new list of inventory item descriptions into the existing InventoryMaster table. In addition to new descriptions, the NewInventory table includes some new parts. Without MERGE, two statements would need to run: the first would look for a match and UPDATE the description; the second would look for non-matches and INSERT them. With MERGE, one statement performs both tasks, as shown below.

MERGE InventoryMaster AS im
USING (SELECT InventoryID, Descr FROM NewInventory) AS src
ON im.InventoryID = src.InventoryID
WHEN MATCHED THEN
UPDATE SET im.Descr = src.Descr
WHEN NOT MATCHED THEN
INSERT (InventoryID, Descr) VALUES (src.InventoryID, src.Descr);

Oct 11

sp_betterhelpindex: A better sp_helpindex

Microsoft’s system sprocs are nice and all, but they’re rather basic. I have written an improvement to sp_helpindex which also includes key columns, included columns, reads and writes, fragmentation, and freshness of statistics. Like sp_helpindex, it goes in master but can be called from any database. Download the script with proper tabbing.

USE master
GO

IF EXISTS (SELECT 1 FROM sys.objects WHERE name = 'sp_betterhelpindex')
   DROP PROCEDURE sp_betterhelpindex
GO

CREATE PROC dbo.sp_betterhelpindex @tableName VARCHAR(255) AS

DECLARE @sql VARCHAR(MAX)

SET @sql ='
SET NOCOUNT ON

USE ' + DB_NAME() + '

DECLARE @object_id INT, @index_id INT, @indexName VARCHAR(500)
DECLARE @keyColumns VARCHAR(2000), @includedColumns VARCHAR(2000)

SET @object_id = OBJECT_ID(''' + @tableName + ''')

DECLARE @index TABLE (
   index_id INT,
   keyColumns VARCHAR(2000),
   includedColumns VARCHAR(2000)
)

DECLARE sp_betterhelpindex_indexes CURSOR FAST_FORWARD FOR
   SELECT index_id
   FROM sys.indexes
   WHERE object_id = @object_id

OPEN sp_betterhelpindex_indexes
FETCH sp_betterhelpindex_indexes INTO @index_id
WHILE @@fetch_status = 0
BEGIN
   SELECT @keyColumns = NULL, @includedColumns = NULL

   SELECT @keyColumns = ISNULL(@keyColumns + '', '', '''') +
   columns.name + (CASE WHEN is_descending_key = 1 THEN ''(-)'' ELSE '''' END)
   FROM sys.indexes indexes
   INNER JOIN sys.index_columns index_columns ON indexes.object_id = index_columns.object_id
   AND indexes.index_id = index_columns.index_id
   INNER JOIN sys.columns columns ON index_columns.object_id = columns.object_id
   AND index_columns.column_id = columns.column_id
   WHERE indexes.object_id = @object_id AND indexes.index_id = @index_id AND is_included_column = 0
   ORDER BY index_column_id

   SELECT @includedColumns = ISNULL(@includedColumns + '', '', '''') +
   columns.name + (CASE WHEN is_descending_key = 1 THEN ''(-)'' ELSE '''' END)
   FROM sys.indexes indexes
   INNER JOIN sys.index_columns index_columns ON indexes.object_id = index_columns.object_id
   AND indexes.index_id = index_columns.index_id
   INNER JOIN sys.columns columns ON index_columns.object_id = columns.object_id
   AND index_columns.column_id = columns.column_id
   WHERE indexes.object_id = @object_id AND indexes.index_id = @index_id AND is_included_column = 1
   ORDER BY index_column_id

   INSERT INTO @index VALUES (@index_id, @keyColumns, ISNULL(@includedColumns, ''''))

   FETCH sp_betterhelpindex_indexes INTO @index_id
END
CLOSE sp_betterhelpindex_indexes
DEALLOCATE sp_betterhelpindex_indexes

SELECT sys.indexes.name,
   (CASE WHEN is_primary_key = 1 THEN ''primary key, ''
   WHEN is_unique = 1 THEN ''unique, ''
   ELSE '''' END) + LOWER(sys.indexes.type_desc) AS type,
   keyColumns, includedColumns,
   sys.filegroups.name AS [filegroup],
   user_seeks + user_scans + user_lookups AS readsByQueries, user_updates AS updates,
   CAST(physicalStats.avg_fragmentation_in_percent AS INT) AS [fragmentation %],
   STATS_DATE (@object_id , i.index_id) AS statisticsUpdated, i.index_id
FROM @index i
INNER JOIN sys.indexes ON sys.indexes.object_id = @object_id AND i.index_id = sys.indexes.index_id
INNER JOIN sys.filegroups ON sys.indexes.data_space_id = sys.filegroups.data_space_id
INNER JOIN sys.dm_db_index_usage_stats usage ON usage.object_id = @object_id
   AND i.index_id = usage.index_id
   AND usage.database_id = DB_ID()
INNER JOIN sys.dm_db_index_physical_stats(DB_ID(), @object_id, NULL, NULL, NULL) physicalStats
   ON i.index_id = physicalStats.index_id AND alloc_unit_type_desc != ''LOB_DATA''
ORDER BY sys.indexes.type_desc, keyColumns, includedColumns
'

EXEC(@sql)
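Usage mirrors sp_helpindex (the database and table here are just examples):

```sql
USE AdventureWorks
GO

-- sp_ procs in master resolve from any database
EXEC sp_betterhelpindex 'Person.Contact'
```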

Attached File sp_betterhelpindex.sql_.txt